Thanks to Minecraft and pester-power, we bought our son an axolotl some years ago on the condition he would look after it (figuring we'd end up looking after it anyway).
And since they live a long time it's given us an out every time he asks for a new pet ;)
They are relatively popular legal pets in Europe - though not sold in many mainstream pet shops. 100% of those kept as pets here are bred in captivity.
Having kept them before, I'd say they are genuinely about as hard to care for as goldfish, just with bigger tanks and a little bit more cleaning.
They're also super easy to breed: we let the spawn hatch once and ended up with about 70 larvae. They cannibalize quickly, but 6 grew to full size and we sold them on very easily.
Also note most goldfish are abused: they need huge tanks (40G+ if I remember right), not the 8-16 fl oz bowls they're stereotypically kept in. They also need filtration and would do better with a water heater. So idk if pointing to goldfish is the best indicator, even if imo you're technically correct - the refrigeration system isn't really a lot to maintain and that's the main difference.
To add to this, ideal temperature is 16C to 18C (we use a Pi to keep a running graph of the temperature). Lower is generally ok; apparently if it gets into the mid-20s they get so stressed they try to escape the tank. We have a chiller, though we have also used the fairly common trick of putting ice bottles in the tank as needed (milk containers filled with water and left in the freezer for 24 hours work well).
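For anyone curious about the monitoring side, a minimal sketch in C of the logging end, assuming a DS18B20 on the Pi's 1-wire bus exposed via the kernel w1-therm driver (the sensor choice, device id and sysfs path are assumptions - your setup may differ); it just prints a timestamped reading you can append to a CSV and graph however you like:

    /* Minimal temperature logger sketch: assumes a DS18B20 on the Pi's 1-wire
     * bus via the w1-therm driver. The device id below is hypothetical -
     * substitute your own from /sys/bus/w1/devices/. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    int main(void)
    {
        const char *dev = "/sys/bus/w1/devices/28-000000000000/w1_slave"; /* hypothetical id */
        FILE *f = fopen(dev, "r");
        if (!f) { perror("sensor"); return 1; }

        char buf[256];
        long milli_c = -1;
        while (fgets(buf, sizeof buf, f)) {
            char *p = strstr(buf, "t=");      /* driver reports e.g. "t=17250" (milli-degC) */
            if (p) milli_c = atol(p + 2);
        }
        fclose(f);

        if (milli_c == -1) return 1;
        printf("%ld,%.3f\n", (long)time(NULL), milli_c / 1000.0);  /* CSV: epoch,degC */
        return 0;
    }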
Indoor water temp was fine for us, maybe depends a lot on the climate where you live, cool is easy here!
Hopefully we kept them well. We had them for years and in that time they spawned regularly; we ended up giving our original pair up for adoption when we moved house.
Tank was smaller than 40g. You're right though bigger is better, that's the general advice with all pets (including goldfish which can grow huge!).
By precisely timing them you can measure/check various facts like distance, diameter and so on. In fact, if you time them precisely from different locations on earth you can determine the shape of the occulting body (e.g. an asteroid occulting a star). And on occasion you can get a 'grazing occultation', for example a star goes behind mountains on the moon resulting in it blinking on and off; observe from multiple latitudes and it's possible to recover the profile of the range.
I've noticed various TVs/TV networks dropping pause/rewind etc and even making the off switch hard to access; it feels as if they want more control for themselves (and less for the consumer). Especially given the financial clout of the advertising business, anything that allows users to pause and fast-forward through ad breaks would be in their crosshairs.
It reminds me of loopback recording on soundcards disappearing; if I'm not mistaken that was directly due to industry pressure.
Yes, though sharing BLE with other protocols is challenging (even with first-class citizens like ANT+ there are various caveats). The proprietary protocols are ShockBurst/Gazell [0], which are based on the ancient nRF24 setup.
Having said that, the radio peripheral on the chip is dead simple to drive bare-metal. Create a packet (with the convenience function), put its address in a register and hit the 'send' bit (more or less, glossing over waiting for ready bits here). Receiving is as easy - point to where you want packets to land, go into RX mode and wait for the "packet received" bit to be set.
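To give a flavour, TX and RX look roughly like this - a sketch from memory, assuming the vendor's nrf.h register definitions and that frequency, mode, addresses and packet config (PCNF0/PCNF1) have been set up elsewhere; not working firmware:

    /* Bare-metal TX/RX sketch on the nRF5 RADIO peripheral. */
    #include <stdint.h>
    #include "nrf.h"

    static uint8_t packet[32];                     /* buffer the radio DMAs from/to */

    void radio_send(void)
    {
        NRF_RADIO->PACKETPTR = (uint32_t)packet;   /* tell the radio where the packet lives */

        NRF_RADIO->EVENTS_READY = 0;
        NRF_RADIO->TASKS_TXEN = 1;                 /* ramp up the transmitter */
        while (NRF_RADIO->EVENTS_READY == 0) { }

        NRF_RADIO->EVENTS_END = 0;
        NRF_RADIO->TASKS_START = 1;                /* hit the 'send' bit */
        while (NRF_RADIO->EVENTS_END == 0) { }     /* packet has gone out */

        NRF_RADIO->TASKS_DISABLE = 1;
    }

    void radio_receive(void)
    {
        NRF_RADIO->PACKETPTR = (uint32_t)packet;   /* where incoming packets should land */

        NRF_RADIO->EVENTS_READY = 0;
        NRF_RADIO->TASKS_RXEN = 1;                 /* go into RX mode */
        while (NRF_RADIO->EVENTS_READY == 0) { }

        NRF_RADIO->EVENTS_END = 0;
        NRF_RADIO->TASKS_START = 1;
        while (NRF_RADIO->EVENTS_END == 0) { }     /* "packet received" */

        if (NRF_RADIO->CRCSTATUS == 1) {
            /* packet[] now holds a valid frame */
        }
        NRF_RADIO->TASKS_DISABLE = 1;
    }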
The older Nordic SDK wasn't too bad (once you get over the learning curve). Trying to start a project from scratch is challenging though; it's much easier to pick the closest example, get that going and modify from there.
However, they've deprecated the old SDK [0] in favour of Zephyr [1] and quite a number of people struggle with it (check the forums and the general internet). I have less experience with Zephyr, but both of them use a lot of Python support tools which seem to suffer from versioning and compatibility problems (even trying to keep a stable platform has been difficult here; what works one time doesn't work a few months later). YMMV.
There are good reasons for deprecating the nRF5 SDK. However, I’m not sure how long the Nordic Semi lead is going to last.
Previously, if a project came up that didn’t strictly need BLE, I’d recommend the nRF5 SDK because it was reliable and stable. Now with the new SDK they are encouraging people to write firmware that’s much easier to port to other MCUs (via Zephyr), but the development experience has much higher cognitive load.
Depends on the model; if it doesn't fit into VRAM, performance will suffer. Response here is immediate (at ~15 tokens/sec) on a pair of eBay RTX 3090s in an ancient i7-3770 box.
£1200, so a little less. Targeted at having 48GB (2x24GB) of VRAM for running the larger models; having said that, a single 12GB RTX 3060 in another box seems pretty close in local testing (with smaller models).
Have been trying forever to find a coherent guide on building a dual-GPU box for this purpose, do you know of any? Like selecting the motherboard, the case, cooling, power supply and cables, any special voodoo required to pair the GPUs etc.
I'm not aware of any particular guides; the setup here was straightforward - an old motherboard with two PCIe x16 slots (Asus P8Z77-V or P8Z77 WS), a big enough power supply (Seasonic 850W) and the stock Linux Nvidia drivers. The RTX 3090s are basic Dell models (i.e. not OC'ed gamer versions), and it's worth noting they only get hot if used continuously - if you're the only one using them, the fans spin up during a query and back down between. A good 'smoke test' is something like: while true; do ollama run llama3.3 "Explain cosmology"; done
With llama3.3 70B, two RTX 3090s give you 48GB of VRAM and the model uses about 44GB; so the first start is slow (loading the model into VRAM) but after that response is fast (subject to the comment above about KEEP_ALIVE).
Is this true? I don't have access to the article, but the Boeing crashes and other general loss of control/CFIT must account for a large percentage. The only stats I could find with a quick search are from Airbus, who say LOC is the leading cause over the past 20 years (I can see enough of the WSJ article to see it says "...from 2014", so a decade and not like for like).
Edit: possibly the linked stats exclude deliberate acts - Wikipedia expands on this [1] [2]
Somehow the universe knows how to organise the sand in an egg timer to form an orderly pile. Simulating that with a classical computer seems impossible - yet the universe "computes" the correct result in real time. It feels like there is a huge gap between what actually happens and what can be done with a computer (even a quantum one).
It's a good question why that is so. But I wouldn't draw from that the conclusion that the Universe somehow "calculates Pi", and then puts it in all the forces it "has" so it turns out in our formulas. That is a rather fantastical way of thinking, though I do see its poetic appeal. A bit like "God doesn't play dice, or does he?"
What is calculation anyway, we may ask. Isn't it just term rewriting?
Pi is just a description used for calculating perfect/near-perfect spheres. A sphere is nature's building block, since every point on its surface is the same distance from the centre.
> yet the universe "computes" the correct result in real time
Does it? In what sense is the result "correct"? It's not because it's perfectly regular, or unique, or predictable, or reproducible. So what's "correct" about it?
Completely out of my depth here, but maybe there is a difference between evolution of a physical system and useful computation: and maybe there's much less useful computation that can be extracted from a physical system than the entire amount of computation that would be theoretically needed to simulate it exactly. Maybe you can construct physical systems that perform vast, but measurable, amounts of computation, but you can extract only a fixed max amount of useful information from them?
And then you have this strange phenomenon: you build controlled systems that perform an enormous amount of deterministic, measurable computation, but you can't make them do any useful work...
It does seem to, and can anyone credibly say they aren't out of their depth in these waters? (the sandpile thing is not original, it dates back many years). Taking the idea that the "universe is a simulation" [0], what sort of computer (or other device) could it be running on? (and how could we tell we're living in a VM?)
From the same school of thought, to simulate the path of a single particle seems it should require a device comprised of more than a single particle. Therefore, if the universe is a simulation, the simulator must have more than the number of particles in the universe.
> Somehow the universe knows how to organise the sand in an egg timer to form an orderly pile. Simulating that with a classical computer seems impossible
Is it really?
There's only ~500,000 grains of sand in an egg timer.
I don't know anything here, but this seems like something that shouldn't be impossible.
Maybe it's not that hard to simulate, but let's start by looking at just two of the sand grains that happen to hit each other. They collide; how they rebound is all angles, internal structure, Young's modulus; they have electrostatic interactions, and even van der Waals forces come into play. Sand grains aren't regular, and determining the precise point at which two irregular objects collide is quite a challenge (and this isn't even a game - approximations to save compute time won't do what the real world does 'naturally').
So while we can - for something as simple and regular as an egg timer - come up with some workable approximations, the approximation would surely fall short when it comes to the detail (an analytical solution for the path of every single grain).
A close approximation should arguably include collapses/slides, which happen spontaneously because the pile organises itself to a critical angle; then an incredibly small event can trigger a large slide of salt/sand/rocks (or whatever else the pile is made of). Even working out something like "What are the biggest and smallest slides that could occur given a pile of some particular substance?" is hard.
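For a flavour of how a tiny rule already gives you that "small event, big slide" behaviour, here's a sketch of the classic Bak-Tang-Wiesenfeld sandpile model - a toy model of self-organised criticality, not a physical simulation of real grains; the grid size and grain count below are arbitrary:

    /* Toy Bak-Tang-Wiesenfeld sandpile: drop grains on a grid, any cell holding
     * 4+ grains topples and sends one to each neighbour; the number of topplings
     * caused by a single dropped grain is the "slide" size. */
    #include <stdio.h>
    #include <stdlib.h>

    #define N 50
    static int pile[N][N];

    static long topple_all(void)
    {
        long slides = 0;
        int again = 1;
        while (again) {
            again = 0;
            for (int i = 0; i < N; i++)
                for (int j = 0; j < N; j++)
                    if (pile[i][j] >= 4) {
                        pile[i][j] -= 4;
                        if (i > 0)     pile[i - 1][j]++;
                        if (i < N - 1) pile[i + 1][j]++;
                        if (j > 0)     pile[i][j - 1]++;
                        if (j < N - 1) pile[i][j + 1]++;  /* grains off the edge are lost */
                        slides++;
                        again = 1;
                    }
        }
        return slides;
    }

    int main(void)
    {
        long min = -1, max = 0;
        for (long g = 0; g < 200000; g++) {
            pile[rand() % N][rand() % N]++;        /* drop one grain at random */
            long s = topple_all();
            if (s > max) max = s;
            if (s > 0 && (min == -1 || s < min)) min = s;
        }
        printf("smallest slide: %ld topplings, largest slide: %ld topplings\n", min, max);
        return 0;
    }

Run it for long enough and the slide sizes span everything from a single toppling to avalanches that sweep across most of the grid.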
Every approximation will by definition deviate from what really happens - I suppose that's why we talk of "working approximations", i.e. they work well enough for a given purpose. So it probably comes down to what the approximation is being used for.
There is the idea that we are all living in a simulation; if so, maybe if we look closely enough at the detail all the way from the universe down to atoms we'll start to see some fuzziness (well, of course there's quantum physics...).
When the output looks the same as the original we would say that the simulation was successful. That is how computer games do it. We're not asking for the exact position of each grain, just the general outline of the pile.
An image of something is likely to be the simplest model of that thing happening, and it has A LOT less information than a 3D model of arbitrary resolution would have.
A simulation is never an "image". It may simulate each grain; I'm just saying it doesn't need to simulate each one precisely, because the law of large numbers kicks in.
This is the basis of, for example, Monte Carlo simulation, which simulates the real world with random numbers it generates.
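As a trivial illustration of the technique (and, fittingly for this thread, one involving pi rather than sand): throw random points at the unit square and count how many land inside the quarter circle; the law of large numbers pulls the ratio towards pi/4. A sketch:

    /* Monte Carlo estimate of pi: random points in the unit square; the fraction
     * falling inside the quarter circle approaches pi/4 as samples grow. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        const long samples = 10 * 1000 * 1000;
        long inside = 0;
        for (long i = 0; i < samples; i++) {
            double x = rand() / (double)RAND_MAX;
            double y = rand() / (double)RAND_MAX;
            if (x * x + y * y <= 1.0)
                inside++;
        }
        printf("pi ~= %f\n", 4.0 * inside / samples);  /* more samples -> closer to pi */
        return 0;
    }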
Every video game engine is a simulation, and many of them are a very simplified model of what things look like when they happen rather than a simulation of the actual physics. Even the "physics" in these engines is often just about rendering an image.
The real issue is that the sand isn't orderly sorted. At a micro level, it's billions and trillions of individual interactions between atoms that create the emergent behavior of solid grains of sand packing reasonably tightly but not phasing through each other.