> because it's not going as planned and Israel is still getting hammered
What makes you say that? Iran is a country twice the size of Texas, and dismantling the military-industrial complex of a massive country takes time and money. Iran was outed as a paper tiger last summer, and hasn't been able to meaningfully defend their airspace, navy, or commanders. They are being absolutely destroyed. The question is whether this will be sufficient to cause regime change before the country is sent back to the stone age like Gaza.
>Leaf blowers and cars driving past are exactly the kind of thing that ANC works well on, a fairly constant noise.
I can tell you haven't heard a leaf blower in a while, if ever. The revving the operators inevitably do makes the noise jump up and down the spectrum at completely random-seeming intervals. It punches right through ANC, windows, doors, and walls.
As if that's not bad enough, they pollute more than a gigantic SUV, because two-stroke engines burn their lubricating oil along with the fuel.
very bottom of the page: "This service is provided "as is" without warranty. MalusCorp is not responsible for any legal consequences, moral implications, or late-night guilt spirals resulting from use of our services."
Mars' gravity is only 38% of Earth so I think quite a few would be crazy feats of strength or odd trajectories of objects. At least they would be if I were making them.
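A back-of-the-envelope sketch of the feats-of-strength point: for the same takeoff energy, jump height scales inversely with surface gravity. The gravity values below are standard figures; the function name is my own.

```python
# Rough scaling: takeoff energy m * g * h is the same on both planets,
# so jump height scales as 1/g.
G_EARTH = 9.81  # m/s^2
G_MARS = 3.72   # m/s^2, about 38% of Earth's

def mars_jump_height(earth_height_m: float) -> float:
    """Jump height on Mars given the same takeoff energy as on Earth."""
    return earth_height_m * G_EARTH / G_MARS

# A 0.5 m Earth vertical becomes roughly a 1.3 m vertical on Mars.
print(round(mars_jump_height(0.5), 2))
```

Same reasoning applies to thrown objects: ranges and arc heights stretch by the same ~2.6x factor, hence the odd-looking trajectories.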
I would say hard no. It doesn't. But it's been trained on humans saying that when explaining their behavior, so that is "reasonable" text to generate and spit out at you. It has no concept of the idea that a human-serving language model shouldn't be saying it to a human because it's not a useful answer. It doesn't know that it's not a useful answer. It knows that, based on the language it's been trained on, that's a "reasonable" (in terms of matrix math, not actual reasoning) response.
Way too many people think these models are really thinking, and I don't think most of them are. My abstract understanding is that they're basically still upjumped Markov chains.
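For what "Markov chain" means here, a minimal word-level sketch (toy corpus and function names are my own): each next word is drawn only from the words observed to follow the current one in training, with no deeper state or understanding.

```python
import random
from collections import defaultdict

def train_bigram_chain(text: str) -> dict:
    """Map each word to the list of words that followed it in the text."""
    words = text.split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain: dict, start: str, n: int, seed: int = 0) -> str:
    """Walk the chain: repeatedly pick a random observed follower."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
print(generate(train_bigram_chain(corpus), "the", 5))
```

LLMs condition on far more context than one word, which is the "upjumped" part, but the output is still sampled token by token from learned statistics.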
This is one of the interesting things I've noticed. LLMs are good at natural language, and even at writing novel code. But if you try to get one to do something simple and solidly within the discrete-math world, like "sort this list of lines by length", it'll fuck it up like a first-time programmer, or just fail the task. Like, the longest line will end up in some random spot, not even the middle.
I know, it's not really an appropriate use of the tool, but I'm a lazy programmer and used what I had ready access to. And it took like 5 iterations.
Discrete, concrete things like "stop" or "no" are just like... not in its wheelhouse.
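For contrast, the deterministic version of that task is a few lines of ordinary code (Python here; names are my own):

```python
def sort_lines_by_length(text: str) -> str:
    """Return the lines of `text` sorted shortest to longest.

    Ties keep their original order because Python's sort is stable.
    """
    return "\n".join(sorted(text.splitlines(), key=len))

sample = "a long-ish line\nhi\nmedium line"
print(sort_lines_by_length(sample))
# hi
# medium line
# a long-ish line
```

Exact, total, and guaranteed correct, which is the gap the comment above is pointing at.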