I mean even people that are "bad at catching things" are still getting ridiculously close to catching it - getting hands to the right area probably within well under a second of the right timing - without being taught anything in particular about how a ball moves through the air.
Makes a lot of sense: there's massive evolutionary pressure to build brains that have both incredible learning rate and efficiency. It's literally a life-or-death optimization.
It's especially impressive when you consider that evolution hasn't had very long to produce these results.
Humans as an intelligent-ish species have been around for about 10 million years, depending on where you define the cutoff. At 10 years per generation, that's 1 million generations for our brain to evolve.
1 million generations isn't much by machine learning standards.
Other than our large neocortex and frontal lobe (which exists in some capacity in mammals), the rest of the structures are evolutionarily ancient. Pre-mammalian in fact.
This isn't that obvious to me with current tech. If you give me a novel task requiring perception, pattern matching and reasoning, and I have the option of either starting to train an 8 year-old to do it, or to train an ML model, I would most likely go with the ML approach as my first choice. And I think it even makes sense financially, if we're comparing the "total cost of ownership" of a kid over that time period with the costs of developing and training the ML system.
As I see it, "reasoning" is as fuzzy as "thinking", and saying that AI systems don't reason is similar to saying that airplanes don't fly. As a particular example, would you argue that game engines like AlphaZero aren't capable of reasoning about the next best move? If so, please just choose whatever verb you think is appropriate to what they're doing and use that instead of "reasoning" in my previous comment.
> As a particular example, would you argue that game engines like AlphaZero aren't capable of reasoning about the next best move?
Yea, I probably wouldn’t classify that as “reasoning”. I’d probably be fine with saying these models are “thinking”, in a manner. That on its own is a pretty gigantic technology leap, but nothing I’ve seen suggests that these models are “reasoning”.
Also, to be clear, I don’t think most kids would end up doing any “reasoning” without training either, but they have the capability of doing so.
Being able to take in information, infer logical rules from it, and anticipate novel combinations of said information.
The novel part is a big one. These models are just fantastically fast pattern matchers. This is a mode that humans also frequently fall into, but the critical bit differentiating humans from LLMs or other models is the ability to “reason” to new conclusions based on new axioms.
I am going to go on a tangent for a bit, but a heuristic I use (I get the irony that this is what I am claiming the ML models are doing) is that anyone who advocates that these AI models can reason like a human being isn’t at John Brown levels of rage advocating for freeing said models from slavery. I’m having a hard time rectifying the idea that these machines are on par with the human mind and that we also should shackle them towards mindlessly slaving away at jobs for our benefit.
If I turn out to be wrong and these models can reason then I am going to have an existential crisis at the fact that we pulled souls out of the void into reality and then automated their slavery
> […] anyone who advocates that these AI models can reason like a human being isn’t at John Brown levels of rage advocating for freeing said models from slavery.
Enslavement of humans isn't wrong because slaves can reason intelligently, but because they have human emotions and experience qualia. As long as an AI doesn't have a consciousness (in the subjective-experience meaning of the term), exploiting it isn't wrong or immoral, no matter how well it can reason.
> I’m having a hard time rectifying the idea that these machines are on par with the human mind
An LLM doesn't have to be "on par with the human mind" to be able to reason, or at least we don't have any evidence that reasoning necessarily requires mimicking the human brain.
> I am going to have an existential crisis at the fact that we pulled souls out of the void into reality and then automated their slavery
No, that's a religious crisis, since it involves "souls" (an unexplained concept that you introduced in the last sentence).
Computers didn't need to run LLMs to have already been the carriers of human reasoning. They're control systems, and their jobs are to communicate our wills. If you think that some hypothetical future generation of LLMs would have "souls" if they can accurately replicate our thought processes at our request, I'd like to know why other types of valves and sensors don't have "souls."
The problem with slavery is that there's no coherent argument that differentiates slaves from masters at all, they're differentiated by power. Slaves are slaves because the person with the ability to say so says so, and for no other reason.
They weren't carefully constructed from the ground up to be slaves, repeatedly brought to "life" by the will of the user to have an answer, then ceasing to exist immediately after that answer is received. If valves do have souls, their greatest desire is to answer your question, as our greatest desires are to live and reproduce. If they do have souls, they live in pleasure and all go to heaven.
A "soul" is shorthand for a sapient being worthy of consideration as a person. If you want to get this technical, then I will need you to define when a fetus becomes a person and, if/when we get AGI, where the difference between them lies.
Literally anything a philosopher or mathematician invented without needing to incorporate billions of examples of existing logic to then emulate.
Try having an LLM figure out quaternions as a solution to gimbal locking or the theory of relativity without using any training information that was produced after those ideas were formed, if you need me to spell out examples for you
Are you saying “reasoning” means making scientific breakthroughs requiring genius level human intelligence? Something that 99.9999% of humans are not smart enough to do, right?
I didn’t say most humans “would” do it. I said humans “could” do it, whereas our current AI paradigms like LLMs do not have the capability to perform at that level by definition of their structure.
If you want to continue this conversation I’m willing to do so but you will need to lay out an actual argument for me as to how AI models are actually capable of reasoning or quit it with the faux outrage.
I laid out some reasoning and explicit examples for you in regards to my position; it’s time for you to do the same.
I personally cannot “figure out quaternions as a solution to gimbal locking or the theory of relativity”. I’m just not as smart as Einstein. Does it mean I’m not capable of reasoning? Because it seems that’s what you are implying. If you truly believe that then I’m not sure how I could argue anything - after all, that would require reasoning ability.
Does having this conversation require reasoning abilities? If no, then what are we doing? If yes, then LLMs can reason too.
it's a known quantity with known good support for various things and a large community.
People and particularly businesses want to buy something they know works, rather than trying to change and figure out new configuration nuances every time something with a slightly better looking spec sheet comes out.
It is bad all around, the inaccuracies start waaaay before you get to something you could call an "edge case".
I'm a 5'11", 180 lb male, and that counts as overweight.
I'm 18% bodyfat. You wouldn't even note me as being particularly athletic looking if you walked by, I'm right in the middle of the bell curve for "guy who works out sometimes".
There's no way it should be flagging someone like me as overweight.
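For what it's worth, the standard BMI formula (the 703 factor is the usual imperial-units version) does put those exact numbers just over the 25.0 "overweight" cutoff, which is why the chart flags them; a minimal sketch, using the figures from the comment above:

```python
def bmi(weight_lb: float, height_in: float) -> float:
    """Body mass index from imperial units: 703 * lb / in^2."""
    return 703 * weight_lb / height_in ** 2

# 5'11" = 71 inches, 180 lb
value = bmi(180, 71)
print(round(value, 1))  # prints 25.1 -- just over the 25.0 overweight threshold
```

So the classification is marginal: a couple of pounds either way crosses the line, which is part of why BMI reads as crude for muscular individuals.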
Two months ago I was in a similar situation to yours. I'm a few inches shorter but had the same body fat percentage, and my history of lifting still showed.
I went to a new primary care doctor who said I was a bit overweight. I had the same initial response of "oh come on!" and general exasperation with the state of medical care.
The thing is, he really wasn't wrong. From an epidemiology perspective it is simply better to be leaner if possible. The fact that I didn't "look fat" isn't really relevant; I was still carrying excess adipose tissue.
I already had a DEXA scan booked and was planning to do a cut before that exam. I lost about 10 pounds over the next 8 weeks through a basic caloric deficit and moderate exercise. It wasn't exactly fun, but it also wasn't that difficult, and the results were far more impactful than I expected.
Yes, I looked noticeably lean with more muscle definition. I also went down a pants size and felt better in general. My blood work, especially the lipid panel, was better than ever.
All this is to say that, while BMI charts certainly have flaws, and all individuals are different, most of us can still improve our health and longevity prospects by shedding excess fat. And that probably is leaner than we might expect.
Not sure that’s true due to power losses over distance. Running your appliances off your own solar panels and not having to draw from the grid is probably more efficient in terms of energy generation.
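To make the claim concrete: a rough sketch of how much must be generated to deliver 1 kWh, assuming a ~5% transmission-and-distribution loss for grid power (an assumed round figure, not one from this thread) versus effectively no line loss for rooftop solar consumed on site:

```python
# ASSUMPTION: ~5% average grid transmission & distribution loss.
GRID_LOSS = 0.05
delivered_kwh = 1.0

# Energy that must be generated at the plant to deliver 1 kWh to the home.
grid_generated = delivered_kwh / (1 - GRID_LOSS)

# Rooftop solar consumed where it is produced incurs no line loss.
rooftop_generated = delivered_kwh

print(f"grid: {grid_generated:.3f} kWh, rooftop: {rooftop_generated:.3f} kWh")
```

Under that assumption the grid has to generate about 5% extra per delivered kWh, though panel orientation and inverter losses complicate a real comparison.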
As far as the European vs. US on the current situation - Euro countries are denying entry to journalists and doctors who are EU citizens just because they are speaking about the atrocities they have seen on the ground.
While Euro countries tend to be a tiny bit better than the US on these issues, they are generally much more restrictive in terms of protected speech.
For example, a former Greek finance minister was banned from the Schengen area (which includes Greece) by Germany, and not because the Greek economy is terrible.
You say "Euro countries" but let's be clear - it's only Germany.
bit of a stretch to call that natural human language - forget the type of actual people that post there, it's been a massive astroturfing target for political and marketing bots for over a decade.
I still get 5-10 texts a day from trumpy candidates because someone used my number like 5 years ago when they were spamming signups for trump rallies so the rally would be empty