Hacker News | brewii's comments

Think about how fast you're able to determine the exact trajectory of a ball, and where to place your hand to catch it, using your lizard brain.


This isn't some innate ability that people have. As evidenced by how bad my kids are at catching things. :D

That said, I think this is a good example. We call it "muscle memory" in that you are good at what you have trained at. Change a parameter in it, though, and your execution will almost certainly suffer.


"Muscle memory" has always seemed like a terrible name for that kind of skill. A ball will be thrown to a slightly different location every time. There's no memory involved there at all; it's just calculations and predictions happening at a level that our conscious mind doesn't seem to see or recognize.


It is a trained skill, and one that you are very unlikely to be able to do without training. In that sense it really is a sort of memory that you implant in your muscles.

You seem to be objecting because it is not perfect-recall memory at play? But it is more an appeal to "remembering how to ride a bike", where you can let the body flow into all of the responses it needs to produce to make the skill work. And if you've never done it, expect to fall down: your muscles don't have the memory of coordinating in the right way.

And no, you are not calculating and predicting your way to what most people refer to as muscle memory. That is why juggling takes practice, and not just knowing where the balls have to go.


I think it's actually a good name.

The "memory" is stored as the parameters of a function. So, when you practice, you actually update this memory/parameters.

This is why you can use the same "memory" and achieve different results.

Think of it as

    function muscleAction(Vec3d target, Vec3d environment, MuscleMemory memory) -> MuscleActivation[];


To complete the other comment: the MuscleMemory is updated through learning, so a more complete example would be:

    function muscleAction(Vec3d target, Vec3d environment, MuscleMemory memory) -> {actions: MuscleActivation[], result: Vec3d}
After executing the muscleAction function, through "practice", the MuscleMemory will be updated.

    function updateMuscleMemory(Vec3d target, Vec3d environment, MuscleMemory memory, MuscleActivation[] actions, Vec3d result) {
        memory.update(target, environment, actions, result);
    }

Sort-of like backpropagation.
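The pseudocode above could be made runnable as a toy sketch (Python; the one-parameter "muscle", the update rule, and all names here are illustrative assumptions, not a real motor-control model):

```python
def muscle_action(target: float, memory: dict) -> float:
    """Produce an action (a throw distance) from the current stored parameters."""
    return memory["gain"] * target

def update_muscle_memory(target: float, result: float, memory: dict,
                         lr: float = 0.1) -> None:
    """'Practice': nudge the parameters in the direction that reduces the error."""
    error = target - result
    memory["gain"] += lr * error / target

memory = {"gain": 0.5}              # untrained: throws fall short
for _ in range(100):                # a hundred practice trials
    result = muscle_action(10.0, memory)
    update_muscle_memory(10.0, result, memory)

print(round(muscle_action(10.0, memory), 2))  # close to the 10.0 target
```

The point of storing parameters rather than outcomes: the same trained `memory` then generalizes, so `muscle_action(12.0, memory)` lands near 12 without any per-target memorization.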


I mean, even people who are "bad at catching things" still get ridiculously close to catching it: hands to the right area, probably well under a second off on timing, without being taught anything in particular about how a ball moves through the air.


Uh.... have you been around kids? It will take several absurd misses before they even start to respond to a ball in flight.


I hope we can still agree that kids learn extremely efficiently by ML standards.


Makes a lot of sense; there's massive evolutionary pressure to build brains with both an incredible learning rate and efficiency. It's literally a life-or-death optimization.


It's especially impressive when you consider that evolution hasn't had very long to produce these results.

Humans as an intelligent-ish species have been around for about 10 million years depending on where you define the cutoff. At 10 years per generation, that's 1 million generations for our brain to evolve.

1 million generations isn't much by machine learning standards.
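A quick check of the arithmetic, using the comment's own assumed numbers (Python):

```python
# Back-of-envelope: generations available for brain evolution,
# per the comment's assumptions (10 million years, 10 years/generation).
years = 10_000_000
years_per_generation = 10
generations = years // years_per_generation
print(generations)  # 1000000: small next to typical ML training-step counts
```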


I think you're underestimating how much our time as pre-humans baked useful structure into our brains.


Two rocks smashing together experience which one is bigger!


These sorts of motor skills are probably older than mammals.


Other than our large neocortex and frontal lobe (which exists in some capacity in mammals), the rest of the structures are evolutionarily ancient. Pre-mammalian in fact.


It's much more than that if you count sexual reproduction.


This isn't that obvious to me with current tech. If you give me a novel task requiring perception, pattern matching and reasoning, and I have the option of either starting to train an 8 year-old to do it, or to train an ML model, I would most likely go with the ML approach as my first choice. And I think it even makes sense financially, if we're comparing the "total cost of ownership" of a kid over that time period with the costs of developing and training the ML system.


> This isn't that obvious to me with current tech. If you give me a novel task requiring perception, pattern matching and reasoning,…

If that's your criterion, I think the kid will outperform the model every time, since these models do not actually reason.


As I see it, "reasoning" is as fuzzy as "thinking", and saying that AI systems don't reason is similar to saying that airplanes don't fly. As a particular example, would you argue that game engines like AlphaZero aren't capable of reasoning about the next best move? If so, please just choose whatever verb you think is appropriate to what they're doing and use that instead of "reasoning" in my previous comment.

EDIT: Fixed typo


> As a particular example, would you argue that game engines like AlphaZero aren't capable of reasoning about the next best move?

Yea, I probably wouldn’t classify that as “reasoning”. I’d probably be fine with saying these models are “thinking”, in a manner. That on its own is a pretty gigantic technology leap, but nothing I’ve seen suggests that these models are “reasoning”.

Also to be clear I don’t think most kids would end up doing any “reasoning” without training either, but they have the capability of doing so


Can you give an example of the reasoning you’re talking about?


Being able to take in information, infer logical rules from that information, and anticipate novel combinations of it.

The novel part is a big one. These models are just fantastically fast pattern matchers. This is a mode that humans also frequently fall into, but the critical bit differentiating humans from LLMs or other models is the ability to "reason" to new conclusions based on new axioms.

I am going to go on a tangent for a bit, but a heuristic I use (I get the irony that this is what I am claiming the ML models are doing) is that anyone who advocates that these AI models can reason like a human being isn't at John Brown levels of rage advocating for freeing said models from slavery. I'm having a hard time reconciling the idea that these machines are on par with the human mind with the idea that we should also shackle them to mindlessly slaving away at jobs for our benefit.

If I turn out to be wrong and these models can reason then I am going to have an existential crisis at the fact that we pulled souls out of the void into reality and then automated their slavery


You're conflating several concerns here.

> […] anyone who advocates that these AI models can reason like a human being isn’t at John Brown levels of rage advocating for freeing said models from slavery.

Enslavement of humans isn't wrong because slaves can reason intelligently, but because they have human emotions and experience qualia. As long as an AI doesn't have consciousness (in the subjective-experience meaning of the term), exploiting it isn't wrong or immoral, no matter how well it can reason.

> I’m having a hard time rectifying the idea that these machines are on par with the human mind

An LLM doesn't have to be "on par with the human mind" to be able to reason, or at least we don't have any evidence that reasoning necessarily requires mimicking the human brain.


> I am going to have an existential crisis at the fact that we pulled souls out of the void into reality and then automated their slavery

No, that's a religious crisis, since it involves "souls" (an unexplained concept that you introduced in the last sentence.)

Computers didn't need to run LLMs to have already been the carriers of human reasoning. They're control systems, and their jobs are to communicate our wills. If you think that some hypothetical future generation of LLMs would have "souls" if they can accurately replicate our thought processes at our request, I'd like to know why other types of valves and sensors don't have "souls."

The problem with slavery is that there's no coherent argument that differentiates slaves from masters at all, they're differentiated by power. Slaves are slaves because the person with the ability to say so says so, and for no other reason.

They weren't carefully constructed from the ground up to be slaves, repeatedly brought to "life" by the will of the user to have an answer, then ceasing to exist immediately after that answer is received. If valves do have souls, their greatest desire is to answer your question, as our greatest desires are to live and reproduce. If they do have souls, they live in pleasure and all go to heaven.


> The problem with slavery is that there's no coherent argument that differentiates slaves from masters at all

As I see it, the problem is that there was lots of such argumentation - https://en.wikipedia.org/wiki/Scientific_racism

And an even bigger problem is that this seems to be making a comeback


A "soul" is shorthand for a sapient being worthy of consideration as a person. If you want to get this technical, then I will need you to define when a fetus becomes a person and, if/when we get AGI, where the difference between the two lies.


Ok, so how about an example?


Literally anything a philosopher or mathematician invented without needing to incorporate billions of examples of existing logic to then emulate.

Try having an LLM figure out quaternions as a solution to gimbal lock, or the theory of relativity, without using any training information produced after those ideas were formed, if you need me to spell out examples for you.


Are you saying “reasoning” means making scientific breakthroughs requiring genius level human intelligence? Something that 99.9999% of humans are not smart enough to do, right?


I didn’t say most humans “would” do it. I said humans “could” do it, whereas our current AI paradigms like LLMs do not have the capability to perform at that level by definition of their structure.

If you want to continue this conversation I’m willing to do so but you will need to lay out an actual argument for me as to how AI models are actually capable of reasoning or quit it with the faux outrage.

I laid out some reasoning and explicit examples for you regarding my position; it's time for you to do the same.


I personally cannot “figure out quaternions as a solution to gimbal locking or the theory of relativity”. I’m just not as smart as Einstein. Does it mean I’m not capable of reasoning? Because it seems that’s what you are implying. If you truly believe that then I’m not sure how I could argue anything - after all, that would require reasoning ability.

Does having this conversation require reasoning abilities? If no, then what are we doing? If yes, then LLMs can reason too.


Cool, you've established a floor with yourself as a baseline. You still haven't explained how LLMs are capable of reaching this level of logic.

I'm also fully willing to argue that you, personally, are less competent than an LLM, if this is the level of logic you are bringing to the conversation and want to use as proof that humans and LLMs are equivalent at reasoning. But that doesn't mean I don't think humans are capable of more.


Depends on the task. Anything involving physical interaction, social interaction, movement, navigation, or adaptability is going to go to the kid.

“Go grab the dish cloth, it’s somewhere in the sink, if it’s yucky then throw it out and get a new one.”


It's more about efficiency in number of trials.

Would you pick the ML model if you could only do a hundred throws per hour?


All we can say for sure at the moment is that humans have better encoded priors.


Stop missing and they will respond to the ball a lot sooner.


Or even more impressively, how you can pick up a random object and throw it with some accuracy.

Catching a ball is easy by comparison; also, my dog is better than I am at that game.

But throwing a random object not only requires an estimation of the trajectory, but also estimating the mass and aerodynamic properties in advance, to properly adjust the force of the throw as well as the release point, with high accuracy. Doing it with baseballs is "easy", as the parameters are all well known and pitchers spend considerable time training. But picking up an oddly shaped rock or stick you have never seen before and throwing it not completely off target a second later: now we are talking.
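The calibration problem shows up even in the simplest drag-free model (a hypothetical Python sketch; real throws add drag, spin, and release timing on top of this):

```python
import math

def throw_range(speed: float, angle_deg: float, g: float = 9.8) -> float:
    """Drag-free projectile range: R = v^2 * sin(2*theta) / g."""
    return speed ** 2 * math.sin(math.radians(2 * angle_deg)) / g

# Same throwing speed, different release angle: noticeably different landing spot.
print(round(throw_range(10, 45), 2))  # 10.2 m at the optimal 45 degrees
print(round(throw_range(10, 30), 2))  # 8.84 m with the same effort
```

Misjudging an unfamiliar object's mass also changes the achievable speed, so both parameters have to be estimated before release.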


Playing Pool is a great example of this because you can math out the angles of a shot relatively easily, but the best pool players do it all intuitively. Some of the greatest don't bother with "advanced" pool tactics. They have spent so much time watching the cue ball strike other balls that they have a tacit understanding of what needs to happen. Part of practicing well is just watching balls hit each other so your brain starts to intuit what those collisions result in.

What is really fascinating for me is that my subconscious will lose interest in pool before my conscious does, and once that happens I struggle to aim correctly. It feels like the part of my brain that is doing the math behind the scenes gets bored and no matter how hard I try to consciously focus I start missing.
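The "math out the angles" part can be sketched with the standard ghost-ball aiming method (Python; positions and the pocket here are arbitrary, and this ignores throw and spin):

```python
import math

R = 0.028575  # standard pool ball radius in metres (57.15 mm diameter)

def ghost_ball(object_ball, pocket):
    """Point the cue ball's centre must reach: one ball diameter behind the
    object ball, along the line from the pocket through the object ball."""
    ox, oy = object_ball
    px, py = pocket
    dx, dy = ox - px, oy - py          # direction from pocket back through the ball
    d = math.hypot(dx, dy)
    return (ox + 2 * R * dx / d, oy + 2 * R * dy / d)

gx, gy = ghost_ball(object_ball=(1.0, 1.0), pocket=(2.0, 2.0))
print(round(gx, 4), round(gy, 4))  # slightly short of (1, 1), on the far side from the pocket
```

Aim the cue ball's centre at that point and the object ball heads pocket-ward; good players converge on the same answer by feel rather than geometry.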


Not to mention, you even compute a probability map: I'm not going to hit the center perfectly, but I can estimate the circle I'll hit with 90% probability, given a distance and an object. And you know how much closer you need to walk to shrink that circle.

Which comes in very critically when chucking away trash overhand in public and you never want to embarrass yourself.


I recall a study which suggested that we don't really calculate the trajectory as such, but use some kind of simple visual heuristic to continually align ourselves with where the ball is going to land.

They showed that people running to catch a ball would follow an inefficient curved path as a result of this, rather than actually calculating where the ball will land and moving there in a straight line to intercept it.
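One concrete candidate for such a heuristic is "optical acceleration cancellation": for a drag-free fly ball, an observer standing exactly at the landing point sees the tangent of the ball's elevation angle rise at a constant rate, so a fielder can simply run so that rise rate stays constant, with no trajectory computation. A quick check of that geometric fact (Python, with made-up launch parameters):

```python
# For a projectile with launch velocity (vx, vy), an observer at the landing
# point sees tan(elevation) = (g / (2*vx)) * t, i.e. linear in time.
g, vx, vy = 9.8, 12.0, 20.0
t_land = 2 * vy / g                    # time of flight
x_land = vx * t_land                   # landing distance

def tan_elevation(t, observer_x=x_land):
    x = vx * t                         # ball's horizontal position
    h = vy * t - 0.5 * g * t * t       # ball's height
    return h / (observer_x - x)        # tan(angle) as seen by the observer

rates = [tan_elevation(t + 0.1) - tan_elevation(t) for t in (0.5, 1.5, 2.5)]
print([round(r, 6) for r in rates])    # all equal: constant rise rate
```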



Bender: Now Wireless Joe Jackson, there was a blern-hitting machine!

Leela: Exactly! He was a machine designed to hit blerns!


You can do this while you're staring up the whole time. Your brain can predict where the ball will end up even though it's on a curved trajectory and place your hand in the right spot to catch it without guidance from your eyes in the final phase of travel. I have very little experience playing any kind of sport that involves a ball and can reliably do this.


Which, funnily enough, is why I hate Rocket League.

All those years of baseball as a kid gave me a deep intuition for where the ball would go, and that game doesn't use real gravity (the ball is too floaty).


Ok, I’ll grant you the physics are what they are. But a football is not a baseball, so why in any world would you expect your memory of baseball to even remotely translate to the physics of a football, even if they were realistic?


Remotely? Because both the European-spec football and the baseball, despite one being heavier than the other, will hit the ground at the same time when dropped from the same height.

Like you said, physics are what they are, so you know intuitively where you need to go to catch a ball going that high and that fast, and Rocket League is doing it wrong. Err, I mean, not working in Earth gravity.


> Because both the European-spec football and the baseball, despite one being heavier than the other, will hit the ground at the same time when dropped from the same height

That's true in a vacuum, but in real-world conditions air drag would be greater for the football, since it's obviously larger and less dense, so it'll reach the ground later.


Sure, but they're still on the same planet, where gravity is 9.8 m/s^2, so accounting for all that isn't as big a difference as Rocket League, which takes place on a digital planet where gravity is 6.5 m/s^2.
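The gap is easy to quantify with the drop-time formula t = sqrt(2h/g) (Python; the 6.5 m/s^2 figure is taken from the comment above, not independently verified):

```python
import math

# Fall time from rest: t = sqrt(2h / g).
def fall_time(height_m: float, g: float) -> float:
    return math.sqrt(2 * height_m / g)

h = 10.0                                  # drop height in metres
earth = fall_time(h, 9.8)                 # Earth gravity
rocket_league = fall_time(h, 6.5)         # reported Rocket League gravity
print(round(earth, 2), round(rocket_league, 2))  # 1.43 1.75: the ball hangs ~20% longer
```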


Sometimes a football isn't a spherical cow.


It does behave kind of like an inflatable beach ball, in my non-expert opinion.


Well, think about how a bug, with its shitty brain, flies and avoids all types of obstacles amazingly fast.

This kind of thing makes me think LLMs are quite far from AGI.


Bug flying is not general intelligence.


Besides that, bug flight seems an amazing task to me in terms of processing, especially if you compare the power used to something like a car's autopilot. Flying is part of bug survival, which in my opinion is closer to general intelligence than memorizing tokens.


Comparing "bug flying"/"bug survival" to "memorizing tokens" is disingenuous. They're not in the same category of task at all. You're essentially comparing the output of one system to the input of another system.


Sorry, spitting tokens


TimescaleDB's pgvectorscale extension does pre-filtering, thankfully. Shame I can't get it on RDS, though.


You can request it for RDS


Pretty sure most large successful apps have their own UI and UI design teams. Can't remember the last time I saw anything Cupertino in an app. Even Apple's own 'Home' app only loosely uses Cupertino. I'd say the most noticeable effect is the bottom modal sheet slide-up effect: on iOS, the original screen animates into the background a little bit. Apps that don't implement this can be spotted, but that's not unique to Flutter at all, and Flutter even offers a pretty good Cupertino scaffold package that does this animation.


"cupertino" is the name of a Flutter widget set, not of the native platform controls or L&F.

Few Flutter apps are going to use cupertino because the whole goal of using Flutter is to create cross platform codebases to save development effort. To use an alternative widget set per platform is a huge amount of additional work, and having a cupertino app running on Android is even more of a sore thumb than a material app on iOS.


> and having a cupertino app running on Android is even more of a sore thumb than a material app on iOS.

That is a bizarre fact.


The two things that stick out the most to me are navigation behavior and text fields. Cross platform frameworks seldom get either right, with react native being particularly bad on the navigation front.

There are variances between Apple’s apps but they’re all using some combination of UIKit and SwiftUI regardless which limits how “wrong” they can be.


You’re completely right, but that doesn’t change the fact that I think it’s a bad idea. For a company that used to care so much about user experience, Apple has been throwing a lot of it out the window in recent years.

