
I think the best part was the little video-game video of Stevens checking different datasets by walking around. Love it.


These companies are not trying to be companies that sell an LLM to summarize text or write emails. They're trying to build a full Artificial General Intelligence. The LLMs pull in some money today, but they're just a step toward what these companies are actually trying to build. If they can build such a thing (which may or may not be possible, or may not happen soon), then they can immediately use it to make itself better. At that point they don't need nearly as many people working for them, and they can begin building products, making money, or making scientific discoveries in any field they choose. In that case they are, in essence, the last company to ever exist, building the last product we'll ever need (or the first instance of the last product we'll ever produce). And that's why investors think they're worth so much money.

Some people don't believe this because it seems crazy.

Anyways, yes, they're trying to make their own chips so they're not beholden to Nvidia, and they're investing in other chip startups. And at the same time, Nvidia figures that if it can make an AI, why should it ever even sell its chips, so it's working on that too.


> they're in essence, the last company to ever exist, and are building the last product we'll ever need

Physical reality is the ultimate rate-limiter. You can train on all of humanity's past experiences, but you can't parallelize new discoveries the same way.

Think about why we still run physical experiments in science. Even with our most advanced simulation capabilities, we need to actually build the fusion reactor, test the drug molecule, or observe the distant galaxy. Each of these requires stepping into genuinely unknown territory where your training data ends.

The bottleneck isn't computational - it's experimental. No matter how powerful your AGI becomes, it still has to interact with reality sequentially. You can't parallelize reality itself. NASA can run millions of simulations of Mars missions, but ultimately needs to actually land rovers on Mars to make real discoveries.

This is why the "last company" thesis breaks down. Knowledge of the past can be centralized, but exploration of the future is inherently distributed and social. Even if you built the most powerful AGI system imaginable, it would still benefit from having millions of sensors, experiments, and interaction points running in parallel across the world.

It's the difference between having a really good map vs. actually exploring new territory. The map can be centralized and copied infinitely. But new exploration is bounded by physics and time.


To conquer the physical world, the idea of AGI must merge with the idea of a self-replicating machine.

The magnum opus of this notion is the Von Neumann probe.

With the entire galaxy, and eventually the universe, in which to run these experiments, the map will become as close to the territory as it can get.


Fully agree, self-replication is key. But we can't automate GPU production yet.

Current GPU manufacturing is probably one of the most complex endeavors humanity has ever undertaken. You need incredibly precise photolithography, ultra-pure materials, clean rooms, specialized equipment that itself requires other specialized equipment to make... It's a massive tree of interdependent technologies and processes.

This supply chain can only exist if it is economically viable, so it needs large demand to pay for the cost of development. Plus you need the accumulated knowledge and skills of millions of educated workers - engineers, scientists, technicians, operators - who themselves require schools, universities, research institutions. And those people need functioning societies with healthcare, food production, infrastructure...

Getting an AI to replicate autonomously would be like asking it to bootstrap the modern economy from scratch.


I think that we're going to approach it from the top and bottom.

The moment we have humanoid robots that can do maintenance on themselves, as well as operate their own assembly lines (and assembly lines in general), will be a massive shift.

I think the baseline for that will be a humanoid robot with the price tag of a luxury car that can load/unload the dishwasher, load/unload the washing machine and dryer, and fold and put away clothes. That will be total boomer-bait for people who want to 'age in place' and for long-term care homes in general.

Once we have that, we can focus on self-replication at the micro scale. There is tremendous prior art in the form of ribosomes and cells in general. A single cell billions of years ago was able to completely reshape the face of the Earth and give rise to every organism that has come and gone on it. From fungi to great whales, giraffes, jellyfish, flying squirrels, and sequoia trees, the incredible variety of proteins in a myriad of configurations that life has produced is remarkable.

If we can harness that sort of self-replication to power our economy, it will make bootstrapping the economy on this world and others much easier.


It seems that anyone who has ever played games like Factorio or Satisfactory can readily extrapolate similar real-world conclusions. Physical inefficiencies are merely an interface issue that erodes over time with intelligent modularizations and staging of form factors at various scales.


This might come as a surprise to some people, but the real world is infinitely more complex than a sim game.


> They're trying to make a full Artificial General Intelligence.

> then they can immediately use it to make itself better.

"AGI" is a notoriously ill-defined term. While a lot of people use the "immediately make itself better" framing, many expert definitions of AGI don't assume it will be able to iteratively self-improve at exponentially increasing speed. After all, even the "smartest" humans ever (on whatever dimensions you want to assess) haven't been able to sustain self-improving at even linear rates.

I agree with you that AGI may not even be possible or may not be possible for several decades. However, I think it's worth highlighting there are many scenarios where AI could become dramatically more capable than it currently is, including substantially exceeding the abilities of groups of top expert humans on literally hundreds of dimensions and across broad domains - yet still remain light years short of iteratively self-improving at exponential rates.

Yet I hear a lot of people discussing the first scenario and the second scenario as if they're neighbors on a linear difficulty scale (I'm not saying you necessarily believe that. I think you were just stating the common 'foom' scenario without necessarily endorsing it). Personally, I think the difficulty scaling between them may be akin to the difference between inter-planetary and inter-stellar travel. There's a strong chance that last huge leap may remain sci-fi.


>If they can build such a thing (which may or may not be possible, or may not happen soon), then they can immediately use it to make itself better.

This sounds like a perpetual motion machine or what we heard over and over in the 3d printing fad.

We have natural general intelligence in 8 billion people on Earth, and it hasn't solved all of these problems in this sort of instant way. I don't see how a synthetic one without rights, arms, legs, eyes, the ability to move around, start companies, etc. changes that.


LLMs are a very good tool for a particular class of problems. They can sift through endless amounts of data and follow reasonably ambiguous instructions to extract the relevant parts without getting bored. So, if you use them well, you can dramatically cut down the routine part of your work and focus on the more creative part.

So if you had that great idea that takes a full day to prototype, hence you never bothered, an LLM can whip up something reasonably usable in under an hour. It will make idea-driven people more productive. The problem is, you don't become a high-level thinker without doing some monkey work first, and if we delegate it all to LLMs, where will the next generation of big thinkers come from?
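
For what it's worth, here's a minimal sketch of that kind of sifting, assuming the official OpenAI Python client; the file name, prompt, and model name are purely illustrative:

  # Sketch: point an LLM at a pile of unstructured text and ask it to pull
  # out the relevant parts. Assumes OPENAI_API_KEY is set in the environment.
  from openai import OpenAI

  client = OpenAI()

  with open("support_tickets.txt") as f:  # hypothetical input file
      raw_text = f.read()

  response = client.chat.completions.create(
      model="gpt-4o-mini",  # illustrative model name
      messages=[
          {"role": "system",
           "content": "List every sentence that mentions a billing problem."},
          {"role": "user", "content": raw_text},
      ],
  )

  print(response.choices[0].message.content)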


AGI is only coming with huge amounts of good data.

Unfortunately for AI in general, LLMs are forcing data behind moats, whether passively or through aggressive legal attacks, or are generating so much crud data that the good data will get drowned out.

In fact, I'm not sure why I continue to uh, contribute, my OBVIOUSLY BRILLIANT commentary on this site knowing it is fodder for AI training.

The internet has always been bad news for the "subject expert" and I think AI will start forcing people to create secret data or libraries.


Current LLMs need huge amounts of data but before we get AGI we'll probably get better algorithms that are less limited by that.


> This sounds like a perpetual motion machine or what we heard over and over in the 3d printing fad.

Except that this is actually what humanity and these 8 billion people are doing: making each successive generation "better", for some definition of better that is constantly in flux based on what is believed at the time.

It's not guaranteed though, it's possible to regress. Also, it's not humanity as a whole, but a bunch of subgroups that have slightly differing ideas of what better means at the edges, but that also share results for future candidate changes (whether explicitly through the international scientific community or implicitly through memes and propaganda at a national or group level).

It took a long time to hit on strategies that worked well, but we've found a bunch over time, from centralized government (we used to be small tribes on plains and in caves) to the scientific method to capitalism (whether or not we'll consider it the best choice in the future, it's been invaluable for the last several centuries). They've all moved us forward, which is simple to see if you sample every 100 years or so going into the past.

The difference between what we've actually got in reality with the human race and what's being promised with AGI is speed of iteration. If a real AGI can indeed emulate what we currently have with the advancement of the human race, but at a faster cycle, then it makes sense that it would surpass us at some point, whether very quickly or eventually. That's a big if though, so who knows.


I pretty much agree with this article - it seems like LLM companies are just riding the hype, and the idea that LLMs will lead on to General AI feels like quite a stretch. They're simply too imprecise and unreliable for most technical tasks. There's just no way to clearly specify your requirements, so you can never guarantee you'll get what you actually need. Plus, their behaviour is constantly changing, which only makes them even more unreliable.

This is why our team developing The Ac28R has taken a completely new approach. It's a new kind of AI which can write complex, accurate code, handling everything from databases to complex financial models. The AI is based on visual specifications which allow you to specify exactly what you want; The Ac28R's analytical engine builds all the code you need - no guesswork involved.


Please keep your ads to Twitter or LinkedIn or whatever


I wrote about the prospect of financial returns from AGI here if anyone's interested - https://sergey.substack.com/p/will-all-this-ai-investment-pa...


No way, the real Yu-gi-oh is played with hologram projectors and the cards have personalities. Plus the rules aren't ever quite explained.


I've heard that switching to all of that will add 20-40 seconds of dead time waiting for the display to change, as the NFC transfers power to run the whole procedure. That'd be too long an interaction time with no feedback.

And of course the cost goes way up.


>> It just seems that from every angle I look at this thing, I see clear problems with bringing it to market as a viable product.

Yeah, the goal is to make a cool thing, and then make a fun game.


So, the goal was to build a devkit so that I could use it to develop my own game. I wanted the physical things to exist, so that I could try it out with people and find out which interactions are fun, vs which aren't.

Well, the internet liked the idea and saw some of the same promise in it that I did (plus HN is a sucker for e-ink). With all the interest, and people asking how they could get their hands on one, I ran a crowdfunding campaign to make devkits for everyone who wanted one.

Turns out making 25 of something is way more work than making 2 of a thing. The supplier changed the display firmware on me, I had to make things to more exact measurements so parts were interchangeable, I had to write docs and make videos, etc etc. Took a whole year.

Now the pressure is off (I no longer owe products to the people who gave me money for them), so I can take a break to clear out my backlog of minor projects, then get on to designing my own game, using my own devkit :)

Once I have an actually fun game, I can increase volume and bring down the costs. My goal is to make the game accessible at $80. I'll need the e-ink price to come down a little, and I'll use injection molding instead of resin casting and wood. Plus, the base won't need a Raspberry Pi; that's just there for quick iteration. The final product will need to be embedded.

(Almost all the devkits sold went to friends and family, with only a few going to actual game designers. CrowdSupply itself puts in an order for more units along with the campaign, so that they can stock them after the campaign ends and the initial delivery is over, except they negotiate a different price for those because the margins work differently.)


I wouldn’t have made any until I had a viable game to demonstrate the system.

As a matter of fact, I wouldn’t have committed to hardware until I had the game sketched out. You have constrained yourself to certain choices before you know the implications of those choices.

Is color going to hinder it?

Is this really a board game where the tiles are eink devices? In which case, you’d want the connections on the edges instead of the back.

Do you need more space for text?

Do the cards need to remain in the board?

And so on. You’ve already effectively made these decisions without knowing how they’ll affect game development.

You should’ve come up with a game, then developed the hardware prototype around that game. Even if the initial hardware would have cost more with your current setup. Because, as you’ve noted, when you’re ready to go to manufacturing, you be able to take advantage of proper tooling and economies of scale.

I really think the lack of a game is the thing that’s going to be your major roadblock.


I didn't mean people who work at Uber explaining how their load-balancing works.

I meant walking up to a super cool music visualization at an outdoor art festival and the guy there happily explaining to me their entire system, built out of a node flow diagram implemented in Max but adapted to visual graphics using a plugin called Vsynth.

Or going over to a friend's house and seeing their modular synth system and they happily explain to you how it works for an hour.

Just this weekend I met an amazing engineer with a street-legal steam-powered motorcycle which he patiently explained for an hour.


That doesn't seem very bay-specific, any DIYer will happily talk about their project.


Yeah that makes sense. Got me wondering to what extent it’s possible with something like load balancing too. Maybe meetups don’t have to corner the market on that kind of info sharing


The PCB on the card is just a charge pump that the display requires in order to regulate its voltage. If I moved that off the card and onto the base, I'd need more contacts.

Those flat flex displays are awesome, though way pricier. And then I'd lose the nice stiff PCB which lets all the contacts on the back mate. I could go wireless, but then the power delivery would need to charge a capacitor, and the delay between button press and display refresh would go to 30 seconds or beyond.


The lightnote is so cool, and the design is perfection


Haha, guilty. But! The actual wires assigned to each pin correspond to the Sephirot. Keter is VCC and Malkhut is GND, etc etc :P


Awrite that makes up for that then. :)

