Hacker News — NervousRing's comments

There are a lot of things in the article that I find a bit wishy-washy.

> It’s a signpost for a broader trend—one that treats friction as failure, learning as delivery, and formation as a plug-in.

I think building something provides a lot more friction (and learning opportunities) than reading books. We had computer classes for at least 7 years in school, and beyond some loops and recursion, I didn't really feel I understood computers. One month of trying to build an app accomplished what those years couldn't. Similarly, hundreds of hours of YC (and other) video content paled in comparison to trying to salvage a startup that was going bankrupt.

> And what’s the real tragedy of this model? It’s not that it fails—it’s that it succeeds. Brilliantly. But at the wrong task: a perfect system solving for performance, not presence.

I don't know why, but this line feels like it was written by ChatGPT. Maybe because it has that tone of trying to say something deep in a pat, formulaic way that is ChatGPT's signature style.

> Training children to outperform machines may win the game, but it misses the point: Machines don’t need meaning. We do.

And I haven't found more aimless people than those coming out of current high schools. They go to the universities their peers go to, get the jobs their peers get, and try to fill their weekends with entertainment. No judgement, but I don't think AI will reduce any of that.


Everyone should just invent new products and new fields of employment, then? If it's so easy, what have you invented? What new, innovative job do you do that nobody else does?


I've heard of q-plasma and q-total. What is q-science?


It’s the ratio of fusion energy released to heating energy crossing the vacuum vessel boundary.


Hey! Thanks a lot for the detailed review.

So for now, this seems to be a better introductory text?


I'd say this is better for those closer to the engineering side of the science<->engineering continuum, for sure. For AI researchers proper, I happen to strongly believe in symbolic approaches, so I'd say it's more of a tie/subjective/use-both situation.

AIMA certainly has an absolute mountain of existing material, and it's never a bad idea to be working off the same baseline as a large majority of your interlocutors!


I don't think that qualifies as "full" self driving.


Aren't we talking about Waymo? It's not perfect everywhere but it's definitely already better than most people driving around SF. That's pretty full to me.

I guess they haven't been allowed on freeways yet?


No one has demonstrated stable plasma operation for any length of time, and they are claiming to reach not just Q-plasma > 1 but Q-total > 1 by 2030? This is more optimistic than Full Self Driving by 2016.


1. We don't have a perfect understanding of plasma dynamics and how plasmas react to different conditions. Predicting plasma instabilities before they wreck your reactor remains a big challenge for our computational capabilities.

2. Yeah, materials science is also a big one. With the magnetic forces typical in a modern fusion reactor, your materials undergo a lot of mechanical stress. The "first wall" that has to bear the brunt of the nuclear reactions becomes radioactive. Some plasma ions invariably go off trajectory, and we have a "divertor" to catch them before they damage the reactor, but that reduces the temperature.

3. Our reactors aren't efficient enough. When people talk about the "q" value, they usually mean the ratio of fusion power out to the heating power put into the plasma. That's q-plasma, which is a misleading metric. The true breakthrough will be sustained q-total: the ratio of the total energy you get out to the total energy you put in. Nobody in the industry likes to talk about it, because we are decades away from reaching it.

4. Modern designs are becoming extremely expensive. The most serious design right now is being funded not by a single country but by a coalition of the biggest economies on the planet.

5. Someone help me out here, I've run out of points
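The q-plasma vs. q-total distinction in point 3 is easy to see with a toy calculation. All the numbers below are invented purely for illustration, not real reactor data:

```python
# Sketch of the difference between q-plasma and q-total.
# All figures are hypothetical, chosen only to show how a device
# can have q-plasma >> 1 while q-total stays well below 1.

def q_plasma(fusion_power_mw, heating_power_mw):
    """Fusion power released / heating power absorbed by the plasma."""
    return fusion_power_mw / heating_power_mw

def q_total(fusion_power_mw, wall_plug_power_mw, conversion_eff):
    """Usable electric power out / total electric power drawn by the plant."""
    return (fusion_power_mw * conversion_eff) / wall_plug_power_mw

# Hypothetical device: 500 MW of fusion power from 50 MW of absorbed
# heating, but heating systems, magnets, and cryogenics together draw
# 700 MW from the grid, and heat-to-electricity conversion is ~40%.
print(q_plasma(500, 50))        # q-plasma = 10, sounds like a big win
print(q_total(500, 700, 0.4))   # q-total < 1, still a net energy loss
```

The headline "q > 1" numbers in press releases are almost always q-plasma; whether the whole plant breaks even depends on everything outside the vacuum vessel.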


6. Volumetric power density sucks compared to fission reactors. This feeds into #4, since for a given power output the reactor is an order of magnitude larger, or even worse. Designs with some hope of evading this showstopper may be possible, but they are thin on the ground, and the tokamak doesn't appear to be one of them.


5. The fuel is notoriously difficult to contain. There will be leaks, and even small ones can spoil the reaction and tip your device into unpredictability. Also, the fuel has a tendency to infiltrate metals and embrittle them.


5. Working big pieces of steel and making large process plants is not cheap or easy.


I think it understands the context better and was possibly fine-tuned better. I have been using GPT since version 3, and while the replies have obviously gotten more accurate, it still makes weird assumptions at times, whereas Claude seems to "get it" more often. In tasks other than coding, I've found GPT to be more detailed by default, and yet Claude seems to hit the mark better.

