Hacker News

What a joke. Carmack is going to sit at home and solve what teams of scientists can't do in decades.

I'm complaining less about Carmack wanting to spend his time doing this and more about the comments here acting like he is some 10000x research scientist.




There are few AI researchers (maybe around 5 or so) that could credibly claim technical accomplishments of any sort in the same ballpark as Carmack's.

People with this level of track record should not be underestimated, there aren't many of them out there... They matter.


Carmack has had a lot of commercial success, which is great, and I value his creativity in the space. But working on a research question that is about ten paradigm shifts away is a different task from putting in hard labour to build games.


And maybe the output is he helps bring about one of those ten paradigm shifts, which would be a wonderful success. He didn’t say “solve” it, he said work on it.


Carmack is an accomplished and inventive engineer. He is not and would never claim to be the most important graphics researcher of his generation.

How can you rank him against AI researchers, a field where he has not attempted to contribute?


Not to mention all the mathematicians who contributed to the theoretical foundations of machine learning.

This announcement is meaningless IMO. Those who will make meaningful, core contributions to AI tech are doing it with pen and paper, not computers.


That first line is pure nonsense.


No one expects him to emerge with a fully formed AGI. But through experimenting he might contribute some new, incremental but still useful improvements.


This feels somewhat obligatory...

http://ars.userfriendly.org/cartoons/?id=20001018


I wouldn't be that... mean. But if it's anything like his aerospace pursuits, yeah, I wouldn't bet on any breakthroughs.


To be fair, aerospace was never really a full-time thing and was budget-restricted. Given full-time effort and Bezos money, who knows.


I recently "retired" to do the same, and logic here is - there is no harm in trying, if you have resources (of course, he has magnitudes more).

You can get up to date in the field in under half a year of extensive reading. And many of those scientists are too busy pursuing the more specific goals that their labs set. I doubt there are more than 1,000 researchers in the world specifically working on AGI.


“_Specifically_ working on Artificial _General_ Intelligence” is a bit like “As a physics major I’ve decided to specialise in physics.”


Well.... There's the possibility that there's a certain degree of myopia in the AI field as a whole. As in, we know that there are some pretty gaping holes in our models and understanding, and most of the effort is spent on refining approaches that we have already validated.

Maybe a better analogy would be "specialize in physical theories that work in all environments, rather than ones that have to be adapted separately for the ocean, space, the atmosphere, the forest," etc.


There's certainly a lot to be done, and we'll likely need new approaches. But is it really productive to be tackling a general problem when we don't even know how to solve specific sub-problems? Especially if solving said sub-problems would bring a lot of value in its own right.


It’s basic research. No one knows anything about which approaches will work. If a genius millionaire technologist wants to dedicate all their time and effort to any novel approach, I’d strongly endorse it. (Not that it matters; they’ll do it anyway).

I feel likewise for any research effort; it’s not like this will put all cancer research on hold, or more immediately practical AI research. It’s just a few hundred people globally :) And it’s such promising technology.


Not sure where your analogy is applicable. There are lots of people doing image recognition, voice recognition, and NLP. None of it on its own relates that much to reinforcement learning and multitask solving. In fact, in the last year I saw only a few papers trying to do nearly all of the above with a single NN.


And is that single NN better at any of these tasks when compared to specialized approaches? Don't get me wrong, I agree that AGI is the end goal, I am just not convinced that trying to solve the general problem (before simpler problems are solved) is the most productive way forward.

To go back to my analogy, physicists don't have a unified theory yet, but they have a good understanding of, say, quantum mechanics or planetary motion. Solving these sub-problems got them closer to the end goal, and brought a lot of value along the way. Why should we tackle AGI any differently?


I would have to find the paper, but generally yes. If I remember correctly, one paper presented a network that was able to recognize giraffes in images without ever seeing a giraffe during training.

It was trained on embedding sentences and images into the same latent space.
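The idea behind that kind of zero-shot recognition can be sketched in a few lines. This is a toy illustration, not the paper's actual method: the hand-written vectors stand in for the outputs of trained image and text encoders that map into the same latent space, and classification is just nearest-caption by cosine similarity.

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical caption embeddings; in a real system these come from
# a text encoder trained jointly with the image encoder.
text_embeddings = {
    "a photo of a giraffe": [0.9, 0.1, 0.2],
    "a photo of a zebra":   [0.1, 0.9, 0.3],
    "a photo of a lion":    [0.2, 0.3, 0.9],
}

def classify(image_embedding):
    # Zero-shot classification: pick the caption whose embedding
    # lies closest to the image embedding in the shared space.
    return max(text_embeddings,
               key=lambda t: cosine(image_embedding, text_embeddings[t]))

# An image whose embedding lands near the giraffe caption is labeled
# "giraffe" even if no giraffe image appeared during training.
print(classify([0.85, 0.15, 0.25]))
```

The point is that the class labels live in the text encoder's vocabulary, not in a fixed output layer, so novel categories can be named at inference time.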


I doubt he will sit at home working on it alone for long.




