
He invented modern graphics as a practical problem by himself as the sole researcher. Given the tools at the time, that may have been a harder problem.



> He invented modern graphics as a practical problem by himself as the sole researcher

That is a ridiculous exaggeration. Carmack was clever enough to gain a ~1 year performance advantage over his competitors with the Doom engine by using Binary Space Partitioning, which was first applied to 3D graphics in 1969, before he was born. The Quake engine got a significant performance boost from Michael Abrash, who is a specialist in code optimization.
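
For the curious, the payoff of a BSP tree is the traversal order it gives you essentially for free: visit the far side of each splitting plane before the near side and you get painter's-order rendering with no per-pixel depth test. A toy sketch in Python (my own names and structure, nothing from id's actual source):

    # Minimal BSP back-to-front traversal sketch (hypothetical names,
    # not id Software's code). Each internal node splits space with a
    # plane; polygons lying on the plane are stored at the node.

    class BSPNode:
        def __init__(self, plane, front, back, polygons=()):
            self.plane = plane        # (normal, offset) for n . p = offset
            self.front = front        # subtree on the positive side
            self.back = back          # subtree on the negative side
            self.polygons = polygons  # polygons coplanar with the split

    def side_of(plane, point):
        normal, offset = plane
        return sum(n * p for n, p in zip(normal, point)) - offset

    def render_back_to_front(node, camera, draw):
        if node is None:
            return
        if side_of(node.plane, camera) >= 0:   # camera on the front side
            render_back_to_front(node.back, camera, draw)
            for poly in node.polygons:
                draw(poly)
            render_back_to_front(node.front, camera, draw)
        else:                                  # camera on the back side
            render_back_to_front(node.front, camera, draw)
            for poly in node.polygons:
                draw(poly)
            render_back_to_front(node.back, camera, draw)

Since the level geometry was static, the expensive tree build happened offline at map-compile time, and the per-frame cost was just this cheap walk.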


> He invented modern graphics as a practical problem by himself as the sole researcher.

No, he didn't, and that is not a claim that he would ever make himself.


Agreed, and if anyone could make that claim it'd be Eric Veach (who then went on to develop Google AdWords).


Yes, a practical problem. The math behind computer graphics (i.e. optics) had been around for hundreds of years. The trick was using numerical analysis to optimize and approximate on limited hardware.
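
The canonical example of that kind of trick is the fast inverse square root that shipped in the Quake III source (its authorship is debated, and it likely predates id). A rough Python transliteration of the idea:

    import struct

    def fast_inv_sqrt(x):
        # Reinterpret the float's bits as an integer, subtract from a
        # "magic" constant to get a cheap initial guess, then refine
        # with one Newton-Raphson step. Assumes a 32-bit float.
        i = struct.unpack('<I', struct.pack('<f', x))[0]
        i = 0x5f3759df - (i >> 1)
        y = struct.unpack('<f', struct.pack('<I', i))[0]
        y = y * (1.5 - 0.5 * x * y * y)   # Newton step toward 1/sqrt(x)
        return y

    print(fast_inv_sqrt(4.0))   # ~0.4998, vs. the exact 0.5

One bit trick plus one Newton iteration replaced a divide and a square root, which mattered enormously on 1990s hardware.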

We don't have the laws of AGI like we had the laws of optics (Asimov notwithstanding). Tons of research effort was poured into the wrong avenues in vision (hand-tuned HOG features, transforms, optical flow analysis) and ML (support vector machines, computational learning theory) until a chain of breakthroughs hit on the right mathematical approach for vision and supervised learning more generally.

We have some mathematical approaches to try with AGI (e.g. policy optimization/max-Q in reinforcement learning), but the equations are plagued with fundamental issues (e.g. reward sparsity, easily gamed artificial objectives).
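
To make reward sparsity concrete, here's a toy tabular max-Q (Q-learning) run on a hypothetical chain environment of my own construction; with a single rewarding state at the far end, random exploration almost never finds the signal the update needs:

    import random

    # Toy chain environment (my own example): N states in a line, the
    # only nonzero reward is at the far end. Actions: 0 = left, 1 = right.
    N = 50
    Q = [[0.0, 0.0] for _ in range(N)]   # Q[state][action]
    alpha, gamma = 0.1, 0.99

    for _ in range(1000):                # episodes of pure exploration
        s = 0
        for _ in range(2 * N):
            a = random.randrange(2)
            s2 = max(0, s - 1) if a == 0 else min(N - 1, s + 1)
            r = 1.0 if s2 == N - 1 else 0.0     # sparse: one rewarding state
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
            if r > 0:
                break

    # With N large, most runs leave Q almost entirely zero: there is no
    # gradient to follow until the reward is stumbled on by chance.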

Carmack optimized some very difficult equations when he worked on graphics, but in AGI we still don't have the right equations to optimize.


A dog brain can't design an artificial dog brain, and a human brain probably can't design a human-level AGI. It will likely be machine-evolved on cheap, massively parallel hardware, with the key problem being speeding up evolutionary search.
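
For concreteness, the kind of loop meant here is something like a (1+lambda) evolution strategy; the toy fitness function below is my own stand-in, and the point is that the lambda evaluations are embarrassingly parallel, which is where the cheap, massively parallel hardware would come in:

    import random

    # Minimal (1+lambda) evolutionary-search sketch (toy fitness, my
    # own example): mutate a parent genome, keep the best child if it
    # improves. Real systems would farm the evaluations out in parallel,
    # since fitness (training/evaluating a candidate) is the bottleneck.

    def fitness(genome):                 # stand-in for an expensive eval
        return -sum((g - 0.5) ** 2 for g in genome)

    parent = [random.random() for _ in range(10)]
    for _ in range(100):                 # generations
        children = [
            [g + random.gauss(0, 0.05) for g in parent]
            for _ in range(16)           # lambda = 16, trivially parallel
        ]
        best = max(children, key=fitness)
        if fitness(best) > fitness(parent):
            parent = best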


> A dog brain can't design an artificial dog brain, and a human brain probably can't design a human-level AGI.

That's ridiculous. A dog can't draw a dog, therefore a human can't draw a human?


Technically, a human can't draw a human. A human can only draw something that looks like a human to another human. The human viewer is doing most of the work by imagining that the drawn human is real (or by recalling the real human suggested by the drawing).

As an example, consider all of these ASCII stick figures:

http://www.ascii-art.de/ascii/s/stickman.txt

Clearly, it isn't necessary to deeply comprehend the essential nature of humanity in order to draw one.

Knowing how a mind works well enough to build one is an immense task. A better analogy would be building a human body from scratch, which is also something humans can't do.


The philosophical musings about what it means to "draw a human" are pedantic and really not relevant. My point is that "dog cannot do dog-related task" does not imply or even suggest that "human cannot do human-related task". It's pseudo-logic of the sort that is unfortunately often very convincing to people.


They are relevant because you used "draw a human" to prove something wrong. If mine was irrelevant, so was yours.

The reason "dog brain can't design an artificial dog brain" is a useful contribution is that it gives people an intuitive understanding of a complex truth: things can't fully model themselves. Dogs can't fully understand dogs. People can't fully understand people.

It's plausible to me that humans can evolve something akin to AGI. It's also plausible to me that a vast number of humans working together will manage to stumble into creating AGI. But I see no reason to think that humans have the intellectual capacity to understand a human-level mind well enough to build one intentionally.


It is sorta interesting to wonder how far we can bootstrap ourselves. We went from trees to moon landings and neural networks; maybe we and our tools can rise to the top of the Kardashev scale (barring any cataclysms).

If so, it implies there's a sort of intelligence-completeness, where a species can accomplish any physically possible objective.

Or maybe we will peter out before achieving practical fusion, quantum gravity or AGI. Either way, "X can't create X" is a silly argument. Humans create humans; there are whole websites dedicated to that.


We've never hand-designed anything as complex and novel as what biological evolution discovers. We can't even ship error-free word processors. The comparatively bug-free Space Shuttle code does very little relative to what an AGI would have to do.

"Humans create humans" via what bio evolution found.



