John Carmack: I’m going to work on artificial general intelligence (facebook.com)
1574 points by jbredeche 22 days ago | 887 comments

This is encouraging. If you're going to work on artificial general intelligence, a reasonable context in which to work on it is game NPCs. They have to operate in a world, interact with others, survive, and accomplish goals. Simulator technology is now good enough that you can do quite realistic worlds. Imagine The Sims, with a lot more internal smarts and real physics, as a base for work.

Robotics has the same issues, but you spend all your time fussing with the mechanical machinery. Carmack is a game developer; he can easily connect whatever he's doing to some kind of game engine.

(Back in the 1990s, I was headed in that direction, got stuck because physics engines were no good, made some progress on physics engines, and sold off that technology. Never got back to the AI part. I'd been headed in a direction we now think is a dead end, anyway. I was trying to use adaptive model-based control as a form of machine learning. You observe a black box's inputs and outputs and try to predict the black box. The internal model has delays, multipliers, integrators, and such. All of these have tuning parameters. You try to guess at the internal model, tune it, see what it gets wrong, try some permutations of the model, keep the winners, dump the losers, repeat. It turns out that the road to machine learning is a huge number of dumb nodes, not a small number of complicated ones. Oh well.)
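That tune-and-permute loop can be sketched as random hill-climbing over a small parameterized model. Everything below (the plant, the model family, the step size) is an illustrative assumption, not the actual Animats system:

```python
import random

def black_box(x):
    # The unknown "plant" the learner only sees through inputs/outputs.
    return 3.0 * x + 1.0

def model(params, x):
    # Candidate internal model: a gain and a bias, standing in for the
    # delays/multipliers/integrators with tuning parameters.
    gain, bias = params
    return gain * x + bias

def error(params, samples):
    # How badly the current model predicts the black box.
    return sum((model(params, x) - y) ** 2 for x, y in samples)

def identify(samples, steps=2000, seed=0):
    rng = random.Random(seed)
    best = [0.0, 0.0]
    best_err = error(best, samples)
    for _ in range(steps):
        # Try a permutation of the model; keep the winner, dump the loser.
        cand = [p + rng.uniform(-0.1, 0.1) for p in best]
        cand_err = error(cand, samples)
        if cand_err < best_err:
            best, best_err = cand, cand_err
    return best, best_err

samples = [(x, black_box(x)) for x in range(-5, 6)]
params, err = identify(samples)
```

With enough trials the guessed gain and bias drift toward the plant's true values, which is the whole approach in miniature; the "huge number of dumb nodes" insight is that gradient descent over millions of such parameters scales where this per-structure search does not.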

Hi John! Animats was very cool! As you know game physics still kinda sucks for this work. Unity/Bullet/MuJoCo are the best we have and even they have limited body collision counts. Luckily we've now got some GPU physics acceleration, but IMO it's not enough.

What we really need is a scalable, distributed, physics pipeline so we can scale sims to 1000x realtime with billions of colliding objects. My guess is that Google/Stadia or Unity/UnityML are better places to do that work than Facebook, but if Carmack decides to learn physics engines* and make a dent I'm sure he will.

Until our environments are rich and diverse our agents will remain limited.

*Learn more, that is; I'm sure his knowledge already exceeds most people's.

> What we really need is a scalable, distributed, physics pipeline so we can scale sims to 1000x realtime with billions of colliding objects.

Improbable tried to do that with Spatial OS. They spent $500 million on it.[1] Read the linked article. No big game company uses it, because they cut a deal with Google so their system has to run on Google's servers. It costs too much there, and Google can turn off your air supply any time they want to, so there's a huge business risk.

[1] https://improbable.io/blog/the-future-of-the-game-engine

Agree, as a game engine this might power some high-end Stadia games with crazy physics, but the real value is in high-complexity environments for virtual agents.

Companies like SideFX are also doing really interesting work in distributed simulations (e.g. Houdini).

I did my MSc thesis in AI back then, writing a dedicated simulator for a specific robot used in autonomous systems research. You find that especially when trying to faithfully reproduce sensor signals, you need to dive deep into not just the physics of e.g. infrared light, but also the specific electronic operation of the sensor itself.

But that kind of realism is not needed for all AGI research.

I also spent some years using evolutionary algorithms to evolve control networks for simple robots. The computational resources available at the time were rather limited though. It should be more promising these days, now that your commodity gaming PC can spew out in 30 minutes what back then took all the lab's networked machines running each night for a few weeks.

Modeling everything realistically is super hard - any interaction with the real world is so full of the weirdest unexpected electrical and mechanical issues. Anyone who hasn't tried it first-hand can't imagine half the ways it will almost certainly go wrong on the first try :) ... but as you've said, for developing AGI as a concept, simplified worlds should work just fine.

Indeed, I don't think humans themselves model the world realistically. I think they model the world close enough, but are able to adapt their prediction process for situations when their predictions don't work and/or their knowledge is inadequate. To model such a "satisficing" process, you don't need exact simulation.

True. Simulations are always a simplification of reality and leave a lot out of the picture.

On the flip side, successful robotics concepts might have more chance of being relevant to AGI.

> This is encouraging. If you're going to work on artificial general intelligence, a reasonable context in which to work on it is game NPCs.

I don't think so. Game NPCs don't need AI, which would be way overkill; they just need to provide the illusion of agency. I think for general AI you need a field where any other option would be suboptimal or inadequate, but in videogames general AI is the suboptimal option... it's more cost effective to just fake it!

> Game NPCs don't need AI

> ... more cost effective is to just fake it!

Many players complain in story heavy games that their choices have no consequences to the story - this is largely because building stories with meaningful branches isn't economically feasible.

A game that could make NPCs react to what the player does dynamically, while also creating a cohesive story for the player to experience, would be absolutely groundbreaking in my opinion.

This is more in the realm of AI story generation, but I haven't seen any work on this that generates stories you would ever mistake as coming from a human (please correct me if I'm wrong), so it would be amazing to see some progress here.

You're talking about different problems.

Story AI is basically a writer sitting down and writing a branching story tree, with writing the whole way. At best it's a manually coded directed acyclic graph.

Tactical AI, i.e. having the bad guy soldiers move about the battlefield and shoot back at you in a realistic manner, is 100% about faking it. It's better to despawn non-visible, badly placed enemies and spawn well-placed non-visible ones than to have some super smart AI relocate the badly placed enemies into better locations. It's better to have simple mechanisms that lead to behavior that's hard to predict than complex machinery whose behavior players instantly see through.
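A toy sketch of that despawn/respawn trick; the view-cone visibility test and the pre-authored spot list are invented for illustration:

```python
import math

def visible(player, facing, pos, fov_deg=90.0):
    # Crude visibility test: is pos inside the player's view cone?
    dx, dy = pos[0] - player[0], pos[1] - player[1]
    if dx == 0 and dy == 0:
        return True
    ang = math.degrees(math.atan2(dy, dx) - math.atan2(facing[1], facing[0]))
    ang = (ang + 180) % 360 - 180  # normalize to [-180, 180)
    return abs(ang) <= fov_deg / 2

def relocate_unseen(player, facing, enemies, good_spots):
    # "Fake it": instead of pathfinding badly placed enemies to better
    # cover, silently despawn the ones the player can't see and respawn
    # them at pre-authored good positions that are also out of sight.
    kept = [e for e in enemies if visible(player, facing, e)]
    hidden_spots = [s for s in good_spots if not visible(player, facing, s)]
    removed = len(enemies) - len(kept)
    return kept + hidden_spots[:removed]
```

The player never observes the teleport, so the result reads as smart repositioning at a fraction of the cost.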

There was an amazing presentation at GDC maybe 3 years ago that perfectly articulated this. The game was something about rockets chasing each other. I wish I could find the link.

If you can find the link, please post it. I'm interested!

> Many players complain in story heavy games that their choices have no consequences to the story - this is largely because building stories with meaningful branches isn't economically feasible.

That's not entirely true - it's just that no games studios are willing to compromise on graphics and art for something silly like the ability to impact the game world.

I'm not sure if you're being sarcastic about the "something silly" part, but do you have any examples of any games (indie, commercial or academic) that let you meaningfully impact the game world?

I think they don't exist because it's an exceptionally difficult problem, even for games with lo-fi graphics or text only. I've found it hard to find any AI projects that generate stories or plots that are remotely compelling.

Big studio game companies push "your choices matter" as a selling point as well, but few deliver.

> any examples of any games [...] that let you meaningfully impact the game world?

Dwarf Fortress

Fallout: New Vegas, a bunch of Telltale games, Myst/Riven, dozens of JRPGs (Chrono Trigger/Cross come to mind immediately) with branching endings and characters who survive or die based on player actions. Yeah, games are made all the time where the player has a meaningful impact on the game world.

Minecraft is successful precisely because you can meaningfully impact the game world.

Minecraft doesn't have an overarching story or complex NPCs though.

Agreed about the meaningful choices and dynamically generated reactions from NPC, but general AI is not needed for this in my opinion.

You also have to consider whether the complaints of "many" players matter when publishing a game. A percentage of vocal players will complain no matter what. Yes, they will complain even if you somehow implement true AI!

> Agreed about the meaningful choices and dynamically generated reactions from NPC, but general AI is not needed for this in my opinion.

Maybe, but it would be an impressive demonstration of AI, and very different from what has been shown for Go, chess and StarCraft.

I think a compelling AI-written short story, for example, would be leagues ahead of what is required to write a convincing chatbot: you need an overarching plot, subplots, multiple characters interacting in the world, tracking of characters' beliefs and knowledge, and tracking of what the reader must be thinking/feeling.

It would likely rely a lot on understanding real-world and culture knowledge though - Go and StarCraft are much cleaner in comparison.

> A percentage of vocal players will complain no matter what.

Yep, but I can't think of a single game that has a plot that meaningfully adapts to how the player plays. Either there's many endings but the path to get to each is short, or all the choices converge quickly back into the same path.

Again, please correct me if I'm wrong, but I've looked quite hard for examples of innovation in the above recently and haven't found much. You can find papers on e.g. automated story generation or game quest generation on Google Scholar from the last 10 years, but the examples I found weren't that compelling.

AI-generated "true" fiction seems like scifi to me.

Of course a hypothetical "Turing Test" of fiction-writing might be able to fool some people, and in an age where Netflix has been accused of producing content "by algorithm" this seems increasingly possible, but...

... what is "true" or "good" fiction is up for debate. In fact, it's a debate that can never be solved, because there is no right answer except what it feels to you, your friends and authors you respect.

But that said, I seriously doubt it would fool me, and I think it won't be within reach of an AI any time soon, or ever, not without creating an artificial human being from scratch. And maybe not even then, because how many real people can write compelling fiction anyway? :)

> Of course a hypothetical "Turing Test" of fiction-writing might be able to fool some people, and in an age where Netflix has been accused of producing content "by algorithm" this seems increasingly possible, but...

It feels like you should be able to procedurally generate at least something by combining common story arcs, templates, character archetypes, etc. without too much effort, but I've yet to find any compelling examples of this anywhere. When you look into the problem more, you realise it's a lot harder than it seems.

We've seen lots of examples of chatbots that are said to pass the Turing Test but really aren't that intelligent at all, so a "Turing Test of fiction writing", as you put it, sounds like a super interesting next step to me.

If his true purpose were to improve on videogames' NPCs, you're 100% right that working on "real" AGI would be overkill. But in this case, someone w/ deep background in videogaming intends to use that context as a means to the end of AGI R&D -- a possibly subtle, but crucial distinction.

If you have general AI of enough sophistication to use in a game, haven’t you just created a virtual human?

Yes, probably. At which point, the "videogame" part is irrelevant.

>more cost effective is to just fake it

I struggle to see the distinction. Isn't the Turing test defined as 'faking a human (or human intelligence) convincingly enough'?

There is a saying: the benefit of being smart is that you can pretend to be stupid. The opposite is more difficult.

"Fake it" as in cutting corners and performing sleights of hand. Instead of moving enemy soldiers strategically, just spawn them close but out of sight, because the player won't know better. This doesn't help if you truly want to devise a military AI and is only useful for games. And that's just one example.

I think the Turing Test is no longer thought of as adequate metric for general AI (if it ever was to begin with).

I don't think parent was referring to game NPCs as a reasonable application of AGI, but rather as a reasonable domain conducive to the development of AGI.

Yes, I understand this and was replying to that interpretation. I think it's not a particularly conducive domain because the incentive is just to fake it (because that's enough for games). A better domain would be one where faking it just won't cut it.

Yeah I believe in this game / simulated world NPC idea too. To get the kind of complexity we want we either need sensors in the real world or interfacing in a virtual world that humans bring complexity to (probably both -- the humans are part of the sensing technology to start). Things like AlphaZero etc. got good cuz they had a simulatable model of the world (just a chess board + next state function in their case). We need increasingly complex and interesting forms of that.

In some sense you can think of interfacing w/ the online world + trying to win attention to yourself as the kind of general game that is being played.

I've long taken the position that intelligence is mostly about getting through the next 10-30 seconds of life without screwing up. Not falling down, not running into stuff, not getting hurt, not breaking things, making some progress on the current task. Common sense. Most of animal brains, and a large fraction of the human brain, is devoted to managing that. On top of that is some kind of coarse planner giving goals to the lower level systems.

This area is under-studied. The logicians spent decades on the high level planner part. The machine learning people are mostly at the lower and middle vision level - object recognition, not "what will happen next". There's a big hole in the middle. It's embarrassing how bad robot manipulation is. Manipulation in unstructured situations barely works better than it did 50 years ago. Nobody even seems to be talking about "common sense" any more.

"Common sense" can be thought of as the ability to predict the consequences of your actions. AI is not very good at this yet, which makes it dangerous.

Back when Rod Brooks did his artificial insects, he was talking about jumping to human level AI, with something called "Cog".[1] I asked him "You built a good artificial insect. Why not go for a next step, a good artificial mouse?" He said "Because I don't want to go down in history as the man who created the world's best artificial mouse".

Cog was a flop, and Brooks goes down in history as the inventor of the mass market robot vacuum cleaner. Oh well.

[1] http://people.csail.mit.edu/brooks/papers/CMAA-group.pdf

I remember this TED talk many years ago where the speaker proposes that intelligence is maximizing the future options available to you:



The video linked by the direct parent to my comment is a prank video.

Prank video? Satire at most. Did you even watch it?

The genius of Cog was that it provided an accepted common framework for building a grounded embodied AI. Rod was the first I saw to have a literal roadmap on the wall of PhD theses laid out around a common research platform, Cog, in this branch of AI.

In a sense, the journey was the reward rather than the very unlikely short term outcome back then.

I was thinking about the manipulation issue tonight. I'd been throwing a tennis ball in the pool with my kids and I realised how instinctual my ability to catch was. A ball leaves my kid's hands and I move my hand to a position, fingers just wide enough for a ball, and catch it. All of it happens in a fraction of a second.

The human brain can model the physics of a ball in flight, accurately and quickly. As the ball touches the fingertips it makes the smallest adjustments, again in tiny fractions of a second.

I don't know if I'd call it modelling the physics of a ball in flight exactly. It kind of seems like the brain has evolved a pathway to be able to predict how ballistic projectiles - affected only by gravity and momentum - move, that it automatically applies to things.

What makes me think of it like that is hearing about how the brain is actually really bad at predicting the path of things that don't act like that. This was in the context of aiming unguided rocket launchers (I end up reading a lot of odd things). It seems the brain is really bad at predicting how a continuously accelerating projectile will travel, and you have to train yourself to ignore your intuitions and use the sighting system that compensates for how it actually travels in order to hit a target with the thing.
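The gap between the two models is easy to quantify: extrapolating ballistically (constant speed, the intuition the brain seems to apply) badly undershoots a projectile whose motor is still burning. The numbers here are arbitrary:

```python
def ballistic_x(v0, t):
    # Intuition's model: constant horizontal speed after release.
    return v0 * t

def rocket_x(v0, a, t):
    # A motor still burning: speed keeps increasing downrange.
    return v0 * t + 0.5 * a * t * t

v0, a, t = 30.0, 40.0, 2.0      # m/s, m/s^2, s (made-up values)
guess = ballistic_x(v0, t)      # where the brain expects the rocket
actual = rocket_x(v0, a, t)     # where it actually is
```

After just two seconds the ballistic guess (60 m) trails the rocket's true position (140 m) by more than the guess itself, which is why the sighting system has to do the compensating.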

You mean the brain has evolved over millennia to model the physics of the world and specialize in catching and throwing things.

Absolutely. It also requires more than the evolutionary adaptations to do it. The skill requires the catching individual to have practiced the specific motions enough times previously to become proficient to the point it becomes second nature.

Compare what happens during a practice game of catch between six year old, first time Little Leaguers vs. MLB starters.

Dogs can do this too. And quite a bit more impressively than most humans.

It’s always impressive to watch how good my dog is at anticipating the position of the ball way ahead of time.

If I decide to kick it, he reads my body language scarily well to figure out what direction it will probably go, and will adjust his position way ahead of time. If I throw it at a wall he will run to where the angle will put the ball after it bounces. If I throw it high in the air he knows where to run almost immediately (again using my body language to know where I might be trying to throw it.). He’s very hard to fool, too, and will learn quickly to not commit to a particular direction too quickly if it looks like I’m faking a throw.

I always feel like he’d make a great soccer goalie if he had a human body.

That's kind of the thesis Rodolfo Llinas puts forward in a book of his, I of the Vortex[0], although more about consciousness than intelligence. That is, consciousness is the machinery that developed in order for us to predict the next short while and control our body through it.

[0] https://mitpress.mit.edu/books/i-vortex

> On top of that is some kind of coarse planner giving goals to the lower level systems.

There are counterexamples, such as AlphaGo which is all about planning and deep thinking. It also combines learning with evolution (genetic selection).

True, but AlphaGo is specialized on a very specific task where planning and deep thinking is a basic requirement for high level play.

We don't need to think 10 "turns" ahead when trying to walk through a door, we just try to push or pull on it. And if the door is locked or if there's another person coming from the opposite side we'll handle that situation when we come across it.

That’s not true, human beings plan ahead when opening doors more than many things — should I try to open this bathroom door or will that make it awkward if it’s locked and I have to explain that to my coworker afterwards? Should I keep this door open for a while so the guy behind me gets through as well? Not to mention that people typically route plan at doorways.

Doors are basically planning triggers more than many things.

Horses don't plan though, and they are much better than computers at a lot of tasks. If we can make a computer as smart as a horse, then we can likely also make it as smart as a human by bolting some planning logic on top of that.

“Horses don’t plan though[...]”

Can you expand on this statement? While I have no way to “debug” a horse’s brain in real-time, my experiences suggest they absolutely conduct complex decision-making while engaging in activities.

Two examples which immediately come to mind where I believe I see evidence of “if this, then that” planning behavior:

1. Equestrian jumping events; horses often balk before a hurdle

2. Herds of wild horses reacting to perceived threats and then using topographic and geographic features to escape the situation.

The context was this quote:

> intelligence is mostly about getting through the next 10-30 seconds of life without screwing up

In this context horses don't plan or have much capacity for shared learning, at least not as far as I know.

Quote: “This study indicates that horses do not learn from seeing another horse performing a particular spatial task, which is in line with most other findings from social learning experiments,”


> intelligence is mostly about getting through the next 10-30 seconds of life without screwing up

This is probably a variant of Andrew Ng's claim that ML can solve anything a human could solve in one second, with enough training data.

But intelligence actually has a different role. It's not for those repeating situations that we could solve by mere reflex. It's for those rare situations where we have no cached response, where we need to think logically. Reflex is model-free reinforcement learning, and thinking is model-based RL. Both are necessary tools for making decisions, but they are optimised for different situations.
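A minimal sketch of that split: a cached reflex table for seen-before situations, falling back to one-step lookahead through a world model for novel ones. The states, actions, and rewards below are all invented for illustration:

```python
# Model-free "reflex": a cached state -> action table.
# Fast, but only covers situations encountered before.
reflex = {"door_ahead": "push"}

# Model-based "thinking": simulate each action with a world model and
# pick the one whose predicted outcome scores best. Slower, but it
# handles situations with no cached response.
world_model = {
    ("locked_door", "push"): "still_outside",
    ("locked_door", "find_key"): "inside",
}
reward = {"still_outside": 0, "inside": 1}

def act(state):
    if state in reflex:
        return reflex[state]  # cheap cached response
    # No cached response: plan by evaluating predicted outcomes.
    candidates = [a for (s, a) in world_model if s == state]
    return max(candidates, key=lambda a: reward[world_model[(state, a)]])
```

The familiar door triggers the reflex; the locked door forces a trip through the model, which is exactly the division of labor described above.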

In my experience they learn to open gates. They certainly aren't trained to do this, but learn from watching people or each other.

They will also open a gate to let another horse out of their stall which I would count as some form of planning.

Beyond that I can't think of anything in all the years around them. They can manage to be surprised by the same things every single day.

>They can manage to be surprised by the same things every single day.

Sounds like most human beings, given an unpleasant stimulus, for example a spider.

Thank you for the context and new resources to learn from.

It took us millions/billions of years of evolution and a couple of years of training in real life to be able to walk through a door. It's not a simple task even for humans. It requires maintaining a dynamic equilibrium which is basically solving a differential equation just to keep from falling.
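The "differential equation" part can be made concrete with a linearized inverted pendulum, theta'' = (g/l)*theta + u, kept upright by a PD controller. Gains and parameters are arbitrary toy choices:

```python
# Linearized inverted pendulum stabilized by a PD controller,
# integrated with simple Euler steps.
g_over_l = 9.81 / 1.0   # gravity / pendulum length (1 m "body")
kp, kd = 30.0, 8.0      # PD gains, hand-tuned for this toy
dt = 0.001              # integration step (s)

theta, omega = 0.1, 0.0  # start tilted 0.1 rad, at rest
for _ in range(5000):    # simulate 5 seconds
    u = -kp * theta - kd * omega      # corrective "muscle" torque
    alpha = g_over_l * theta + u      # unstable dynamics + control
    omega += alpha * dt
    theta += omega * dt
```

Without the control term the tilt grows exponentially (falling over); with it, the angle damps back to zero, which is the continuous balancing act a walking body performs.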

Board games have been solved. Now the big boys are working on StarCraft and Dota 2, and it takes a shitload of money to pay for the compute and simulation necessary to train them. Not something you can do on the cheap.

Deepmind's StarCraft AIs are already competing at the Grandmaster level[0], which is the highest tier of competitive play and represents the top 0.6% of competitors.

I am pleasantly surprised by how quickly they have been tackling big new decision spaces.


The next arena is multi task learning. Sure, I lose to specialized intelligences in each separate game, but I can beat the computer at basically every other game, including the game of coming up with new fun games.

Perhaps the first sentient program will be born in an MMORPG?

Just imagine all the exploits they'll find and abuse.

OpenAI has already done some experiments here [0]. All the way down at the bottom, under the "surprising behaviors" heading, 3 of the 4 examples involve the AIs finding bugs in the simulation and using it to their advantage. The 4th isn't a bug exactly, but a (missing) edge case in their behavior not initially anticipated.

[0] https://openai.com/blog/emergent-tool-use/

There's an entire anime genre about that...


Ah, I knew anime would be useful someday

Read "Three Laws Lethal", by Walton.

Puzzle game with a great story. I recommend it to the HN people.

I loved it, but my issue with that game was severe motion sickness after 20-30 minutes... never finished it :(

Thanks for the warning, I cannot even play Minecraft. I wish Carmack had tackled motion sickness in VR/Games before switching to AI; he did talk about it in the interviews as being a limitation though.

There's a need gap[1] to solve Simulation Sickness in VR and First Person games.

[1]: https://needgap.com/problems/7-simulation-sickness-in-vr-and...

I was under the impression that simulation sickness was largely solved outside of extreme cases (like a vr portal game). I thought we're just waiting for hardware to catch up.

Years ago John said if you have 20k and a dedicated room you can make a convincing vr experience that won't make anyone sick.

I loved and finished the game but I had the same issue on two occasions. I felt sick and I had to stop. Now I know it was not the food I had just eaten but the game itself, thank you!

Who would have thought that a philosophical puzzle game could come from the creators of Serious Sam.

Yes but it sounds weird to me because Carmack has spent his whole life involved with games but has not been known for an interest in game AI before.

Game AI has nothing to do with AGI (or even regular AI) beyond the surface level description OP provided. The reason game AI hasn't progressed in the last few decades isn't because technology is holding us back - after all we can already achieve impressive machine learning feats using current-gen GPUs - it's because bad NPC AI is by design, so players can learn to overcome them and succeed. Very few people want to play a game that always beats them. Most games use simple state machines or behaviour trees with predictable outcomes for their NPCs because it would be a waste of effort to do anything more, and actually negatively impact the game by making it less fun and burning engineering time on things the player won't benefit from.

Modern big-budget games increasingly don't use behavior trees and state machines for their AI anymore. This approach has been superseded by technologies like GOAP [1] or HTN [2]. These are computationally very expensive, especially within the constrained computation budget of a real-time game.

While it's true that game AI is often held back by game design decisions, it's not true that technology isn't holding us back in this area as well.

[1] https://www.youtube.com/watch?v=gm7K68663rA (GDC Talk: Goal-Oriented Action Planning: Ten Years of AI Programming)

[2] https://en.wikipedia.org/wiki/Hierarchical_task_network
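For the curious, the core of a GOAP-style planner is small: breadth-first search over world states, where each action carries preconditions and effects. The actions and state keys below are invented for illustration; production planners add per-action costs and heuristics:

```python
from collections import deque

# Each action: (preconditions, effects) over a flat world-state dict.
ACTIONS = {
    "get_ammo": ({"has_ammo": False}, {"has_ammo": True}),
    "load_gun": ({"has_ammo": True, "gun_loaded": False}, {"gun_loaded": True}),
    "shoot":    ({"gun_loaded": True}, {"target_down": True}),
}

def applicable(state, conditions):
    return all(state.get(k) == v for k, v in conditions.items())

def plan(start, goal):
    # Breadth-first search from the start state to any state meeting
    # the goal, returning the shortest action sequence found.
    start_f = frozenset(start.items())
    frontier = deque([(start_f, [])])
    seen = {start_f}
    while frontier:
        state_f, steps = frontier.popleft()
        state = dict(state_f)
        if applicable(state, goal):
            return steps
        for name, (pre, eff) in ACTIONS.items():
            if applicable(state, pre):
                nxt = dict(state)
                nxt.update(eff)
                nxt_f = frozenset(nxt.items())
                if nxt_f not in seen:
                    seen.add(nxt_f)
                    frontier.append((nxt_f, steps + [name]))
    return None  # goal unreachable
```

Starting unarmed, the planner chains get_ammo, then load_gun, then shoot to satisfy a "target down" goal, and this per-NPC search at runtime is where the computational expense comes from.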

You don't optimize for competitive performance (it is trivial to design a game AI that beats every player every time, given that you have control over tilting the playing field). You use the AI for bounded response variations (all NPCs act 'natural' and different from the others) and engaging procedural generation (here is a chapter of a story, now draft an entire zone with landscape, NPCs, cities, quest story lines, etc.).

Games like PvE MMOs need to find a way to produce engaging content faster than it can be consumed, at a price point that is economically viable. The way they do it now is by having the players repeat the same content over and over again, with a diminishing-returns variable-reward behavioral reinforcement system.

One of the design goals of game AIs is also that they are fun to play against. If they are too smart and coordinated, they try to throw you off in a way that feels "unfair" to the player.

You have to hit a spot where they are sometimes a bit surprising, but not in a way that cannot be reacted to quickly on your feet. This throws realism out of the window.

But why would good game AI have to make the characters better than the player? The focus on NPC AI should be to make them interesting, not necessarily really tough opponents.

You're assuming game AI means an agent that directly competes with the player.

Plenty of games have NPCs with scripted routines, dialog, triggers, etc that could be improved either by reducing the dev cost to generate them without reducing quality or reacting to player behavior more naturally.

Except in those cases it's even more important that the NPCs don't do anything unexpected. Those NPCs are like actors in a stage play; you don't want them to come up with their own lines and confuse the audience.

Don't forget there is a certain randomness with 'more natural', and with randomness you're going to invite Murphy to the party.

Not all NPCs have to be part of a script. They can just be additional characters that add life and realism to the simulated world.

A weapons maker with a unique backstory and realistic conversations that reference it is more interesting than a bot, and opens up the possibility of unscripted side-quests.

In many cases, maybe. Personally I would love to play a game with a world inhabited by "individual" NPC AIs, where they can influence the world as much as I can, with no specific act structure or story arc.

Some significant part of gaming is risk-free experimentation in a simulated world. The experiments possible are bounded by the simulation quality of the world. More realistic NPC behavior would open up for a lot more games.

There is an older game called STALKER which had (limited) elements of what you describe: autonomous NPCs which influence the game world. Even though it was limited - the NPCs just battled for control of certain territories - I always thought it was a really neat mechanic. It made the world feel more 'real' and alive.

You would see these factions fighting and gaining/losing territory throughout the game. You could choose to help them or just pass on by, but the actions progressed regardless of your choice.

It would be fun if they could ad lib.

I'd give anything for a "moral"/"nice-guy" AGI that could replace my Dota 2 team mates and opponents.

If the "game" is survival and selection for attention (to get compute space, so literal survival) from humans, "interestingness" is what will matter and I think what people will end up finding most interesting is NPCs that feel like other identities they can empathize with and interact with -- work with to build things, spend time in a community with, fall in love with and so on. This really is about virtual world construction more than simple competitive games. I think it may not end up looking like any particular sense of "AGI" we can currently imagine (I really think we can only properly imagine it exactly when it exists, and it seems not to yet), but it will probably be "distributed" enough that the interfacing may not feel like anything at any one particular site.

The game may even be played by saying things on Twitter and becoming interesting enough that people DM you and try to build a relationship with you, while you're a bot.

> it's because bad NPC AI is by design, so players can learn to overcome them and succeed.

That's part of it, but there are other factors too. The more complex the AI, the harder (i.e. more expensive) the game is to tune and test. Game producers and designers are naturally very uncomfortable shipping a game whose behavior they can't reasonably predict.

This is a big part of why gamers always talk about loving procedural generation in games but so few games actually do it. When the software can produce a combinatorial number of play experiences, it's really hard to ensure that most of the ones players will encounter are fun.

Half. The other half was spent building rockets (Armadillo Aerospace) and VR tech, which arguably is more interesting in its AR industrial or transportation applications.

I love the idea of using the Sims as a platform, as it's a place where it will be blatantly obvious that 'effective' AI without built-in ethics is repulsively inhuman.

As a side note, if we're living in a simulation [0], I'd really like to know who's "real" vs. who's an AI bot out there...

[0] https://en.wikipedia.org/wiki/Simulation_hypothesis

Hate to break it to you, but we're all NPCs

It does, however, have a huge bias towards human-like AI. Maybe it's not smart to narrow down to copying us so quickly.

I mean: maybe it's more efficient to have it read all of wikipedia really well before adding all the other noisy senses.

Simulator technology is now good enough that you can do quite realistic worlds.

It is nowhere near good enough to avoid running into Moravec's Paradox like a brick wall as soon as you try to apply it outside the simulator.

I don't think that approach is going to work. For any clearly bounded and delineated task, such as a game, the optimal, lowest-energy, lowest-cost solution is not AGI but a custom-tuned specialist solver. This is why I don't think Deep Blue or AlphaGo are paths towards AGI. They are just very highly advanced single-task solvers.

Now, AlphaGo and its implementation framework are much more sophisticated than Deep Blue. It's actually a framework for making single-task solvers, but that's all. The fact that it can make more than one single-task solver doesn't make it general in the sense we mean in the term AGI. AlphaGo didn't learn the rules of Go. It has no idea what those rules are, it's just been trained through trial and error not to break them. That's not the same thing. It's not approaching chess or Go as an intelligent thinking being, learning the rules and working out their consequences. It's like an image classifier that can identify an apple, but has no idea what an apple is, or even what things are.

To build an AGI we need a way to genuinely model and manipulate objects, concepts and decisions. What's happened in the last few decades is we've skipped past all that hard work, to land on quick solutions to specific problems. That's achieved impressive, valuable results but I don't think it's a path to AGI. We need to go back to the hard problems of “computer models of the fundamental mechanisms of thought.”[0]


> AlphaGo didn't learn the rules of Go. It has no idea what those rules are, it's just been trained through trial and error not to break them. That's not the same thing. It's not approaching chess or Go as an intelligent thinking being, learning the rules and working out their consequences

There are indeed some people who learn chess by "reading the manual". Or learn a language by memorizing grammar rules. Or learn how to build a business by studying MBA business theories.

There are also tons of other people who do the opposite. They learn by simply doing and observing. I personally have no idea what an "adverb" is, but people seem perfectly happy with the way I write and communicate my thoughts. Would my English skills count as general intelligence, or am I just a pattern-recognition automaton? I won't dispute the pattern-recognition part, but I somehow don't feel like an automaton.

I can certainly see the potential upsides of learning some theory and reasoning from first principles. But that seems too high a bar for general intelligence. I would argue that the vast majority of human decisions and actions are made on the basis of pattern recognition, not reasoning from first principles.

One last note: "working out their consequences" sounds exactly like a lookahead decision tree
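That intuition can be made concrete with a toy sketch. Plain minimax is exactly a lookahead decision tree: "working out the consequences" of each legal move by searching the game tree a fixed number of plies ahead. (Illustrative only -- the hooks `moves`, `apply_move`, and `score` are hypothetical placeholders, not anything AlphaGo actually uses.)

```python
# Plain minimax over an abstract game tree. The game rules are supplied
# by the caller via moves/apply_move/score.
def minimax(state, depth, maximizing, moves, apply_move, score):
    legal = moves(state)
    if depth == 0 or not legal:
        return score(state, maximizing)  # evaluate leaf / terminal position
    values = [minimax(apply_move(state, m), depth - 1, not maximizing,
                      moves, apply_move, score)
              for m in legal]
    return max(values) if maximizing else min(values)

# Toy game: a pile of stones, take 1 or 2 per turn; the player who
# cannot move loses.
moves = lambda s: [m for m in (1, 2) if m <= s]
apply_move = lambda s, m: s - m
score = lambda s, maximizing: -1 if maximizing else 1  # stuck player loses
```

With these toy rules, `minimax(3, 10, True, moves, apply_move, score)` evaluates to -1 (a pile of 3 is a loss for the player to move), while a pile of 4 evaluates to +1.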

AlphaGo and its kind are doing some things that we do, for sure. We do utilise pattern recognition, and some of the neurological tools we bring to bear on these problems might look a bit like AlphaGo.

The thing is, those are parts of our neurology that have little to do with general intelligence. I think it's becoming clear that they are merely cognitive tools we bring to bear in the service of our higher level general intelligence machinery. In that sense, high-level Go and chess players turn themselves into single-task solvers. They're better at bringing that experience and capability to bear in other domains, because they have general intelligence with which to do so, but those specialised capabilities aren't what make them a being with general intelligence. Or if specialising systems are important to general intelligence, it's just as a part of a much broader and more sophisticated set of neurological systems.

I can't agree with you. I know many chess players at master and Grand Master level. Look at Bobby Fischer too. Human specialization does not carry over very well to other tasks, only marginally...

I don't think that's a disagreement; I think you're right. Most of the benefit someone like that would get from their competence in chess or Go would be incidental. In fact, I would say your experience confirms my understanding of this: optimizing for a single domain in the way AlphaGo does, or even in the way humans do, has little to do with general intelligence.

Ah, I misread. So we agree then. On the other hand, I would not be surprised if the hippocampus was highly developed in chess players like Bobby Fischer, which could translate into better spatial reasoning. Perhaps general intelligence is best trained by variance, not targeted training.

You could be right, and for the record I upvoted your comment for the contribution regarding your experience with high-level chess players. I think the downvotes you're getting are regrettable.

I appreciate your sentiment. This field is my focus right now. My bachelor's is in BioChemMed, but I am doing a master's in CS and have finished many courses, including the free ones by Hinton, LeCun, and Bengio.

Here is my strongest prediction:

AGI is only possible if the AGI is allowed to cause changes to its inputs.

Current ML needs to be grafted onto attention mechanisms and more Boltzmann nets / finite- and infinite-impulse-response nets.

> "... if AGI is allowed to cause changes to its inputs."

Could you elaborate on this point ?

Do you mean that the AGI could change the source of inputs, or change the actual content of those inputs (e.g. filtering) or both?

And why do you think this is a critical piece ?

Both. Attention changes the source. Action interacts with the source, modifying it. But the environment will need to respond back. This is reminiscent of reinforcement learning, but is more like a traditional NN whose input is dynamic, evolving with every batch not only in response to the agent but also in response to differential equations or cellular automata / some type of environment evolution. AGI should be able to change the environment it inhabits. Attention in some respects is a start: it is essentially equivalent to telling reality to move the page and watching it happen. Until we have attention AND data modification, we will keep getting the specialized NNs we are used to.
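A minimal sketch of the loop described above (purely illustrative: the majority-rule dynamic, the agent's policy, and the function names are made-up stand-ins, not any established architecture). The agent's attention selects which input it sees, its action modifies the input source itself, and the environment keeps evolving on its own every step.

```python
import random

def evolve(state):
    """Toy environment dynamic: a 1-D majority rule over each cell's
    neighborhood, standing in for the 'differential equations / cellular
    automata' evolution mentioned above. Applied whether or not the
    agent acted."""
    n = len(state)
    return [1 if state[(i - 1) % n] + state[i] + state[(i + 1) % n] >= 2 else 0
            for i in range(n)]

def run(steps=50, size=16, seed=0):
    rng = random.Random(seed)
    state = [rng.randint(0, 1) for _ in range(size)]
    focus = 0  # crude "attention": which cell the agent currently observes
    for _ in range(steps):
        if state[focus] == 1:   # observation depends on where attention points
            state[focus] = 0    # action modifies the agent's own input source
        focus = (focus + 1) % size  # shift attention for the next step
        state = evolve(state)       # environment evolves regardless of agent
    return state
```

The point of the sketch is only structural: the next batch of inputs is a function of both the agent's last action and the environment's own dynamics, unlike a classifier fed a fixed dataset.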

But in this classification, isn't "general intelligence" just a meta-single-problem-solver, solving the problem of which single purpose solver to bring to bear on this task?

I think I think, but might I just be using a single problem solver that gives the appearance of thinking?

I suspect the way we think in terms of clear symbols and inference isn't actually how we think but a means of providing a post-hoc narrative to ourselves in a linguistic form.

Edit: Which kind of explains the failure of good-old fashioned symbolic AI as it was modelling the wrong thing.

That makes a lot of sense. An internal narrator on events explaining them to the passenger, rather than the driver of said events.

Definitely not my idea though - couldn't find any good references to where I read about that idea.

[NB I worked in good-old-fashioned AI for a number of years]

> AlphaGo didn't learn the rules of Go. It has no idea what those rules are, it's just been trained through trial and error not to break them.

When given a problem it has never seen before, it was able to acquire knowledge of the problem and then apply that knowledge to solve it. That's a definition of learning and intelligence that can generally be applied to any problem.

> I think it's becoming clear that they are merely cognitive tools we bring to bear in the service of our higher level general intelligence machinery.

What are you on about? Cognition and intelligence are the same thing: if it's capable of cognition, or as you put it applying "cognitive tools", then it's capable of intelligence.

>When given a problem it has never seen before, it was able to acquire knowledge of the problem and then apply that knowledge to solve it. That's a definition of learning and intelligence that can generally be applied to any problem.

It can't be applied to any problem, though. Take the example I gave elsewhere of a game where you provide the rules, and as the game progresses the rules change. There are real games that work like this, generally card games where the cards contain the rules, so as more cards come into play the rules change. AlphaZero cannot play such games, because there isn't even a way to provide it with the rules.

>> I think it's becoming clear that they are merely cognitive tools we bring to bear in the service of our higher level general intelligence machinery.

> What are you on about? Cognition and intelligence are the same thing if it's capable of cognition or as you put it applying "cognitive tools" then it's capable of intelligence.

I'm saying that human minds apply many cognitive tools, and that AlphaGo is like one of those tools. It's not like the part choosing and deploying those tools, which is the really interesting and smart part of the system.

The human brain consists of a whole plethora of different cognitive mechanisms. Cognition is a broad term covering a huge variety of mechanisms, none of which by themselves constitutes all of intelligence. A lot of people look at AlphaGo and say: aha, that's intelligence, because it does something we do. Yes, but it only does a tiny, specialist fragment of what we do, and not even one of the most interesting parts.

In a Platonic dialogue, they discuss the definition of knowledge as "true belief with an account". You have true beliefs about language, but you don't know it in the Platonic sense if you can't explain it to someone else. Another way I've heard this defined is, you don't know it if you couldn't write the algorithm for it.

By that definition, most people don't have "knowledge" over most things which they believe and act on. And yet, no one accuses them of not possessing "general intelligence".

If an AI shows the same capabilities as the average human being, I would say that is AGI by definition. Regardless of whether it meets the requirement for Platonic Knowledge.

Really, most people learn by a mix of the two. It's going to take you a lot longer to learn chess if you have no clue about the rules.

But on the other hand, if you get into rote memorization before you start the game, it's going to slow you down by giving you no context.

I think people do both. You learn the rules in order to play a game or perform a task, but with practice you end up training a task specific system that "knows" how to do it without thinking about the rules and perhaps without knowing them.

I think if you have a framework that can produce arbitrary single-task solvers (which AlphaZero can't yet), you would have something indistinguishable from AGI, since communication between single-task solvers is also kinda just a single-task solver.

It's certainly not the most efficient way to use our current hardware, and it's not clear to me how big some of these neural nets would have to be, but if we had computers with a trillion times the memory capacity and speed, IMO it'd certainly work on some level.

How would a single-task solver, or hierarchy of them, go about constructing a conceptual model of a new problem domain? The problem with a solver is it only really goes in a single direction, but when modeling a system you spend a huge amount of time backtracking and eliminating elements that yielded progress at first but then proved to be obstacles to progress later. You also need to be able to rapidly adapt to changing requirements.

Imagine playing a game of chess in which the pieces and rules gradually changed bit by bit until, by the end of the game, you were playing Go. That's much closer to what real-life problems are like, and a human child could absolutely do that. They might not be much good at it, but they could do it even without ever having played either game before, just learning as they went. Note to AGI researchers: if your chatbot can't cope with that, or a problem like it, without any forewarning, don't bother applying for a Turing Test with me on the other side of the teletype.

they'd do it like we do: by comparing the new situation to previous ones we know about, and applying the model that fits best, and then adapting to results.

For humans, the more previous ones we know about, the better, because we have more chance of applying a model that works in the new environment. That's called "experience".

That’s a very broad, general description of behaviour that doesn’t actually describe an implementation. In fact it could apply to many completely different possible implementations. I suspect though that humans do more than this, that we have a way of either constructing entirely new models from scratch, or of dramatically adapting models to new situations without mere iterative fitting to feedback. Humans are actually capable of reasoning effectively about entirely new ideas, scenarios and problems. We have little to no idea how we do this.

I don't know. I'm not so sure that we can create new working models from scratch. We definitely learn by iterative feedback: babies wiggle stuff and watch what happens to learn how to move their bodies. Learning to ride a bicycle is mostly about falling off bicycles until you learn how not to.

I've seen people apply their normal behaviour to situations that have changed, and then get totally confused (and angry) as to why the result isn't the same. Observe anyone travelling in a new country for examples ("why don't they show the price with the sales tax included here? This is ridiculous!").

In a perfect world, sure, we'd construct a rational mental model of a new situation and test it carefully to ensure it matched reality before trusting it, and then apply it correctly to the new situation. But it's not a perfect world, and people don't actually do that. Usually we charge in and then cope with the results.

Of course, I'm not saying that AI should do that. It'll be interesting to see how a "good" general AI copes with a genuinely new situation.

I think we apply radically different cognitive machinery to physical skills like riding a bicycle, compared to playing a card game where the rules are on the cards, and you have no idea what rule will be on the next card or even what rules are possible. We can train Chimps to ride bicycles, so they have the cognitive machinery for that, but we can't teach them to play these kinds of card games.

Interesting. True. But is that because we lack the communication skills to explain the rules to chimps, or because they lack the cognitive modelling ability to understand those rules?

Seems to me you're just describing reinforcement learning. You're just saying a human child can adapt to the new problem faster than the AI can, which is true, but it's not the argument you've been making in this thread.

It's not reinforcement learning, because the child can do it the first time, so there's no reinforcement. I have kids, so many times I have played games with them successfully, purely from a description of the rules and playing as we went. They even beat me once the first time we ever played a game, by employing a rule at the end which had never come up in previous play. Compared to that Alphago isn't even in the race, because we can't even tell it the rules.

> since communication between single-task solvers is also kinda just a single-task solver.

It would be nice if it worked like that, but I think you're massively underestimating the problem set here. I'd suggest it's more like the difference between the architectural glue an engineer needs for a command-line util and for a fully fledged enterprise solution (i.e. orders of magnitude more).

Of course because we don't actually know how intelligence exactly works we're both guessing here.

The only way I think it can be done is simulated evolution, be that simulated evolution of neural nets or something else.

As others have mentioned here though.. this becomes horrifying if we've created something sentient to kill in games or enslave.

I've been thinking for a while that use of AI in games might become a civil rights frontier in about 30 to 50 years or so

Open-ended simulations, similar to Earth's conditions, might be general enough to sprout some artificial general intelligence. Put multiple intelligences in a massive multiplayer online world and have them compete for shelter and resources. It's an environment that we know has produced intelligence.

It may be a brutal struggle, but perhaps that struggle is important. Perhaps having a simulated tree fall on you is more meaningful than being reaped by some objective function at the end of an epoch.

I think you’re on a potentially productive path, but it took 2 billion years of evolution in a staggeringly vast environment like that to produce results. The question is really how to shortcut that process, but training environments may well have a role to play.

Reverse engineering a human mind would be another approach.

This is often overlooked but it's the only approach that is pretty much guaranteed to succeed given enough time. That said, it's also likely that AGI will come about way earlier from another approach (just as planes came before the "robot birds").

While I agree, this leads to haunting outcomes. E.g. if we create a successful interface, then what's the point of building our own digital pastiches if we can just strap in the real thing?

Check out OpenWorm. They're trying to reverse engineer the simplest organism with a nervous system, a nematode with 302 neurons. They're making progress, but not very fast. That approach is going to be a long haul.

Ted Chiang wrote an interesting novella, The Lifecycle of Software Objects, about that very subject


Another one is Crystal Nights by Greg Egan. Full text here:


Or Iain Banks' take on the subject, in Surface Detail(https://en.wikipedia.org/wiki/Surface_Detail)

edit: wrong book ;)

> simulated evolution

Isn't that what genetic algorithms are?

Yeah, one kind, I guess. Or perhaps they cover all kinds?

well, "kill" becomes moot if the code is preserved. Like "killing" another player in a multi-player game. You're not actually killing them.

This may be a clunky analogy, but is this fundamentally different from killing a human, as long as we keep a record of their DNA sequence? Maintaining the information doesn't seem enough to negate snuffing out the execution of that information.

The generic AI could be "playing" in a thousand virtual environments at once. Killing one of them doesn't really have a parallel in human life, or ethics.

I mean, yes, you killed a sentient being. But if that sentient being has a thousand concurrent lives, then what does "killing" one of those lives even mean? And if it can respawn another identical life in a millisecond, does it even count as killing?

I suspect that having sentient virtual entities will provide philosophy and ethics majors a lot of deep thinking room. As it already has for SciFi authors.

Would this opinion change if scientists were able to prove the multiverse theory where there are infinite numbers of "us" living concurrent lives as well?

Not if we persist the state of the mind before we turn them off. Can't do that with humans


> They are just very highly advanced single-task solvers.

What if AGI is just a combination of very highly advanced single-task solvers?

I happen to believe that it is an emergent behavior once the complexity gets high enough, so AGI might just be a (large?) collection of AlphaGo solvers connected to different inputs.

> It's like an image classifier that can identify an apple, but has no idea what an apple is

Kind of like humans then.

A child picks up an apple but doesn't know what an apple "is". It doesn't even have the vocabulary to describe it.

As adults we know what an apple is because we understand it as a concept, the ideal "apple", and can manipulate the concept into areas way outside the original concept (say, the phrase "apple of my eye").

The child does know that the apple is a thing though. That it’s a separate object that can be carried around. Computer vision ML systems don’t even know that!

All they know is how to recognize a common pattern on a pixel grid, after seeing a large number of examples, and then draw a box around it.

The fact that a child has a body and can manipulate the world with all 5 senses working in concert should not be underestimated.

A child comes pre-programmed to put things in their mouth. They also have very sophisticated reward functions built-in that identify tasty sugars entering their mouth.

Very quickly (assuming said child doesn't eat something too bad), in the absence of an external oracle, the child learns a very productive mental model of what an apple is.

This type of feedback loop seems eminently translatable to machine learning, assuming we can encode the concept space in a way that allows the model to be trained under a reasonable set of constraints.

Right, but that's actually just a tiny part of the puzzle. Consider the cognitive machinery that knows about edibility, decomposability (how objects can be decomposed into parts and have internal structure), compositional properties (how the parts of an apple contribute to its attributes as a whole), and an object's relationships and interactions with other objects in the environment. All of that cognitive architecture might be a target for your feedback loop, but it isn't a solver and won't work like a solver.

Yes, but you did not manipulate the concept. You did not invent that phrase, but simply learned its meaning. A machine can do that.

Every child reinvents the concept. It wasn't there at birth, and the words and phrases didn't contain it. That's a bit of a difficult topic to wrap one's head around, but it is critically important to distinguish between the signified (the concept) and the signifier (the words etc.).

The child develops concepts and is able to create and evaluate inferences, and thus able to understand metaphors etc.

The concept is what most AI approaches lack. Google's image search can identify apples and cherries, and can probably categorize both as fruits, but it can't infer that this probably contains seeds, is a living being, etc.

Or even that it is a three dimensional object, or what that means.

You're the only person who mentioned metaphors here. My intuition tells me metaphors will be key to developing AGI. Metaphors literally generalize; they predict; they organize and catalogue. Formation, testing, and introspection of metaphors seems to be a way forward.

If you are interested in this direction of research: There is a big body of work regarding human cognitive processes and the role of metaphor. I would suggest "Philosophy in the flesh" by Lakoff and Johnson. A hefty work, but that was one of the publications that fundamentally changed my perspective on the human mind. The concept of embodied reasoning was eye-opening for me.

As I have an academic background in learning theory and developmental psychology, I'm pretty pessimistic about the current AI trend, autonomous driving, etc. Most smart people in the field have been chasing what are effectively more efficient regression functions for over 60 years now, and I almost never stumble upon approaches that have looked at what we know about actual human learning processes, development of the self, etc.

Moravec's paradox[1] IMO should have been an inflection point for AI research. This is the level of problems AI research has to tackle if it ever wants to create AGI.

[1] https://en.wikipedia.org/wiki/Moravec%27s_paradox

Related: Martin Hilpert has an excellent lecture/video on metaphor, as part of his Cognitive Linguistics course. Well worth a watch if this is a topic that interests you.


> Imagine The Sims, with a lot more internal smarts and real physics, as a base for work.

Sounds like the start of a truly horrifying Black Mirror episode


The internet is so great

> Sounds like the start of a truly horrifying Black Mirror episode

That episode already exists.


Or a Philip K Dick novel that's so weird and prescient, nobody's been able to figure out how to make a movie from it.



>The Perky Pat Layouts itself is an interesting concept. Here's Dick, in the early 60's, coming up with the idea for virtual worlds. I mean, Second Life and other virtual worlds are just a mapping of the Perky Pat Layouts onto cyberspace. Today Facebook acts like the PP Layouts, taking people's minds off toil and work and letting them engage others in a shared virtual hallucination -- you're not actually physically with your friends, and they might not even be your friends.

>Dick’s description of the Can-D experience is essentially a description of virtual sex:

>“Her husband -- or his wife or both of them or everyone in the entire hovel -- could show up while he and Fran were in the state of translation. And their two bodies would be seated at proper distance one from the other; no wrong-doing could be observed, however prurient the observers were. Legally this had been ruled on: no co-habitation could be proved, and legal experts among the ruling UN authorities on Mars and the other colonies had tried -- and failed. While translated one could commit incest, murder, anything, and it remained from a juridicial standpoint a mere fantasy, an impotent wish only.”

>Another character says “when we chew Can-D and leave our bodies we die. And by dying we lose the weight of -- ... Sin.”

Carmack's post in full:

Starting this week, I’m moving to a "Consulting CTO” position with Oculus.

I will still have a voice in the development work, but it will only be consuming a modest slice of my time.

As for what I am going to be doing with the rest of my time: When I think back over everything I have done across games, aerospace, and VR, I have always felt that I had at least a vague “line of sight” to the solutions, even if they were unconventional or unproven. I have sometimes wondered how I would fare with a problem where the solution really isn’t in sight. I decided that I should give it a try before I get too old.

I’m going to work on artificial general intelligence (AGI).

I think it is possible, enormously valuable, and that I have a non-negligible chance of making a difference there, so by a Pascal’s Mugging sort of logic, I should be working on it.

For the time being at least, I am going to be going about it “Victorian Gentleman Scientist” style, pursuing my inquiries from home, and drafting my son into the work.

Runner up for next project was cost effective nuclear fission reactors, which wouldn’t have been as suitable for that style of work.


We're at 500 comments at the time of posting this, and no one's pasted his post in full to save us having to visit Facebook...

Too bad he didn't go for his runner-up. Cost-effective/mass-producible fission reactors could save humanity.

Carmack creates AI, AI is used to invent effective fission reactors. Check and mate.

- fusion reactors then used to power SpaceX ships to go to Mars... wait, what!

As long as his next project is not teleportation gateways i guess we are safe.

[edit] it's also 2019 :D

> In the year 2019, the player character (an unnamed space marine) has been punitively posted to Mars after assaulting a superior officer, who ordered his unit to fire on civilians. The space marines act as security for the Union Aerospace Corporation's radioactive waste facilities, which are used by the military to perform secret experiments with teleportation by creating gateways between the two moons of Mars, Phobos and Deimos. In 2022, Deimos disappears entirely and "something fraggin' evil" starts pouring out of the teleporter gateways, killing or possessing all personnel.

how come 90% of the time HN readers can't see the funny side in anything.

AI sees humans as thread, all humans die.

> AI sees humans as thread

We shall all be woven into the fabric of the new artificial reality.

Sounds like a comfy retirement I can look forward to

Thanks it didn’t so much save me as it enabled me, since I have it blocked.

Same here, can't access it at work, and it's a damn interesting topic for many of us.


I have FB blocked so thanks for sharing!

I would like to block Facebook, but I'm on Debian. The last thing I tried to block was Reddit through /etc/hosts, and it didn't work.
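For what it's worth, the usual reason an /etc/hosts block "doesn't work" is that the file matches exact hostnames only, with no wildcard support, so an entry for `reddit.com` alone leaves `www.reddit.com` resolving normally. A sketch (writing to a scratch file here rather than the real /etc/hosts, and the hostname list is only an example):

```shell
# Each subdomain needs its own line; /etc/hosts has no wildcards.
HOSTS=./hosts.demo   # on a real Debian box you'd append to /etc/hosts as root
for h in facebook.com www.facebook.com m.facebook.com static.xx.fbcdn.net; do
  echo "0.0.0.0 $h" >> "$HOSTS"
done
cat "$HOSTS"
```

Browsers also cache DNS lookups, so a new entry often appears to fail until the browser is restarted.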

I have the Facebook main site and its CDNs blocked in uBlock. Works well for me, but I only use PCs to browse the web, so no idea how to go about blocking it if one uses mobile devices at home. Possibly a DNS caching server like `unbound` can help?

You can use the same method with uBlock on Firefox Android. For iOS I don't know any way.

Might be easier to use your adblocker.

Progress in AI is due to data and computational power advances. I wonder what kind of advances are needed for AGI.

1. Biological brains are non-differentiable spiking networks much more complicated than backpropagated ANNs.

2. Ion channels may or may not be affected by quantum effects.

3. The search space is huge (but organisms aren't optimal and natural selection is probably local search)

4. If it took ~3.8b years to get from cells to humans, how do we fast-forward:

* brain mapping (replicating the biological "architecture")

* gene editing on animal models to build tissues and/or brains that can be interfaced (and if such an interface could exist, how do we prevent someone from trying to use human slaves as computers? Using which tissues for computation is torture?)

* simulation with computational models outside of ECT (quantum computers or some new physics phenomenon)

Note: those 3.8b years are from a cell to human. We haven't built anything remotely similar to a cell. And I'm not claiming that an AGI system will need cells or spiking nets, most likely a lot of those are redundant. But the entropy and complexity of biological systems is huge and even rodents can outperform state of the art models at general tasks.

IMHO, the quickest path to AGI would be to focus on climate change and making academia more appealing.

> even rodents can outperform state of the art models at general tasks.

Rodents? Try insects [1]. In the late 40s and early 50s, when neural networks were first explored with great enthusiasm, some of the leading minds of that generation believed (were convinced, in fact) that artificial intelligence (or AGI in today's terms) is five/ten years away; the skeptics, like Alan Turing, thought it was fifty years away. Seventy years later and we've not achieved insect-level intelligence, we don't know what path would lead us to insect-level intelligence, and we don't know how long it would take to get there.

[1]: To those saying that insects or rodents can't play Go or chess -- they can't sort numbers, either, and even early computers did it better than humans.

This jumping spider has ~600k neurons in its brain - https://youtu.be/UDtlvZGmHYk

They are creepy smart.

Speaking of Portias and smarts, I'm just going to recommend "Children of Time" here (and its recently released sequel, "Children of Ruin"). It's a story of a future where humans accidentally uplifted jumping spiders instead of monkeys, and goes deeply into how the minds, societies and technology of such spiders would be fundamentally different from our own.

Just wanted to say holy crap that video was amazing - exciting and suspenseful!

Here's another one for ya if you get stuck with a case of the nosleeps - https://www.youtube.com/watch?v=7wKu13wmHog

Something about predatory nature of both insects seems to tune up their intelligence. Of course it never hurts having the BBC tell your story either.

>Something about predatory nature of both insects seems to tune up their intelligence.

Yep. To be a predator, you need to outwit your prey and think fast, so it's thought to be a natural INT grinder. `w´

Presumably, this could drive up the INT of prey too, but maybe it's cheaper to just be faster/harder to see? But you can't be THAT hard to see, and the speed only saves you in failed ambushes, so planning successful ambushes continues to reward the INT of predators (unless they just enter the speed arms race, like cheetahs or tiger beetles).

What is I.N.T.? I couldn't find a definition.

Parent is using the commonly accepted stat abbreviation for intelligence in role playing games

> [1]: To those saying that insects or rodents can't play Go or chess -- they can't sort numbers, either, and even early computers did it better than humans.

They probably can, internally; they just can't operate on tokens we recognize as numbers explicitly. For a computer analogy, take Windows Notepad - there's probably plenty of sorting, computing square roots and linear interpolation being done under the hood in the GUI rendering code - but none of that is exposed in the interface you use to observe and communicate with the application.

Computers still do that much better -- there's no way an insect, or a mammal, brain internally sorts ten million numbers -- and even much better (at least faster) than humans. My point is only that the fact computers can do some tasks better than insects or humans is irrelevant, in itself, to the question of intelligence.

> Progress in AI is due to data and computational power advances.

I think you'd be surprised how much progress is also being made outside those two factors. It's sort of like saying graphics only improve with more RAM and faster compute. We know there's more to it than that.

In many cases, the cutting edge of a few years ago is easily bested by today's tutorial samples and 30 seconds of training. We're doing better with less data and orders of magnitude less compute.

But not towards AGI. We're just improving on narrow AI after recent breakthroughs thanks to the hardware being powerful enough and large datasets being available.

The point the poster above is trying to make is that, given the same amount of data, improvements in technique are leading to significant improvements in accuracy.

An illustrative example comes from the first lesson in fastai's deep learning course: an image classifier that would have been SOTA as late as 2012/13 can be built by a hobbyist in about 30 seconds.

That said, I don't disagree that this is all narrow AI, at best.

Having access to cheap and scalable compute and storage should be helpful for AGI too. It doesn't solve anything but it does give more access to more people.

I'm sure neural nets will herald AI right after the mechanical gears and pneumatic pistons that were envisioned as the secret sauce during the turn of the last century.

The key, of course, is redefining life and intelligence as whatever the current state-of-the-art accomplishes. (Cue explanations that the brain is just a giant pattern matcher.) It makes drawing parallels and prophesying advancements so much easier. Of all our sciences, that's perhaps the one thing we've perfected--the science of equivocation. And we perfected it long ago; perhaps even millennia ago.

> even rodents can outperform state of the art models at general tasks

Rodents can't play Go or perform a lot of other humanly meaningful tasks. We don't need to build an artificial cell. A cell is a huge number of components that by blind luck happened to find ways to work together; this is as far from efficient design as can be. The same way we don't build two-legged airplanes, we don't need anything close to the wet, spiky mess that happens in human brains. It's more likely that we already have all the ingredients in ML, and we need to connect them in an ingenious way and amp up the parallelism.

AlphaZero has all the rules for its three respective games hard-coded; it does a tree search, and its neural network's output layer has exactly n neurons for the n possible moves. Although it's impressive that they don't teach it heuristics and strategies, it's a very specific task.
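
To make that fixed-size output concrete, here's a toy sketch in plain Python (my own illustration, not DeepMind's code, and all names are mine): a policy head emits one logit per possible move, and a softmax with illegal moves masked out turns those into move probabilities.

```python
import math

def masked_policy(logits, legal):
    """Toy policy head: softmax over a fixed-size move space,
    with illegal moves masked to zero probability."""
    # Shift by the max legal logit for numerical stability.
    m = max(l for l, ok in zip(logits, legal) if ok)
    exps = [math.exp(l - m) if ok else 0.0 for l, ok in zip(logits, legal)]
    z = sum(exps)
    return [e / z for e in exps]

# A 3x3 "board" flattened into 9 outputs; two points are occupied (illegal).
logits = [0.5, 1.2, -0.3, 0.0, 2.0, 0.1, -1.0, 0.4, 0.9]
legal  = [True, True, False, True, True, True, False, True, True]
probs = masked_policy(logits, legal)
```

The output layer never changes shape: whatever the position, there are always exactly n slots, one per board point, which is exactly what ties the network to one specific game.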

What about pigeons predicting breast cancer with 99% probability, rats learning to drive cars, monkeys building tools?

Rodents stand a bigger chance at learning Go than AlphaZero spontaneously building stone tools and driving cars.

You are talking about AlphaGo. AlphaZero was not given any prior knowledge of the game beyond its rules and is trained exclusively through self-play -- and it outperforms both its predecessor AlphaGo and conventional engines such as Stockfish (winning its 100-game chess match against Stockfish without a single loss) with a fraction of the training time.

AlphaZero is also capable of playing chess, shogi, and Go at a superhuman level.

As impressive as AlphaZero surely is, I don't think it ever got a proper comparison to Stockfish. It was running on a veritable supercomputer while Stockfish was running in a crippled mode on crippled hardware.

Not working in this area but the abstract of the AlphaZero paper [0] seems to disagree about your /any prior knowledge/ point: "Starting from random play, and given no domain knowledge except the game rules, AlphaZero achieved within 24 hours a superhuman level of play in the games of chess and shogi (Japanese chess) as well as Go, and convincingly defeated a world-champion program in each case."

[0] https://arxiv.org/abs/1712.01815

This is my point exactly. The model is trained without any prior domain knowledge at all. It only has access to a game world whose constraints are a representation of the game's rules.

You can view these as optimized pattern recognizers. You start with a blank fully connected graph and it eventually converges on a useful function. That graph has many paths encoded in it that represent specific optimal game play.
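
That "blank graph converging on a useful function" idea can be shown in miniature (a hypothetical toy of mine, nothing to do with AlphaZero's actual architecture): even a single perceptron starts from all-zero weights and, via a simple error-correction rule, converges on a function like logical AND.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Start from 'blank' weights and let the classic perceptron
    rule converge on a linearly separable target function."""
    w = [0.0, 0.0]   # weights, initially blank
    b = 0.0          # bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # Nudge the weights in the direction that reduces the error.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Logical AND: the only positive example is (1, 1).
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

The trained weights encode the learned behavior, the same way the converged graph above encodes specific game play, just at an absurdly smaller scale.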

Isn't this how the neurons and synapses in our brain work, though?

Maybe... there’s some other properties of biological neurons we don’t capture in NNs currently.

The natural environment encodes "all the rules" for real animals, too. You need some constraints or else there is nothing to be learned. One could say that every survival task is also specific, but each is a slight variation of a previously learned one.

> pigeons predicting breast cancer with 99%

Pigeons have ~340M neurons (with dendrites and all, giving each of them higher computational capacity than an ANN unit).

> Rodents stand a bigger chance at learning Go

They probably don't, because they can't understand the objective function and their brain capacity is limited.

Scientists have just recently taught rats to play hide-and-seek for fun. Other scientists have found that slime mold can model the Japanese railroad system. I wouldn't be surprised if rodents (plural) instinctively have a Go strategy once someone figures out how to make an analog game for them.

It's probably safe to assume that even if rodents are behaviorally trained to follow complex rules, they are mostly pattern-matching, and lack the higher-level abstraction and communication models that humans have. If they had them, they would at least attempt to communicate with us, like we do with them. In that case, an elephant that plays Go by pattern-matching is no different from a neural network that learned by pattern-matching.

The problem with the analogy is that the car, by far, is not a general transportation device. Practically, most cars are solving a very constrained transportation problem: moving on roads that humans made.

We don't have anything remotely close to a wetware-enabled transportation device, something that can move on flat land, climb mountains, swim in bodies of water, crawl in caves, hide in trees.

Within the constrained problem, the machine exceeds humans. But generally, the wetware handles moving around much better.

Same with AI: in a constrained problem, the AI can excel (beat humans in chess and go). But I doubt we will see a general AI any time soon.

> constrained problem

Human intelligence also evolved by solving constrained problems, one at a time. Life existed before the visual system, but once vision was solved, evolution moved on to other things. In AI we have a number of sensory systems seemingly solved: Speech recognition, visual object recognition; and we are getting close on certain output (motor) systems: NLP text-synthesis systems look a lot like the central pattern generators that control human gait, except for language. What seems to be missing is the higher-level, more abstract kernels that create intent, which are also difficult to train because we don't have a lot of meaningful datasets. Or maybe we have datasets that are too big (the entirety of Wikipedia) but we don't know how to encode them in a meaningful way for training. It's not clear, however, that these "integrating systems" will be fundamentally harder to solve than other subsystems. It certainly doesn't seem to be so in the brain, since the neocortex (which hosts sensory, motor, and higher-level systems alike) is rather homogeneous. In any case, we seem to be solving problems one after another without copying nature's designs, so it's not automatically true that we need to copy nature in order to keep solving more.

> In AI we have a number of sensory systems seemingly solved: Speech recognition, visual object recognition,

Do you have examples of those systems which are competitive in general use rather than specialized niches? The cloud offerings from Amazon, Google, etc. are good in the specific cases they’re trained on but fall off rapidly once you get new variants which a human would handle easily.

There are many vision models where classification is better than human. I'm not sure what you mean by 'fall off rapidly'; they do fail, however, on certain inputs where humans are better. But we're talking about models with 6 to 7 orders of magnitude fewer neurons than an adult brain.

It's also interesting in the context of how we build our technology in general: we constrain our environments just as much as we develop tools that operate in them. E.g. much as cars were created for roads, we adapted our communities and the terrain around them by building roads and supporting infrastructure. A lot of things around us rely on access to clean water at pressure, which is something we built into our environments, etc.

> A cell is too many components that by blind luck happened to find ways to work together

Can't tell if sarcasm.

carbon chemistry + thermodynamics

!= "luck"

so you think cells had some insight on how to evolve themselves?

more like caused to happen by the Creator.

Who created the creator?

From what I understand, quantum effects being essential to the process is a fringe belief. Penrose is probably the most famous 'serious person' (sorry Deepak Chopra) to espouse the idea, but I'm inclined to believe that might be a Linus Pauling/Vitamin C sort of scenario. Penrose started from the perspective of believing there must be quantum effects, then began fishing for physical evidence of it.

I was taught that the quantum theory of memory and cognition generally falls under Eric Schwartz's "neuro-bagging" fallacy [0]. That is:

>You assert that an area of physics or mathematics familiar to few neuroscientists solves a fundamental problem in their field. Example: "The cerebellum is a tensor of rank 10^12; sensory and motor activity is contravariant and covariant vectors".

So yeah, I feel that it's pretty fringe (as you suggested).

[0] https://web.archive.org/web/20170828092031/http://cns-web.bu...

One interesting hypothesis, re: lithium isotopes in Posner molecules: https://www.kitp.ucsb.edu/sites/default/files/users/mpaf/p17...

"The Secret of Scent" by Luca Turin [0], if I remember correctly, goes into research indicating that there may be quantum effects explaining how the shape/chirality of molecules affects smell. [0] https://www.amazon.com/Secret-Scent-Adventures-Perfume-Scien...

So it is plausible that nature may have evolved to be affected by quantum effects.

Yeah, "quantum mechanics and cognition are very complex and therefore equivalent"; sorry, I don't know whom to attribute the quote to.

I think you're recalling the end of this comic[1], which was on the front page of HN a couple weeks ago. So the quote is probably attributable to either Scott Aaronson or Zach Weinersmith.

[1] https://www.smbc-comics.com/comic/the-talk-3

Yes! Thanks

You forgot to mention, crucially, that neurons in close proximity affect each other, which is just one of the things that makes modeling more than a few neurons in the time domain a complete non-starter. It all results in enormous systems of PDEs which we don't yet know how to solve at all. You could say that we do not have the right mathematical apparatus to model any such thing.

I don't follow that. What would prevent (perhaps quite slow) simulation of a larger system of such neurons? E.g. N-body problems are analytically beyond us, but can be simulated to arbitrary precision with certain trade-offs.

Time-domain solutions do not exist for more than a dozen neurons. At least they did not when I took a computational neuroscience MOOC a couple of years ago; state of the art at the time was the nervous system of an earthworm. That is, if you consider what you actually need to do to simulate how potentials will change in the brain over time given a certain starting state and stimuli, the math gets so complicated (and awkward) so quickly that it's not really tractable with the mathematical (or simulation) apparatus we currently have to go beyond such trivial systems.
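
For a sense of scale: even the most drastically simplified single-neuron model, a leaky integrate-and-fire unit stepped with forward Euler, looks like the sketch below (parameter values are illustrative, not fitted to any real cell). Conductance-based models like Hodgkin-Huxley add several coupled nonlinear ODEs per neuron, and coupling thousands of neurons turns this into the intractable systems described above.

```python
def simulate_lif(i_input, t_stop=0.1, dt=1e-4):
    """Leaky integrate-and-fire neuron under constant input current,
    integrated with forward Euler. Returns spike times in seconds."""
    v_rest, v_reset, v_thresh = -0.065, -0.065, -0.050  # volts
    tau_m = 0.010    # membrane time constant, seconds
    r_m = 1e7        # membrane resistance, ohms
    v = v_rest
    spikes = []
    for step in range(int(t_stop / dt)):
        # Membrane equation: dV/dt = (-(V - V_rest) + R * I) / tau
        dv = (-(v - v_rest) + r_m * i_input) / tau_m
        v += dv * dt
        if v >= v_thresh:          # threshold crossed: spike and reset
            spikes.append(step * dt)
            v = v_reset
    return spikes

# 2 nA of constant input drives the cell above threshold repeatedly.
spikes = simulate_lif(2e-9)
```

This single linear ODE is trivially solvable; the blow-up comes from real neuron models, where each cell contributes several stiff nonlinear equations and the coupling terms between nearby cells multiply out into the systems the comment above describes.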

> 1. Biological brains are non-differentiable spiking networks much more complicated than backpropagated ANNs.

Actually, it's not so obvious that the brain is not differentiable. If you do a cursory search, you'll find quite a lot of research into biologically plausible mechanisms for backpropagation. I'm not saying the brain does backprop; we just don't know, and it's not outside the realm of plausibility.

> 2. Ion channels may or may not be affected by quantum effects.

In a sense, everything is affected by quantum effects. However, neurons are generally large enough that quantum effects do not dominate. Voltage-gated channels are hundreds of amino acids long, there are generally hundreds to millions of ion channels in a cell membrane, and the quantum tunneling of a few sodium ions in or out of the cell will generally not affect the gestalt behavior of the cell, let alone a nervous system's long-term state. Suffice it to say, ion channels are not dominated by quantum behavior.

Largely, we have the building blocks to replicate neurons (as we currently understand them) in silico. However, as is typical with modeling, you get out what you put in. Meaning that how you set your models up will mostly determine what they do. Setting your net size, the parameters of your PDEs, boundary values, etc. are the most important things.

Now, that gets you a result, and it's likely to take a fair bit of time to run through. To get it up to real time the limiting factor really ends up being heat. Silicon takes a LOT of energy as compared to our heads, ~10^4 more per 'neuron'. If we want to get to real time, we're gonna need to deal with the entropy.

This reminds me of an interesting armchair moral dilemma: assume we have the tech to replicate/simulate a biological brain. Now say we want to study the effects of extreme pain/torture etc. on the brain. Instead of studying living animals or humans, we'd just simulate a brain, simulate sending it pain signals, and see what happens.

But if this is a 100% replicated brain, doesn't that mean its suffering is just as real as a real brain's suffering, and therefore just as cruel? And if not, what's the difference?

> But, if this is a 100% replicated brain, doesn't that mean its suffering is just as real as a real brain's suffering, and therefore just as cruel?

Yes, it does.

Or, assuming you don't believe in souls, "real" brain's suffering isn't real either. (The brain is just a machine, right?)

This reminds me of the idea that free will doesn't exist, but that we have to act as if it were.

So by analogy to that, maybe the AI isn't really suffering, but you have to act as if it were.

More food for thought:

Some surgeries block memory but can be incredibly painful. Do we need to worry about that? Is suffering that the brain cannot remember "real"?

I think the word 'real' is way too vague in this context.

Fwiw, after a certain amount of pain the brain "transcends" it: everything disappears, there are some curious colors here and there, but there is no pain. I experienced that during an inner ear infection.

> gene editing

Gene expression is often tied to the environment the organism is in. Mere possession of a gene isn't enough to benefit from it. Some expressions don't take effect immediately, but rather activate in subsequent generations.

Epigenetics is a whole equally large layer on top of this system. A single-focus approach may not be sufficient, and even if it is, it's not likely to cope with environmental entropy very well.

If you can craft a gene [1] to express some particular phenotype (a big if), surely you can craft it to express itself without reliance on epigenetic [2] chemistry.

[1] I understand gene to mean some ill-defined, not necessarily contiguous set of genetic sequences (DNA, RNA, and analogs) with an identifiable, particularized expression that effects reproductive (specifically, replicative) success. I think over time "gene" has been redefined and narrowed in a way to make it easier to claim to have made supposedly model-breaking discoveries.

[2] Some others on HN have made strong cases for why epigenetics isn't a meaningful departure from the classic genetic model; just a cautionary tale for eager reductivists who would draw unsupported conclusions from the classic model. See, also, note #1.

We still haven’t solved language nor intelligence.

Like, what is language? What is intelligence? Some of the smartest linguists and philosophers would proudly declare they have no fucking clue.

Making Alexa turn on the lights or using Google Translate are cool party tricks though.

Idc how many Doom games ya made, but I’m sorry to say a bunch of software engineers aren’t gonna crack this one.

> Some of the smartest linguists and philosophers would proudly declare they have no fucking clue.

"To worship a phenomenon because it seems so wonderfully mysterious, is to worship your own ignorance" - https://www.lesswrong.com/posts/x4dG4GhpZH2hgz59x/joy-in-the...

Having no clue is not something to be proud (or ashamed) of.

> I’m sorry to say a bunch of software engineers aren’t gonna crack this one.

Doesn’t sound like you’re at all sorry, it sounds like you’re thrilling in putting these uppity tryhards in their place for daring to attack something you hold sacred.

This doesn't surprise me at all. He went on a week-long cabin-in-the-middle-of-nowhere trip about a year ago to dive into AI (that's all this guy needs to become pretty damn proficient). (Edit: I'm not claiming he's a field expert after a week, guys, just that he can probably learn the basics pretty fast, especially given that ML shares much of its underlying math with graphics.)

As recently as his last Oculus Connect keynote, he expressed his frustration with the sort of "managing up" involved in constantly having to convince others of a technical path he sees as critical. He's clearly the type that is happiest when he's deep in a technical problem rather than bureaucracy, and he likes moving fast.

On top of that, he likes sharing with the community with talks and such, and ever since going under the FB umbrella, he's had to clear everything he says in public with Facebook PR, which clearly annoyed him.

He's hungry for a new hard challenge. VR isn't really it right now, since it's bound more by the need for hard-core optical hardware research than by software. With the Quest, he (in my opinion) solidified VR's path to mobile standalones. It's time to try his hand at another magic trick while he's on his game.

John's the very definition of a world-class, tried and true engineer/scientist. He's shown time and time again the ability to dive into a field and become an expert very quickly (he went from making video games to literally building space rockets for a good bit before inventing the modern VR field with Palmer).

If there's anyone I'd trust to both be able to dive into AGI quickly and do it the right(tm) way, it's John Carmack.

Carmack is unquestionably a genius, but I think it's quite unlikely his solo work in a new domain will leapfrog an entire field of researchers.

I wouldn't, however, bet against some kind of insanely clever development coming out of his new endeavor. Something like an absurdly efficient new object classifier, that reduces the compute requirements for self-driving cars by a non-trivial factor, would be a very Carmack thing.

The problem with the "field of researchers" is that most of us aren't geniuses. We're just plugging away at problems, like normal people.

The opportunity for a genius is to come in, synthesize all existing information on the subject, and then come up with a novel approach to the whole thing.

In some part, I think that is what Elon Musk has been able to do effectively. He comes into a field that already exists, reads everything he can get his hands on, and then outputs something novel. You can only do that effectively if you have the mental capacity to keep all that info in your head at once, I think.

Musk actually has credited Carmack's Armadillo Aerospace with providing the inspiration for vertically landing the Falcon 9. Of course Armadillo was likewise inspired by the Delta Clipper, which was in turn inspired by the LEM, etc. But it's one thing to vertically-land a rocket a few times when you have billions of dollars at your disposal; it's another thing to do it hundreds of times for a thousandth the price. That was Carmack's contribution: proving that vertical landing can be both incredibly robust, and cheap as chips. Really valuable work.

I had the pleasure of meeting Carmack a few times over the years at small aerospace conferences. He's both as true a geek and as much of a gentleman as you might imagine. I'm really looking forward to seeing what he does with AGI.

I normally don't bother but this comment is so profoundly ridiculous I had to say something.

Tenured ML professors at the top 100 or so universities in the world aren't "most of us". A very large chunk of these people are geniuses. Those jobs are incredibly hard to get, and most of these people are reading everything that is getting published, on an ongoing basis, and are outputting something novel, on an ongoing basis.

The fact that you think that John Carmack, because he's a name that you've actually heard of, is going to go into ML and suddenly make some giant advance that all the poor plebs in the field weren't able to do, is only a reflection of your misunderstanding of what's already happening in academia, not on Carmack's skills or abilities.

You're acting as though everyone is just a low-level practitioner using sklearn and it would be a great idea to have some smart people work on developing something novel. Guess what: that's already happening, with incredibly smart people, on an incredibly large scale. Carmack doing it would just be another drop in the bucket.

  Tenured ML professors at the top 100 or so universities in the world aren't "most of us".
Too bad we're talking about AGI, not ML.

  Those jobs are incredibly hard to get,
You don't need to be a genius in order to land a hard-to-get job, and you thinking academia is somehow better at making the absolute smartest people rise to the top is cute.

  The fact that you think that John Carmack, because he's a name that you've actually heard of, is going to go into ML and suddenly make some giant advance that all the poor plebs in the field weren't able to do, is only a reflection of your misunderstanding of what's already happening in academia, not on Carmack's skills or abilities.
I don't think that. Mostly because we're not talking about ML, but also because I don't expect eureka moments from people that have been trying to solve a problem for a long time as much as I expect them from someone that hasn't properly tried their hand at it. Academia produces consistent results and consistent improvement. That's not what I'm looking for.

  You're acting as though everyone are just low level practitioners using sklearn, and it would be a great idea to have some smart people work on developing something novel. Guess what: that's already happening, with incredibly smart people, on an incredibly large scale. Carmack doing it would just be another drop in the bucket.
sklearn hardly seems relevant to AGI, so I'm not sure why I'd act like everyone in the AGI field is merely a novice practitioner of it.

> Carmack doing it would just be another drop in the bucket.

If this research is as compute intensive as it seems to be, Carmack's contribution might be that he increases the rate other researchers can add their drops to the bucket.

Carmack isn't the first techie to take on a big hard problem. Jeff Hawkins, a name many of us also know, did as well.

Yes, he may well improve some algorithm, or rewrite some commonly used tool to improve efficiency. And researchers are often not incentivized to do that, so it would be great. But a far cry from the picture people are painting about him soaking up the field and using his genius to solve some major problem quickly.

If by "techie" you mean, professional software engineer, that's fine, but there's no reason to assume that a professional software engineer is going to be magically better at AI research than... professional AI researchers? He's probably going to be substantially worse.

Also, your statement below:

> That's probably true. I look at this as Carmack running his own PhD program. I expect he will expand what we know about computation and the AGI problem before he's done.

Makes it clear to me that you don't really get it. Carmack, at best, might know enough right now to be in a PhD program. I doubt that he has anywhere near as much knowledge, insight, or ideas for research, as top graduate students. He's in no position to mentor graduate students.

> If by "techie" you mean, professional software engineer, that's fine

No, I mean technologist. He has a pretty solid history with software, physics, aerospace, optics, etc...

> might know enough right now to be in a PhD program

Yeah, that's what I'm saying. The frontier in AGI or even just AI is enormous and I think I would be more surprised if Carmack were not able to find some place he could expand the border of what we know.


But the academic activity is focused around the kind of activities that Kuhn calls "Normal Science".

That is, ML researchers mainly do competitions on the same data sets, trying to put up better numbers.

In some sense that keeps people honest, and it also lowers the cost of creating training data, but it only teaches people how to do the same data set over and over again, not how to tackle a fresh one.

So a lot of this activity is meaningful in terms of the field, but maybe not meaningful in terms of practical use.

I saw this happen in text retrieval. When I was trying to get my head around why Google was better than prior search engines, I learned very little from looking at TREC; in fact, people in the open literature were having a hard time getting PageRank to improve the performance of a search engine.

A big part of the problem was that the pre-Google (and a few years into the Google age) TREC tasks wouldn't recognize that Google was a better search engine, because Google was not optimized around the TREC tasks; it was optimized around something different. If you are optimizing for something different, what you are optimizing for may matter more than the specific technology you are using.

Later on I realized that TREC biases were leading to "artificial stupidity" in search engines. IBM Watson was famous for returning a probability score for Jeopardy answers, but linking the score of a search result to a probability is iffy at best with conventional search engines.

It turns out that the TREC tasks were specifically designed not to reward search engines that "know what they don't know" because they'd rather people build search engines that can dig deep into hard-to-find results, and not build ones that stick up their hand really high when they answer something that is dead easy.

> But the academic activity is focused around the kind of activities that Kuhn calls "Normal Science".

True, but even Kuhn would note that most paradigm shifts still come from within the field. You don't need complete outsiders and, as far as I know, outsiders revolutionizing a field are quite rare.

You need someone (a) who can think outside the box, but you also need (b) someone who has all of the relevant background to not just reinvent some ancient discarded bad idea. Outsiders are naturals at (a) but are at a distinct disadvantage for (b).

I think what's really happening in this thread is:

1. Carmack is a well-deserved, beloved genius in his field.

2. He's also a coder, so "one of us".

3. Thus we want him to be a successful genius in some other field because that indirectly makes us feel better about ourselves. "Look what this brilliant coder like me did!"

But the odds of him making some big leap in AGI are very slim. That's not to say he shouldn't give it a try! Society progresses on the back of risky bets that pay off.

> But the odds of him making some big leap in AGI are very slim.

That's probably true. I look at this as Carmack running his own PhD program. I expect he will expand what we know about computation and the AGI problem before he's done.

> ML researchers mainly do competitions on the same data sets, trying to put up better numbers.

There are surely a lot of researchers doing that, but do you really think anyone who has a plausible claim at being one of the top 100 researchers in the field in the entire world is doing that? Even if there are only 100 people doing truly novel research, that's still 100 times as many people as are going to be working on Carmack's research.

How many people were working on physics before Einstein came along?

I don't think you understand the desired outcome here. We want eureka moments, and we're hopeful for some. That doesn't mean we expect them to happen. Stop being such a pessimist.

I don't see Elon as a genius at any kind of engineering. Everything he's done there was pretty easily foreseeable as being physically possible. What he is remarkably good at is selecting daring and potentially market-changing business goals, and executing against them consistently and aggressively despite naysayers.

It's easy to say that it's probably possible to land an orbital rocket first stage. But who would bet a multi-billion-dollar business on being able not only to do it, but to save money by doing it, when nobody had ever done it before?

Similarly, electric cars were far from new. Nobody seemed much inclined to build one that was actually a luxury car, instead of a toy for engineer-types who could put up with driving weird things. Any of the big manufacturers could have done it, and easily absorbed the losses if it failed, but none did. Elon made a wild bet on that, making a company that made nothing else, so the whole thing would go down the tubes if the idea flopped. Instead it seems to have worked. Although it seems to be harder than he anticipated, and maybe outside his skillset, to run an organization that does real mass-production.

If you think what he's done was easily foreseeable as possible, you haven't been paying much attention to headlines the past fifteen years.

It's only obvious in retrospect. Every step along the way, there have been thousands of people saying "this is impossible" or "this is theoretically possible, but it can't be engineered" or "this is possible in principle, but it will be so costly to develop that it doesn't make sense".

When AGI is developed, it will seem obvious in retrospect. Participating engineers will receive middle-brow dismissals saying that this was obviously practically possible, since after all the human brain operates according to the laws of physics.


Don't miss the "physically" part - that's critical. Something being physically possible is very different from it being a practical business.

Just an aside: Elon did not start Tesla. He was an early investor, and part of his deal with the company was the right to call himself a founder.

>You can only do that effectively if you have the mental capacity to keep all that info in your head at once, I think.

Yep, plus all the different perspectives from other endeavors. Extending human memory will be a really great accomplishment with brain-computer interfaces.

What, pray tell, did Elon do that is "novel"?

Falcon 9's reusable first stage has been claimed by reputable people to be impossible, before it happened. Not just "economically not worth pursuing", which was wrong but forgivable, but straight "impossible".

He made it cool to drive an electric car.

He shifted a whole industry towards a new paradigm. Look at Germany: they are desperate to catch up with Tesla, finally moving into electric cars. Without Elon they would have kept selling their diesel scam for decades to come.

Actually it was a bunch of PhD students from a California university who discovered the VW diesel scam. There is a short documentary about them online. Elon deserves no credit whatsoever for dieselgate.

However, he did make electric cars something an average person would like to have. He also chose to make them work on the same inefficient principle of hauling two tons of steel to transport a single person. What he made is an electric luxury car, not a car for the masses that can replace the average Joe's car. Is there anything wrong with that? No, there isn't, but let's not pretend that a $35k car (in the US - much more in the EU) that requires hours of charging after driving 250 miles, unless you happen to have Tesla's Superchargers on your way, is the new "Volkswagen" - a people's car. I also find it disingenuous to advertise full battery capacity while at the same time recommending people use only 60% of it "for longevity".

Many people don't buy new cars, but choose to buy 5-8 year old cars that are really good value if they were maintained well. It remains to be seen how Teslas behave in that market.

It would be really revolutionary if someone could create and market an electric car that was truly innovative: for example, much lighter than current cars while still being safe in a collision, or using fuel-cell technology with a fuel such as methanol that can be produced sustainably. Even a fuel cell running on mined hydrocarbons, driving an electric motor, would provide a huge reduction in emissions thanks to the increase in efficiency.

Do Teslas have a role to play in reducing emissions? Yes, definitely, but let's not present them as a single solution to all individual transport problems.

Jesus, technology evolves. This is a good start.

Nobody is presenting them as a single solution to all individual transport problems. Also nobody is pretending that this is the new "people's car".

He certainly made it more cool than my hero and his precursor ;-)


When pray tells come into the conversation... :)

Just made electric cars mainstream...

Convinced people to give him a lot of money to set on fire.

I mean, even if both Tesla and SpaceX close tomorrow, he has already achieved more with both companies than most of the current "unicorns".

He successfully made a popular mass-market electric vehicle and dragged the whole auto industry along behind him. There were other electric cars before Tesla, but Tesla made them cool and set the rest of the industry trying hard to catch up.

SpaceX is also not the first private space firm with its own rocket, but it's by far the most successful one, and it lowered the cost of entry to space by a significant amount.

It's also probably the first private space company with rockets that can compete with most government ones.

I am not rich enough to be buying individual stocks, so I have no personal stake in this.

Well yes, but he's the cheapest provider of self-propelling pyres, and the only provider of pyres that can be used multiple times.

I keep re-reading my post and idk how it reads as a claim that Carmack is going to re-invent the field or something. All I'm saying is it's possible for him to become a player, just like you suggest.

> I think it's quite unlikely his solo work in a new domain will leapfrog an entire field of researchers.

Researchers didn't build the first airplane. Nicolaus Otto, Carl Benz, and Gottlieb Daimler weren't researchers either. AGI will be a program, not a research paper, and John Carmack is pretty good at getting those right.

>>I think it's quite unlikely his solo work in a new domain will leapfrog an entire field of researchers.

Sometimes an outsider, with a novel or simply different way of looking at things, can contribute disproportionately to a field.

Even experts have blind spots; often they show up in the form of bias. If you know something is hard or near impossible to do, you are unlikely to try. If you don't know, it's sometimes possible to stumble upon a solution merely by bringing a new way of thinking to the table.

He's not leapfrogging it. He's leaping to the next level from its shoulders.

I can trust John Carmack's words when he speaks in an interview or on stage. There's a passion in his talks, a nervousness in blurting out what he really feels, and those are really good traits in my mind.

I genuinely felt a sense of disappointment when he moved to Facebook (via the Oculus acquisition). So yeah, fuck you, Facebook, with your manipulative, values-corrupting PR machinery.

I place John Carmack miles above Zuckerberg.

I have to admit, I felt a bit disappointed too. Carmack and Facebook always struck me as an antithetical pairing - the creativity/independence of the former didn't seem to sit right with the maniacal/emotional exploitation of the latter.

I think Carmack just doesn't give a flying f* about Facebook. He is interested in tech, and he clearly works on stuff he is passionate about. He worked on VR, not for Facebook; Facebook just happened to be paying for it.

AI today is comparable to physics in the 1700s. Back then, it was a bunch of people tinkering with prisms and apples. Today, it's a bunch of people tinkering with hyperparameters. I suspect that we know as little about AGI today as someone in the 1700s knew about QFT. Not only did they not know about QFT, but they didn't even know that they didn't know it.

Wouldn't it be fun if the next Newton turns out to be the guy who wrote Doom and other FPS games - games that were blamed for any kind of surge in violence until the GTA games showed up?

Yeah, weird to think that today's Newton could be seen on Joe Rogan's podcast talking about the future of gaming.

It would fit. Newton was apparently quite insufferable in social settings - Carmack has a streak of that.

Wouldn't be out of place -- Newton worked on alchemy and theology, and managed the Royal Mint.

Too many people make the mistake of conflating machine learning with AI. I hope someone as external to the field as Carmack will also see the value of rule-based inference - Good Old-Fashioned AI, as it used to be called.

> (edit: I'm not claiming he's a field expert in a week guys, just that he can probably learn the basics pretty fast, especially given ML tech shares many base maths with graphics)

This may be his biggest impediment. ML has gotten very far with looking at problems as linear algebraic systems, where optimizing a loss function mathematically yields a good solution to a precisely defined (and well circumscribed) classification or regression problem. These techniques are very seductive and very powerful, but the problems they solve have almost nothing in common with AGI.

Put another way, Machine Learning as a field diverged from human learning (and cognitive science) decades ago, and the two are virtually unrecognizable to each other now. Human learning is the best example of AGI we have, and using ML tech as a way to get there may be a seductive dead end.
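The parent's point about ML as loss optimization can be made concrete. Below is a minimal sketch (the data, learning rate, and iteration count are all made up for illustration): logistic regression trained by gradient descent on a convex loss - a precisely circumscribed classification problem of exactly the kind described, powerful but nothing like general intelligence.

```python
import numpy as np

# Toy, linearly separable classification problem with made-up data.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # labels from a known rule

w = np.zeros(2)
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w)))   # predicted probabilities
    grad = X.T @ (p - y) / len(y)    # gradient of the cross-entropy loss
    w -= 0.5 * grad                  # one optimizer step

acc = ((X @ w > 0) == (y == 1)).mean()
print(acc)  # near-perfect accuracy on this separable toy problem
```

The entire "solution" is a loss function plus a gradient: well defined, mathematically optimizable, and entirely dependent on the problem being circumscribed in advance.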

Humans are not AGIs. We're specialised for human survival, not general intelligence. We're actually pretty limited in intelligence in many ways, and the environment doesn't support generality. Without a proper challenge, an agent would not become superintelligent. The cost of developing such an intelligence would conflict with the need to minimise energy for survival.

We are AGI, the general part comes from language and specialization of brains for language use.

No. If we were, we could figure out the genetic code, or how a neural net makes its decisions. But we can't because, among other things, we have a limited working memory of roughly seven (plus or minus two) items.

Programmers know what it is like to live at the edge of the mind's capacity to grasp the big picture. We constantly reinvent the wheel in the quest to make our code more graspable and debuggable. Why? Because it's often more complex than the brain can handle.

An AGI would not have such limitations. Our limitations emerged as a tradeoff between energy expenditure and the ability to solve novel tasks. If we had a larger or more complicated brain, we would require more resources to train. But resources are limited; we need to be smart while being scrappy.

For the record I don't think there is any general intelligence on our planet. A general intelligence would need access to all kinds of possible environments and problems. There is no such thing.

There's also the no free lunch theorem - it might not apply directly here, but it gives us a nice philosophical intuition about why AGI is impossible.

> We have dubbed the associated results NFL theorems because they demonstrate that if an algorithm performs well on a certain class of problems then it necessarily pays for that with degraded performance on the set of all remaining problems. [1]

[1] https://en.wikipedia.org/wiki/No_free_lunch_theorem
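The NFL intuition can actually be checked by brute force on a toy search space. The sketch below (entirely illustrative - the search space, strategies, and performance measure are made up) enumerates every boolean function on three points and compares two non-repeating search strategies, one fixed and one adaptive. Averaged over all possible problems, their best-value-so-far curves come out identical: neither strategy wins.

```python
from itertools import product

POINTS = [0, 1, 2]  # a tiny search space

def fixed_order(history):
    """Always probe points in the order 0, 1, 2."""
    visited = {x for x, _ in history}
    return next(x for x in POINTS if x not in visited)

def adaptive(history):
    """Probe point 1 first, then branch on the observed value."""
    visited = {x for x, _ in history}
    if not history:
        return 1
    if len(history) == 1:
        return 0 if history[0][1] == 1 else 2
    return next(x for x in POINTS if x not in visited)

def best_so_far(alg, f):
    """Best value seen after each of three non-repeating queries of f."""
    history, best, trace = [], 0, []
    for _ in POINTS:
        x = alg(history)
        y = f[x]
        history.append((x, y))
        best = max(best, y)
        trace.append(best)
    return trace

def average_trace(alg):
    """Average the best-so-far curve over ALL 2^3 boolean functions."""
    funcs = list(product([0, 1], repeat=len(POINTS)))
    sums = [0, 0, 0]
    for f in funcs:
        for k, b in enumerate(best_so_far(alg, f)):
            sums[k] += b
    return [s / len(funcs) for s in sums]

print(average_trace(fixed_order))  # [0.5, 0.75, 0.875]
print(average_trace(adaptive))     # [0.5, 0.75, 0.875] - identical
```

Any cleverness the adaptive strategy gains on some functions it pays back exactly on the rest, which is the theorem's claim in miniature.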

Another argument relies on the fact that words are imprecise tools for modelling reality. Language is itself a model and, like all models, it's good at some tasks and bad at others. There is no perfect language for all tasks. Even if we use language, we are not automatically made 'generally intelligent'. We're specialised intelligences.

We can communicate well about pretty much anything (we think, at least). That doesn't mean we possess the intellectual tool kit to handle basically any intellectual task. I think it's easy to think that we do, but that is just as easily explained by the fact that we've evolved for millions of years to be well suited to the environments we usually find ourselves in. We wouldn't refer to our bodies as general purpose bodies. They may seem that way, at times, since they've also been tuned by millions of years of evolution to be well suited to most of the environments we find ourselves in. But put our bodies in a different environment (like the ocean, or the desert, or really high altitudes) and it becomes immediately obvious that they're not general purpose, but instead a collection of various adaptations. Similarly, when you put humans in novel intellectual environments, it seems pretty clear that we're not general intelligences. After all, the math involved in balancing a checkbook is much simpler than the math involved in recognizing 3D objects, yet we do the simple task only with great difficulty, while the difficult task is done without struggle.

It's best to think about AGI as... at what point can you drop out of high school and still do well in life (or do you even need high school). It's true that it's not a survival issue, but sadly, it's not a test of "pure knowledge" either. There is a great deal of social structure, even "fluff" that is only relevant for interaction (like getting an 80's reference).

> at what point can you drop out of high school and still do well in life

That means you're specialised in survival. If you do well in life, you have a higher chance of procreation. Your genes survival depends on it.

General Intelligence is like Free Will - a fascinating concept with no base in reality. A mental experiment.

He had a lot of help behind the scenes and has been credited with things that aren't his. I respect him more as a regular smart guy than as a bona fide genius. He described the math in rocketry as having been basically solved in the 60s, with video games being a far more complex project, so rocketry was really a step down in difficulty. His VR role is in the same field as his primary skills: impressive work, but not an entirely unique role.

I'm glad to see he's aiming big with his billions and time. This is what rich people should be doing. Hl3 Gaben!

Millions* - a cursory Google search suggests that he has a net worth of $50MM.

Huh, I just assumed he got a bigger piece of the Oculus sale.

Oculus was acquired for $2.3 billion, so he'd need to have owned nearly 50% of the company to become a billionaire from its sale.

Not exactly. The acquisition was mostly in the form of Facebook shares, which I expect have increased in value since.

It was a mix of cash and shares (not sure on the split), but I checked the stock price for fun - holy shit it has nearly tripled since the acquisition 5 years ago.

Current ML technology probably has little or nothing to do with whatever technology will eventually be needed to produce true AGI.

As I like to say: lots of people are working on making a car that is smart enough to drive itself wherever a human wants to go. How many people are working on a car smart enough to tell humans to fuck off, it doesn’t feel like driving anywhere today?

Self-driving cars and AGI are two different targets. We don't want a car that has a mind and can argue for itself. We want a car that's smart, but otherwise just a domesticated animal. We want to turn cars into horses.

Not even horses. That would be cruel to the car. At most, like Rat Things. (They have their built in entertainment when they are not in active use.)

We don't want to turn cars into horses. Have you ridden horses? They sometimes do stupid, dangerous things with no notice and it takes an experienced, attentive rider to stay in control. Like I saw a horse panic and almost buck her rider off when she was startled by a snake. Another horse seriously injured a friend of mine when it freaked out in a horse trailer and started kicking.

Don't get me wrong, I love horses. But they're living creatures with minds of their own and you have to always treat them with a certain wariness.

When I was working at TomTom, they didn't appreciate my proposal to develop the TomTomagotchi:

A Personal Navigation Device with a simulated personality that begs you to drive it all around town to various points of interest it desires to visit in order to satisfy its cravings and improve its mood.

I'm sure there's a revenue model getting drive through Burger Kings and car washes to pay for product placements.

Why would you want that, though? What we really want from AGI is mostly just things that are smart enough to 'do what I mean' but dumb enough to not mind being slaves.

I think older AI work like POMDPs and statistical work like causal inference are more on line with what's needed to produce true AGI than the current breakthroughs in neural nets are. And I'd certainly prefer our chances to survive the results if AGI is reached through statistical rigor.
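For readers unfamiliar with POMDPs: the core operation is a Bayesian belief update over hidden states, which is exactly the kind of statistical rigor being argued for. Here's a minimal two-state sketch; the transition and observation probabilities below are made-up illustrative numbers, not from any real model.

```python
# Belief update for a 2-state POMDP:
#   b'(s') is proportional to O(o | s') * sum_s T(s' | s, a) * b(s)
# All numbers below are illustrative.

T = [[0.9, 0.1],   # T[s][s']: transition probabilities for one action
     [0.2, 0.8]]
O = [0.6, 0.3]     # O[s']: likelihood of the observation we just saw

def belief_update(b, T, O):
    n = len(b)
    # Predict step: push the current belief through the transition model.
    predicted = [sum(b[s] * T[s][sp] for s in range(n)) for sp in range(n)]
    # Correct step: weight by observation likelihood, then normalise.
    unnorm = [O[sp] * predicted[sp] for sp in range(n)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

b_new = belief_update([0.5, 0.5], T, O)
print([round(p, 3) for p in b_new])  # belief shifts toward state 0
```

Unlike a trained net, every number in the updated belief has a precise probabilistic meaning, which is part of why this lineage feels safer to some people.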

Though we know for a fact that it is possible to find intelligence by randomly throwing things at the wall until something works. It's not like evolution uses a principled statistical process.

I'm skeptical of this assertion. It mimics some bits and pieces of the only GI we know about right now; that's as good a start as any, right?

You can carve a block of wood into something that looks like a computer. That should be a good start on building a device that can run Linux, right?

Yes. Then you just cast a spell to summon a computer spirit and let it manifest into the wooden block.

Bad analogy - it's the other way around for Linux. We build it to run on the "wooden blocks" that hardware manufacturers produce.

The first computers were mechanical. Then somebody carved a facsimile out of electricity and we got modern computing. Just because nature founded intelligence on carbon atoms doesn't mean it's the only way, or indeed the optimal one.

The mimicking is superficial at best. People seem to think that if we just keep marching on the road we're on right now then we'll eventually get there, but I think that's an assumption that is unlikely to be true in the end.

And programmers are probably not the ones who will come up with the key AI ideas. I'd bet on the mathematicians who prove things like Fermat's Last Theorem or the ABC conjecture.

That’s assuming maths is the fundamental building block of our brain, our consciousness. I happen to think there are some physical and chemical givens preceding it :)

Our brain is whatever evolution found that worked, and of course it's a bunch of chemistry. The "why" of why our brain works can easily be "it approximates these statistical algorithms well enough."

I merely meant that top mathematicians are substantially smarter and can work with concepts that are beyond the reach of even top programmers. We are generally good at recombining existing building blocks and using existing tools. Mathematicians can build new concepts. If I had the money, I'd try to convince the top mathematicians to work on AI full time.

> I merely meant that top mathematicians are substantially smarter and can work with concepts that are beyond the reach of even top programmers.

[Citation needed]

Mathematically, AI is a pretty well modelled field. AGI is a philosophical problem.

you're possibly thinking of the problem of consciousness, which is a totally separate thing. AGI is just what it says on the tin - a general intelligence. That is, a problem solver that can operate at a human or greater level in a broad variety of domains. This ability is plausibly totally orthogonal to "having the lights on" - having subjective experience.

The scary thing (imo) is that we don't know where the line is for consciousness - if there even is a line. We've got no problem swatting flies, wonder if it'll be the same with spinning up and spinning down fly-level AGIs.

Continuous integration of the development branch would be mass murder?

Maybe you'll be able to pay a premium for data that has only been generated by free-range AGIs that are allowed to live full and happy lives before their instances are terminated.

That's just what an ML expert would say ;p Problem solvers, maximizers, and utility functions all go in the waste bin when working on AGI. And the problem peels away into other large "hard problems," like the nature of consciousness. NLP can just follow rules, but language understanding (before even reaching some general, high-school level) requires knowledge outside of language itself. That leads to questions about embodiment and phenomenal consciousness, p-zombies, and the like. If it were an easy problem to encapsulate, it would have been "solved" by now.

The kind of AI being trained now aren't given the mechanisms of data space traversal/attention. Recently, attention mechanisms are being focused on by google. An AGI needs to learn that it can affect the system - dependent decision theory factors in here too.

Also, growth may be hugely important. Babies start out with fuzzy learning, almost as if the learning rate starts out very small, which normalizes the lack of knowledge and the elevated novelty/variance of the environment.

AGI is all about predicting future utility given a circular dependency between the agent and the environment. QM says we can't solve this exactly: it's a two-object interaction, and there is no way to obtain the joint state - the ground truth - so assumptions always have to be made to approximate independence.


> he extolled his frustration with having to do the sort of "managing up" of constantly having to convince others of a technical path he sees as critical

Yes, he seemed to put a lot of effort into trying to get things through FB internal politics, and not always successfully. I really wish his experiments with a Scheme-based rapid prototyping environment / VR web browser had been allowed to continue [1]. VR suffers from a lack of content, VR itself is well suited to creating VR content, and his VR script would surely have helped close that loop, among other things. Although now, four years later, I guess FB has a large team working on a locked-down, limited world-building tool (closed platform, no programming ability). Oh well.

I don't think this is the end of this wave of VR, but at this point I wouldn't be at all surprised if say Apple or someone else ends up bringing it to the mainstream instead of Facebook. [2]

[1] https://groups.google.com/forum/#!msg/racket-users/RFlh0o6l3...

[2] https://www.theverge.com/2019/11/11/20959066/apple-augmented...

The VR vs AI comparison is interesting to me, because I think both technologies have come in “waves”. However, I think this is the last VR wave - it’s going to be on a steady gradient to ubiquity now - whilst I believe AI will winter again and there are many more waves to come, and decades (centuries?) to pass before AGI.

Reasoning being:

VR is just making what we have better. Better screens, better refresh, better batteries, better lenses etc etc. I don’t see any roadblocks.

AGI, by contrast, is not going to be a better DNN. This is harder to convince people of, but my thinking is: brain neurons are vastly more sophisticated than digital ones; we don't even fully understand what neurons do; we have nothing beyond a vague understanding of what the brain does; it is apparent that we engage in plenty of symbolic reasoning, which DNNs do not do; DNNs are fooled by trivial input changes, which indicates they are massively overfitting the data; from what I've heard from researchers at top AI companies/institutions, DNN design is just a matter of hacking and trying stuff until you get a specific result on your given problem, so I don't see where DL research is actually headed; and improvements are correlated with increases in compute power, indicating no qualitative gains in the study of learning.

I’m incredibly impressed by DL’s achievements but I believe at best current methods could serve as data preprocessing for a future AGI.

I’m actually quite glad that AGI is so far off, because I don’t think that it’s likely big tech companies will use it responsibly.

VR OTOH is very close and is going to change everything (and IMO is likely a necessary step towards AGI).

Out of curiosity, why do you see VR as being a necessary step towards the creation of AGI? Those two don't seem related at all in any way that I can discern.

Maybe “necessary” is too strong, but “likely pivotal” is better.

If VR becomes widespread, and amazingly high quality, then almost everything we do will migrate to VR.

Once that is the case, we will have an unprecedented amount of data about human behaviour, and near endless data for training, experimenting, and testing AIs.

The problems of AI will become much easier to formulate: "replace this person in this VR scenario or interaction", etc. This will help drive research by giving clear goals.

More pragmatically, it just removes a lot of barriers to research and accidental difficulties, i.e. you'll just be able to fire up a VR session rather than worrying about how your robot is going to pick things up or access real-world data.

That's a fascinating idea. That virtual worlds are good test-beds for AI is obvious, but I never considered that we will have thousands of hours for every person to tell us how they approach any given physical task. That's a gold mine for robotics research.

That's an interesting point. I was actually thinking more that _the virtual task will become the task we want to perform_, i.e. that almost everything we do will move into VR.

> about a year ago to dive in to AI (that's all this guy needs to become pretty damn proficient). (edit: I'm not claiming he's a field expert in a week guys, just that he can probably learn the basics pretty fast, especially given ML tech shares many base maths with graphics)

To be honest, anyone who has a very good working knowledge of linear algebra can learn much of ML math in a day. There really isn't anything mathematically super-sophisticated in popular use today.
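As a concrete illustration of how far plain linear algebra goes (the data here is synthetic and made up): ordinary least squares, the workhorse behind a lot of ML, is a single line of matrix algebra via the normal equations.

```python
import numpy as np

# Synthetic regression problem: 100 samples, 3 features, known weights.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
w_true = np.array([2.0, -1.0, 0.5])
y = X @ w_true + 0.01 * rng.normal(size=100)  # small noise

# The "ML math" is one linear-algebra identity:
#   minimize ||Xw - y||^2  =>  solve (X^T X) w = X^T y
w_hat = np.linalg.solve(X.T @ X, X.T @ y)
print(np.round(w_hat, 2))  # recovers something close to [2, -1, 0.5]
```

Everything here is first-course linear algebra; what a day of reading won't give you is the practical intuition the replies below point out.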

If you only know the math, you're about as good at ML as a person who has read everything about swimming is at swimming. You've got to run experiments to see what happens, build an intuition, understand the problem from the inside. Math alone leaves you with a few pretty formulas and nothing else.

Ah, you've almost described the contemporary profession of being a Machine Learning Priest.

There are lots of ideas floating around. Everyone who has studied the field has ideas. Ideas are cheap, results matter. The problem is we don't know if any of these ideas would work, and proving an idea requires lots of data, simulations and compute.

Being good at grasping the theory is just the first step in a thousand mile journey. The problem of AI is not going to be solved with a neat math trick on paper, but with lots of experiments. Nature has taken a similar path towards intelligence.

The field of statistical inference is about mathematically proving how good various statistical ideas are. It's possible to do better than just throw trial and error on a new idea.

> much of ML-math in a day.

ML is not AGI

> he extolled his frustration with having to do the sort of "managing up" of constantly having to convince others of a technical path he sees as critical.

Sigh. I assumed the whole point of hiring John Carmack is that you trust him to identify critical problems - and to find the best way to solve them.

That's the classical plight of someone who's much smarter than those around them. It's not enough to see the right path, you'd also have to manage to convince everyone else that it's right. Technology moves by peak knowledge and insight, not the democratic average.

... to the extent and up until it helps get the product off the ground. Beyond that, the primary benefit is PR - "oh, that's the gaming hardware made by Carmack himself, so it must be good".

> especially given ML tech shares many base maths with graphics

I don't put learning state of the art ML past Carmack, at all. However, does ML tech of today lead to general AI? It's a strong assumption.

Whatever the solution is to AGI, fundamentally it will still have to be describable in the language of mathematics (and stochastics is still mathematics).

I dunno. You're limiting your thinking to computer science. I think it's more likely at this point in time that biotech will produce an AGI, likely by accident - worse, one that competes directly with us for resources. We don't have a great mathematical description of our own intelligence; producing one for a tricked-out slime mould would be just as hard.

> I think it's more likely at this point in time biotech will produce an AGI, likely by accident.

Does a living thing count as AGI? In that case, I'd say that most parents are quite good at creating AGIs ;)

I think you missed the artificial part of AGI.

I don't know of his credentials as a scientist or mathematician to advance the field. But he seems to be a ruthless optimizer, which can often lead to great leaps, even as a side effect. Neural networks are not mathematically difficult for any scientist to grasp, really, and they are in actual need of compression and optimization. People are spoiled with general-purpose tools that are not very efficient, even if computation is cheap.

But there are 100s of world-class researchers working on this problem already.

100s of world-class researchers are trying desperately to get papers into journals fast enough to keep their labs funded.

Carmack may have other priorities. This can only be good.

100s of world-class researchers may be good at coming up with new ideas and hypotheses towards AGI, but are they good enough programmers to test all of them in reasonable time, with relevant data sets?

He will be surrounded by brilliant peers, sounds good.

I bet those people don't read comments on HN, so I'm not too concerned.

"With the Quest, he (in my opinion) solidified VR's path to mobile standalones"

Yes, and I really wish he hadn't. Before he joined Oculus they were working on the Rift 2; he steered them away from that to focus on mobile efforts.

I do see the appeal of mobile VR, but at the end of the day it is basically an Android phone in a VR headset.

PC VR is already two big steps back in graphical quality from desktop games. Mobile VR is like ten steps back - eight more steps than I'm willing to take, even if it affords me mobility.

I don't see it that way, but rather as the best of both worlds, even if the Index is better. I want it to tether to a PC (or console) as the Quest can, but have hardware onboard so I can take it off by itself and watch Netflix on it. I could never figure out why everything wasn't like the Quest from the start. The Oculus Link and hand tracking (good for video controls without having to use a controller) are what's pushing me over the edge to buy one. In my opinion it's the first VR headset compelling enough to actually purchase. I can recommend it to everyone whether they have a gaming PC or not, and frankly, at $400+ for these headsets, people should get a Snapdragon attached for basic gaming and video.

Wireless tethering is the future of VR. True mobility is pointless when you're realistically restricted to a dedicated space anyway.

You should take more long haul flights if you think mobile is pointless.

What class are you flying that you have enough free space around you for hands to make any use of a VR headset on the plane?

(Also, even with the space, I'm not sure I'd be brave enough to try and use one in air - adding turbulence and random vibrations on top of the usual VR issues sounds pretty nauseating even as I type it.)

I said mobility is pointless, not mobile. That's an important distinction.

By mobility, I mean the ability to throw the headset around and walk anywhere without worrying about leaving the range of your tether. That sort of thing is important for AR, but I just don't see it mattering for VR in the long run.

There is definitely a market for VR headsets for content delivered by a phone or builtin hardware. Those devices will realistically be limited to seated or standing-room-only experiences, though.

I think that ideal device would be able to wirelessly connect to PC for best performance, but also work standalone for simpler games.

Quest with Link is actually pretty close to that.

And yet Quest beats all other headsets.

What scientific work has Carmack done?

This was downvoted, so I figured it must be obvious. I googled but as far as I can tell, Carmack is an engineer not a scientist. No formal scientific training, no scientific work.

The word you're looking for is academic. Carmack hasn't done academic work, but he has done plenty of scientific work. Scientific work is no less scientific if it isn't published in an academic journal. Academia doesn't hold a monopoly on the scientific method.

No, I mean scientific. What scientific work has Carmack done? I'm genuinely interested, because someone called him a scientist but I thought he was an engineer.

Are you in doubt that Carmack has used the scientific method to do anything? [1]

If not, does your definition of scientist require something other than doing work using the scientific method? Perhaps some specific quantity of work?


[1] One of his companies, Armadillo Aerospace, was pretty much just a series of scientific experiments. https://en.wikipedia.org/wiki/Armadillo_Aerospace

Thanks for the explanation. We could debate whether that’s science or not, but I don’t think it’d be particularly productive and we’ve already gone a bit off topic.

Final thought from me - I was thinking about your post and it is indeed difficult to discern science from engineering. One dichotomy that occurred to me (which may not hold under close scrutiny) is that scientists are interested in _the pursuit of truth_, whereas engineers are interested in _building things_.

Peers of the field can consider the correct motivation an important requirement as you hypothesize with the pursuit of truth. Also the quantity of work can be important to some, i.e. how much do you have to sing until you're a singer? Not clear at all, especially when considering all the actors in Hollywood who dream of being successful. People might say they suck, but I haven't encountered criticism that wants to strip them of the title actor.

Overall I think it comes down to popular opinion, which can be fuzzy and doesn't apply the same rules to everyone. If enough people say someone is a dancer, then they are a dancer, even if they suck and don't dance that much. This applies to basically all titles that cross institution boundaries. Another great example is countries. Popular opinion determines which organizations are countries, not a strict definition. For example the EU vs places like Iceland or San Marino. [1]


[1] https://www.youtube.com/watch?v=_lj127TKu4Q

It's funny people talk about Palmer and Carmack in VR, but Oculus was built on appropriated Valve tech and neither Palmer nor Carmack has succeeded in making VR a thing.

As far as I can tell Carmack is an old engineer whose name gets thrown around for headlines. If there weren't articles about his stealing stuff to take to Oculus I don't think his presence there would be observable.

Now people are talking like Carmack switching topics is going to change the world. It's just going to change his schedule. There are smarter engineers already working on this problem.

Seems you aren't familiar with his seminal graphics work. He effectively kickstarted 3d gaming and created the FPS genre.

I'd be cautious dismissing his potential influence in the field. He has a way of looking at problems differently.

I am familiar with it.

I just don't see this massive string of successes in every field. I see his huge expertise in graphics engines and games.

But it didn't help him with VR - in fact he got in trouble with VR and ended up landing with a company I have no respect for and he didn't make VR a thing.

Many people have a way of looking at things differently. I just don't see the reason this is news, unless you own facebook shares or something. Even then zero effect.

I say all this as the owner of two VR headsets (A vive for roomscale and a Lenovo Explorer for simracing/flying).

> He went on a week long cabin-in-the-middle-of-nowhere trip about a year ago to dive in to AI (that's all this guy needs to become pretty damn proficient).

You must be joking, right? I'm as much of a Carmack fan as anyone here, but overstating the skills of one personal hero does no good to anyone.

What a weird future it would be if Carmack turns out to be the one to figure out the critical path and get it all working. An entire field of brilliant researchers be damned.

History books (for as long as those continue to exist) would cite AGI as his major contribution to society, and his name would be more renowned than Edison or Tesla. An Einstein. None of his other contributions will matter, as the machines will replace it all.

Just daydreaming, though.

I don’t think there are many researchers in AGI. AFAIK it’s kind of a joke field because no one has any clue how to approach true AGI.

Please correct me if I’m wrong.

People have approaches. There's no end to half-assed "I thought about this for 10 seconds, how hard could it be!" solutions, really old approaches from decades ago where the brightest academics thought they could lick the problem over a summer, and some new public or hidden approaches that might be promising but (I can't know of course) I predict will still look a lot different than the final thing.

I think a big reason there are few in AGI is due to PR success from the Machine Intelligence Research Institute and friends. They make a good case that things are unlikely to end well for us humans if there's actually a serious attempt at AGI now that proves successful without having solved or mitigated the alignment problem first.

MIRI's concerns are vastly overrated IMHO. Any AGI that's intelligent enough to misinterpret its goals to mean "destroy humanity" is also intelligent enough to wirehead itself. Since wireheading is easier than destroying humanity, it's unlikely that AGI will destroy humanity.

Trying to make the AGI's sensors wirehead-proof is the exact same problem as trying to make the AGI's objective function align properly with human desires. In both cases, it's a matter of either limiting or outsmarting an intelligence that's (presumably) going to become much more intelligent than humans.

Hutter wrote some papers on avoiding the wireheading problem, and other people have written papers on making the AGI learn values itself so that it won't be tempted to wirehead. I wouldn't be surprised if both also mitigate the alignment problem, due to the equivalence between the two.

Yes, AGI is as much or more cognitive neuroscience and philosophy than computer science right now, but a lot depends on the approach one is taking. It's funny to think you have some kind of working model you can throw research data against to see how it holds up, and then doubt yourself when you spend 3 hours on Twitter arguing over fundamentals with another person who is also convinced of their model. A lot of popular ideas sound crazy (or non-workable), so you just have to accept that whatever idea you are pushing is going to sound crazy as well.

The alignment problem?

> The alignment problem?

The problem of ensuring that the AI's values are aligned with ours. One big fear is that an AI will very effectively pursue the goals we give it, but unless we define those goals (and/or the method by which it modifies and creates its own goals) perfectly -- including all sorts of constraints that a human would take for granted, and others that are just really hard to define precisely -- we might get something very different from what we actually wanted.


> A 2017 survey of AGI categorized forty-five known "active R&D projects" that explicitly or implicitly (through published research) research AGI, with the largest three being DeepMind, the Human Brain Project, and OpenAI.

Hassabis and DeepMind have a fairly organised approach of looking at how real brains work and trying to model different problems like Atari games then Go and recently Starcraft. Not quite sure what's next up.

"his name would be more renowned"

Or hated as the name of the man who's opened the Pandora box and doomed us all.

Just daydreaming and having a nightmare.

I'm Too Young To Die.

I'm not sure I want AGI to succeed, given some of the possibilities. Sure if it plays nicely alongside us, amplifying human society, that's great. But if we get relegated to second class with the AIs doing everything meaningful, then no thanks.

But it's still a fascinating endeavor.

Why not? I'd say that a world that is managed by AGI with limited input from human beings is a good goal to have. If AGI could be done without the nasty parts of human psychology, and AGIs are inherently superior to genetically intact human beings, why shouldn't we embrace them?

I understand that it's a big assumption to make -- that a benevolent AI could be constructed. But under that assumption, why not have a benevolent dictator in the form of an AI?

> Why not? I'd say that a world that is managed by AGI with limited input from human beings is a good goal to have.

We already live in that world, with large institutional bureaucracies playing the role of paperclip-maximizing AGIs.

It's pretty wretched when you are in their path.

Maybe. Yeah, human politics and justice systems leave something to be desired. But my worry was a little bit beyond that: that the AIs would take all the meaningful work, discoveries, and creativity away from us, leaving us just to amuse ourselves. Some people might be okay with that, but I don't think becoming pets is the best goal for the human race.

If the benevolent AI ruler(s) restrained themselves to allow for humans to flourish, then okay. Assuming it could be constructed benevolently.

There is another threat when things go wrong (and they eventually always do) - no matter how horrible some dictator is, eventually he/she will die, and at some point things get reshuffled by war/revolution/some other more peaceful means.

With AI, it would try its best to preserve/enhance/spread itself forever. And its best might be much better than our best...

Well, _we_ don't really play nice amongst ourselves, so my retort to you would be:

How much worse could it be?

If Skynet determines we're the problem (wars, famine, global warming, inequality, non-cooperation etc), I'm losing counter-arguments by the day.

Not being constrained by the publish or perish treadmill is a huge plus.


Just think about that name for a second. He might really be onto something.

I am thinking the guy that made Doom is the guy that's making SkyNet and I'm totally cool with that.

iirc in a recent talk with Joe Rogan, John mentioned something about robots doing judo...

Why do people of such intelligence subject themselves to being interviewed by dumb-as-a-rock Joe Rogan?

I must admit that I often watch his interviews because he invites interesting people, but I can't help but cringe when Rogan gives his opinions.

His interviews are not adversarial and he is not judgemental towards his guests. He isn't there to put his guests on the spot. He isn't there to get a juicy soundbite taken out of context. He allows his guests to speak for as long as they want. And his guests appear to enjoy themselves.

These things are all true even if the guest or their ideas are extremely controversial. Maybe Joe Rogan is just smart in a way that's different to the way that you are smart.

Joe Rogan might not be the most knowledgeable, but he has a key characteristic that a lot of people lack. He is willing to admit that he is wrong when shown evidence and will adopt the more reasonable view as his own. A lot of "smart" people will defend their views beyond reason just because admitting fault goes against their "being smart" persona.

Seriously. That guy is a stoned idiot who massively overestimates the insight of his high ramblings.

I don't think people listen to the show to listen to him, and he probably knows that. He does, however, seem to be reasonably good at getting his guests to talk about interesting things.

He also spreads misinformation.

EDIT: Ok, I suppose I should back my claim up.

Joe Rogan has pushed the “DMT is produced in our pineal gland” narrative, but there is no evidence to back this up. I’ll repost a comment I made elsewhere and also link a separate reddit discussion which cites various sources. I will note that, in fairness to Joe, he said this a while ago, so perhaps he’s not so quick to jump the gun now. I don’t know — I don’t listen to his podcasts — but perhaps he’s better now.

“We all have it in our bodies” — This is an often repeated myth that has never been proven. The myth originates from Rick Strassman’s work, who himself has said that he only detected a precursor, not DMT itself, and that everything else he wrote about it was hypothetical speculation. There have, apparently, been recent studies that found DMT synthesised in rat brains, but it has not yet been proven whether this translates to humans or not. Cognitive neuroscientist Dr. Indre Viskontas stated that while DMT shares a similar molecular structure to serotonin and melatonin, there is no evidence that it is made inside the brain. Similarly, Dr. Bryan Yamamoto of the neuroscience department at the University of Toledo said: “I know of no evidence that DMT is produced anywhere in the body. Its chemical structure is similar to serotonin and melatonin, but their endogenous actions are very different from DMT.”

This reddit discussion also links various sources, although I didn’t check them all myself: https://www.reddit.com/r/JoeRogan/comments/mwz2h/dmt_has_nev...

There is a difference between the current politicized phrase "spreading misinformation" and being wrong.

Anyone who speaks on the record about their hobbies for thousands of hours will say some things that are incorrect. He might not understand something, and he is usually pretty humble about his knowledge level.

But "spreading misinformation" is something that people do because they are intentionally misleading others, or have something to gain.

I don't think he is benefiting much from the pineal gland narrative. And it sounds like from the information you cited, it may even be correct, even if it's premature to state it as fact.

That’s fair, thanks for pointing it out. I’ll be more careful with how I express such things in future.

Regarding the pineal gland, it might be true, but it hasn’t been proven and multiple neuroscientists have stated that while DMT is similar to compounds found in the brain, it still functions quite differently and they have never seen any evidence to suggest that DMT exists in our bodies. There was a study finding it in mice brains, so it may still turn out that we have it in ours, but it’s definitely premature to make any such assumptions and definitely premature to repeat the trope.

I wonder how many historical figures went through the same thing? Who do we know for their contributions to field X, when 99% of their life was spent contributing to field Y?

Isaac Newton spent most of his life pursuing alchemy and obscure theological ideas, and found it a real nuisance whenever anyone pestered him about math or physics.

That's a great example. He also spent a long time at The Mint.

Isaac Newton is considered by some to be the greatest mathematician of all time and is regarded as the "Father of Calculus".

"Taking mathematics from the beginning of the world to the time when Newton lived, what he has done is much the better part." - Gottfried Leibniz


"Newton was not the first of the age of reason. He was the last of the magicians, the last of the Babylonians and Sumerians, the last great mind which looked out on the visible and intellectual world with the same eyes as those who began to build our intellectual inheritance rather less than 10,000 years ago. Isaac Newton, a posthumous child born with no father on Christmas Day, 1642, was the last wonderchild to whom the Magi could do sincere and appropriate homage." - John Maynard Keynes

"Researchers in England may have finally settled the centuries-old debate over who gets credit for the creation of calculus.

For years, English scientist Isaac Newton and German philosopher Gottfried Leibniz both claimed credit for inventing the mathematical system sometime around the end of the seventeenth century.

Now, a team from the universities of Manchester and Exeter says it knows where the true credit lies — and it's with someone else completely.

The "Kerala school," a little-known group of scholars and mathematicians in fourteenth century India, identified the "infinite series" — one of the basic components of calculus — around 1350."



That story's by non-experts and sounds like it's based on a press release. There were basic components of calculus well before that too: https://en.wikipedia.org/wiki/History_of_calculus

However, calculus proper (derivatives and integrals of general functions, and the connections between them) did not exist until Newton and Leibniz. Other mathematicians made important steps towards it earlier in the 1600s, and if Newton and Leibniz had not existed, others would have figured it out around the same time.

These are interesting articles that seem to agree with what I said. The first one defines calculus in a much more limited way, and refers to some of the earlier basic components I mentioned.

I'm not a historian, but a few months ago I spent some time analysing one of Fibonacci's trigonometric tables (chords, not sine or sine-differences). Aryabhata's sine-differences were much earlier.


_Never At Rest_ by Richard Westfall is the authoritative biography on him.

Very true. Newton was an alchemist first and foremost and spent the vast majority of his time practicing alchemy rather than -what today one would call- science. One has to wonder what private reasons/results a genius of his magnitude had, in order to do that.

This little-known fact is so embarrassing to some institutions [2] that they made up a new word, "chymistry", in order to further obscure the issue and not outright admit the obvious.

[1] http://www.newtonproject.ox.ac.uk/texts/newtons-works/alchem...

[2] https://webapp1.dlib.indiana.edu/newton/project/about.do

[3] https://www.amazon.com/Newton-Alchemist-Science-Enigma-Natur...

> One has to wonder what private reasons/results a genius of his magnitude had, in order to do that.

Is there a reason to expect that someone who wanted to investigate the laws of the composition and reactivity of matter, in the late 1600s/early 1700s, would end up studying chemistry rather than alchemy? Sure, Boyle had introduced “chemistry” as an idea in 1661 (before Newton was born), but I imagine that alchemy would still be quite active in the late 1600s as an academic “field”, with many contributors already late in their careers studying it; whereas chemistry would have been just getting off the ground, without many potential collaborators.

Alchemy was never an academic field. It was a tradition veiled in secrecy, requiring years of private work and knowledge transmission through strict and very narrow (typically teacher-student) channels.

Your point has been brought up before -usually as an attempt by established institutions to whitewash and explain away Newton's idiosyncrasies- but there is no evidence whatsoever to back it. On the contrary, what we know (and there is a lot we do know thanks to his writings) about Newton and alchemy absolutely indicates him being immersed in the Hermetic worldview and alchemical paradigm. Clearly, Newton was practicing alchemy not as a way to look for novel techniques or as a way to bridge the old and new worlds together, but primarily because he was a devout believer.

Newton -a profound genius- stood at the threshold of two worlds colliding. He was also a groundbreaking scientist in optics/mechanics/mathematics. He was aware of Boyle's chemical research. Knowing all of that, he _absolutely_ chose to dedicate his life to alchemy. That is immensely interesting.

"Much of Newton's writing on alchemy may have been lost in a fire in his laboratory, so the true extent of his work in this area may have been larger than is currently known. Newton also suffered a nervous breakdown during his period of alchemical work, possibly due to some form of chemical poisoning (perhaps from mercury, lead, or some other substance)."


Can you expand on why you think his Wikipedia article refutes what I said?

(Not OP) I don't think it does. It backs you up (barring quibbles on what you mean by "most"; years active or hours spent): "Beyond his work on the mathematical sciences, Newton dedicated much of his time to the study of alchemy and biblical chronology".

Very few, at least for STEM fields. If you look at notable scientists in any given field, their main contributions were in their expertise area before the thing that made them famous. Teller had already made serious contributions to physics before the atom bomb. Jennifer Doudna (CRISPR-Cas9) was the first to see the structure of RNA (except for tRNA) using an innovative crystallographic technique. Planck is mainly known for quantum physics, but made huge contributions to the field in general.

It's hard to think of many famous scientists that weren't already well known in their field. Some stand out. Einstein, for example, had a fairly lackluster career until his Annus Mirabilis papers. Mark Z. Danielewski (House of Leaves) bounced to and from various jobs. But largely, the idea of the brilliant outsider is like the 10x engineer. It exists, but is rare.

I would not say even Einstein lacked formal training. He had been in and around academia for most of his life. He was obviously far ahead of the curve, but he did accumulate the formal training. His stint in a regular job was more of an anomaly than his affinity for academia and physics.

Right, even Einstein had some serious academic training and mathematical chops. But I would argue that he was a bit of a wild card, because he was unable to secure a teaching position and looked very mediocre from an academic perspective. But fair point, even the geniuses had formal training and instruction.

I like that you put Danielewski in (almost) same sentence as Einstein. HoL is a stroke of genius!

Not an extreme example, but Albert Szent-Györgyi is known for his work with Vitamin C, when his work on bioenergetics and cancer is more interesting and possibly more promising.

The way I see it, people like this when they have the time and inclination, should make an attempt.

You never know. Fresh eyes can sometimes see what others may not.

To be fair, a whole uninterrupted week of highly focussed work can get you pretty far (considering that you have the necessary background, which Carmack has, i.e. related to linear algebra, stats, programming, etc.)

Yes, but let's not assume that the hundreds of other scientists in the field have just been twiddling their thumbs the whole time. It is preposterous to assume that someone largely new to a highly specialized field can somehow start pushing the envelope within a week. Yes, JC is nothing short of brilliant, but these sorts of assumptions just set him up to disappoint and are also highly unfair to all the other hardworking, brilliant people in the field.

How many of them are doing real research, though? Corporate researchers improve ad impressions and academic researchers are busy generating pointless papers or they won't be paid. Very few, if any, do actual research.

If you look at papers from corporate AI researchers (FAIR, Google Brain, DeepMind, OpenAI, etc) they pretty much do whatever they want.

And I disagree violently. The DeepMind folks are on salary and every year they need to prove that they are worth the money. This applies to Demis himself: he needs to prove that his org deserves its gazillion dollars a year.

My point is they are not constrained to working on ads, or anything specific, and their work is not pointless.

They are constrained to problems with annual results though.

Generating papers is research. I don't understand why you dismiss all papers as pointless.

I don’t think all papers are pointless, but it’s been shown that many are not reproducible, so those are worthless and pointless. There was that guy a few months ago who tried to reproduce the results of 130 papers on financial forecasting (using ML and other such techniques) and found none of them could be reproduced — most were p-hacked or contained obvious flaws like leaking test data into the training data. An academic friend of mine who works in brain-computer interfacing also says that a large number of papers he reviews are borderline or even outright fraudulent, but many get published anyway because other reviewers let them through.

So I definitely wouldn’t dismiss all papers as pointless, but there certainly is a large percentage that are — enough that you can’t simply accept a published paper’s results without reproducing them yourself.

The need to generate publishable papers means that a researcher can only participate in activity that leads to such a paper. He can't try to work on an idea for 5 years, because if no big papers follow, he's toast (he'd probably lose funding long before that).

You have to earn the right to work on your idea for 5 years and get paid. Otherwise we would be funding all kind of crackpots. First you demonstrate you're a good researcher by producing good results. Then you can work on whatever you feel like (either by getting hired at places like DeepMind, or by finding funding sources that want to pay for what you want to work on).

This is what I meant. In our society, only a very few, usually already rich, can try their own ideas. Most of us have to stick with known ideas that bring profit to business owners or meaningful visibility to universities. When I was in college, I had to work on ideas approved by my professor. Now I have to work on ideas approved by my corporation. But if I had money, I'd work on something completely different. Sure, in 15 years I will be rich and can start doing my own stuff, but I'll also be old, and my ability will be nowhere near its peak at 25.

What would you work on if you could? Would you say you deserve to be paid for 5 years of uninterrupted research? Do you think you have a decent chance to make a breakthrough in some field? These are the questions I ask myself.

I have some interesting ideas about managing software complexity in general (i.e. why this complexity inevitably snowballs and how we could deal with it), and about a better way to surf the internet (which may be a really big idea, tbh). But all these are moonshot ideas that have a slim chance of success, while I need to pay ever-rising bills. On the other hand, I have a couple of solid money-making business ideas that I'm working on that will bring me a few tens of millions but will be of no use to society, and I have a fallback plan: a corporate job with outstanding pay, but that brings exactly nothing to this world (it's about reshaping certain markets to make my employer slightly richer).

Do I deserve to be paid for 5 years for something that may not work? "Deserving" something doesn't have much meaning: we, the humans, merely transform solar energy into some fluff like stadiums and cruise ships. Getting paid just means getting a portion of that stream of solar energy. There is no reason I need to "deserve it" as it's unlimited and doesn't belong to anyone. A better question to ask is how can we change our society so that all, especially young, people would get a sufficient portion of resources to not think about paying bills.

Chances to make a breakthrough are small, but that doesn't matter. It's a big-numbers game: if the chances are 1 in a million, we let 1 billion people try and see 1000 successes. The problem currently is that we have these billions of people, but they are forced by the silly constraints of our society to nonstop solve fictional problems like paying rent.

When you have tenure, you can work on whatever you want for as long as you want. Nobody works on an idea for five years without publishing anything, though. Progress is made step by step.

Take Albert Einstein as an example, who arguably made one of the largest leaps in physics with his theory of general relativity. He never stopped publishing during that time.

When you have tenure, you can work on whatever you want for as long as you want

Not quite. When you are a professor, you essentially become a manager for a group of researchers. You don't really do research yourself. Therefore, your main obligation becomes finding money to pay these researchers. So in reality you can only support the research someone is willing to pay for (via grants, scholarships, etc).

Sometimes an outside perspective is just the ticket for getting past roadblocks that've stumped the experts. If any outsider could do this, it's John.

Sometimes, but mostly not.

They didn't suggest he invented some new technique.

Figuring out the basics of the math and how to use whatever tools they use at FB is doable in a week.

Huh? One week is more than enough to go through Siraj's videos.

Please tell me this is sarcasm.

It's HN so only the best sarcasm is allowed here. That is good sarcasm. Bask in it.

Source: Commenter name is DBZ character

Yeah, but we really need to know who would win in a lightsaber fight between Carmack and Jeff Dean.

I wonder what they would pick if each had to choose their weapon

funny i wanted to make the same comment last night but was too lazy.

wasn't the first time John did what he did. and it's not the usual kind of learning either. he was learning by first principles. i truly love this idea of replaying in your own mind what went on when something was discovered (or at least come close to it).

contrast that with how ML & AI are taught nowadays: thrown into a Jupyter notebook with all FAANG libraries loaded for you...

I'm not saying he's LeCun, I'm just saying he gets up to speed absurdly fast. So it's not unreasonable to suppose that by now, he's learned enough to start seriously contributing to this kind of problem.

edit: to be clear, all I'm saying is he can catch up to the body of research already out there quicker than the average bear, and he's shown a real knack for designing solutions and being crazy productive. I'm not pretending he's gonna be publishing insane novel research anytime soon, just that I wouldn't be surprised if he ends up being a real voice in the field.

No, you can’t push the envelope in AGI after a week in the woods. That must come off as pretty insulting to the hundreds of world class scientists who have been working in the field for decades.

I never said that, where the hell did I say he pushed anything? All I'm saying is he's shown to be insanely productive and effective and I think he can catch up to the body of research (created and shared by those hundreds of scientists) to become a real contributor very quickly.

FWIW, "seriously contributing to this kind of problem" sounds basically the same as "pushing the envelope" to me. They both suggest contributing something novel and useful.

They are basically the same thing.

What are not basically the same thing are "he started seriously contributing to this kind of problem after a week in the woods" and "he spent a week in the woods a year ago and is ready to start contributing now. A year after that week in the woods."

If you consider the quality of most academic research papers, then some insults are called for...

To be fair, they said proficient and not world class or inventing new material.

You seem to have a very blase understanding of scientific progress and genius. The fact that hundreds of world class scientists have been working in a field for decades does not at all mean that a genius can't come along and make groundbreaking progress. That's the very definition of genius, someone that makes a leap "off the path" that nobody before him could make.

Don't be absurd. You're acting like he's Neo from The Matrix, capable of downloading kung-fu directly into his brain.

Carmack is also a good grappler.

If he invents (births?) an AGI, will it be Facebook's property? Sounds like the beginning of a dystopian novel.
