Robotics has the same issues, but you spend all your time fussing with the mechanical machinery. Carmack is a game developer; he can easily connect whatever he's doing to some kind of game engine.
(Back in the 1990s, I was headed in that direction, got stuck because physics engines were no good, made some progress on physics engines, and sold off that technology. Never got back to the AI part. I'd been headed in a direction we now think is a dead end, anyway. I was trying to use adaptive model-based control as a form of machine learning. You observe a black box's inputs and outputs and try to predict the black box. The internal model has delays, multipliers, integrators, and such. All of these have tuning parameters. You try to guess at the internal model, tune it, see what it gets wrong, try some permutations of the model, keep the winners, dump the losers, repeat. It turns out that the road to machine learning is a huge number of dumb nodes, not a small number of complicated ones. Oh well.)
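For the curious, here is a rough sketch of that guess-tune-keep-the-winners loop; the "black box" plant and the two tuning parameters below are made up purely for illustration:

    import random

    rng = random.Random(42)                          # noise for perturbing parameters
    _r = random.Random(0)
    test_inputs = [_r.uniform(-1, 1) for _ in range(200)]   # recorded black-box inputs

    def black_box(us, t):
        # the "unknown" system: a delayed, scaled input plus a short running sum
        delayed = us[t - 3] if t >= 3 else 0.0
        return 0.7 * delayed + 0.2 * sum(us[max(0, t - 5):t + 1])

    def model(us, t, params):
        # our guessed internal model: same structure, unknown tuning parameters
        gain, leak = params
        delayed = us[t - 3] if t >= 3 else 0.0
        return gain * delayed + leak * sum(us[max(0, t - 5):t + 1])

    def prediction_error(params):
        return sum((black_box(test_inputs, t) - model(test_inputs, t, params)) ** 2
                   for t in range(len(test_inputs)))

    # guess, tune, keep the winners, dump the losers, repeat
    best, best_err = [0.0, 0.0], prediction_error([0.0, 0.0])
    for _ in range(500):
        cand = [p + rng.gauss(0, 0.1) for p in best]
        err = prediction_error(cand)
        if err < best_err:
            best, best_err = cand, err

    print("recovered parameters:", [round(p, 2) for p in best])   # should approach 0.7, 0.2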
What we really need is a scalable, distributed, physics pipeline so we can scale sims to 1000x realtime with billions of colliding objects. My guess is that Google/Stadia or Unity/UnityML are better places to do that work than Facebook, but if Carmack decides to learn physics engines* and make a dent I'm sure he will.
Until our environments are rich and diverse our agents will remain limited.
*What's more, I'm sure his knowledge already exceeds most people's.
Improbable tried to do that with Spatial OS. They spent $500 million on it. Read the linked article. No big game company uses it, because they cut a deal with Google so their system has to run on Google's servers. It costs too much there, and Google can turn off your air supply any time they want to, so there's a huge business risk.
Interestingly, companies like SideFX (with Houdini, for example) are also doing really interesting work in distributed simulations.
But that kind of realism is not needed for all AGI research.
I also spent some years on using evolutionary algorithms to evolve control networks for simple robots. The computational resources available at the time were rather limited though. It should be more promising these days, now that a commodity gaming PC can spew out in 30 minutes what back then took all the lab's networked machines running each night for a few weeks.
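Something in the spirit of this (a toy sketch; the 1-D reach-a-target task and the two-gain "control network" are stand-ins, not what the original work used):

    import random

    rng = random.Random(1)

    def fitness(genome, steps=100):
        # score a tiny "control network" (just two gains) on a toy 1-D
        # reach-the-target task; higher is better
        kp, kd = genome
        x, v, target, cost = 0.0, 0.0, 1.0, 0.0
        for _ in range(steps):
            u = max(-1.0, min(1.0, kp * (target - x) - kd * v))   # bounded actuator
            v += 0.1 * u
            x += 0.1 * v
            cost += (target - x) ** 2
        return -cost

    # evolve: evaluate the population, keep the best half, mutate to refill
    population = [[rng.uniform(0, 2), rng.uniform(0, 2)] for _ in range(20)]
    for generation in range(50):
        survivors = sorted(population, key=fitness, reverse=True)[:10]
        children = [[w + rng.gauss(0, 0.1) for w in rng.choice(survivors)]
                    for _ in range(10)]
        population = survivors + children

    print("best gains:", [round(w, 2) for w in max(population, key=fitness)])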
On the flip side, successful robotics concepts might have more chance of being relevant to AGI.
I don't think so. Game NPCs don't need AI, which would be way overkill; they just need to provide the illusion of agency. I think for general AI you need a field where any other option would be suboptimal or inadequate, but in videogames general AI is the suboptimal option... more cost effective is to just fake it!
> ... more cost effective is to just fake it!
Many players complain in story heavy games that their choices have no consequences to the story - this is largely because building stories with meaningful branches isn't economically feasible.
A game that could make NPCs react to what the player does dynamically while also creating a cohesive story for the player to experience would be absolutely groundbreaking in my opinion.
This is more in the realms of AI story generation but I haven't seen any work on this that generates stories you would ever mistake as coming from a human (please correct me if I'm wrong) so it would be amazing to see some progress here.
Story AI is basically having a writer sit down and write a branching story tree, with hand-authored writing the whole way. At best it's a manually coded directed acyclic graph.
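i.e. something like this, where every node is hand-written and the branches quickly converge (node names and text invented for illustration):

    # a hand-authored branching story is literally a small graph of written nodes
    story = {
        "intro":  {"text": "A stranger offers you a map.",
                   "choices": {"take it": "swamp", "refuse": "tavern"}},
        "swamp":  {"text": "The map leads you into the swamp.",
                   "choices": {"press on": "finale"}},
        "tavern": {"text": "You stay behind and hear a rumour instead.",
                   "choices": {"follow the rumour": "finale"}},
        "finale": {"text": "Either way, the branches converge here.", "choices": {}},
    }

    node = "intro"
    while story[node]["choices"]:
        print(story[node]["text"])
        node = next(iter(story[node]["choices"].values()))   # pretend the player picked the first option
    print(story[node]["text"])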
Tactical AI, i.e. having the bad guy soldiers move about the battlefield and shoot back at you in a realistic manner, is 100% about faking it. It's better to despawn non-visible, badly placed enemies and spawn well placed non-visible enemies than to have some super smart AI relocate the badly placed enemies into better positions. It's better to have simple mechanisms that lead to hard-to-understand behavior than complex mechanisms that lead to instinctive behavior.
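A toy sketch of that spawn/despawn trick (1-D positions, illustrative names and data structures only):

    # despawn badly placed enemies the player can't see, and respawn
    # replacements at good, currently unseen spots
    MAX_ENEMIES = 4

    def can_see(player_pos, pos, view_range=10):
        return abs(player_pos - pos) <= view_range

    def update_enemies(enemies, player_pos, good_spots):
        # enemies is a list of (position, badly_placed) tuples
        enemies = [(p, bad) for (p, bad) in enemies
                   if not (bad and not can_see(player_pos, p))]      # quiet despawn
        for spot in good_spots:
            if len(enemies) < MAX_ENEMIES and not can_see(player_pos, spot):
                enemies.append((spot, False))                        # well-placed respawn
        return enemies

    print(update_enemies([(30, True), (5, False)], player_pos=0, good_spots=[12, 40]))
    # -> [(5, False), (12, False), (40, False)]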
There was an amazing presentation at GDC maybe 3 years ago that perfectly articulated this. The game was something about rockets chasing each other. I wish I could find the link.
That's not entirely true - it's just that no games studios are willing to compromise on graphics and art for something silly like the ability to impact the game world.
I think they don't exist because it's an exceptionally difficult problem, even for games with lo-fi graphics or text only. I've found it hard to find any AI projects that generate stories or plots that are remotely compelling.
Big studio game companies push "your choices matter" as a selling point as well, but few deliver.
You also have to consider whether the complaints of "many" players matter when publishing a game. A percentage of vocal players will complain no matter what. Yes, they will complain even if you somehow implement true AI!
Maybe, but it would be an impressive demonstration of AI that would be very different from what has been shown for Go, Chess and StarCraft.
I think a compelling AI-written short story, for example, would be leagues ahead of what is required to write a convincing chatbot: e.g. you need an overarching plot, subplots, multiple characters interacting in the world, tracking characters' beliefs and knowledge, and tracking what the reader must be thinking/feeling.
It would likely rely a lot on understanding real-world and culture knowledge though - Go and StarCraft are much cleaner in comparison.
> A percentage of vocal players will complain no matter what.
Yep, but I can't think of a single game that has a plot that meaningfully adapts to how the player plays. Either there's many endings but the path to get to each is short, or all the choices converge quickly back into the same path.
Again, please correct me if I'm wrong, but I've looked quite hard for examples of innovation in the above recently and haven't found much. You can find papers on e.g. automated story generation or game quest generation on Google Scholar from the last 10 years, but the examples I found weren't that compelling.
Of course a hypothetical "Turing Test" of fiction-writing might be able to fool some people, and in an age where Netflix has been accused of producing content "by algorithm" this seems increasingly possible, but...
... what is "true" or "good" fiction is up for debate. In fact, it's a debate that can never be settled, because there is no right answer except how it feels to you, your friends and the authors you respect.
But that said, I seriously doubt it would fool me, and I think it won't be within reach of an AI any time soon, or ever, not without creating an artificial human being from scratch. And maybe not even then, because how many real people can write compelling fiction anyway? :)
So it feels like you should be able to procedurally generate at least something by combining common story arcs, templates, character archetypes, etc. without too much effort, but I've yet to find any compelling examples of this anywhere. When you look into the problem more, you realise it's a lot harder than it seems.
We've seen lots of examples of chat bots that are said to pass the Turing Test but really aren't that intelligent at all, so a "Turing Test of fiction writing", as you put it, sounds like a super interesting next step to me.
I struggle to see the distinction. Isn't the Turing test defined as 'faking humans (or human intelligence) convincingly enough'?
There is a saying: the benefit of being smart is that you can pretend to be stupid. The opposite is more difficult.
I think the Turing Test is no longer thought of as an adequate metric for general AI (if it ever was to begin with).
In some sense you can think of interfacing w/ the online world + trying to win attention to yourself as the kind of general game that is being played.
This area is under-studied. The logicians spent decades on the high level planner part. The machine learning people are mostly at the lower and middle vision level - object recognition, not "what will happen next". There's a big hole in the middle. It's embarrassing how bad robot manipulation is. Manipulation in unstructured situations barely works better than it did 50 years ago. Nobody even seems to be talking about "common sense" any more.
"Common sense" can be though of as the ability to predict the consequences of your actions. AI is not very good at this yet, which makes it dangerous.
Back when Rod Brooks did his artificial insects, he was talking about jumping to human level AI, with something called "Cog". I asked him "You built a good artificial insect. Why not go for a next step, a good artificial mouse?" He said "Because I don't want to go down in history as the man who created the world's best artificial mouse".
Cog was a flop, and Brooks goes down in history as the inventor of the mass market robot vacuum cleaner. Oh well.
In a sense, the journey was the reward rather than the very unlikely short term outcome back then.
The human brain can model the physics of a ball in flight, accurately and quickly. As the ball touches the finger tips it makes the smallest adjustments, again in tiny fractions of a second.
What makes me think of it like that is hearing about how the brain is actually really bad at predicting the path of things that don't act like that. This was in the context of aiming unguided rocket launchers (I end up reading a lot of odd things). It seems the brain is really bad at predicting how a continuously accelerating projectile will travel, and you have to train yourself to ignore your intuitions and use the sighting system that compensates for how it actually travels in order to hit a target with the thing.
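Some toy numbers for that point (illustrative values only): a thrown ball covers ground linearly while a projectile under constant thrust covers it quadratically, so intuition calibrated on balls badly misjudges the hold-over.

    # both objects leave at 20 m/s, but one keeps accelerating at 30 m/s^2
    # the way a rocket under thrust does
    g, v0, a = 9.8, 20.0, 30.0

    for t in (0.5, 1.0, 1.5):
        ball_x   = v0 * t                        # grows linearly
        rocket_x = v0 * t + 0.5 * a * t ** 2     # grows quadratically
        drop     = 0.5 * g * t ** 2              # gravity drop over the same time
        print(f"t={t}s  ball {ball_x:5.1f} m   rocket {rocket_x:5.1f} m   drop {drop:4.1f} m")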
Compare what happens during a practice game of catch between six year old, first time Little Leaguers vs. MLB starters.
If I decide to kick it, he reads my body language scarily well to figure out what direction it will probably go, and will adjust his position way ahead of time. If I throw it at a wall he will run to where the angle will put the ball after it bounces. If I throw it high in the air he knows where to run almost immediately (again using my body language to know where I might be trying to throw it.). He’s very hard to fool, too, and will learn quickly to not commit to a particular direction too quickly if it looks like I’m faking a throw.
I always feel like he’d make a great soccer goalie if he had a human body.
There are counterexamples, such as AlphaGo which is all about planning and deep thinking. It also combines learning with evolution (genetic selection).
We don't need to think 10 "turns" ahead when trying to walk through a door, we just try to push or pull on it. And if the door is locked or if there's another person coming from the opposite side we'll handle that situation when we come across it.
Doors are basically planning triggers more than many things.
Can you expand on this statement? While I have no way to “debug” a horse’s brain in real-time, my experiences suggest they absolutely conduct complex decision-making while engaging in activities.
Two examples which immediately come to mind where I believe I see evidence of “if this, then that” planning behavior:
1. Equestrian jumping events; horses often balk before a hurdle
2. Herds of wild horses reacting to perceived threats and then using topographic and geographic features to escape the situation.
> intelligence is mostly about getting through the next 10-30 seconds of life without screwing up
In this context horses don't plan or have much capacity for shared learning, at least not as far as I know.
Quote: “This study indicates that horses do not learn from seeing another horse performing a particular spatial task, which is in line with most other findings from social learning experiments,”
This is probably a variant of Andrew Ng's affirmation that ML can solve anything a human could solve in one second, with enough training data.
But intelligence actually has a different role. It's not for those repeating situations that we could solve by mere reflex. It's for those rare situations where we have no cached response, where we need to think logically. Reflex is model-free reinforcement learning, and thinking is model-based RL. Both are necessary tools for making decisions, but they are optimised for different situations.
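A toy contrast, if it helps make the distinction concrete (purely illustrative, not a claim about how brains implement either):

    # "reflex" = cached action values learned from repetition;
    # "thinking" = simulating consequences with an internal model before acting
    ACTIONS = ["push", "pull"]

    def outcome(state, action):
        # toy world: the unlocked door only opens if you pull it
        return 1.0 if (state == "unlocked" and action == "pull") else 0.0

    # model-free: repeat the situation many times, cache what worked
    q = {a: 0.0 for a in ACTIONS}
    for _ in range(100):
        for a in ACTIONS:
            q[a] += 0.1 * (outcome("unlocked", a) - q[a])
    reflex_choice = max(ACTIONS, key=q.get)

    # model-based: no cache needed, just roll the internal model forward once
    def internal_model(state, action):
        return outcome(state, action)            # here the model happens to be perfect
    planned_choice = max(ACTIONS, key=lambda a: internal_model("unlocked", a))

    print(reflex_choice, planned_choice)         # both "pull", reached very differently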
They will also open a gate to let another horse out of their stall which I would count as some form of planning.
Beyond that I can't think of anything in all the years around them. They can manage to be surprised by the same things every single day.
Sounds like most human beings, given an unpleasant stimulus, for example a spider.
I am pleasantly surprised by how quickly they have been tackling big new decision spaces.
OpenAI has already done some experiments here. All the way down at the bottom, under the "surprising behaviors" heading, 3 of the 4 examples involve the AIs finding bugs in the simulation and using them to their advantage. The 4th isn't a bug exactly, but a (missing) edge case in their behavior not initially anticipated.
There's a need gap to solve Simulation Sickness in VR and First Person games.
Years ago John said if you have 20k and a dedicated room you can make a convincing vr experience that won't make anyone sick.
While it's true that game AI is often held back by game design decisions, it's not true that technology isn't holding us back in this area as well.
 https://www.youtube.com/watch?v=gm7K68663rA (GDC Talk: Goal-Oriented Action Planning: Ten Years of AI Programming)
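For anyone who hasn't watched it, the technique in the talk boils down to something like this minimal sketch (facts and actions invented for illustration; real GOAP implementations use action costs and A* rather than plain breadth-first search):

    # minimal goal-oriented action planning: world state as a set of facts,
    # actions with preconditions and effects, BFS for a plan
    from collections import deque

    ACTIONS = {
        "draw_weapon":   ({"has_weapon"},                {"weapon_drawn"}),
        "move_to_cover": (set(),                         {"in_cover"}),
        "attack":        ({"weapon_drawn", "in_cover"},  {"target_dead"}),
    }

    def plan(start, goal):
        queue = deque([(frozenset(start), [])])
        seen = {frozenset(start)}
        while queue:
            state, steps = queue.popleft()
            if goal <= state:                      # all goal facts satisfied
                return steps
            for name, (pre, effects) in ACTIONS.items():
                if pre <= state:                   # preconditions met
                    nxt = frozenset(state | effects)
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append((nxt, steps + [name]))
        return None

    print(plan({"has_weapon"}, {"target_dead"}))
    # -> ['draw_weapon', 'move_to_cover', 'attack']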
Games like PvE MMO's need to find a way to produce engaging content faster than it can be consumed at a pricepoint that is economically viable. The way they do it now is by having the players repeat the same content over and over again with a diminishing returns variable reward behavioral reinforcement system.
You have to hit a spot where they are sometimes a bit surprising, but not in a way that cannot be reacted to quickly on your feet. This throws realism out of the window.
Plenty of games have NPCs with scripted routines, dialog, triggers, etc that could be improved either by reducing the dev cost to generate them without reducing quality or reacting to player behavior more naturally.
Don't forget there is a certain randomness with 'more natural' and with randomness you're going to invite Murphy to the party.
A weapons maker with a unique backstory and realistic conversations that reference it is more interesting than a bot, and opens up the possibility of unscripted side-quests.
Some significant part of gaming is risk-free experimentation in a simulated world. The experiments possible are bounded by the simulation quality of the world. More realistic NPC behavior would open up a lot more games.
You would see these factions fighting and gaining/losing territory throughout the game. You could chose to help them or just pass on by, but the actions progressed regardless of your choice.
That's part of it, but there are other factors too. The more complex the AI, the harder (i.e. more expensive) the game is to tune and test. Game producers and designers are naturally very uncomfortable shipping a game whose behavior they can't reasonably predict.
This is a big part of why gamers always talk about loving procedural generation in games but so few games actually do it. When the software can produce a combinatorial number of play experiences, it's really hard to ensure that most of the ones players will encounter are fun.
The game may even be played by saying things on Twitter and becoming interesting enough that people DM you and try to build a relationship with you, while you're a bot.
I mean: maybe it's more efficient to have it read all of wikipedia really well before adding all the other noisy senses.
It is nowhere near good enough to avoid running into Moravec’s Paradox like a brick wall as soon as you try and apply it outside the simulator.
Now AlphaGo and its implementation framework are much more sophisticated than Deep Blue. It's actually a framework for making single-task solvers, but that's all. The fact that it can make more than one single-task solver doesn't make it general in the sense we mean in the term AGI. AlphaGo didn't learn the rules of Go. It has no idea what those rules are; it's just been trained through trial and error not to break them. That's not the same thing. It's not approaching chess or Go as an intelligent thinking being, learning the rules and working out their consequences. It's like an image classifier that can identify an apple, but has no idea what an apple is, or even what things are.
To build an AGI we need a way to genuinely model and manipulate objects, concepts and decisions. What's happened in the last few decades is we've skipped past all that hard work, to land on quick solutions to specific problems. That's achieved impressive, valuable results but I don't think it's a path to AGI. We need to go back to the hard problems of “computer models of the fundamental mechanisms of thought.”
There are indeed some people who learn chess by "reading the manual". Or learn a language by memorizing grammar rules. Or learn how to build a business by studying MBA business theories.
There are also tons of other people who do the opposite. They learn by simply doing and observing. I personally have no idea what an "adverb" is, but people seem perfectly happy with the way I write and communicate my thoughts. Would my English skills count as general intelligence, or am I just a pattern-recognition automaton? I won't dispute the pattern-recognition part, but I somehow don't feel like an automaton.
I can certainly see the potential upsides of learning some theory and reasoning from first principles. But that seems too high a bar for general intelligence. I would argue that the vast majority of human decisions and actions are made on the basis of pattern recognition, not reasoning from first principles.
One last note: "working out their consequences" sounds exactly like a lookahead decision tree.
The thing is those are parts of our neurology that have little to do with general intelligence. I think it's becoming clear that they are merely cognitive tools we bring to bear in the service of our higher level general intelligence machinery. In that sense high level Go and Chess players turn themselves into single-task solvers. They're better at bringing that experience and capability to bear in other domains, because they have general intelligence with which to do so, but those specialised capabilities aren't what make them a being with general intelligence. Or if specialising systems are important to general intelligence, it's as just a part of a much broader and more sophisticated set of neurological systems.
Here is my strongest prediction:
AGI is only possible if the AGI is allowed to cause changes to its inputs.
Current ML needs to be geared towards attention mechanisms and more Boltzmann net / finite/infinite impulse response nets.
Could you elaborate on this point?
Do you mean that the AGI could change the source of inputs, or change the actual content of those inputs (e.g. filtering), or both?
And why do you think this is a critical piece?
I think I think, but might I just be using a single problem solver that gives the appearance of thinking?
Edit: Which kind of explains the failure of good-old fashioned symbolic AI as it was modelling the wrong thing.
[NB I worked in good-old-fashioned AI for a number of years]
When given a problem it has never seen before, it was able to acquire knowledge of the problem and then apply that knowledge to solve the problem. That's the definition of learning and intelligence that can generally be applied to any problem.
> I think it's becoming clear that they are merely cognitive tools we bring to bear in the service of our higher level general intelligence machinery.
What are you on about? Cognition and intelligence are the same thing: if it's capable of cognition, or as you put it applying "cognitive tools", then it's capable of intelligence.
It can't be applied to any problem though. Take the example I gave elsewhere of a game where you provide the rules, and as the game progresses the rules change. There are real games that work like this, generally card games where the cards contain the rules, so as more cards come into play the rules change. AlphaZero cannot play such games, because there isn't even a way to provide it with the rules.
>> I think it's becoming clear that they are merely cognitive tools we bring to bear in the service of our higher level general intelligence machinery.
>What are you on about? Cognition and intelligence are the same thing: if it's capable of cognition, or as you put it applying "cognitive tools", then it's capable of intelligence.
I'm saying that human minds apply many cognitive tools, and that Alphago is like one of those tools. It's not like the part choosing and deploying those tools, which is the really interesting and smart part of the system.
The human brain consists of a whole plethora of different cognitive mechanisms. Cognition is a broad term of a huge variety of mechanisms, none of which by themselves constitute all of intelligence. A lot of people look at Alphago and say aha, that's intelligence because it does something we do. Yes, but it only does a tiny, specialist fragment of what we do, and not even one of the most interesting parts.
If an AI shows the same capabilities as the average human being, I would say that is AGI by definition. Regardless of whether it meets the requirement for Platonic Knowledge.
But on the other hand if you get into rote memorization before you start the game it’s going to slow you down by having no context.
It's certainly not the most efficient way to use our current hardware, and it's not clear to me how big some of these neural nets would have to be, but if we had computers with a trillion times the memory capacity and speed, IMO it'd certainly work on some level.
Imagine playing a game of Chess in which the pieces and rules gradually changed bit by bit until by the end of the game you were playing Go. That's much closer to what real life problems are like and a human child could absolutely do that. They might not be much good at it, but they could absolutely do it even without ever having played either game before, just learning as they went. Note to AGI researchers: if your chatbot can't cope with that or a problem like it without any forewarning, don't bother applying for a Turing Test with me on the other side of the teletype.
For humans, the more previous ones we know about, the better, because we have more chance of applying a model that works in the new environment. That's called "experience".
I've seen people apply their normal behaviour to situations that have changed, and then get totally confused (and angry) as to why the result isn't the same. Observe anyone travelling in a new country for examples ("why don't they show the price with the sales tax included here? This is ridiculous!").
In a perfect world, sure, we'd construct a rational mental model of a new situation and test it carefully to ensure it matched reality before trusting it, and then apply it correctly to the new situation. But it's not a perfect world, and people don't actually do that. Usually we charge in and then cope with the results.
Of course, I'm not saying that AI should do that. It'll be interesting to see how a "good" general AI copes with a genuinely new situation.
It would be nice if it worked like that, but I think you're massively underestimating the problem set here. I'd suggest it's more like the difference between the architectural glue one needs as an engineer writing a command-line util and what's needed for a fully fledged Enterprise solution (i.e. orders of magnitude more).
Of course because we don't actually know how intelligence exactly works we're both guessing here.
As others have mentioned here though.. this becomes horrifying if we've created something sentient to kill in games or enslave.
It may be a brutal struggle, but perhaps that struggle is important. Perhaps having a simulated tree fall on you is more meaningful than being reaped by some objective function at the end of an epoch.
edit: wrong book ;)
Isn't that what genetic algorithms are?
I mean, yes, you killed a sentient being. But if that sentient being has a thousand concurrent lives, then what does "killing" one of those lives even mean? And if it can respawn another identical life in a millisecond, does it even count as killing?
I suspect that having sentient virtual entities will provide philosophy and ethics majors a lot of deep thinking room. As it already has for SciFi authors.
What if AGI is just a combination of very highly advanced single-task solvers?
I happen to believe that it is an emergent behavior once the complexity gets high enough, so AGI might just be a (large?) collection of AlphaGo solvers connected to different inputs.
Kind of like humans then.
As adults we know what an apple is because we understand it as a concept, the ideal "apple", and can manipulate the concept into areas way outside the original concept (say, the phrase "apple of my eye").
All they know is how to recognize a common pattern on a pixel grid, after seeing a large number of examples, and then draw a box around it.
The fact that a child has a body and can manipulate the world with all 5 senses working in concert should not be underestimated.
Very quickly (assuming said child doesn't eat something too bad), in the absence of an external oracle, the child learns a very productive mental model of what an apple is.
This type of feedback loop seems eminently translatable to machine learning, assuming we can encode the concept space in a way that allows the model to be encoded and trained within a reasonable set of constraints.
The child develops concepts and is able to create and evaluate inferences, and thus able to understand metaphors etc.
The concept is what most AI approaches lack. Google's image search can identify apples, and cherries, and probably can categorize both as fruits, but it can't infer that this probably contains seeds, is a living being, etc.
As I have an academic background in learning theory and developmental psychology, I'm pretty pessimistic about the current AI trend, autonomous driving etc. Most smart people in the field have been chasing what are effectively more efficient regression functions for over 60 years now, and I almost never stumble upon approaches that have looked at what we know about actual human learning processes, development of the self etc.
Moravec's paradox IMO should have been an inflection point for AI research. This is the level of problems AI research has to tackle if it ever wants to create AGI.
Sounds like the start of a truly horrifying Black Mirror episode
That episode already exists.
>The Perky Pat Layouts itself is an interesting concept. Here's Dick, in the early 60's, coming up with the idea for virtual worlds. I mean, Second Life and other virtual worlds are just a mapping of the Perky Pat Layouts onto cyberspace. Today Facebook acts like the PP Layouts, taking people's minds off toil and work and letting them engage others in a shared virtual hallucination -- you're not actually physically with your friends, and they might not even be your friends.
>Dick’s description of the Can-D experience is essentially a description of virtual sex:
>“Her husband -- or his wife or both of them or everyone in the entire hovel -- could show up while he and Fran were in the state of translation. And their two bodies would be seated at proper distance one from the other; no wrong-doing could be observed, however prurient the observers were. Legally this had been ruled on: no co-habitation could be proved, and legal experts among the ruling UN authorities on Mars and the other colonies had tried -- and failed. While translated one could commit incest, murder, anything, and it remained from a juridicial standpoint a mere fantasy, an impotent wish only.”
>Another character says “when we chew Can-D and leave our bodies we die. And by dying we lose the weight of -- ... Sin.”
Starting this week, I’m moving to a "Consulting CTO” position with Oculus.
I will still have a voice in the development work, but it will only be consuming a modest slice of my time.
As for what I am going to be doing with the rest of my time: When I think back over everything I have done across games, aerospace, and VR, I have always felt that I had at least a vague “line of sight” to the solutions, even if they were unconventional or unproven. I have sometimes wondered how I would fare with a problem where the solution really isn’t in sight. I decided that I should give it a try before I get too old.
I’m going to work on artificial general intelligence (AGI).
I think it is possible, enormously valuable, and that I have a non-negligible chance of making a difference there, so by a Pascal’s Mugging sort of logic, I should be working on it.
For the time being at least, I am going to be going about it “Victorian Gentleman Scientist” style, pursuing my inquiries from home, and drafting my son into the work.
Runner up for next project was cost effective nuclear fission reactors, which wouldn’t have been as suitable for that style of work.
We're at 500 comments at the time of posting this, and no-one's pasted his post in full to save us having to visit Facebook...
As long as his next project is not teleportation gateways I guess we are safe.
 it's also 2019 :D
> In the year 2019, the player character (an unnamed space marine) has been punitively posted to Mars after assaulting a superior officer, who ordered his unit to fire on civilians. The space marines act as security for the Union Aerospace Corporation's radioactive waste facilities, which are used by the military to perform secret experiments with teleportation by creating gateways between the two moons of Mars, Phobos and Deimos. In 2022, Deimos disappears entirely and "something fraggin' evil" starts pouring out of the teleporter gateways, killing or possessing all personnel.
We shall all be woven into the fabric of the new artificial reality.
1. Biological brains are non-differentiable spiking networks much more complicated than backpropagated ANNs.
2. Ion channels may or may not be affected by quantum effects.
3. The search space is huge (but organisms aren't optimal and natural selection is probably local search)
4. If it took ~3.8b years to get from cells to humans, how do we fast-forward:
* brain mapping (replicating the biological "architecture")
* gene editing on animal models to build tissues and/or brains that can be interfaced (and if such interface could exist how do we prevent someone from trying to use human slaves as computers? Using which tissues for computation is torture?)
* simulation with computational models outside of ECT (quantum computers or some new physics phenomenon)
Note: those 3.8b years are from a cell to human. We haven't built anything remotely similar to a cell. And I'm not claiming that an AGI system will need cells or spiking nets, most likely a lot of those are redundant. But the entropy and complexity of biological systems is huge and even rodents can outperform state of the art models at general tasks.
IMHO, the quickest path to AGI would be to focus on climate change and making academia more appealing.
Rodents? Try insects. In the late 40s and early 50s, when neural networks were first explored with great enthusiasm, some of the leading minds of that generation believed (were convinced, in fact) that artificial intelligence (or AGI in today's terms) was five/ten years away; the skeptics, like Alan Turing, thought it was fifty years away. Seventy years later and we've not achieved insect-level intelligence, we don't know what path would lead us to insect-level intelligence, and we don't know how long it would take to get there.
To those saying that insects or rodents can't play Go or chess -- they can't sort numbers, either, and even early computers did it better than humans.
They are creepy smart.
Something about the predatory nature of both insects seems to tune up their intelligence. Of course it never hurts having the BBC tell your story either.
Yep. To be a predator, you need to outwit your prey and think fast, so it's thought to be a natural INT grinder. `w´
Presumably, this could drive up the INT of prey too, but maybe it's cheaper to just be faster/harder to see? But you can't be THAT hard to see, and the speed only saves you in failed ambushes, so planning successful ambushes continues to reward the INT of predators (unless they just enter the speed arms race, like cheetahs or tiger beetles).
They probably can, internally; they just can't operate on tokens we recognize as numbers explicitly. For a computer analogy, take Windows Notepad - there's probably plenty of sorting, computing square roots and linear interpolation being done under the hood in the GUI rendering code - but none of that is exposed in the interface you use to observe and communicate with the application.
I think you'd be surprised how much progress is also being made outside those two factors. It's sort of like saying graphics only improve with more RAM and faster compute. We know there's more to it than that.
In many cases, the cutting edge of a few years ago is easily bested by today's tutorial samples and 30 seconds of training. We're doing better with less data and orders of magnitude less compute.
An illustrative example comes from the first lesson in fastai's deep learning course: an image classifier that would have been SOTA as late as 2012/13, can be built by the hobbyist in like 30 seconds.
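Roughly the recipe, from memory of the fastai v1 course; exact names and signatures vary between fastai versions, so treat this as a sketch rather than copy-paste code:

    # "lesson 1"-style pet-breed classifier via transfer learning
    from fastai.vision import *

    path = untar_data(URLs.PETS) / 'images'
    data = ImageDataBunch.from_name_re(
        path, get_image_files(path), r'/([^/]+)_\d+.jpg$',
        ds_tfms=get_transforms(), size=224, bs=64).normalize(imagenet_stats)

    learn = cnn_learner(data, models.resnet34, metrics=error_rate)   # ImageNet-pretrained backbone
    learn.fit_one_cycle(4)                                           # a few minutes on a consumer GPU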
That said, I don't disagree that this is all narrow AI, at best.
The key, of course, is redefining life and intelligence as whatever the current state-of-the-art accomplishes. (Cue explanations that the brain is just a giant pattern matcher.) It makes drawing parallels and prophesying advancements so much easier. Of all our sciences, that's perhaps the one thing we've perfected--the science of equivocation. And we perfected it long ago; perhaps even millennia ago.
Rodents can't play Go or do a lot of other humanly-meaningful tasks. We don't need to build an artificial cell. A cell is too many components that by blind luck happened to find ways to work together; this is as far from efficient design as can be. The same way we don't build two-legged airplanes, we don't need anything that's close to the wet spiky mess that happens in human brains. It's more likely that we have all the ingredients already in ML, and we need to connect them in an ingenious way and amp up the parallelism.
What about pigeons predicting breast cancer with 99% probability, rats learning to drive cars, monkeys building tools?
Rodents stand a bigger chance at learning Go than AlphaZero spontaneously building stone tools and driving cars.
AlphaZero is also capable of playing Chess, Shogi and Go at a super-super-human level.
> pigeons predicting breast cancer with 99%
pigeons contain 340M neurons (with dendrites and all, giving them higher computational capacity than ANN units).
> Rodents stand a bigger chance at learning Go
They probably don't; probably because they can't understand the objective function and their brain capacity is limited.
We don't have anything remotely close to a wetware-enabled transportation device, something that can move on flat land, climb mountains, swim in bodies of water, crawl in caves, hide in trees.
Within the constrained problem, the machine exceeds humans. But generally, the wetware handles moving around much better.
Same with AI: in a constrained problem, the AI can excel (beat humans in chess and go). But I doubt we will see a general AI any time soon.
Human intelligence also evolved by solving constrained problems, one at a time. Life existed before the visual system, but once this was solved it moved on to do other things. In AI we have a number of sensory systems seemingly solved: speech recognition, visual object recognition, and we are getting close on certain output (motor) systems: NLP text synthesis systems seem a lot like the central pattern generators that control human gait, except for language. What seems to be missing is the "higher-level", more abstract kernels that create intent, which are also difficult to train because we don't have a lot of meaningful datasets. Or maybe we have too-big datasets (the entirety of Wikipedia) but we don't know how to encode it in a meaningful way for training. It's not clear however that these "integrating systems" are going to be fundamentally different to solve than other subsystems. It certainly doesn't seem to be so in the brain, since the neocortex (which hosts sensory, motor and higher-level systems) is rather homogeneous. In any case, it seems we're solving problems one after another without copying nature's designs, so it's not automatically true that we need to copy nature in order to keep solving more.
Do you have examples of those systems which are competitive in general use rather than specialized niches? The cloud offerings from Amazon, Google, etc. are good in the specific cases they’re trained on but fall off rapidly once you get new variants which a human would handle easily.
Can't tell if sarcasm.
>You assert that an area of physics or mathematics familiar to few neuroscientists solves a fundamental problem in their field. Example: "The cerebellum is a tensor of rank 10^12; sensory and motor activity is contravariant and covariant vectors".
So yeah, I feel that it's pretty fringe (as you suggested).
So it is plausible that nature may have evolved to be affected by quantum effects.
Actually it's not so obvious that the brain is not differentiable. If you do a cursory search, you'll find quite a lot of research into biologically plausible mechanisms for backpropagation. I'm not saying the brain does backprop; we just don't know, and it's not outside the realm of plausibility.
In a sense, everything is affected by quantum effects. However, with neurons, they are generally large enough that quantum effects do not dominate. Voltage gated channels are dozens to hundreds of amino-acids long. Generally, there are hundreds to millions of ion channels in a cell membrane and the quantum tunneling of a few sodium ions in or out of the cell will generally not affect gestalt behavior of the cell, let alone a nervous system's long term state. Suffice to say, ion channels are not dominated by quantum behavior.
Largely, we have the building blocks to replicate neurons (as we currently understand them) in silico. However, as is typical with modeling, you get out what you put in. Meaning that how you set your models up will mostly determine what they do. Setting your net size, the parameters of your PDEs, boundary values, etc. are the most important things.
Now, that gets you a result, and it's likely to take a fair bit of time to run through. To get it up to real time the limiting factor really ends up being heat. Silicon takes a LOT of energy as compared to our heads, ~10^4 more per 'neuron'. If we want to get to real time, we're gonna need to deal with the entropy.
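To give a sense of what "building blocks" means here, a minimal leaky integrate-and-fire neuron (parameter values are illustrative, not fitted to any real cell):

    # leaky integration of an input current, with spike-and-reset at threshold
    dt, t_max = 1e-4, 0.5                          # timestep and duration, seconds
    tau, v_rest, v_thresh, v_reset = 0.02, -0.065, -0.050, -0.065   # s, V, V, V
    r_m, i_in = 1e7, 2e-9                          # membrane resistance (ohm), input current (A)

    v, spikes = v_rest, []
    for step in range(int(t_max / dt)):
        v += (-(v - v_rest) + r_m * i_in) * dt / tau    # drift toward rest plus driven input
        if v >= v_thresh:                               # threshold crossing: spike and reset
            spikes.append(step * dt)
            v = v_reset

    print(f"{len(spikes)} spikes in {t_max} s")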
But, if this is a 100% replicated brain, doesn't that mean its suffering is just as real as a real brain's suffering, and therefore just as cruel? And if not, what's the difference?
Yes, it does.
This reminds me of the idea that free will doesn't exist, but that we have to act as if it were.
So by analogy to that, maybe the AI isn't really suffering, but you have to act as if it were.
More food for thought:
Some surgery blocks memory but can be incredibly painful. Do we need to worry about that? Is the suffering that the brain can not remember "real"?
Gene expression is often tied to the environment the organism is in. Mere possession of a gene isn't enough to benefit from it. Some expressions don't take effect immediately, but rather activate in subsequent generations.
Epigenetics is a whole equally large layer on top of this system. A single-focus approach may not be sufficient, and even if it is, it's not likely to cope with environmental entropy very well.
I understand gene to mean some ill-defined, not necessarily contiguous set of genetic sequences (DNA, RNA, and analogs) with an identifiable, particularized expression that affects reproductive (specifically, replicative) success. I think over time "gene" has been redefined and narrowed in a way to make it easier to claim to have made supposedly model-breaking discoveries.
Some others on HN have made strong cases for why epigenetics isn't a meaningful departure from the classic genetic model; just a cautionary tale for eager reductivists who would draw unsupported conclusions from the classic model. See, also, note #1.
Like what is language, what is intelligence? Some of the smartest linguists and philosophers would proudly declare they have no fucking clue.
Making Alexa turn on the lights or using Google Translate are cool party tricks though.
Idc how many Doom games ya made, but I’m sorry to say a bunch of software engineers aren’t gonna crack this one.
“to worship a phenomenon because it seems so wonderfully mysterious, is to worship your own ignorance” - https://www.lesswrong.com/posts/x4dG4GhpZH2hgz59x/joy-in-the...
Having no clue is not something to be proud (or ashamed) of.
> I’m sorry to say a bunch of software engineers aren’t gonna crack this one.
Doesn’t sound like you’re at all sorry, it sounds like you’re thrilling in putting these uppity tryhards in their place for daring to attack something you hold sacred.
As recently as his last Oculus Connect keynote, he expressed his frustration with having to do the sort of "managing up" of constantly having to convince others of a technical path he sees as critical. He's clearly the type that is happiest when he's deep in a technical problem rather than bureaucracy, and he likes moving fast.
On top of that, he likes sharing with the community with talks and such, and ever since going under the FB umbrella, he's had to clear everything he says in public with Facebook PR, which clearly annoyed him.
He's hungry for a new hard challenge. VR isn't really it right now since it's more hardware-bound by the need for hard-core optical research than software right now. With the Quest, he (in my opinion) solidified VR's path to mobile standalones. It's time to try his hand at another magic trick while he's on his game.
John's the very definition of a world-class, tried and true engineer/scientist. He's shown time and time again the ability to dive into a field and become an expert very quickly (he went from making video games to literally building space rockets for a good bit before inventing the modern VR field with Palmer).
If there's anyone I'd trust to both be able to dive into AGI quickly and do it the right(tm) way, it's John Carmack.
I wouldn't, however, bet against some kind of insanely clever development coming out of his new endeavor. Something like an absurdly efficient new object classifier, that reduces the compute requirements for self-driving cars by a non-trivial factor, would be a very Carmack thing.
The opportunity for a genius is to come in, synthesize all existing information on the subject, and then come up with a novel approach to the whole thing.
In some part, I think that is what Elon Musk has been able to do effectively. He comes into a field that already exists, reads everything he can get his hands on, and then outputs something novel. You can only do that effectively if you have the mental capacity to keep all that info in your head at once, I think.
I had the pleasure of meeting Carmack a few times over the years at small aerospace conferences. He's both as true a geek and as much of a gentleman as you might imagine. I'm really looking forward to seeing what he does with AGI.
Tenured ML professors at the top 100 or so universities in the world aren't "most of us". A very large chunk of these people are geniuses. Those jobs are incredibly hard to get, and most of these people are reading everything that is getting published, on an ongoing basis, and are outputting something novel, on an ongoing basis.
The fact that you think that John Carmack, because he's a name that you've actually heard of, is going to go into ML and suddenly make some giant advance that all the poor plebs in the field weren't able to do, is only a reflection of your misunderstanding of what's already happening in academia, not on Carmack's skills or abilities.
You're acting as though everyone are just low level practitioners using sklearn, and it would be a great idea to have some smart people work on developing something novel. Guess what: that's already happening, with incredibly smart people, on an incredibly large scale. Carmack doing it would just be another drop in the bucket.
If this research is as compute intensive as it seems to be,
Carmack's contribution might be that he increases the rate other researchers can add their drops to the bucket.
Carmack isn't the first techie to take on a big hard problem. Jeff Hawkins, a name many of us also know, did as well.
If by "techie" you mean, professional software engineer, that's fine, but there's no reason to assume that a professional software engineer is going to be magically better at AI research than... professional AI researchers? He's probably going to be substantially worse.
Also, your statement below:
> That's probably true. I look at this as Carmack running his own PhD program. I expect he will expand what we know about computation and the AGI problem before he's done.
Makes it clear to me that you don't really get it. Carmack, at best, might know enough right now to be in a PhD program. I doubt that he has anywhere near as much knowledge, insight, or ideas for research, as top graduate students. He's in no position to mentor graduate students.
No, I mean technologist. He has a pretty solid history with software, physics, aerospace, optics, etc...
> might know enough right now to be in a PhD program
Yeah, that's what I'm saying. The frontier in AGI or even just AI is enormous and I think I would be more surprised if Carmack were not able to find some place he could expand the border of what we know.
But the academic activity is focused around the kind of activities that Kuhn calls "Normal Science".
That is, ML researchers mainly do competitions on the same data sets, trying to put up better numbers.
In some sense that keeps people honest, it also lowers the cost of creating training data, but it only teaches people how to do the same data set over and over again, not how to do a fresh one.
I saw this happen in text retrieval; when I was trying to get my head around with why Google was better than prior search engines, I learned very little from looking at TREC, in fact people in the open literature were having a hard time getting PageRank to improve the performance of a search engine.
A big part of the problem was that the pre-Google (and a few years into the Google age) TREC tasks wouldn't recognize that Google was a better search engine, because Google was not optimized around the TREC tasks; rather, it was optimized around something different. If you are optimizing for something different, it may matter more what you are optimizing for than the specific technology you are using.
Later on I realized that TREC biases were leading to "artificial stupidity" in search engines. IBM Watson was famous for returning a probability score for Jeopardy answers, but linking the score of a search result to a probability is iffy at best with conventional search engines.
It turns out that the TREC tasks were specifically designed not to reward search engines that "know what they don't know" because they'd rather people build search engines that can dig deep into hard-to-find results, and not build ones that stick up their hand really high when they answer something that is dead easy.
True, but even Kuhn would note that most paradigm shifts still come from within the field. You don't need complete outsiders and, as far as I know, outsiders revolutionizing a field are quite rare.
You need someone (a) who can think outside the box, but you also need (b) someone who has all of the relevant background to not just reinvent some ancient discarded bad idea. Outsiders are naturals at (a) but are at a distinct disadvantage for (b).
I think what's really happening in this thread is:
1. Carmack is a well-deserved, beloved genius in his field.
2. He's also a coder, so "one of us".
3. Thus we want him to be a successful genius in some other field because that indirectly makes us feel better about ourselves. "Look what this brilliant coder like me did!"
But the odds of him making some big leap in AGI are very slim. That's not to say he shouldn't give it a try! Society progresses on the back of risky bets that pay off.
That's probably true. I look at this as Carmack running his own PhD program. I expect he will expand what we know about computation and the AGI problem before he's done.
There are surely a lot of researchers doing that, but do you really think anyone who has a plausible claim at being one of the top 100 researchers in the field in the entire world is doing that? Even if there are only 100 people doing truly novel research, that's still 100 times as many people as are going to be working on Carmack's research.
I don't think you understand the desired outcome here. We want eureka moments, and we're hopeful for some. That doesn't mean we expect them to happen. Stop being such a pessimist.
It's easy to say that it's probably possible to land an orbital rocket first stage. But who would bet a multi-billion dollar business on being able to not only do it, but save money by doing it, when nobody had ever done it before?
Similarly, electric cars were far from new. Nobody seemed much inclined to build one that was actually a luxury car, instead of a toy for engineer-types who could put up with driving weird things. Any of the big manufacturers could have done it, and easily absorbed the losses if it failed, but none did. Elon made a wild bet on that, making a company that made nothing else, so the whole thing would go down the tubes if the idea flopped. Instead it seems to have worked. Although it seems to be harder than he anticipated, and maybe outside his skillset, to run an organization that does real mass-production.
When AGI is developed, it will seem obvious in retrospect. Participating engineers will receive middle-brow dismissals saying that this was obviously practically possible, since after all the human brain operates according to the laws of physics.
Yep, plus all the different perspectives from other endeavors. Extending human memory will be a really great accomplishment with brain-computer interfaces.
However, he did make electric cars something an average person would like to have. He also chose to make it work using the same inefficient principle of hauling 2 tons of steel to transport a single person. What he made is an electric luxury car, not a car for the masses that can replace the average Joe's car. Is there anything wrong with that? No, there isn't, but let's not pretend a $35k (in the US - much more in the EU) car that requires hours of charging after driving 250 miles, unless you happen to have Tesla's Superchargers on your way, is a new "Volkswagen" - a people's car. Also, I find it disingenuous to advertise full battery capacity while at the same time recommending people use only 60% of it "for longevity".
Many people don't buy new cars, but choose to buy 5-8 year old cars that are really good value if they were maintained well. It remains to be seen how Teslas behave in that market.
It would be really revolutionary if someone could create and market an electric car that was truly innovative, for example: much lighter than current cars while still being safe in a collision, or using fuel cell technology with a fuel such as methanol or similar that can be created in a sustainable way; even using a fuel cell with mined hydrocarbons and an electric drive would provide a huge reduction in emissions due to the increase in efficiency.
Do Teslas have a role to play in reducing emissions? Yes, definitely, but let's not present them as a single solution to all individual transport problems.
He successfully made a popular mass-market electric vehicle, and dragged the whole auto industry along behind him. There were other electric cars before Tesla, but Tesla made it cool, and made the rest of the industry try hard to catch up.
SpaceX also is not the first private space firm with their own rocket, but it's by far the most successful one, and it lowered the cost of entry to space by a significant amount.
Also, it's probably the first private space company that has rockets that can compete with most government ones.
I am not rich enough to be buying individual stocks, so I have no personal stake in this.
Researchers didn't build the first airplane. Nicolaus Otto, Carl Benz, and Gottlieb Daimler weren't researchers either. AGI will be a program and not a research paper, and John Carmack is pretty good at getting those right.
Sometimes an outsider, with his novel or even just different way of looking at things, can contribute disproportionately to a field.
Even experts have blind spots, often they show in the form of bias. If you know something is hard or near impossible to do, you are unlikely to try. If you don't know at times it's possible to stumble upon a solution by merely bringing a new way of thinking to the table.
I genuinely felt a sense of disappointment when he moved to Facebook (via the Oculus acquisition). So yeah, fuck you, Facebook, and your manipulative, life-values-corrupting PR machinery.
I place John Carmack miles above Zuckerberg.
This may be his biggest impediment. ML has gotten very far with looking at problems as linear algebraic systems, where optimizing a loss function mathematically yields a good solution to a precisely defined (and well circumscribed) classification or regression problem. These techniques are very seductive and very powerful, but the problems they solve have almost nothing in common with AGI.
Put another way, Machine Learning as a field diverged from human learning (and cognitive science) decades ago, and the two are virtually unrecognizable to each other now. Human learning is the best example of AGI we have, and using ML tech as a way to get there may be a seductive dead end.
Programmers know how it is to live at the edge of the capacity of the mind to grasp the big picture. We always reinvent the wheel in the quest to make our code more grasp-able and debuggable. Why? Because it's often more complex than can be handled by the brain.
An AGI would not have such limitations. Our limitations emerged as a tradeoff between energy expenditure and ability to solve novel tasks. If we had a larger brain, or more complicated brain, we would require more resources to train. But resources are limited, we need to be smart while being scrappy.
For the record I don't think there is any general intelligence on our planet. A general intelligence would need access to all kinds of possible environments and problems. There is no such thing.
There's also the no free lunch theorem - it might not apply directly here, but it gives us a nice philosophical intuition about why AGI is impossible.
> We have dubbed the associated results NFL theorems because they demonstrate that if an algorithm performs well on a certain class of problems then it necessarily pays for that with degraded performance on the set of all remaining problems. 
Another argument relies on the fact that words are imprecise tools for modelling reality. Language is itself a model, and like all models it's good at some tasks and bad at others. There is no perfect language for all tasks. Even if we use language, we are not automatically made 'generally intelligent'. We're specialised intelligences.
That means you're specialised in survival: if you do well in life, you have a higher chance of procreation, and your genes' survival depends on it.
General Intelligence is like Free Will: a fascinating concept with no basis in reality. A thought experiment.
I'm glad to see he's aiming big with his billions and time. This is what rich people should be doing. Hl3 Gaben!
Don't get me wrong, I love horses. But they're living creatures with minds of their own and you have to always treat them with a certain wariness.
A Personal Navigation Device with a simulated personality that begs you to drive it all around town to various points of interest it desires to visit in order to satisfy its cravings and improve its mood.
I'm sure there's a revenue model in getting drive-through Burger Kings and car washes to pay for product placements.
Though we know for a fact that it is possible to find intelligence by randomly throwing things at the wall until something works. It's not like evolution uses a principled statistical process.
Also, growth may be hugely important. Babies start out with fuzzy learning, almost as if the learning rate starts out very small, which offsets their lack of knowledge and the elevated novelty/variance of the environment.
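If you wanted to mimic that in a training loop, the closest standard trick is probably learning-rate warmup; a toy sketch (purely my own illustration, not anything from the thread):

    # Toy linear warmup: begin with a tiny learning rate while everything is
    # still novel, then ramp up to the full rate as training stabilises.
    def warmup_lr(step, base_lr=1e-3, warmup_steps=1000):
        return base_lr * min(1.0, (step + 1) / warmup_steps)

    # warmup_lr(0) == 1e-6, warmup_lr(499) == 5e-4, warmup_lr(1999) == 1e-3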
AGI is all about predicting future utility given a circular dependency between the agent and its environment. QM says we can't solve this exactly: it's a two-object interaction, and there is no way to obtain the joint state, the ground truth; assumptions always have to be made to approximate independence.
Yes, he seemed to put a lot of effort into trying to get things through FB internal politics, and not always successfully. I really wish his experiments with a Scheme-based rapid prototyping environment / VR web browser had been allowed to continue. VR suffers from a lack of content, VR itself is well suited to creating VR content, and his VR scripting environment would surely have helped close that loop, among other things. Although now, four years later, I guess FB has a large team working on a locked-down, limited world-building tool (closed platform, no programming ability). Oh well.
I don't think this is the end of this wave of VR, but at this point I wouldn't be at all surprised if say Apple or someone else ends up bringing it to the mainstream instead of Facebook. 
VR is just making what we already have better: better screens, better refresh rates, better batteries, better lenses, and so on. I don’t see any roadblocks.
AGI, by contrast, is not going to be a better DNN. It’s harder to convince people of this, but my thinking is: biological neurons are vastly more sophisticated than digital ones; we don’t even fully understand what neurons do; we have only a vague understanding of what the brain does; it is apparent that we engage in plenty of symbolic reasoning, which DNNs do not do; DNNs are fooled by trivial input changes, which indicates they are massively overfitting the data; from what I’ve heard from researchers at top AI companies and institutions, DNN design is just a matter of hacking and trying stuff until you get a specific result on your given problem, so I don’t see where DL research is actually headed; and improvements are correlated with increases in compute power, indicating no qualitative gains in the study of learning.
I’m incredibly impressed by DL’s achievements but I believe at best current methods could serve as data preprocessing for a future AGI.
I’m actually quite glad that AGI is so far off, because I don’t think that it’s likely big tech companies will use it responsibly.
VR OTOH is very close and is going to change everything (and IMO is likely a necessary step towards AGI).
If VR becomes widespread, and amazingly high quality, then almost everything we do will migrate to VR.
Once that is the case, we will have an unprecedented amount of data about human behaviour, and near endless data for training, experimenting, and testing AIs.
The problems of AI will become much easier to formulate: “replace this person in this VR scenario or interaction”, etc. This will help drive research by giving it clear goals.
More pragmatically, it removes a lot of barriers to research and accidental difficulties, i.e. you’ll just be able to fire up a VR environment rather than worrying about how your robot is going to pick things up or how to access real-world data, etc.
To be honest, anyone who has a very good working knowledge of linear algebra can learn much of the math behind ML in a day. There really isn't anything mathematically super-sophisticated in popular use today.
Being good at grasping the theory is just the first step in a thousand mile journey. The problem of AI is not going to be solved with a neat math trick on paper, but with lots of experiments. Nature has taken a similar path towards intelligence.
Sigh. I assumed the whole point of hiring John Carmack is that you trust him to identify critical problems - and to find the best way to solve them.
I don't put learning state of the art ML past Carmack, at all. However, does ML tech of today lead to general AI? It's a strong assumption.
Does a living thing count as AGI? In that case, I'd say that most parents are quite good at creating AGIs ;)
Carmack may have other priorities. This can only be good.
Yes, and I really wish he hadn't. Before he joined Oculus they were working on the Rift 2; he steered them away from that to focus on mobile efforts.
I do see the appeal of mobile VR, but at the end of the day it is basically an Android phone in a VR headset.
PC VR is already two big steps back in graphical quality from desktop games. Mobile VR is like ten steps back; that's eight more steps than I'm willing to take, even if it affords me mobility.
(Also, even with the space, I'm not sure I'd be brave enough to try and use one in air - adding turbulence and random vibrations on top of the usual VR issues sounds pretty nauseating even as I type it.)
By mobility, I mean the ability to throw the headset around and walk anywhere without worrying about leaving the range of your tether. That sort of thing is important for AR, but I just don't see it mattering for VR in the long run.
There is definitely a market for VR headsets for content delivered by a phone or builtin hardware. Those devices will realistically be limited to seated or standing-room-only experiences, though.
Quest with Link is actually pretty close to that.
If not, does your definition of scientist require something other than doing work using the scientific method? Perhaps some specific quantity of work?
One of his companies, Armadillo Aerospace, was pretty much just a series of scientific experiments. https://en.wikipedia.org/wiki/Armadillo_Aerospace
Final thought from me - I was thinking about your post and it is indeed difficult to discern science from engineering. One dichotomy that occurred to me (which may not hold under close scrutiny) is that scientists are interested in _the pursuit of truth_, whereas engineers are interested in _building things_.
Overall I think it comes down to popular opinion, which can be fuzzy and doesn't apply the same rules to everyone. If enough people say someone is a dancer, then they are a dancer, even if they suck and don't dance that much. This applies to basically all titles that cross institution boundaries. Another great example is countries. Popular opinion determines which organizations are countries, not a strict definition. For example the EU vs places like Iceland or San Marino. 
As far as I can tell Carmack is an old engineer whose name gets thrown around for headlines. If there weren't articles about his stealing stuff to take to Oculus I don't think his presence there would be observable.
Now people are talking like Carmack switching topics is going to change the world. It's just going to change his schedule. There are smarter engineers already working on this problem.
I'd be cautious about dismissing his potential influence in the field. He has a way of looking at problems differently.
I just don't see this massive string of successes in every field. I see his huge expertise in graphics engines and games.
But it didn't help him with VR; in fact, he got into trouble over VR, ended up landing at a company I have no respect for, and didn't make VR a thing.
Many people have a way of looking at things differently. I just don't see why this is news, unless you own Facebook shares or something. Even then, the effect is basically zero.
I say all this as the owner of two VR headsets (a Vive for room-scale and a Lenovo Explorer for sim racing/flying).
You must be joking, right? I'm as much of a Carmack fan as anyone here, but overstating the skills of one personal hero does no good to anyone.
History books (for as long as those continue to exist) would cite AGI as his major contribution to society, and his name would be more renowned than Edison or Tesla. An Einstein. None of his other contributions will matter, as the machines will replace them all.
Just daydreaming, though.
Please correct me if I’m wrong.
I think a big reason there are few in AGI is due to PR success from the Machine Intelligence Research Institute and friends. They make a good case that things are unlikely to end well for us humans if there's actually a serious attempt at AGI now that proves successful without having solved or mitigated the alignment problem first.
Trying to make the AGI's sensors wirehead-proof is the exact same problem as trying to make the AGI's objective function align properly with human desires. In both cases, it's a matter of either limiting or outsmarting an intelligence that's (presumably) going to become much more intelligent than humans.
Hutter wrote some papers on avoiding the wireheading problem, and other people have written papers on making the AGI learn values itself so that it won't be tempted to wirehead. I wouldn't be surprised if both also mitigate the alignment problem, due to the equivalence between the two.
The problem of ensuring that the AI's values are aligned with ours. One big fear is that an AI will very effectively pursue the goals we give it, but unless we define those goals (and/or the method by which it modifies and creates its own goals) perfectly -- including all sorts of constraints that a human would take for granted, and others that are just really hard to define precisely -- we might get something very different from what we actually wanted.
>A 2017 survey of AGI categorized forty-five known "active R&D projects" that explicitly or implicitly (through published research) research AGI, with the largest three being DeepMind, the Human Brain Project, and OpenAI.
Hassabis and DeepMind have a fairly organised approach: looking at how real brains work and trying to model different problems, like Atari games, then Go, and recently StarCraft. Not quite sure what's next up.
Or hated as the name of the man who opened Pandora's box and doomed us all.
Just daydreaming and having a nightmare.
But it's still a fascinating endeavor.
I understand that it's a big assumption to make -- that a benevolent AI could be constructed. But under that assumption, why not have a benevolent dictator in the form of an AI?
We already live in that world, with large institutional bureaucracies playing the role of paperclip-maximizing AGIs.
It's pretty wretched when you are in their path.
If the benevolent AI ruler(s) restrained themselves to allow for humans to flourish, then okay. Assuming it could be constructed benevolently.
With AI, it would try its best to preserve/enhance/spread itself forever. And its best might be much better than our best...
How much worse could it be?
If Skynet determines we're the problem (wars, famine, global warming, inequality, non-cooperation etc), I'm losing counter-arguments by the day.
Just think about that name for a second. He might really be onto something.
I must admit that I often watch his interviews because he invites interesting people, but I can't help but cringe when Rogan gives his opinions.
These things are all true even if the guest or their ideas are extremely controversial. Maybe Joe Rogan is just smart in a way that's different to the way that you are smart.
EDIT: Ok, I suppose I should back my claim up.
Joe Rogan has pushed the “DMT is produced in our pineal gland” narrative, but there is no evidence to back this up. I’ll repost a comment I made elsewhere and also link a separate Reddit discussion which cites various sources. I will note that, in fairness to Joe, he said this a while ago, so perhaps he’s not so quick to jump the gun now; I don’t know, I don’t listen to his podcasts, but perhaps he’s better about it these days.
“We all have it in our bodies”: this is an often-repeated myth that has never been proven. The myth originates from Rick Strassman’s work, and he himself has said that he only detected a precursor, not DMT itself, and that everything else he wrote about it was hypothetical speculation. There have, apparently, been recent studies that found DMT synthesised in rat brains, but it has not yet been proven whether this translates to humans. Cognitive neuroscientist Dr. Indre Viskontas stated that while DMT shares a similar molecular structure with serotonin and melatonin, there is no evidence that it is made inside the brain. Similarly, Dr. Bryan Yamamoto of the neuroscience department at the University of Toledo said: “I know of no evidence that DMT is produced anywhere in the body. Its chemical structure is similar to serotonin and melatonin, but their endogenous actions are very different from DMT.”
This reddit discussion also links various sources, although I didn’t check them all myself: https://www.reddit.com/r/JoeRogan/comments/mwz2h/dmt_has_nev...
Anyone who speaks on the record about their hobbies for thousands of hours will say some things that are incorrect. He might not understand something, and he is usually pretty humble about his knowledge level.
But "spreading misinformation" is something that people do because they are intentionally misleading others, or have something to gain.
I don't think he is benefiting much from the pineal gland narrative. And it sounds like, from the information you cited, it may even turn out to be correct, even if it's premature to state it as fact.
Regarding the pineal gland, it might be true, but it hasn’t been proven and multiple neuroscientists have stated that while DMT is similar to compounds found in the brain, it still functions quite differently and they have never seen any evidence to suggest that DMT exists in our bodies. There was a study finding it in mice brains, so it may still turn out that we have it in ours, but it’s definitely premature to make any such assumptions and definitely premature to repeat the trope.
"Taking mathematics from the beginning of the world to the time when Newton lived, what he has done is much the better part." - Gottfried Leibniz
For years, English scientist Isaac Newton and German philosopher Gottfried Leibniz both claimed credit for inventing the mathematical system sometime around the end of the seventeenth century.
Now, a team from the universities of Manchester and Exeter says it knows where the true credit lies — and it's with someone else completely.
The "Kerala school," a little-known group of scholars and mathematicians in fourteenth century India, identified the "infinite series" — one of the basic components of calculus — around 1350."
However, calculus proper (derivatives and integrals of general functions, and the connections between them) did not exist until Newton and Leibniz. Other mathematicians made important steps towards it earlier in the 1600s, and if Newton and Leibniz had not existed, others would have figured it out around the same time.
I'm not a historian, but a few months ago I spent some time analysing one of Fibonacci's trigonometric tables (chords, not sine or sine-differences). Aryabhata's sine-differences were much earlier.
This little-known fact is so embarrassing to some institutions that they made up a new word, "chymistry", in order to further obscure the issue and not outright admit the obvious.
Is there a reason to expect that someone who wanted to investigate the laws of the composition and reactivity of matter, in the late 1600s/early 1700s, would end up studying chemistry rather than alchemy? Sure, Boyle had introduced “chemistry” as an idea in 1661 (before Newton was born), but I imagine that alchemy would still be quite active in the late 1600s as an academic “field”, with many contributors already late in their careers studying it; whereas chemistry would have been just getting off the ground, without many potential collaborators.
Your point has been brought up before (usually as an attempt by established institutions to whitewash and explain away Newton's idiosyncrasies), but there is no evidence whatsoever to back it. On the contrary, what we know (and there is a lot we do know thanks to his writings) about Newton and alchemy absolutely indicates him being immersed in the Hermetic worldview and alchemical paradigm. Clearly, Newton was practicing alchemy not as a way to look for novel techniques or as a way to bridge the old and new worlds together, but primarily because he was a devout believer.
Newton -a profound genius- stood at the threshold of two worlds colliding. He was also a groundbreaking scientist in optics/mechanics/mathematics. He was aware of Boyle's chemical research. Knowing all of that, he _absolutely_ chose to dedicate his life to alchemy. That is immensely interesting.
"Much of Newton's writing on alchemy may have been lost in a fire in his laboratory, so the true extent of his work in this area may have been larger than is currently known. Newton also suffered a nervous breakdown during his period of alchemical work, possibly due to some form of chemical poisoning (perhaps from mercury, lead, or some other substance)."
It's hard to think of many famous scientists who weren't already well known in their field. Some stand out. Einstein, for example, had a fairly lackluster career until his Annus Mirabilis papers. Mark Z. Danielewski (House of Leaves) bounced between various jobs. But largely, the idea of the brilliant outsider is like the 10x engineer: it exists, but it's rare.
You never know. Fresh eyes can sometimes see what others may not.
So I definitely wouldn’t dismiss all papers as pointless, but there certainly is a large percentage that are, enough that you can’t simply accept a published papers results without reproducing it yourself.
Do I deserve to be paid for 5 years for something that may not work? "Deserving" something doesn't have much meaning: we, the humans, merely transform solar energy into some fluff like stadiums and cruise ships. Getting paid just means getting a portion of that stream of solar energy. There is no reason I need to "deserve it" as it's unlimited and doesn't belong to anyone. A better question to ask is how can we change our society so that all, especially young, people would get a sufficient portion of resources to not think about paying bills.
The chances of making a breakthrough are small, but that doesn't matter. It's a big-numbers game: if the chances are 1 in a million, let 1 billion people try and we'll see 1000 successes. The problem currently is that we have these billions of people, but they are forced by the silly constraints of our society to spend all their time solving fictional problems like paying rent.
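Spelled out, the arithmetic behind that (assuming independent attempts):

    E[\text{successes}] = n \cdot p = 10^9 \times 10^{-6} = 1000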
Take Albert Einstein as an example, who arguably made one of the largest leaps in physics with his theory of general relativity. He never stopped publishing during that time.
Not quite. When you are a professor, you essentially become a manager for a group of researchers. You don't really do research yourself. Therefore, your main obligation becomes finding money to pay these researchers. So in reality you can only support the research someone is willing to pay for (via grants, scholarships, etc).
Figuring out the basics of the math and how to use whatever tools they use at FB is doable in a week.
Source: Commenter name is DBZ character
This wasn't the first time John did something like this, and it's not the usual kind of learning either. He was learning from first principles. I truly love this idea of replaying in your own mind what went on when something was discovered (or at least coming close to it).
Contrast that with how ML and AI are taught nowadays: you're thrown into a Jupyter notebook with all the FAANG libraries loaded for you...
edit: to be clear, all I'm saying is that he can catch up to the body of research already out there quicker than the average bear, and he's shown a real knack for designing solutions and being crazy productive. I'm not pretending he's going to be publishing insanely novel research anytime soon, just that I wouldn't be surprised if he ends up being a real voice in the field.
What are not basically the same thing are "he started seriously contributing to this kind of problem after a week in the woods" and "he spent a week in the woods a year ago and is ready to start contributing now, a year after that week in the woods."