Novel computational concepts have often been demonstrated on very old hardware, with the full knowledge of the tricks required to make it work. Often, more powerful hardware was required in order to pioneer the technology, and often the proof-of-concept on older hardware is too slow and clunky to have been a compelling product. But it's often been physically possible for longer than people realize.
I've never made a "long game" statement like this before, so it'll be interesting to read this comment about what I thought before in 2038 or 2048, if it still exists then.
I think you can make a software intelligence that is very intelligent in its own kind of strange alien way. But as far as involving it in human concerns, it will not really understand the human world, even if we try to make it seem like it does.
A robot lawyer might be very intelligent, but it will fail Turing tests. Trying to make it pass Turing tests without giving it a human body and upbringing will amount to patching a fundamentally false system and hoping for temporary illusions.
This opinion is informed by arguments made by Hubert Dreyfus stemming ultimately from a Heideggerian perspective.
You could envision a very intelligent but socially inept system able to produce products more cheaply than anyone else, and thereby engage in trade and acquire resources. You could also just envision a system that ignores human morality and simply appropriates matter, space, and energy for its own purposes, irrespective of what humans consider to be their property.
As such, the Turing Test is fairly irrelevant.
It is none of these things, however. The hardware is already capable. It's the popular fundamental techniques and theories that are flawed. Essentially, you need to start over, as Geoffrey Hinton said. Something no one wants to put effort into doing or fund.
So, indeed, AGI is here today. It's just not within the frame of the populist efforts. A capable individual with the freedom to do deep pondering and construction without any outside influence is more likely to crack this puzzle than a room of PhDs all systemically subscribed to the same fundamentally flawed approach of optimization algorithms.
As far as the compute goes, does anyone even truly spend the time to understand it anymore in this day and age of frameworks on frameworks? I mean truly understand it? And therein lies the other fundamental problem: how can one make broad statements about the computational requirements of AGI when the most they know about the underlying hardware is an AWS config file?
Swaths of the industry have shut themselves out from ever developing AGI. Sadly, they're the groups w/ the most funding and backing because they represent the same flawed mainstream ideology as every other AI group.
It will be an outsider and the core theoretical approach is already resolved.
I'm not convinced. The architecture of a computer, with its extremely fast CPU cores and its extreme bottleneck of a memory bus with a tiny cache set up in a hierarchy is radically different from the architecture of the human brain.
The brain is parallel on an unimaginably larger scale than anything we've ever built. The brain also doesn't put a wall between "storage" and "compute." I think there are tons of problems the brain solves easily with parallelism that would be bottlenecked by memory latency on a computer.
Oh come on, he says this every couple of years... Bengio made a meme about it... 
There is a computability argument - part of your statement implies the assumption that 'AGI' is computable. I would be inclined to agree on this aspect; I think a preponderance of evidence is required before we start seriously considering that intelligence is uncomputable. After all, last time I checked, the jury was still out as to whether super-Turing machines are physically realizable.
So let's suppose for the sake of argument intelligence is computable. Well then in principle 'AGI' has been 'realizable' since mid-last century when we first constructed Turing machines.
With the caveat that it may take the lifetime of the universe to classify an image of a cat.
It is quite conceivable that, although Turing-computable, 'AGI' requires taking expectations over such large spaces that it won't really be 'realizable' until hardware improves by another few orders of magnitude.
Note that there's nothing in this guess about the scale required, but I'm imagining something smaller than the Manhattan project. Maybe something on the scale of a whole AWS datacenter or something like that.
An important factor is the computational power needed to emulate the essence of an intelligent entity that exists in the physical world today (i.e. the human brain & sensory system), as you're getting at in your comment about hardware capability.
(That a human-equivalent or better machine for performing complex tasks in the real world is realizable with Turing-computable algorithms seems more or less guaranteed to me, unless there are physical processes happening in biological humans that are not Turing-computable).
'Guaranteed' sounds a little too strong to me. We just don't know yet.
> we are already having some success at using biologically-inspired techniques for classification
I presume you are referring to CNNs, which in good faith can only very _very_ loosely be characterized as biologically-inspired. People are constantly trying to draw comparisons (see the 'look, the brain be like CNNs' papers that show up at NIPS every year) and I wish they would stop... certainly there are some superficial similarities but... it's more a case of convergent evolution than anything; there is no deep connection to be unraveled...
I don't think there's any deep insight underlying CNNs beyond 'wow, building your data's symmetries into your model works great!'. Which is like a pretty obvious and fundamental thing imo...
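To make that concrete, here's a toy sketch of my own (not from the parent comment) of what "building your data's symmetries into your model" means for a CNN: a convolution reuses one small filter at every position, which makes it translation-equivariant by construction, so shifting the input just shifts the output.

```python
import numpy as np

def circular_conv(x, w):
    """1-D circular cross-correlation: the same small filter w is
    reused (weight-shared) at every position of x."""
    n = len(x)
    return np.array([sum(w[j] * x[(i + j) % n] for j in range(len(w)))
                     for i in range(n)])

rng = np.random.default_rng(0)
x = rng.normal(size=8)   # a toy 1-D "image"
w = rng.normal(size=3)   # one shared 3-tap filter

# Shifting then convolving equals convolving then shifting:
a = circular_conv(np.roll(x, 2), w)
b = np.roll(circular_conv(x, w), 2)
print(np.allclose(a, b))  # True -- the symmetry is baked in
```

A dense layer over the same 8 inputs would have no such guarantee; the symmetry would have to be learned from data instead of coming for free.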
I did say 'practically guaranteed'. Let's say (for the sake of the argument) there's less than 5% probability there's something yet undiscovered about animal brains/nervous systems, that has qualitatively different computational properties than the current theory of computation says is possible in the physical world.
Does this sound excessively optimistic to you? I would be delighted to be proved wrong on this; it would be like the theory of relativity suddenly shaking up Newtonian physics, in the computational domain.
But I'm not aware of any evidence that our models of computability have any holes. If you know of any, please let me know! I am a curious skeptic at heart :)
The problem is that there's a lot of additional "special sauce" we're missing to turn these classification engines into AGI, and we don't have a clue if this "special sauce" is computationally intensive or not. I'm guessing the answer is "no" since the human cortex seems so uniform in structure and therefore it seems to me it is mostly involved in this rather pedestrian search part, not the "special sauce" part.
(disclosure: I'm not a neuroscientist or AI researcher)
I am thinking of two things in particular that current ML approaches lack and that are ubiquitous (well, arguably for the second...) in neuroscience: feedback and the modeling of uncertainty.
A proper Bayesian approach to uncertainty will be akin to an expectation over models; that's an extra order of magnitude.
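A hypothetical sketch of that cost argument (mine, not the commenter's): a Bayesian prediction is an expectation over models, typically approximated by averaging K posterior samples, so it costs roughly K times one model's forward pass.

```python
import numpy as np

def bayesian_predict(sampled_models, x):
    """Monte Carlo estimate of E_{m ~ posterior}[m(x)]:
    K sampled models means roughly K times the compute of one."""
    return np.mean([m(x) for m in sampled_models], axis=0)

# Stand-in "posterior samples": three linear models with different weights.
models = [lambda x, w=w: w * x for w in (1.0, 2.0, 3.0)]
y = bayesian_predict(models, np.array([2.0]))
print(y)  # [4.] -- the average of 2, 4 and 6
```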
Feedback is also likely to be expensive. Currently all we really know how to do is 'unroll' feedback over time and proceed as normal for feedforward networks.
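A minimal sketch (my own, with made-up shapes) of what "unrolling" feedback means: the recurrent state feeds back into itself, but we evaluate it as T sequential feedforward steps, so the cost grows linearly with the unroll length.

```python
import numpy as np

def unrolled_rnn(xs, W_h, W_x, h0):
    """Run a simple tanh RNN for len(xs) steps and return every state."""
    h, states = h0, []
    for x in xs:
        h = np.tanh(W_h @ h + W_x @ x)  # one "feedforward" step of the loop
        states.append(h)
    return states

rng = np.random.default_rng(1)
W_h = rng.normal(size=(4, 4)) * 0.1  # recurrent (feedback) weights
W_x = rng.normal(size=(4, 3)) * 0.1  # input weights
xs = [rng.normal(size=3) for _ in range(5)]
states = unrolled_rnn(xs, W_h, W_x, np.zeros(4))
print(len(states))  # 5 -- one state per unrolled time step
```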
Also keep in mind that inference and training are pretty different things; inference is fast, but training takes a lifetime.
> and it's not unreasonable to think they can already do this with a performance on par with the human brain.
I disagree for the reasons stated above; we know we're missing a big piece because of the lack of feedback and modeling of uncertainty.
I do happen to be a neuroscientist and an ML researcher... but I think that just means I am slightly more justified in making wild prognostications... which is totally what this is... But ultimately I'm still just some schmuck, why should you believe me?
Nothing I have said in this comment can be considered scientific fact; we just don't know. But I have a feeling...
You'll be interested in a recent post about a new paper for an AI physicist then: https://news.ycombinator.com/item?id=18381827
AGI probably requires some kind of hierarchy of integration, and current AI is only the bottom level. We probably need to build a heck of a lot of levels on top of that, each one doing more complex integration, and likely some horizontal structure as well, with various blocks coordinating each other.
I like this line of reasoning, you're basically stating that we have found effective ways of emulating some of the sensory parts of a central nervous system. Which seems intuitively right; we can classify things and use outputs from this classification in some pre-determined ways, but there's no higher-level reasoning.
Pretty sure DeepMind made an RL system a bit like that.
Also reminds me of Ogma AI's technology.
What are some examples of this?
His definition is also rubbish. Being useful at economically valuable work has nothing necessarily to do with intelligence. Writing implements are vital in pretty much all economic activities; before keyboards came along, many couldn't have been done at all without them. That hardly makes a pencil intelligent.
Deep Learning is great, it's a revolution, but it's a fairly narrow technology. It solves one type of task fantastically well, it just happens that solving this task is applicable in many different problem domains, but it's still only one technique. At no point did he show how to draw a line from Deep Learning to General AI in any recognisable form. It just looks like a hook to get you to hear his pitch.
It's a great pitch, but it's not about AGI.
No it is not. The basic premise of fixed-wing aircraft was the same from the Wright brothers to modern jets. Yet the Wright brothers' flyer was useless and a modern jet is not.
We have agents that can act in environments. His claim is that getting these agents to human-level intelligence is a matter of compute and architectural advancements that are not qualitatively different from what we have now. This just does not strike me as an absurd claim. We have systems that can learn reasonably robustly. We should accord significant probability to the claim that higher-level reasoning and perception can be learned with these same tools given enough computing power.
He claims we cannot "rule out" near-term AGI. Let's define "rule out" as having a probability of 1% or lower. I think he's given pretty good reasons to up our probability to between 2-10%. For myself, 10-20% seems a reasonable range.
What claim are you responding to here? Simonh said:
> He says general AI is on the same spectrum as the AI technologies we have now, but is qualitatively different. I'm sorry but that's a contradiction.
Which I agree with. How can two qualitatively different things be on the same spectrum? You later say yourself:
> His claim is that getting these agents to human-level intelligence is a matter of compute and architectural advancements that are not qualitatively different that what we have now.
Which seems to be the opposite of what simonh said and it's confusing to say the least.
The big question to me isn't whether computation can scale, which this video makes me believe it will. It's whether the data will scale. In RL domains with good simulators, such as Go and the Atari games, data doesn't seem to be an issue. The in-hand robot manipulation work also makes heavy use of simulators to reduce the amount of real-world time needed to collect data. But I don't see an argument for how we will get high-fidelity simulators of the real world to train these agents in.
I do love the in-hand robot manipulation work, because it's one of the few that shows that results from simulation can be applied to real robotic systems. And while I hope for the sake of robotics that we can get better and better simulators, it's surprising not to see that as the central focus in conversations about getting AGI to emerge from gradient descent on neural networks.
We are already starting to see the nature of data changing. Unsupervised learning is starting to work — see https://blog.openai.com/language-unsupervised/ which learns from 7,000 books and then sets state-of-the-art across almost all relevant NLP datasets. With reinforcement learning, as you point out, you turn simulator compute into data. So even with today's models, it seems that the data bottleneck is much less significant than even two years ago.
The harder bottleneck is transfer. In most cases, we train a model on one domain at a time, and it can't use that knowledge for a new related task. To scale to the real world, we'll need to construct models that have "world knowledge" and are able to apply it to new situations.
Fortunately, we have lots of ideas about how this might work (e.g. using generative models to learn a world model, or applying energy-based models like https://blog.openai.com/learning-concepts-with-energy-functi...). The main limitation right now: the ideas are very computationally expensive. So we'll need engineers and researchers to help us to continue scaling our supercomputing clusters and build working systems to test our ideas.
If you were given a demo of an AI system that uses a completely new/revolutionary approach towards various different problems with success, how open would you be to rethinking your position on 'Optimization techniques'?
Modeling seems like a stop-gap for getting over the limitations of weak AI. As I recall, this is what knowledge-based expert systems tried in times past and failed at, because it's nothing but a glorified masking of the underlying problem with limited, human-inputted rulesets. I don't agree with Yann LeCun that the way forward to AGI is modeling. I feel like it's the best solution people worked up against the limitations of weak AI, which were broadly and publicly acknowledged in 2017 and early 2018.
> The main limitation right now: the ideas are very computationally expensive.
This is because the fundamental core set of algorithms being used by the industry is fundamentally flawed yet favorable to big data / cloud computing... a quite lucrative business model for currently entrenched tech companies. It's why they spend so much effort ensuring the broad range of AI techniques fundamentally stays the way it is... because if it does, it means boatloads of money for them.
> So we'll need engineers and researchers to help us to continue scaling our supercomputing clusters and build working systems to test our ideas.
When you're attempting to resolve something and you are shown year over year that it isn't being resolved and requires ever more massive amounts of compute, it means you're doing something wrong. It would be better to take a step back and re-evaluate your approach fundamentally. Again, what is your willingness to do so if shown something far more novel?
Of course if your definition of AGI involves the ability to mimic a human, or maybe display empathy for a human, etc., then yeah, you probably do need the ability to experience lust, fear, suspicion, etc. And IMO, in order to do that, the AI would need to be embodied in much the same way a human is, since so much of our learning is experiential and is based on the way we physically experience the world.
It's pretty simple to model and predict, either for a human or a deep net given some training data.
"Emulating these primitive parts" isn't some impossibility.
We get told to use the term AGI, despite the public calling it AI, because what we have now is just automation. But this feels like we're now allowed to call it AI again? It was presented as: given these advances in automation, we can't rule out arriving at apparent consciousness. But with no line drawn between the two.
We do have a definition for intelligence: applied knowledge.
However, here's another thought. Several times in my life I knowingly pressed self-destruct. I quit a job without one to go to, despite having a mortgage and kids. I sold all my possessions to travel. I've dumped girls I liked to be free. I've faced off against bigger adversaries. I've played devil's advocate with my boss. I've taken drugs despite knowing the risks, etc... And I benefitted somehow (maybe not in real terms) from all of them. None of these seem like intelligent things to do. They were not about helping the world but about self-discovery and freedom. We cannot program this lack of logic. This perforating of the paper tape (Electric Ant). It's emergent behaviour based on the state of the world and my subjective interpretation of my place in it. Call it existential, call it experiential, call it a bucket list. Whatever.
AGI would need to fail like us, to be like us. Feel an emotional response from that failure. And learn. Those feelings could be wrong, misguided. We knowingly embrace failure, as anything is better than a static state. i.e. people voting Trump because Hillary offered less change.
We also have multiple brains. Body/brain. Adrenaline, serotonin. When music plays, my body seems to respond before my brain intellectually engages. So we need to consider the physiological as well as the psychological. We have more than 2,000 emotions and feelings (based on a list of adjectives). But that probably only scratches the surface. What about 'hangry'? Then learning to recognise and regulate it.
diff( current perception of world state, perception of success at creating a new desired world state (Maslow) ) = stress || pleasure.
Even then, how do you measure the 'success'? I.e. I have friends with depression, and they don't measure their lives by happiness alone. I feel depression is actually a normal response to a sick world, and that people who aren't a bit put out are more messed up. If we created intelligence that wasn't happy, would we be satisfied? Or would we call it 'broken' and medicate, like we do with people.
Finally, I don't think they can all learn off each other. They need to be individuals. Language would seem an inefficient data transfer method to a machine. But we individuate ourselves against society. Machines assimilating knowledge won't be individuals. More swarm-like. We would need to use constraints, which may seem counterproductive, so harder to realise.
Wow. I wrote more than I intended there. But yes, emotions are required IMO. Even the bad ones. Sublimation is an important factor in intelligence.
...Then he basically asserts that we can extrapolate the near-term availability of tons more compute power from Moore’s Law, which is where he lost me.
We’re already running into the limits of physical law in trying to move semiconductor fabrication to smaller and smaller processes, and there are very real and interesting challenges to be overcome before, I think, we can resume anything close to the exponential growth we’ve enjoyed over the last 40 years.
This guy may well think a lot about these difficulties, but not mentioning them at all made his argument sound incredibly naïve to me.
That's not what he's asserting. Even with Moore's law dead, OpenAI claims there is significant room with ASICs, analog computing, and simply throwing more money at the problem. There is a ton of low-hanging fruit in non-Von Neumann architectures. We should expect it to be plucked, as we have a huge use case which is potentially limitlessly profitable.
That is irrelevant. You just scale horizontally, with more and more datacenters. Sure, it will not be free like in the past.
If some kind of goal has to be defined, it seems it will always be a narrow AI, where some outside entity defines what its goal is, instead of itself coming to a conclusion what it should do in general sense. Even if that machine is able to recognize the instrumental goals for reaching the final goal (and acting accordingly), it still feels like a non-general intelligence, like connecting the dots based on the available input and processing, just to come closer to that final goal. If no final goal was given, I presume such a machine would do nothing: it would not randomly inspect the environment around itself and contemplate upon it; there would be no curiosity, no actions of any kind to find out anything about its environment and set its own goals based on observation.
It seems that for AGI to come, some kind of spontaneous emergence would have to occur, possibly by coming up with some revolutionary algorithm for information processing implemented inside an extremely capable computer (something that biological evolution has already yielded).
It is interesting, humbling, and a bit depressing to apply the same reasoning to us humans. We are relatively limited in terms of reason; it's just that it is not obvious to us, just like it is not obvious that the Earth is round, for example.
I think AGI could happen with today's technology if only we knew the priors nature found with its multi-billion-year search. We already know some of these priors: in vision, spatial translation and rotation invariance; in the temporal domain (speech), time translation invariance; in reasoning, permutation invariance (if you represent the objects and their relations in another order, the conclusion should be unchanged). With such priors we got to where we are today in AI. We need a few more to reach human level.
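The permutation-invariance prior can be sketched in a few lines (my own toy example, Deep-Sets style, with a stand-in encoder): encode each object independently, then sum-pool, so reordering the objects provably cannot change the conclusion.

```python
import numpy as np

def set_readout(objects, encode=np.tanh):
    """Permutation-invariant summary of a set of object vectors:
    per-object encoding followed by sum-pooling."""
    return sum(encode(o) for o in objects)

objs = [np.array([1.0, -2.0]), np.array([0.5, 0.0]), np.array([3.0, 1.0])]
forward = set_readout(objs)
reordered = set_readout(objs[::-1])
print(np.allclose(forward, reordered))  # True -- order can't matter
```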
I just love how these DNN researchers love to bash prior work as over-hyped, while hyping their own research through the roof.
AI researchers did some amazing stuff in the 60s and 80s, considering the hardware limitations they had to work under.
> "At the core, it's just one simple idea of a neural network."
Not really. The first neural networks were done in the 50s and didn't produce any particularly interesting results. Most of the results in the video are a product of fiddling with network architectures, plus throwing more and more hardware at the problem.
Also, none of the architectures/algorithms used by deep learning today are more general than, say, pure MCTS. You adapt the problem to the architecture, or architecture to the problem, but the actual system does not adapt itself.
It's not like there is a single "neural" architecture that's getting better and better. There are dozens of different architectures with their own optimizations, shortcuts, functions and parameters.
(And let me be clear that none of this is an argument that AGI is near. I'm saying that confidence that it is far is unfounded.)
First, there are many cases in science where experts were totally blindsided by breakthroughs. The discovery of controlled fission is probably the most famous example. This shouldn't be surprising - the reason that a breakthrough is a breakthrough is because it adds fundamentally new knowledge. You could only have predicted the breakthrough if you somehow knew that this unknown knowledge was there, waiting to be found. But if you knew that, you'd probably be the one to make the breakthrough in the first place.
Second, most claims about the impossibility of near-term AGI are totally unscientific. By that, I mean that they aren't based on a successful theory of falsifiable predictions. What we'd want, in order to have any confidence, is a theory that can make testable predictions about what will and won't happen in the short term. Then, if those predictions turn out to be true, we can gain confidence in the theory. But this isn't what we get. What we get is people saying "We have no idea how to do x, y, and z, therefore it won't happen in the next 50 years". I don't see any evidence that people were able to predict even the incremental progress we've seen say, two years out. The fact is that when someone says "it'll take 50 years" that's just sort of a gut feeling, and people will almost certainly be making that same prediction the year before it actually happens.
Third, I think people have too narrow a view about what they imagine AGI might look like. People tend to envision something like HAL, that passes Turing tests, can explain chains of reasoning, and has comprehensible motivations. Let's consider the case of abstract reasoning, which is something thought to be very difficult. We tried and failed for decades to build vision systems based on methods of abstract reasoning, e.g. "detect edges, compose edges into shapes, build a set of spatial relationships between those shapes, etc". But humans don't use abstract methods in their visual cortex, they use something a lot more like a DNN. The mistake is in thinking that because the mechanism of successful machine vision resembles human vision, therefore the mechanism of successful machine reasoning must resemble human reasoning. But its quite possible that we'll simply train a DNN by brute force to evaluate causal relationships by "magic", i.e. in a way that doesn't show any evidence of the sort of step-by-step reasoning humans use. You can already see this happening - when a human learns to play breakout, they start by forming an abstract conception of the causal relationships in the game. This allows a human to learn really really fast. But with a DNN, we just brute force it. It never develops what we would consider "understanding" of the game, it just _wins_.
Sorry the third point was so long, let me summarize: We think some things are hard because we don't know how to do them the way that we think humans do them. But that doesn't serve as evidence that there isn't an easy way to do them that is just waiting to be discovered.
Intelligence is not one-dimensional, and neither is evolution => https://backchannel.com/the-myth-of-a-superhuman-ai-59282b68...
Silicon-based intelligent machines might not be energy efficient =>
A choice quote:
In contradistinction to this orthodoxy, I find the following five heresies to have more evidence to support them.
Intelligence is not a single dimension, so “smarter than humans” is a meaningless concept.
Humans do not have general purpose minds, and neither will AIs.
This is awfully similar to "On the Impossibility of Supersized Machines" (https://arxiv.org/pdf/1703.10987.pdf).
EDIT: Re: energy efficiency: the problem is that humans are too energy efficient. Your brain can keep functioning after 3 days of running across the Savanna without food, which is (a) awesome, and (b) not really helpful nowadays. The cost of this is that you can only usefully use a little energy each day, say 4 or 5 burgers at most. AGI prototypes will usefully slurp in power measured in number of reactors.
On the second point, who cares? If the first AGI draws gigawatts, it will still be an AGI.
One cannot rule out something unless they've spent a concerted amount of time dedicated solely to trying to understand it. If there is no fundamental understanding of human intelligence, what is anyone frankly talking about? Or doing?
I have yet to hear a cohesive understanding of human intelligence from the various AI groups. I have yet to hear a sound body of individuals properly frame a pursuit of AGI. So, what is everyone pursuing? There seems to be no grand vision, and no lead wrangling together all of these scattered add-on techniques to NNs. I do see a lot of groups working on weak AI, or chipping away at AGI-like feature sets with AI techniques while making claims about AGI. Everyone has become so obsessed with iterating that they fail to grasp the proper longer-term technique for resolving a problem like AGI.
Absent from the discussion are conversations on neuroscience and the scientific investigation of intelligence. There's more sound progress being made in the public sector on concepts like AGI than in the private sector, mainly because the public sector knows how to become entrenched, scope, and target an unproven long-term goal and project.
The hype, as far as I see it, is clearly distinguishable from the science. Without honest and sound scientific inquiry, claims in any direction are without support. Everyone's attempting to skip the science and pursue engineering in the dark with flashy public exhibitions, namely because of funding... You can't exit such a room and make sound claims about AGI. If a group claims they are pursuing AGI, I expect almost all of their work to be scientific research pursuing an understanding.
That being said, it appears no one is interested in funding or backing such an endeavour. Everyone states they want to back/invest in such a group on paper, but when it comes down to it the money isn't there; they are obviously targeting shorter-term goals/payouts, and/or frankly don't know what type of pursuit or group of individuals is required. No one wants to take the time to understand what such a group would look like. No one wants to make a truly longer-term bet. This is why things have been spiraling in circles for years.
So, as has been stated time and time again: AGI will come, and it will come from left field.
There are individuals who truly care to pursue and develop AGI, and they're willing to sacrifice everything to achieve it. If no funding is available, they'll fund themselves. If groups won't accept them because they aren't obsessed with deep learning or lack a PhD (clearly a makeup that only results in convoluted weak AI), they'll start groups themselves.
Passion + capability + lifelong pursuit is how all of the great discoveries of our time have come to us, the mainstream seemingly never understanding such individuals, supporting them, or believing them until after they've proven themselves. No pivots. No populist iterations. A fully entrenched dedication towards achieving something until it's done.
So, no... you can't rule out AGI in the near term, because there is no spotlight on the individuals or groups with the capability to develop it on such time horizons, and the thinking frankly just isn't there in the celebrated groups with funding. Everyone's in the dark, and it's an active choice and mindset which causes this.
Geoffrey Hinton says start all over... Yann LeCun raises red flags.
No one listens. No one acts. Everyone wants a piece of the company that develops the next trillion-dollar 'Google'-like product space centered on AGI, but no one wants to spend the time to consider what such a company would be, what human intelligence is, or who is looking at it in a new way, from scratch, as some of the most important people in AI have urged. So, you see, this is why the unexpected happens. It is unexpected because no one spends the time or resources necessary to cultivate the understanding to expect its coming.
In keeping with Betteridge's law: no, not really. Hardware capabilities are getting there, as evidenced by computers trashing us at Go and the like, and with thousands of the best and brightest going into AI research, who's to know when someone is going to find working algorithms?
The real problem though (as I see it) is that the vast majority of the best and brightest minds in our society get lost to the demands of daily living. I've likely lost any shot I had at contributing in areas that will advance the state of the art since I graduated college 20 years ago. I think I'm hardly the exception. Without some kind of exit (winning the internet lottery, basically, like Elon Musk), we'll all likely see AGI come to be sometime in our lifetimes, but without having had a hand in it.
And that worries me, because if only the winners make AI, it will come into being without the human experience of losing. I sense dark times looming, should AI become self-aware in a world that still has hierarchy, that still subverts the dignity of others for personal gain. I think a prerequisite to AGI that helps humanity is for us to get past our artificial scarcity view of reality. We might need something a little more like Star Trek where we're free of money and minutia, where self-actualization is a human right.
> I've likely lost any shot I had at contributing in areas that will advance the state of the art since I graduated college 20 years ago.
For a bit of optimism: if you are a good software engineer, you can become a contributor in a modern AI research lab like OpenAI. We hire software engineers to do pure software engineering, and can also train them to do the machine learning themselves (cf https://blog.openai.com/spinning-up-in-deep-rl/ or our Fellows and Scholars programs).
As one concrete example, though not the norm, our Dota team has no machine learning PhDs (though about half had prior modern ML experience)!
> The real problem though (as I see it) is that the vast majority of the best and brightest minds in our society get lost to the demands of daily living. I've likely lost any shot I had at contributing in areas that will advance the state of the art since I graduated college 20 years ago. I think I'm hardly the exception. Without some kind of exit, winning the internet lottery basically like Elon Musk, we'll all likely see AGI come to be sometime in our lifetimes but without having had a hand in it.
They don't get lost so much as they become trapped by the systematic, flawed optimization structures found throughout society. All is not lost if one breaks out long enough to realize that certain pursuits are possible if one is willing to make a sacrifice; the bigger the pursuit, the bigger the required sacrifice. Not many people are willing to do that in the valley when a quarter-million-dollar paycheck is staring them in the face. You could, of course, decide one day to sacrifice everything, and you'd easily have 5 years of runway if you had saved your money properly. Obviously, VC capital won't fund you. Obviously, universities aren't the way to go, given the obsession with Weak AI. Obviously, no AI group will hire you unless you have a PhD and/or are obsessed with Weak AI. Obviously, you might not even want this, as it will cloud your mind. So, clearly, the way to make groundbreaking progress is to walk off your job, fund a stretch of research yourself, and be willing to sacrifice everything. Quite the sacrifice, no? People will laugh at you. What happens if you fail? Socially, per the mainstream trend, you'll fall behind. If you have a partner, this will be even more difficult, as the trend is to get rich quick, get promoted to management, buy a million-dollar home, have kids, and stay locked in a lucrative position at a company. And what of your pride? Indeed. And therein lies the true pursuit of AGI.
The winners are pushing fundamentally flawed AI techniques because those techniques require massive amounts of data and compute, which is their primary business model. They won't succeed, because they are optimizing a business model at the end of its cycle rather than optimizing the pursuit of AGI.
AGI is coming, and it is completely outside the scope of the current winners. If a person desires to pursue and develop AGI, they'd have to be bold enough to sacrifice everything. That's how all of the true discoveries have been made throughout the history of science. Nothing has changed; but, primarily for reasons of money, once the historical lessons recede far enough, people attempt to retell and reinvent the wheel in their favor, only to be reminded: nothing has changed.
The individual discoverers do change over time, however, for they learn from history.
Then we feed it scans of human minds doing various tasks and have it try combinations (via genetic algorithms, etc.) until it begins to simulate what's happening in our imaginations. I'm arguing that we can do all of that with an undergraduate level of education and understanding. Studying the results and deriving an equation for consciousness (as Copernicus did for planetary orbits) is certainly beyond the abilities of most people, but hey, at least we'll have AGI to help us.
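For what it's worth, the "try combinations via genetic algorithms" part really is undergrad-level machinery. Here's a toy sketch of the loop I mean: evolve candidate genomes against a fitness function, where the fitness would in practice compare a simulation to the recorded scans. The `target` here is just a stand-in for that recorded data; everything in this snippet is illustrative, not any real neuroscience pipeline.

```python
import random

def evolve(fitness, genome_len=8, pop_size=40, generations=60, seed=0):
    """Minimal genetic algorithm: truncation selection, one-point
    crossover, Gaussian mutation. Returns the best genome found."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]      # keep the fitter half
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(genome_len)    # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.3:             # occasional small mutation
                i = rng.randrange(genome_len)
                child[i] += rng.gauss(0, 0.1)
            children.append(child)
        pop = children
    return max(pop, key=fitness)

# Toy stand-in for "match the recorded signal": negative squared error
# against a fixed target vector (higher fitness is better).
target = [0.5] * 8
def fitness(genome):
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

best = evolve(fitness)
```

The point isn't this particular fitness function; it's that the outer loop is simple enough that compute, not cleverness, is the bottleneck.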
Totally agree about the rest of what you said though. AGI <-> sacrifice. We have all the weight of the world's 7 billion minds working towards survival and making a few old guys rich. It's like going to work every day to earn a paycheck, knowing you will die someday. Why aren't we all working on inventing immortality? As I see it, that's what AGI is, and that seems to scare people, forcing them to confront their most deeply held beliefs about the meaning of life, religion, etc.
Video cards operate on a pretty limited scope of computing that might not even be compatible with the neuron's fundamental algorithm. The only thing SIMD has proven favorable for is basic mathematical operations with low divergence, which is why optimization-based neural networks function so well on them.
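To make the divergence point concrete: SIMD lanes execute in lockstep, so a data-dependent branch typically means every lane pays for both sides of the branch and the results get masked. Here's a plain-Python sketch of that execution model (the `simd_select` helper is my own illustrative name, not a real GPU API); an op like ReLU is cheap under masking, which is part of why current NNs map so well to this hardware.

```python
def simd_select(xs, cond, then_f, else_f):
    """Lockstep SIMD-style conditional: ALL lanes compute BOTH branches,
    then a mask picks the result per lane. Divergent lanes waste the
    work of the branch they discard."""
    then_vals = [then_f(x) for x in xs]   # every lane runs the 'then' path
    else_vals = [else_f(x) for x in xs]   # every lane also runs the 'else' path
    return [t if c else e for t, e, c in zip(then_vals, else_vals, cond)]

xs = [-2.0, -1.0, 1.0, 2.0]
cond = [x > 0 for x in xs]
# ReLU is a low-divergence op: both branch bodies are trivial,
# so masking costs almost nothing.
relu = simd_select(xs, cond, lambda x: x, lambda x: 0.0)
```

If the two branch bodies were expensive and rarely taken together, the wasted lockstep work would dominate; that's the "high divergence" case SIMD handles poorly.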
This is the entrapment many people in the industry fall into. The first step towards AGI is admitting you have zero understanding of what it is. If one skips this step and simply projects one's schooling and thinking onto the problem, one ends up falling far short.
You can't back-derive aspects of this problem. You have to take your gloves off, study the biology from the bottom up, and spend the majority of your time in the theoretical/test space. Not many are willing to do this, even at the highest-ranking universities (which is why I didn't pursue a PhD).
There is far too little motivation for true understanding in this world, which is why the majority of the world's resources and efforts are spent circling the same old, time-tested wagons: creating problems and then creating a business model to solve them. We are only fooling ourselves in these mindless endeavors. When you break free long enough, you see it for what it is and also see the paths towards more fundamental pursuits. Such pursuits aren't socially celebrated or rewarded, so you're pretty much on your own.
> As I see it, that's what AGI is, and that seems to scare people, forcing them to confront their most deeply held beliefs about the meaning of life, religion, etc.
One thing about this interesting Universe is that when a thing's time has come, it comes. It points to a higher order of things. There's great reason and purpose to address these problems now, and it's why AGI isn't far off. If you look at various media and designs, society is already beckoning for it.
I guess what I'm getting at, without quite realizing it until just now, is that AI can be applied to ANY problem, even the problem of how to create an AGI. That's where I think we're most likely to see exponential gains in even just the next 5-10 years.
For a concrete example of this, I read Koza's Genetic Programming III back when it came out. The most fascinating parts of the book for me were the chapters where he revisited genetic algorithm experiments from previous decades, but with orders of magnitude more computing power at hand, so that the same experiment could be run repeatedly. They were able to test meta-aspects of evolution and begin to come up with best practices for deriving evolution tuning parameters, which reminded me of tuning neural-net hyperparameters (still a bit of an art).
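That "repeat the experiment many times, then tune the evolution's own knobs" pattern is easy to sketch. Below, `run_ga` is a deliberately tiny, made-up GA run (maximizing a simple quadratic), and the meta-level loop averages repeated trials per mutation rate and picks the best, which is the shape of what Koza's repeated runs enabled, not his actual method or benchmark.

```python
import random

def run_ga(mutation_rate, seed):
    """One tiny GA run maximizing f(x) = -(x - 3)^2; returns best fitness.
    A toy base experiment, standing in for something expensive."""
    rng = random.Random(seed)
    pop = [rng.uniform(-10, 10) for _ in range(20)]
    for _ in range(30):
        pop.sort(key=lambda x: -(x - 3) ** 2, reverse=True)  # fittest first
        # Top 10 survivors each produce 2 children, mutated half the time.
        pop = [p + rng.gauss(0, mutation_rate) if rng.random() < 0.5 else p
               for p in pop[:10] for _ in range(2)]
    return max(-(x - 3) ** 2 for x in pop)

def tune_mutation_rate(rates, trials=5):
    """Meta level: repeat the same experiment per setting with different
    seeds, average the outcomes, and keep the best-performing setting."""
    scores = {r: sum(run_ga(r, s) for s in range(trials)) / trials
              for r in rates}
    return max(scores, key=scores.get), scores

best_rate, scores = tune_mutation_rate([0.01, 0.1, 1.0, 5.0])
```

With enough compute you can run this meta-loop itself many times, which is exactly the kind of "AI tuning AI" leverage being discussed above.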
Thanks for the insight on higher order meaning, I've felt something similar lately, seeing the web and exponential growth of technology as some kind of meta organism recruiting all of our minds/computers/corporations.