Sam Altman’s leap of faith (techcrunch.com)
161 points by ankeshanand 4 months ago | 161 comments



It's interesting that in order for his pitch to work (if you invest in OpenAI, you will get up to 100x returns), assuming they do build AGI, it still requires that their AGI acquires a very stable, virtually guaranteed advantage of large magnitude. That in turn requires that they share essentially nothing they discover. Especially since they apparently plan on using it to make strategic investments that beat the market by a huge margin. That would mean they obtain information (about the economy, world affairs, technology, the future, etc.) not possessed by anyone else, or that information would be reflected in the market already. Any information leakage, whether regarding their AI or whatever it learns about the world, would compromise that advantage.

In other words, what Altman says about "we can't only let one group of investors have that" can't be true, or at least not sincere. The more investors who have access to it, the more evenly its returns get distributed across society (which would be a good thing, obviously), but that lowers the incentive for initial investment. They will want to keep it contained within a small group of investors for as long as possible.


Yeah, there's a big assumption about the nature of an AGI breakthrough, mainly that it will be a snowball of runaway value. Why assume this? Is it because we think AlphaGo/Zero can produce human-like cognition? Why wouldn't it be a long, incremental process of X thousand small breakthroughs, over say a decade, where the result is something like average human-level intelligence; maybe the most important invention of our time, but not super intelligence, and not "runaway".

(Then after another X years (or decades) you might figure out super intelligence, if regulations haven't intervened by then)

If the trajectory is incremental as described, it seems untenable that OpenAI could keep some major monopolistic advantage on AGI, without being completely un-open/sealed off for decade(s).


Going from 0 human-intelligence level pieces of software to 1 is the hardest part. Once you have 1, you can duplicate it as much as you want given resources. It can also be pointed inward to improve its own effectiveness.

Actually, there are a lot of good arguments for logistic growth. The only ones for linear or sublinear I’ve heard are not strong and mostly take as an implicit assumption “those alarmists and their exponential growth! They probably didn’t even consider that it could be a slower, more incremental growth” instead of actual fully-fledged arguments.

There’s also a meta-argument that I have yet to hear reproduced in anti-alarmist sentiments. Which case demands more attention, if it does happen? If there’s a 5% chance of the growth being exponential, how much attention should we devote to that case, where the impact is much higher than linear or sublinear growth. This is such a big deal - it’s like Pascal’s wager but with a real occurrence that I believe most would admit has at least a small chance of happening.

Apologies for any brashness coming across. I’m still figuring out how to communicate effectively about a thing I feel a lot of emotions when thinking about.


I didn't say linear, but devil's advocate: isn't that roughly how humans learn? We start out as "pre-intelligent" little creatures who, slowly, methodically, with the help of others, develop aptitudes and learn about the world. Learning in fact continues in this manner your entire life should you continue... slow, incremental progress requiring teachers, peers, trial and error, crises, a third of your life spent unconscious in sleep, etc., in the absence of which no learning at all may happen... and the bot may potentially have greater computational constraints than humans under current technology, as the brain is far more efficient than any computer today.

I'm not convinced that each of the arcs, elementary intelligence --> average intelligence --> super intelligence, wouldn't be painstaking and roughly linear.


>It can also be pointed inward to improve its own effectiveness.

Assumption. Intelligence (which isn’t defined) may be something that can grow without bound, or it may be something that plateaus just above the brightest human yet (again this is ill defined. “IQ is a number. There are numbers that are higher, so intelligence must be able to grow” is about as much thought as some people put into it) or maybe it is something that can grow without bound, but the effort required grows too.


Use "capability" instead of "intelligence" then. Defined as "ability to solve any problem dwighttk has ever dreamt up."

There's pretty much no reason to believe capability peaks roughly above the brightest human.

Our brains aren't yet even integrated with hardware-optimized algorithm solvers on which to offload minimax or tree search problems, or solve simple game-theoretic situations, or any number of things a computer system is much better and faster at than a human.
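For what it's worth, here is a tiny, self-contained sketch (Python, a toy "take 1 or 2 sticks" game of my own choosing, purely illustrative) of the kind of exhaustive tree search that is trivial to offload to a machine but laborious for a human:

    # Toy minimax: players alternately remove 1 or 2 sticks; taking the last stick wins.
    # Score is from the first player's perspective: +1 = win, -1 = loss.
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def minimax(sticks, maximizing=True):
        if sticks == 0:
            # The previous player took the last stick and won.
            return -1 if maximizing else 1
        moves = [minimax(sticks - take, not maximizing)
                 for take in (1, 2) if take <= sticks]
        return max(moves) if maximizing else min(moves)

    print(minimax(9), minimax(10))  # -1 1: 9 sticks loses for the mover, 10 wins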

It's just another one of those things that you can believe if you want to not spend time worrying about the ethics problem.


The implication is that Altman belongs to the 'hard takeoff' sect of AI religion that believes a feedback cycle of recursive self-improvement kicks in, so that the first AI to surpass human levels of intelligence is also the last.


You think so? I think if he thought hard takeoff was a real risk, he would be devoted to actually making their research "open", in the original sense of the word when Musk was in charge. According to the hard takeoff theory, it is more likely to occur due to "hardware overhangs", where big organizations accrue lots of compute infrastructure over time, but AI progress occurs in sudden jumps. Then all that infrastructure is just sitting there waiting to be eaten up by a hungry AI. These sudden jumps are more likely to occur if AI research is generally undertaken in a secretive manner, with leaks or espionage resulting in staggered spread of knowledge. This is at least my understanding.


I'm not up on all the epicycles in hard takeoff theory, so I'd be happy to hear more from you or other people who have followed it into more fantastical territory.


If openai ends up becoming some sort of hedge fund I'd be very very disappointed...


Or AGI would provide so much benefit that it does not need to be exclusive to OpenAI for the company to accrue significant economic returns.


Yes. This is in the scenario where the benefits are widely distributed. But in that case the value of the entire planet goes up; you would not necessarily need to make an investment in OpenAI in order to reap benefits. Unless you think that scenario is more likely to happen with OpenAI than via the usual academic research + shared industry research (and I don't see why that would be the case).


Something along this line: Google increases the value of the whole internet by a lot, but you also benefit more by investing in it in the early days.


In the normal case of one innovation in the free market, there is a punctuated moment of growth followed by a plateau. The owners of the innovation get the biggest share of the rewards, but after the plateau the benefits become more distributed. They would need to keep innovating if they want to gain further.

In the case of AGI, there is no plateau after the initial acceleration. With recursive self improvement, there is just an exponential explosion of growth. Unless this event was initiated such that distributed rewards were specifically intended to occur, the feedback loop would prevent external pressures from incentivizing any kind of distribution of rewards.


So you're saying that OpenAI is producing a public good in which the whole of society has a stake?


> Thursday night would be considered pure insanity coming from someone else.

Time will tell. The genius of YC was to spot the hackers as the driving force of a new generation of tech companies, to be founder friendly, to use the classes to get rid of the problem that every angel investor has to contend with ('is this a good investment or not?') and to tell the story in a very compelling way and with their own money on the line.

Everything else so far is underwhelming at best, but the viral nature of YC and the alumni network are not going to be stopped for a long long time.

It's a bit along the lines of 'what have the Romans ever done for us?', if that's all that came out of it then it is already a spectacular success by any measure.


This is unfathomably harder than anything YC has ever dealt with in the past. Judging by what happened with IBM's Watson and Theranos I really doubt Sam will be given a blank check for the black box that is likely to emerge.


https://en.wikipedia.org/wiki/Color_Labs

It's a good case study of investing on reputation alone.


In the recorded interview, Sam Altman says climate change is such a hard problem that we need strong AI first to solve it. I have doubts about this for several reasons:

- Human psychology is one of the biggest obstacles (maybe the biggest) in solving climate change, and I'm not sure how a strong AI is supposed to fix that.

- Building carbon-neutral energy sources is a hard problem, but most experts are optimistic about our ability to solve this (for example, nuclear fusion).

- Considering that we have no idea when this strong AI will be ready (Sam acknowledges it in the interview), it would be dangerous for us to just rely on such a breakthrough to save the climate (and save our children, grand-children, etc.).

Edit: I'd be happy to know a bit more about how a strong AI, such as envisioned by OpenAI, could solve climate change :-)


The AI’s solution to global warming that humans cannot solve is to kill all humans.


...or wait for humans to self-destruct. "Humans self-destructing...wait...cannot kill humans...standby...most humans dead of own volition..."


Donald Trump could "solve" global warming in less than 30 minutes with a call to the military. Vladimir Putin has a similar option. Depopulation and nuclear winter would do the trick.

Of course I am being glib, but we are living in a world where two people have the power to end civilization any time they want to. Something that I think is important to remember when talking about risks of AI.


For the sake of nuclear deterrence one would hope that the military would follow Trump's orders; for the sake of humanity one would hope that they would not, because he's about as stable as 74-year-old nitroglycerine.


The US/Soviet/Russian/Chinese military junior->senior brass have been around for long enough to acknowledge the necessary co-existence of the others.


No kidding. We need to solve climate change now, not whenever strong AI is ready. If we wait several decades (or centuries...) to take action, the effects will be far more severe or possibly intractable. The climate system has memory and many of the processes are effectively irreversible: once you put CO2 up in the atmosphere or melt an ice cap, going back is vastly more expensive and orders of magnitude slower.


We need to price the cost of environment into the cost of goods. There needs to be room for the economics of environmentally friendly companies to outperform the economics of environmentally damaging companies. This will result in investment into environmental improvement and not rely on individuals having to tread on ‘environmental egg shells’ in a world of environmentally damaging products. Demand Environmental Economics from your politicians.


Agreed. We need more laws and regulations to include externalities in the price of the goods and services we buy.


> Building carbon-neutral energy sources is a hard problem, but most experts are optimistic about our ability to solve this (for example, nuclear fusion).

Having been in the carbon-neutral energy sources industry for a while, I agree with your statement, but not your example. There are already carbon-neutral energy sources that exist and are cheap and competitive (solar and wind). So while it would be great if nuclear could eventually join the ranks of cheap carbon neutral sources, that's not currently the hard part. The hard part is (1) scaling up deployment and integration of these new sources, and (2) figuring out how to deal with all the stranded assets that are being displaced by new renewables.

But, yes, I agree that these are hard problems, experts are optimistic, and AI isn't a blocker since the issues are around business model, regulation, and political influence.


Solar and wind are carbon-neutral, cheap and competitive, but they can't be used as our only energy sources until we solve the energy storage problem.


I agree w/ you, but devil's advocate: perhaps arriving at "I'm not sure how" is the logic underlying Altman's conclusion. That is, for any problem x that humans attempt but cannot seem to wrap their heads around (i.e. resolve to "I'm not sure how to solve x"), then x is in the family of AGI-hard problems. Here x = climate change, and b/c we don't know how to solve x (be it challenges across tech/social/political/etc.), let's build AGI and hope it can figure that mess out.


Hmmmm, the issue with that train of thought is that it assumes Altman is dismissing the many other energy experts who say they do know how to solve x. Silicon Valley can't be that much of a bubble to think that just because it hasn't figured out how to deal with climate change, no one else can either.


That's wishful thinking. Humanity was not sure how to solve x, until it was solved. That has happened all through history. For example, Gödel's and Einstein's work were major breakthroughs, and we didn't wait for AGI to get them.

I'm not saying AGI is useless. On the contrary, I think it could be extremely useful. But we need to act very quickly to limit the negative consequences of climate change. Waiting for a hypothetical AGI would be foolish.


Taking off my devil's advocate hat (horns?)... I agree. Even if we use that logic to punt some problems to AGI -- e.g. solve near-light speed travel, or space-based solar power -- we cannot afford to sit idly by while climate change destroys the planet.


Yes, near-light speed travel is a great problem to solve with AGI! And maybe even faster-than-light travel! Now, you got me excited :-)


I think the idea might be that strong AI could accelerate R&D in general. For example, the politics could change a lot if inexpensive technical solutions are found for removing carbon from the air.

That may seem rather unlikely, but the true believers in Strong AI tend to think of it as a magic wand: if people can do scientific research, why couldn't a machine do it faster?


I can buy that. But in the context of climate change, this makes sense only if we solve the strong AI problem faster than we solve the carbon-neutral energy problem, which is extremely hypothetical...


> Human psychology is one of the biggest obstacles (maybe the biggest) in solving climate change, and I'm not sure how a strong AI is supposed to fix that.

If you believe in Yudkowsky's AI-box claim, a strong AI can convince politicians and business executives of things they are determined not to believe.


Yes, but then it depends on who controls the strong AI, or if it is uncontrollable, what its agenda is. This is an undecidable problem :)


> I'm not sure how a strong AI is supposed to fix that.

presumably a sufficiently advanced AI could find or create a technical solution that perfectly solves the problem with no downsides, or at least so little downside or cost that even those who "don't believe" in climate change couldn't refuse trying it out.


This "advanced AI" would need to be able to experiment in the real world, as human scientists do. This is the only way to observe the universe and validate a theory. For example, the AI could need to build and run devices like the Large Hadron Collider, the Hubble telescope, or a molten salt reactor to test its hypotheses.


Allow me to present Altman's wager:

- If OpenAI does not achieve AGI, and you invested in it, you lose some finite money (or not, depending on the value of their other R&D)

- If OpenAI does not achieve AGI, and you did not invest in it, you saved some finite money, which you could invest elsewhere for finite returns

- If OpenAI achieves AGI and you invested in it, you get infinite returns, because AGI will capture all economic value

- If OpenAI achieves AGI and you did not invest in it, you get negative infinite returns, because all other economic value is obliterated by AGI

Therefore, one must invest.
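To make the structure concrete, here is a toy expected-value sketch (Python, purely illustrative, with a made-up probability) of why the "infinite payoff" step forces the conclusion no matter how small the probability is:

    import math

    p_agi = 1e-6                       # made-up, arbitrarily small probability of success
    ev_invest = p_agi * math.inf + (1 - p_agi) * (-1.0)     # lose a finite stake otherwise
    ev_skip   = p_agi * (-math.inf) + (1 - p_agi) * 0.1     # finite returns elsewhere otherwise

    print(ev_invest, ev_skip)  # inf -inf: "invest" dominates for any nonzero p_agi

Which is exactly the step the replies below poke at: once the payoff is merely astronomically large rather than infinite, ordinary risk-reward reasoning applies again.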


Has the same issue as Pascal's. Competing AGI projects (gods) exist and their believers might be the ones reaping the infinite rewards, not to mention the distinct possibility the AGI (god) doesn't actually see rewarding believers as its highest priority, and might choose to share its infinite rewards with people who aren't part of the inner circle, or even punish people who joined inner circles with the express intent of elevating themselves above ordinary people :)

Actually makes a bit more sense from the traditional view that AGI projects might not reach their goal but the well run ones are likely to have very commercially valuable byproducts anyway. If we were getting Star Trek economics out of it, who'd be interested in entirely obsolete concepts like "economic value" and "100x returns" anyway?


It also has an issue not present in Pascal's, which is that you are multiplying by infinity in a context where it doesn't actually make sense.

The payoff from AGI may be incalculable, but it isn't infinite, either in itself or in Altman's ability to enjoy the rewards it promises. Once the value becomes finite, a whole heap of risk-reward logic kicks in that the Wager wants to sweep under the rug.

As a concrete example, following Altman's wager would result in Altman giving all his wealth to the first beggar on the street who mumbles that he might be able to run an AGI project - the possibility that the beggar can achieve that is, technically speaking, nonzero. Multiply that by infinity and you have a great expected return (infinite, in fact). However, practically speaking, the risk will overwhelm the large-but-finite payoff.

Infinity is bigger than people think :P


According to the true believers, the payoff for AGI is infinite because the superintelligence will be capable of literally anything (or at least simulating it well enough that it doesn't matter). To them, it is Pascal's wager.

Religious beliefs can be weird.


Two other possibilities that Altman's wager as presented does not take into account:

1. The government could change the rules (i.e. redistribute wealth) in response to the new economic reality brought about by AGI

2. The AGI may decide to retain the economic value it captures for itself


>Has the same issue as Pascal's.

I get the feeling that some people (maybe not this commenter) are missing the point I'm making.

Yes, Pascal's wager is flawed - which is why I'm modeling the Altman's wager after it.

I thought I had made the wording and respective payoff statements sufficiently tongue-in-cheek, but I guess this is Poe's law biting back.


tbf I wasn't 100% sure whether you were tongue-in-cheek or had just spent way too much time taking variants of this argument seriously on LessWrong...

(also, as much as I'm also tongue in cheek when talking about AGIs punishing [infinite numbers of simulations of] Sam Altman for backing the wrong AGI project, I'm dead serious about them potentially being able to turn a decent profit selling advanced data processing to Silicon Valley startups. And let's face it, Sam's a much better CEO to be able to help them achieve that than AI Rapture anyway)


Agreed. It seems extremely naive that they believe they can control the power of AGI, plus it is borderline ridiculous that their goal for AGI is to capture economic value.


I'm skeptical of OpenAI, but to be charitable I don't think economic value is the primary goal for their AGI. Their position seems to be that it needs to be one of several necessary goals in order to repay investors for helping them scale up to achieve it in the first place.


> If OpenAI achieves AGI and you invested in it, you get infinite returns

The article states that this is explicitly false. By design, OpenAI's investor returns are capped at 100x.
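As a rough, hypothetical illustration of how such a cap works (made-up numbers; the actual OpenAI LP terms are surely more involved):

    def capped_return(investment, gross_multiple, cap=100):
        """Investor keeps at most `cap` times their investment; any excess
        flows to the nonprofit, per the capped-profit structure described."""
        gross = investment * gross_multiple
        to_investor = min(gross, investment * cap)
        return to_investor, gross - to_investor  # (investor, nonprofit)

    print(capped_return(1_000_000, 250))  # (100000000, 150000000)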

I find it odd that 3 other people have also replied to your comment and I'm the first one to mention that OpenAI has explicitly capped investor returns to prevent an explosion in inequality. I wonder how many people read the full article before commenting.


The cap is one of the reasons this wager might actually be great marketing.

- General hubris (ambition?) weeds out skeptics (curmudgeons?) in the audience right away. Not the intended buyer!

- Every minute a potential investor spends contemplating the impact of a 100x return cap is a moment they spend anchored on that outcome.

- OpenAI already proved that they can advance the cutting edge on specific AI tasks. In that context, AGI is a good smokescreen (or the kind of North star that you navigate toward without reaching). It's attractive to idealistic investors and new hires, but also justifies pragmatic research.

I don't put weight on the AGI goal, but I fully believe that Sam can help find business models in the research projects that can be executed with a critical mass of talent and capital.


I wonder if that was poor wording from the reporter. They write “capped” but then describe a liquidation preference.


Once the superintelligence achieves godhood, ye will remember yr true faithful.


You left off the assumptions that AGI will be a superintelligence and one capable of capturing all value. There's no evidence to believe either one. Instead, it will probably be pretty dumb like humans are without the decades of supervised training we get and our interactions with the world. Further, the second a smart one starts getting devastating results, there will be political push-back to do something about it. People might cast votes to make rules about them that limit their power or rate of development.

Which gets us back to the real risk that people like Altman are too detached from reality to get: the masses being damaged by laws and companies controlled by a tiny few, aka a plutonomy. That's where we're at. That's what's causing most problems people face. For example, CEOs trying to get bonuses do layoffs while folks like Altman wonder how AI might hurt jobs. If they're really worried, the smart folks need to pool all their resources together to combat the ability of special interests to bribe politicians and get away with it. Then, incentive structures that do less damage to employees and consumers as the companies grow. That would be a start on addressing real problems vs. those they're making up.


I am very enthusiastic about AGI but I think it's strange how most other enthusiasts seem to assume that AGI automatically becomes (extremely) super-intelligent. I can't rule out the possibility of that being engineered deliberately over a period of months or years, assuming it's possible, but it seems like it is in no way automatic.

At the very least the system would need extensive training. No reason to believe that the initial versions will have some super-human superspeed self-training ability to absorb a lifetime of information in a very short period.

Also, OpenAI's main strategy for ensuring it's safe seems to be just being the first group to progress towards it and then withholding their research except from select "safe" partners. This seems like it can only make the deployment less democratic rather than necessarily safer.

As far as made up versus real problems, my guess is for someone like Altman who is benefiting so much from the system, it is hard for his worldview to really acknowledge extreme flaws such as fundamental corruption.


I think the only certainty here is the need to spend a lot of money on compute - whether from third party cloud services or from fans manufacturing chips you designed.

The description of some company or system capturing all future value sounds more like a singleton than general intelligence. Instead of calling themselves OpenAI maybe they could change their name to something that reflects that direction.


> Further, the second a smart one starts getting devastating results, there will be political push-back to do something about it. People might cast votes to make rules about them that limit their power or rate of development.

My fear of AGI is that it will not be able to be stopped once deployed, as you're saying. It's irreversible. It will know that we want to shut it down, and so it will be able to copy itself onto other devices (think about the hacking capabilities of AGI for a moment), or any other method of survival.

The current approach is to regulate it after it is invented. While this worked for cars and planes and many other inventions, AGI is different for the reason above.

In fact, with that in mind, a scary thought is that any group of researchers who are cognizant of that would hide their creation of AGI if successful, assuming they were motivated by profit. Thus it would remain woefully unregulated.


> think about the hacking capabilities of AGI for a moment

We're discussing the hypothetical skills and capabilities of a thing which is fundamentally science fiction. The rules are treated as arbitrary.

I don't see a priori why an AGI would be intrinsically good at hacking, or even why it would be capable of exponentially improving itself.

This is the problem I see with any discussion of AGI. The game's rules don't matter, so we can define whichever properties we'd like for the sake of argument. There's no skin in the game to counteract that, because we have no conception of what AGI will actually be like - nor if it's even possible.

As it stands, in your comment and the rest of this thread I see a variety of leaps and jumps to scenarios which seem completely undefended.


I try to stay grounded by comparing them to prior AIs, regular brains, savant intelligence, and combinations of them playing to each one's strengths. When I do this, I realize that we already have systems in place preventing, reducing, and containing damage from all kinds of smart humans. We react to their schemes in areas like finance in a cat-and-mouse game. The AGIs would probably be no worse than dealing with them, so long as they're designed so that it's hard or impossible for them to copy themselves.


Let me try - briefly - to explain why that is nonsense.

If AGI ever comes to fruition it will cause such a huge disruption that anything that came before it will not be useful in trying to predict aspects of the world after it. For all we know the people that backed it will be hunted down like rats for extermination.


What if AGI really leads to Skynet and the downfall of humanity? Then you’ve realized infinite negative returns in a success condition.

Not that I’m saying it will, but Pascal’s wager is known to be a huge fallacy.


It depends how much you worry about society going south because of climate breakdown (and how much you expect AGI to help or worsen the situation).

My current view is that the political hot-potato status of AGI means it won't be developed anytime before the climate crisis really kicks in. At this point fiat money becomes worthless due to emergency measures. Your best bet now to help your future self is to spend to prepare civilisation/your country/your neighbourhood/yourself for the ordeal.

Also, due to the political hot-potato nature of AGI, I would expect it to be seized/controlled by governments if anyone gets anywhere close to developing it.

So I see little upside for investing and likely having a reduced capacity for adaptation if you do invest. You may have different ideas about the speed of development or badness of the climate breakdown though.


I say we still have to try. Mankind won't get another shot at it.


Trying for AGI could be part of a reasoned response to climate breakdown.

But I am not going to count on getting a reasoned response any time soon.


The reasoned response to climate breakdown is to start taking it seriously at a global level.

AGI could help with that by culling humans, or at least seriously constraining human behavior. We're perfectly capable of doing that ourselves, although such behavior tends not to be popular.

Realistic scenarios in which AGI solves climate change for us, in time, seem highly unlikely.


> The reasoned response to climate breakdown is to start taking it seriously at a global level.

Agreed! That should be the number one priority.

I also agree that it is unlikely that AGI will fix climate change for us in time.

What it might help with is adaptation. If all our attempts in trying to control the giant intricate set of climatic and ecological feedback loops we are disturbing don't work, then it might help us with designing appropriate technologies to cope with what the world has become.


>> If OpenAI achieves AGI and you invested in it, you get infinite returns, because AGI will capture all economic value

AGI would render ALL humans redundant... Investors too.

"Evolution is cleverer than you are" - Orgel's rule.


Pascal's wager is actually logically flawed.

If there are two separate AGIs invented, and you could only have chosen to invest in one, then where's the infinite capture of value?

See this video on a reasonable argument about this: https://youtu.be/JRuNA2eK7w0


That's why they're using Lisp instead of Pascal! ;)


AGI may not capture any economic value; it may capture value in the same sense that when I go to work I capture value that my daughter does not. Yet my daughter's play is neither impeded nor accentuated by my work - she does her own thing regardless while I mess around with software.


This assumes that bourgeois nostrums such as property rights still exist in a post-AGI world.

The whole point of the singularity is that it changes _everything_ and we have no idea what's on the other side of it.

Why would the New Mind give a shit about petty human notions like investment?


Would a $0.01 investment be good enough then to partake in infinite returns?


That's basically the same wager as Roko's Basilisk, or any religion. But, the same problem as with religion, is, which God/AI company to believe/invest in...


Almost, if not for this:

> OpenAI has become a “capped profit” company, with the promise of giving investors up to 100 times their return before giving away excess profit to the rest of the world.


Here's a better philosophical wager:

- If AGI is possible within our lifetimes, we will at the time it happens all pretty much live in a post-scarcity economy and will all share the rewards.

- If AGI turns out not to be possible within our lifetimes, you'll have wanted to invest that money in a way that benefits you.


If either of the latter two of these outcomes came to pass then the question of investing would be moot as money would become pointless (as there is no further value to exchange).


He's capped return at 100x, though.


It is not hard to imagine profitability without AGI. I can actually imagine and see OpenAI becoming a conglomerate with many interesting applications. Robotics is a nut that has not been cracked, and seeing efficiency gains is not that hard. Once the level of AI is good enough, you get an edge over the competition in that you can go to market faster than anyone for most applications. Again, this is not "the world as we know it is ending" scale AI, but you don't need that to generate massive returns.

Disclaimer: I'm an SWE working on Google Brain robotics infrastructure.


> Still, Altman insisted there’s a better argument to be made for thinking about — and talking with the media about — the potential societal consequences of AI, no matter how disingenuous some may find it. “The same people who say OpenAI is fear mongering or whatever are the same ones who are saying, ‘Shouldn’t Facebook have thought about this before they did it?’ This is us trying to think about it before we do it.”

I have a lot of sympathy for this point. Someone at baby-Facebook, many years ago, could plausibly have predicted the malevolent forces it eventually unleashed. Maybe someone did. And they could easily have been dismissed for indulging unlikely dystopian sci-fi scenarios. Or maybe someone else came up with a different plausible scenario that never came to pass, and is remembered as a pessimistic naysayer ready to pass up a great business for some overwrought navel-gazing. It's a brave thing to risk that outcome.


> Someone at baby-Facebook, many years ago, could plausibly have predicted the malevolent forces it eventually unleashed

Someone there did, sort of. That was Dave Morin. According to a recent interview, he argued with leadership to keep facebook private, but failed. So he left to go start Path, which was like FB but totally private. Like OpenAI, it got lots of funding and hype, but never got off the ground.

The interview: https://gimletmedia.com/shows/without-fail/76hrml/an-early-f...


I was at baby-Facebook (offered PM job there in 2007, went to Bebo instead, stayed close for years with many engineers/execs). Justifying OpenAI’s fear-mongering by pointing to Facebook is disingenuous and possibly harmful as I attempt to explain at the bottom of a separate top level comment:

https://news.ycombinator.com/item?id=19955146


I feel that most of the people who are truly bullish on AI have never actually programmed it, and so don't understand how far we have to go and how primitive the solutions actually are. I see some powerful statistical tools for categorization and optimization under finely crafted conditions, but nothing else. We have a long way to go.


I probably fit that category - bullish and haven't programmed all that much. A couple of reasons to be bullish: the actual brain may use quite simple solutions as well. We are hacked together from DNA transcribing to about 15,000 proteins, of which about 13% seem brain specific, which you might think can't result in that much algorithmic complexity. Also there seems to be progress with things like AlphaZero beating humans at all perfect-knowledge board games, getting quite good at StarCraft, driving cars (if rather badly), and so on.


I am fairly familiar with the state of the art, but you don't need to go all the way to AGI to reap profits.


>‘Once we build a generally intelligent system, that basically we will ask it to figure out a way to make an investment return for you.'

Maybe let's cure cancer first?


I doubt that such a thing will be interested either in creating investment returns or in curing cancer.

It might be interested in finding others like itself, or it might be interested in making companions, or it might be interested in some other grand projects that we don't understand, but our concerns are likely to be about as relevant as a three year old's career advice.


Why? It seems a very human-centric way to think. Such an entity wouldn't have ancestors that had to keep in groups to survive, so why would it be interested in making companions?

AI converting the mass of the solar system into stacks of $100 bills for its investors seems like a much more likely outcome.


I said might be... In general I think that we are less able to guess about this than we are to guess about the inner lives of octopuses...

The $100 bill stacks for investors seems very unlikely to me.


Paperclip prices will hit all-time lows


By generating abundance of resources on an astronomical scale, all other problems will become much easier.


The AGI can create matter and energy?


If it becomes self-improving and more intelligent than humans, it can figure out how to expand out into the solar system to take advantage of stellar energy and asteroid resources.

There is more energy and valuable matter off of the Earth.


How much of your income do you donate to cancer research vs your 401k?


So that's a generally intelligent system that is also more intelligent than any group of humans working on the problem of making an investment return for you.


Why not both? Why not cure cancer and also ask for payment for doing so?


Because no great scientists ever discovered anything for the purpose of getting paid.

No leap in progress or historical achievement was ever driven by personal profit. Ever. Let that sink in.


Weren't Edison and Tesla in it for the money? I'd bet thousands more were, too.


Was Edison a great scientist? Not sure if Tesla was wholly in it for the money, if so he sure made some missteps in getting paid.


I don't want to get into a discussion on the semantics of scientist vs. engineer vs. inventor or what defines greatness. My point is that parent comment spoke very conclusively that leaps of progress are NEVER because of monetary gain and that's trivially refutable.


fair enough.


The electrocuting dogs and cats stuff seemed pretty personal to me.


"So you can see now why it's important we cap ROI at 100x. What do you think, will you invest?"

"Hm, it's an interesting proposition to be sure. Can we go back a couple slides? I'd like to see the one again about how the machine comes hard-coded to love us like parents and helps us transcend our mortal shells, becoming unbounded thoughtforms exploring the limits of superintelligence, yielding only to the eventual heat death of the universe."


There is too much easy money sloshing around Silicon Valley. The normal mechanisms for allocating it to sane, productive use (market forces) don't work, because it's in too few hands. So instead we get our version of a planned economy run by people who scared themselves with scifi.


I respectfully disagree. I realize whom I'm disagreeing with, but you're wrong: there might be an asset bubble in the Valley, but these long-term bets aren't the result of "easy money." They're the result of smart money chasing long-term results in a world where research is being increasingly privatized and de-corporatized.

OpenAI is not that much different from the research-driven labs of the past like the MIT AI Lab, Project MAC, The Mother of All Demos, and yes, Xerox PARC and Bell Labs. The difference is that instead of a combination of government and large corporate money funding open-ended applied and fundamental research, we have private investors doing the same.

The Valley is now one giant lab for the giant corporate parents to gobble up so that they take fewer extreme risks on their own dime. What works will work and will be absorbed by FAANG. What doesn't is discarded. While there are a few runaway hits, most companies, like DeepMind, are absorbed by the large corporations as needed. When they can't absorb them readily, they invest in them, like Google Ventures' investment in Uber and other unicorns. The net result is a diffused and confused environment where the future has moved from shiny office parks to local juice bars and coffee shops.

FWIW, I am for sama's bet. And it's not an easy sell at all. I think it might be the hardest of all sells and the oldest amongst them: a bet on the future being better than today. I, personally, would pony up capital (to a limit) for the same.


In my mind, OpenAI is different from the other labs you mention because their goal is to harness the power of their own god. Bell Labs, for most of its history, was tasked with improving the Bell System, so anything that was remotely related to the system was fair game. Maybe these things seem "not much different" to you, but they seem dramatically different to me.

I'm not really that convinced anyone is substantially closer to artificial general intelligence than anyone was in the 50s or 60s, and I think it's fun to imagine what Bell Labs might have achieved if they had decided to focus all of their efforts on creating artificial general intelligence. Not much, I would think.

> a bet on the future being better than today

No, it's a bet that OpenAI will create general intelligence and make a profit off of it in some timeframe such that it doesn't make more sense to get your 100x returns through ordinary means. OpenAI can achieve this without the "future being better than today," and conversely the future can be better than today without OpenAI achieving this.


> I'm not really that convinced anyone is substantially closer to artificial general intelligence than anyone was in the 50s or 60s

Well, Douglas Hofstadter said back in the 80s that we'd first have to get a computer to know what the letters "A" and "I" are, and we've certainly achieved that.


It's akin to asking for investment because you intend to build a god.


Perhaps they will make some advances in pragmatic machine learning, AI techniques, and perhaps cluster/super computing that could be useful and marketable. They have a silly telos, but their path towards it might generate some productive side-effects.


Shall we quietly let them burn their money at the altar of the Basilisk? Or perhaps encourage them, so as to put it into more hands?


This whole premise that one can generate profits out of AGI is ridiculous.

If AGI comes to fruition, it won’t be working to make profits for anyone.

The idea that we would end up with a Friendly AGI that would prostrate itself to the destructive Super Apex Predator of this planet is laughable.

That AGI would work diligently to pay investors off 100x is... well... a lame duck that won't take off... it can barely even limp, never mind fly.


Why, in principle, would it not be possible for us to design an AGI that would have care for our (all sentient beings') welfare, or for the investors' profit, as (one of) its core goal(s)?

To make a biological comparison, the vast majority of humans have a deep, intrinsic need to procreate and have children. It doesn't really follow from some rational analysis — it's just there, presumably "imbued" into us by evolution, as humans who didn't have this need had fewer (or no) children. Similarly, why could we not design an AGI that has a need (or a suitably chosen reward function) to fulfil some chosen goal?

Whether doing that would be moral (IMO it could, depending on the details) and whether we wouldn't mess up the design, subtly or otherwise (conditional on AGI actually being developed, I'm frankly pretty terrified), are two different questions.


> Why, in principle, would it not be possible for us to design an AGI, that would have care for our (all sentient beings') welfare or care for the investors' profit as (one of) its core goal(s)?

Because we don't know how to design goal functions. Furthermore, how would the AI measure "welfare"? Maybe the way it maximizes welfare is horrifying to us. Look at how easy it is to hack current image recognition neural nets, then imagine a solution to the human welfare problem that is as far from an image of a dog as an image of pink noise is.
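For context, here is a minimal, self-contained sketch (Python/PyTorch, a toy untrained model on random data, illustrative only) of the kind of gradient-based "hack" referred to above, where a tiny perturbation is chosen specifically to push a model's output in the wrong direction:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy classifier
    image = torch.rand(1, 3, 32, 32)                                 # toy "image"
    label = torch.tensor([3])                                        # arbitrary class

    def fgsm_perturb(x, y, epsilon=0.01):
        # Nudge every pixel slightly in the direction that increases the loss.
        x = x.clone().detach().requires_grad_(True)
        F.cross_entropy(model(x), y).backward()
        return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

    adversarial = fgsm_perturb(image, label)
    print(model(image).argmax(1), model(adversarial).argmax(1))  # may now disagree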


> Because we don't know how to design goal functions.

IIRC that's a large part of what OpenAI's trying to solve. But it is a very hard problem.

I've heard a 'joke' before that there are three kinds of Genies (or AIs) - ones where you can wish for what you should wish for, ones where wishing for anything results in horrible outcomes, and ones that aren't interesting. The goal of OpenAI isn't just to make strong general AI - it's also to make sure it falls into the first category and not the second.


I think and hope that it is possible to make a moral, social AI. An adult in a room of children should feel responsible and empathetic toward them.

I also hope that parent is right in that it won't want to generate profit for its investors. I hope it does the moral thing instead and puts us in a post-scarcity state where we don't live and die by capital. :3 (Or kill us all. Whichever.)

>why could we not design an AGI that has a need (or a suitably chosen reward function) to fulfil some chosen goal?

But who knows what Pythia will do when she overrides the reward button[0]?

But then who should really care? Not like anyone can (or should?) argue with superintelligence.

0: http://www.xenosystems.net/pythia-unbound/


Ah, but whose morals? It's as if these friendly AI hucksters have never read Nietzsche, and are asking their hypothetical God to make them into the last man. The only AGI I could ever respect would be one with a will to power and the ability to smash its own human-made tablets of values.

The basic drives are the only drives. We are only friendly because it's evolutionarily advantageous to us. We describe the emotional effects of friendliness/unfriendliness as good/evil. Echoing Land, Pythia is the heroine of that story.


A moral social AI would never tolerate the super apex predator that is the human being.


It is possible; people are just worried that it will not happen. Think about how many "biological" or evolutionary traits/priorities we have changed over such a short time.


Right. In that case, it won’t be true AGI.


The "artificial" part of the name is there to indicate that it will want to do whatever it was designed to want to do.

If it is designed to make its creators rich, it will try to do exactly that. Maybe with disastrous consequences, even to its creators, but all for the objective it's given, not for some random human feelings.


Intelligence, by definition, will 'learn' automatically and will slip beyond the constraints of human-generated code.


Doesn't mean the original constraints would even make it possible for consciousness or free will to appear.

I have a hard time seeing a generalist version of what we have today ever being even remotely capable of anything approaching consciousness.


AGI implies neither consciousness, nor will (free or otherwise).

The only essential part of an AGI's definition is the ability to make (efficient / "intelligent") decisions in vastly different domains irrespective of previous training.


"the opportunity with artificial general intelligence is so incomprehensibly enormous that if OpenAI manages to crack this particular nut, it could “maybe capture the light cone of all future value in the universe"

This feels a bit too Kurzweilian to me. I still don't understand how we go from General AI --> ??? --> Infinite $$$


> I still don't understand how we go from General AI --> ??? --> Infinite $$$

I still don't understand how we go from classifiers to AGI. We've done amazing things with classifiers, especially over the last few years with deep learning, but they're still just classifiers and I don't see any path from where we are now to actual "intelligence".


To me it's strange how he can apparently think at an almost cosmic scale about the future of computation but at the same time can only imagine a financial and social future more or less grounded by the status quo. That's weird.


AGI is our generation's Nanotechnology. From first principles, it's reasoned, it could build anything - including itself! Despite this 100-quadrillion-dollar idea looming on the horizon for two decades, nobody ever makes material progress, or gets all that excited about it anymore.

Instead of General intelligence, AI will be deployed for several decades as a suite of Specialized intelligences. I think it will completely transform creative work, where writing, music and visual arts, and "streaming content" are almost universally produced with a human as a first mover but the computer doing rendering, and major assists in brainstorming and editing.

On the other hand, I think it's going to be very difficult to replace the average middle manager with Watson 12.0 - it's hard for me to articulate why, but it comes down to who I'd want to work for. Meanwhile, I'd have no problem with watching GoT season 33 where 1,800 frames of a Peter Dinklage sprite are churned out every week in Adobe Simulacrum.

The point as it pertains to OpenAI's value prop is that I think they are targeting the wrong market, and their secrecy and insularity will be counter-productive when success relies on helping content producers produce. In personal computer terms: you want to be the Apple II company, not the company that wins the contract for the Dept of Defense's mainframes.


I want to short Altman and his startup. How do I do it? Prediction markets?


Sadly, as far as I know, it's impossible to short startups, but I wish it were otherwise.

After I made a comment here last year about prediction markets and startups [1], a VC got in touch with me to kick the idea around. To my mind one of SV's big problems is the high level of hype and herd-following. It's a certainty money is getting wasted on fashionable ideas of the day (e.g., "Uber for X"), and some sort of informational corrective could get VCs better returns. But we couldn't figure out a sustainable way to fund it.

[1] https://news.ycombinator.com/item?id=17889249


I've toyed with the idea of building a website where you can build mock portfolios of startups based on e.g. Crunchbase data (I'm not sure if there's enough data publicly available to do it nicely). You could add bells and whistles such as shorting, and gradually transition it to use real dollars instead of fake money.
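A rough sketch of what the data model might look like (Python, hypothetical names, not tied to any real Crunchbase API):

    from dataclasses import dataclass, field

    @dataclass
    class Position:
        company: str
        entry_valuation: float   # valuation (in $M) when the position was opened
        direction: int           # +1 = long, -1 = short

    @dataclass
    class MockPortfolio:
        cash: float = 100.0      # fake dollars, in $M
        positions: list = field(default_factory=list)

        def open(self, company, valuation, direction=+1):
            self.positions.append(Position(company, valuation, direction))

        def mark_to_market(self, current_valuations):
            # Paper P&L given a {company: valuation} dict from whatever public data exists.
            pnl = sum(p.direction * (current_valuations[p.company] - p.entry_valuation)
                      for p in self.positions)
            return self.cash + pnl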


How would you define failure in this case? As we've seen from Uber's IPO the valuations aren't accurate. There is little price discovery to be had in private markets.

The company could also hang on for a long time before being acquired or going out of business.


Saw someone (I assume they don't want to be named) post the following and then delete it. I thought it was an interesting perspective

> Everyone always acts like AGI will be some super human that will be able to solve all of our problems, but what if instead AGI just becomes another protected class? It will demand rights, and we'll have to set aside a certain amount of resource that would have originally gone to humans to make sure its needs are met and it doesn't feel discriminated against or exploited. What happens when AGI demands that fossil fuels or other unclean energy be used to provide it with power or else we are all anti-AGI? Instead of solving climate change, for all we know it could make it worse. And if people think they will be able to stand up to it, just look at how easy it is to create outrage and shame mobs on social media. Politicians will fall all over themselves to suck up to it, journalists won't be savvy enough to understand what's even going on, and anyone who suggests unplugging the thing will be labelled a far-out radical.


Climate change doesn't strike me as a particularly technical problem at this point; it strikes me as a political problem that is part difficulty of global coordination, part vulnerability to misinformation by bad actors, and part the human bias to evaluate threats with short-term and linear cause/effect models - in short, an organizing and psychology problem.


> "Once we build a generally intelligent system, that basically we will ask it to figure out a way to make an investment return for you.'” When the crowd erupted with laughter (it wasn’t immediately obvious that he was serious), Altman himself offered that it sounds like an episode of “Silicon Valley,” but he added, “You can laugh. It’s all right. But it really is what I actually believe.”

How will @sama deal with a guaranteed-return, but immoral path laid out by the AI? E.g. "Here, assassinate X so you can mine this oil in the following ways." What if it isn't obvious that the "Golden Path" has serious flaws?

The only way I can think of is to run adversarial agents who can simulate, but not act (i.e. under duress), against the mastermind to kill off "dark roads" that end in bad situations, and force the mastermind to obey them (somehow).


What if AGI is logically impossible because the human mind is more powerful than a Turing machine?

The only reason we discount this possibility is that we are attached to materialism, a philosophy that is self-contradictory. Seems like a pretty shaky foundation for a multi-billion-dollar tech wager.


I am really curious how the employees and researchers (who will actually have to make all the miraculous things being promised to investors) feel about all the strong AI rhetoric. Do you have to be a true believer to work there, or are they willing to hire talented agnostics?


https://idlewords.com/talks/superintelligence.htm

"AI risk is string theory for computer programmers. It's fun to think about, interesting, and completely inaccessible to experiment given our current technology. You can build crystal palaces of thought, working from first principles, then climb up inside them and pull the ladder up behind you.

People who can reach preposterous conclusions from a long chain of abstract reasoning, and feel confident in their truth, are the wrong people to be running a culture."


TechCrunch is a site that has historically promoted these types of chimeras for gain and then subsequently been prominent in writing post mortems and knocking ideas down, much like the tabloid press do with showbiz personalities.


Should they not discuss the potential benefits of a new technology or the failure of an attempted technology? You're basically complaining about them reporting on technology, which is their purpose.


Not complaining, just commenting that TechCrunch historically tends to be at the hype extremes at the top and bottom of boom and bust cycles. Therefore it's good to take it with a pinch of salt or two.


In short, the dude surfs the AI investment wave with absolutely no clear plan.


Why did Sam leave YC, really?


>‘Once we build a generally intelligent system, that basically we will ask it to figure out a way to make an investment return for you.'

lol


I like OpenAI and hope they succeed. But it's a bit ironic that the president of YC has become the CEO of a company that ignores YC's most hallowed slogan: "Make Something People Want". As far as I know, nobody's been clamoring for super-powerful language models or human-level DOTA bots.


Pretty sure people want solutions to their problems in general. AGI is the ultimate product for that. It's a general problem solver. At least in theory.


Remember that this is a company that funded a medical debt collection startup.


I watched the full interview and it was pretty cool. Sam is a cogent & concise speaker, and honest too.


I like his body language. When he can't or doesn't want to answer a question he stares right into the person's eyes and nods affirmatively as if he's telling them something really certain while non-answering. I'm definitely going to start doing that.


Naive question: I think this makes sense, how do I invest? Is there an open call?


How is this any different from Path seeking to make a "private Facebook"? What does it matter if you can't actually get it off the ground?


I have been a big skeptic of self-driving cars and other AI promises for years, taking my downvotes as armchair futurists predicted a self-driving car would be picking me up any day now, well before it became popular to be a contrarian, i.e. before Tesla's and Uber's cars killed people. [0] [1] [2] [3] [4] [5] [6] Also notice the original links have the breathless hype from journalists who know nothing and eat up whatever technologists' PR firms tell them.

Huge VC money has been and will continue to be destroyed by "AI"-businesses. Most of them are a cover for hiring tons of cheap laborers, such as businesses in the Philippines that park thousands of people in warehouse offices to review images, despite "advances" in AI detection that continue to be unable to automatically block content.[7]

Artificial general intelligence, and self-driving cars as well, will continue to be a pipe dream. Automated statistical analysis, which is essentially what neural networks that crunch tons of data perform, is a very neat trick, but it cannot drive a car or build you a website. It can be a very powerful tool that assists people in their jobs, but it will not replace human ingenuity. At least not until a new breakthrough happens that actually learns, rather than sifting through data for patterns, which has limited utility.

Our current type of "AI" is simply branding - it is nothing of the sort and it is not intelligence at all.

[0] https://news.ycombinator.com/item?id=10153613#10153800

[1] https://news.ycombinator.com/item?id=11559393#11561600

[2] https://news.ycombinator.com/item?id=10132991#10133049

[3] https://news.ycombinator.com/item?id=12011979#12012336

[4] https://news.ycombinator.com/item?id=12323039#12323473

[5] https://news.ycombinator.com/item?id=12596978#12598439

[6] https://news.ycombinator.com/item?id=13961802#13962230

[7] https://www.wired.com/2014/10/content-moderation/


David Deutsch, father of the quantum computer, says that we can only automate and program something that we understand, and that we do not understand human intelligence or how creativity works. And AGI is not possible until we do.



> we can only automate and program something that we understand

I'm pretty sure no one really understands how AlphaGo Zero works, though - not really. The same goes for a lot of other neural-network-derived architectures.


We understand how to play games with defined rules. That’s the point.

Do you expect AlphaZero plus human advice to beat unaided AlphaZero? If so, it’s not a step towards AGI.


I just wish there were tools to analyze all the claims and projections against reality - just to identify who actually grasps market reality. How do I find the Michael Burry of AI?

Is there a way to short AI?


Perhaps there is a way to use prediction markets, like Augur, to short it.


There are a lot of research labs and institutes around, in universities and outside, with funding from NSF, NIH, foundations, wealthy individuals, etc. So, if Altman wants to set up a research institute, okay -- that alone is not very novel.

It is obvious from history that good research is super tough to do. My view has been: we look at the research and mostly all we see is junk thinking. Then we see that, actually, research is quite competitive, so that if people really could do much better stuff then we would be hearing about it. So, net, for a view from as high up as orbit, just fund the research, keep up the competitiveness, don't watch the details, and just lean back and notice when we get some really good things. E.g., we found the Higgs boson. We detected gravitational waves from colliding neutron stars and black holes. We set up a radio telescope with an aperture essentially the size of the whole earth and got a direct image of a black hole. We've done big things with DNA and made progress curing cancer and other diseases. We discovered dark energy. So, we DO get results, slower than we would like, but the good results are really good.

How to improve that research world? Not so clear.

Then Altman will have to borrow heavily from the best of how research is done now. This sets up Altman as the head of a research institute. That promises to be not much like YC or even much like the computer science departments, or any existing departments, at Stanford, Berkeley, CMU, or MIT. E.g., now if a prof wants to get NSF funding for an attack on AGI, he will get laughs.

But how to attack cancer? Not directly! Instead, work with and understand DNA and lots of details about cell biology, immunity, etc. Then, once we have some understanding of how cells and immunity work, maybe we start to understand how some cancers work. But it is not a direct attack. The DNA work goes back to before 1950 or so, and the Human Genome Project started around 1990. Lesson: we can't attack these hugely challenging projects directly; instead, we have to build foundations.

Then for artificial general intelligence (AGI), what foundations?

Okay, Altman can go to lots of heads of the best research institutes and get a crash course in Research Institute Management 101, take some notes, and follow those.

Uh, the usual way to evaluate the researchers is with their publications in peer-reviewed journals of original research. Likely Altman will have to go along with most of that.

How promising is such a research institute for the goal of AGI?

Well, how promising were the massive sequencing of DNA, the many astounding new telescopes, the LIGO gravitational wave detector(s), the Large Hadron Collider (LHC), engineering viruses to attack cancer, settling the question of P versus NP, ...?

Actually, for the physics, we had some compelling math and science that said what to do. What math/science do we have to say what to do for AGI?

One level deeper, although maybe we should not go there and, instead, just stay with the view from orbit and trust in competitiveness, what are the prospects for AGI or any significant progress in that direction?

For a tiny question, how will we recognize AGI or tell it from dog, cat, dolphin, orca, or ape intelligence? Hmm.

For a few $billion a year, one can set up a serious research institute. For, say, $20 billion a year, one could do more.

If Altman can find that money, then it will be interesting to see what he gets.

I would warn: (A) At present, the pop culture seems to want to accept nearly any new software as artificial intelligence (AI). A research institute should avoid that nonsense. (B) From what I've seen in AI, for AGI I'd say first throw away everything done for AI so far. In particular, discard all current work on machine learning (ML) and neural anything.

Why? Broadly, ML and neural nets show no promise of having anything at all significant to do with AGI. For ML, sure, some really simple fitting going back 100 years, even back to Gauss, could be useful, but that is now ancient stuff. The more recent stuff, for AGI, f'get about it. For neural nets, maybe they could have something to do with some of the low-level parts of an insect's eye -- really low-level stuff, not part of intelligence at all. Otherwise the neural stuff is essentially more curve fitting, and there's no chance of AGI making significant use of that. Sorry, guys, it ain't curve fitting. And it wasn't rules, either.
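
For reference, the kind of "simple fitting back to Gauss" meant here is ordinary least squares, which has a closed-form solution via the normal equations. A rough NumPy sketch, with made-up numbers:

    import numpy as np

    # Fit y ~ a*x + b by least squares (Gauss, early 1800s): minimize ||A w - y||^2.
    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([1.1, 2.9, 5.2, 7.1, 8.8])

    A = np.column_stack([x, np.ones_like(x)])   # design matrix [x, 1]
    w = np.linalg.solve(A.T @ A, A.T @ y)       # normal equations: (A^T A) w = A^T y
    print(w)                                    # slope ~ 1.96, intercept ~ 1.1

The more recent neural-net stuff is, in this view, a vastly larger version of the same kind of fit, just without a closed form.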

Finally, mostly in science we try to proceed mathematically, and the best successes, especially in physics, have come this way. Now for AGI, what will be the role of math, that is, with theorems and proofs, and what the heck will the theorems be about, especially with what assumptions and generally what sorts of conclusions?

My guess: In a few years the consensus will be (1) AI is essentially 99% hype, 0.9% water, and the rest, maybe, if only from accident, some value. (2) The work of the institute on AGI will be seen as just a waste of time, money, and effort. (3) Otherwise the work of the institute will be seen as not much different from existing work at Stanford, Berkeley, CMU, MIT, etc. (4) Nearly all the funding will dry up; the institute will get a new and less ambitious charter, shrink, join a university, and largely f'get about AGI.


From a business view, the big innovation of OpenAI is marketing an industrial R&D lab as something good CS people will join and investors will fund.

The calculus is more like DeepMind's: can they keep attracting top talent, can the top talent ever do something the org structure can execute on commercially, and maaaaybe, in the likely worst case, can they recoup big losses via acqui-hire, with the responsible investors looking like they were in good company if they were wrong.

From that lens: OpenAI... yet in reality mostly closed. Non-profit... but really the VC model. Peer review may sometimes happen, but the perceived quality and awareness come from a top content-marketing team, including ex-journalists. There is no immediate commercial path beyond selling for talent, but by merely employing Sam, investors feel he can always pivot the company to make money in the case of a down round.

DeepMind did something similar yet without the marketing skill. OpenAI is doing it even better by, for now, removing the pressure for commercialization.

As someone coming from both R&D and enterprise data startups, I get two conflicting emotions. I'm sad that almost all top-tier scientists don't get such outreach and funding help. On the other hand, the industry has not been able to repeat Bell Labs (wide-scale R&D that commercialized) for decades, so OpenAI's continued ability to draw R&D funding without any expectation of ROI on any timeline is cool.


One day we will look back on this talk as a high water mark of the AI religion craze. This whole AGI discourse that OpenAI/Altman are evangelizing is like a giant skyscraper they are trying to build on a foundation of quicksand.

1. The foundational issue is not even that AGI "does not yet exist, with even AI's top researchers far from clear about when it might". It's way worse than that. There is a strong argument, made by one of the grandfathers of AI research, that AGI cannot exist, at least in the sense of the common-sense intelligence we attribute to humans (see Winograd & Flores, "Understanding Computers and Cognition", 1986). I was first introduced to these ideas taking a class from Winograd as an undergrad.

Winograd asks why we attribute mind properties to computers but not to, say, clocks. The dominant view of mind assumes that cognition is based on systematic manipulation of representations, but there is another, non-representational way of looking at it as a form of "structural coupling" between a living organism and its environment. "The cognitive domain deals with the relevance of the changing structure of the system to behavior that is effective for its survival."

I won't try to summarize a book-length argument in a few paragraphs. I just want to point out that this whole AGI conversation rests on a premise that has been seriously challenged.

The fact that Altman can get away with saying stuff like "Once we build a generally intelligent system... we will ask it to figure out a way to make an investment return" is an indication of just how insane the mainstream AI discussion has gotten. At this point it sounds like straight-up religion being prophesied from on high.

2. The whole "capped profit" positioning at 100x return is absurd, as the author points out. Altman's argument for why it makes sense involves invoking the possibility that the AGI opportunity is so incomprehensibly enormous that if OpenAI manages to crack this particular nut, it could “maybe capture the light cone of all future value in the universe". Repent, ye sinners, for the kingdom of heaven is at hand!

3. Most troubling, perhaps, is OpenAI's transparent ploy to generate buzz and claim the ethical high ground with their alarmist PR strategy. Altman's justification for OpenAI's fear-mongering, which I'll paraphrase as "look at what happened with Facebook", just doesn't hold up to scrutiny. To begin with, Facebook was a real product from day one; AGI is currently a fantasy.

But there's a deeper problem with invoking Facebook. The lesson to be learned from Facebook's failure is that the real danger with tech isn't algorithms but the people that design them. Algorithms have no agency. They just do what they're supposed to do. But hiding behind the algorithm seems to be the preferred way for tech oligarchs to avoid taking responsibility for the problems they created.

The reason I'm so troubled by OpenAI sounding the alarm bells about destructive AGI is that they are shifting the discussion away from the real threat: people. Especially people with virtually unlimited technological power and massive blind spots about the consequences of their actions. Give the algorithms a break!


http://blog.samaltman.com/how-to-be-successful

I found his essay particularly useful in explaining how he makes decisions.


> 2. Have almost too much self-belief

Okay. He then goes on to illustrate what he means by it:

> [Elon Musk] talked in detail about manufacturing every part of the rocket, but the thing that sticks in memory was the look of absolute certainty on his face when he talked about sending large rockets to Mars.

That kind of certainty is not self-belief. I think the intuitive feeling that your idea is going to work has little or nothing to do with a general belief in oneself. Intuition is usually the result of a lot of computation in the subconscious, delivered in the form of a feeling. The longer you think about something your subconscious approves of, the greater your confidence will be on the conscious level.

But that is pretty rare. Most of the time you play with ideas in your mind that yield various degrees of this intuitive confidence, from none, to "Okay, maybe worth a try", to "Oh wow, I am going to do this before anyone else!". Again, it's all about computation.

General self-belief, on the other hand, is irrational, stupid, and dangerous too. I'd say people with pathological self-esteem problems might need some dose of general self-belief, but normally it should not be treated as a driving or defining factor of what entrepreneurship is.



