Hacker News
Be a Visiting Fellow at the Singularity Institute (lesswrong.com)
51 points by kf on May 19, 2010 | 50 comments



> SIAI is tackling the world’s most important task -- the task of shaping the Singularity. The task of averting human extinction. We aren’t the only people tackling this, but the total set is frighteningly small.

Forgive me, but this seems a little on the "self-important quackery" side. There are any number of threats to our species far more urgent and reality-based than xenocide by a weakly godlike post-singularity AI. I suppose it can't be completely ruled out any more than alien invasion or the Nemesis hypothesis (though I find it pretty far-fetched given the state of AGI). But "the world's most important task"? C'mon.


"There are any number of threats to our species far more urgent and reality-based than xenocide by a weakly godlike post-singularity AI."

We work on preventing those too. Some of the risks we work to prevent:

- Deliberate misuse of nanotechnology

- Nuclear holocaust

- We’re living in a simulation and it gets shut down

- Genetically engineered biological agent

- Accidental misuse of nanotechnology (“gray goo”)

- Physics disasters

- Misguided world government or another static social equilibrium stops technological progress

- “Dysgenic” pressures

- Take-over by a transcending upload

- Repressive totalitarian global regime

- Our potential or even our core values are eroded by evolutionary development

For more information, see http://www.nickbostrom.com/existential/risks.html.


Before I state my criticism of your mission, let me say that I respect what you are doing, I think it's incredibly interesting, and I think it adds spice to the world that an organization like the Singularity Institute exists. I think the world is a better place for it, and I respect you very much for spending your time exploring these issues.

That being said, I think the way you state your mission comes across as a little immature to me and to many other people, and I think that seriously detracts from your credibility. This matters, because with more credibility you could accomplish many more things that make the world a better place. One of the most important things about scientific research is having a concrete, well-defined way to attack the problem you're setting out to solve. People work on quantum mechanics because there are lots of concrete things they can do and test; people don't work on antigravity because they don't know where to begin. Your research falls into the "antigravity" category of problems, i.e. problems where we don't really have a line of attack.

This doesn't mean working on these problems isn't useful, or that you should stop, or that you should spend your time doing something else. It does mean that your research is highly philosophical: it is intellectually very interesting, it has the potential to attract great young minds to work on interesting things, and it will likely produce results that get people thinking in new directions. That is always a good thing. But it also means that you are extremely unlikely to find practical solutions to any of these problems, and everyone seems to recognize this except you. When you present yourselves as an organization working on practical problems, rather than on intellectually interesting philosophical ones, it becomes very difficult for many intelligent people to take you seriously, and I think that significantly detracts from your goal.

Once again, I think what you're doing is great, I wish more people were doing it, and I sincerely wish you luck and success. I am merely trying to show you how your organization is viewed by many intelligent people who would otherwise be interested in what you're doing. Please treat this comment as a constructive argument for changing your marketing. At the very least, think seriously about slightly adjusting the wording of your mission statement. It might get more of the right people interested, which would likely accelerate your work and make all of us better off for it.


I've been working on getting our simulation terminated while attracting a transcendent to upload everybody (giving us a bridge to outside the soon-to-be-terminated simulation).

Also, I'm waiting for SLAC to burn through their barrier and burrow through Palo Alto.


Industrial food production is quite fragile, and the world has basically yet to see that system enter any sort of "failure mode," e.g. a global blight due to excessive monocultures.

The probability of such a global failure occurring within the next 100 years strikes me as near 1.0, and thus possibly more pressing than any of the events listed above.


This isn't an existential risk.


If there is a significant die-off in industrial nations it could lead to a major delay in the singularity, perhaps making it impossible altogether.

http://www.nickbostrom.com/existential/risks.html

In these terms this could be a 'crunch'.


Importance-wise, delays in the singularity are trivial compared to the risk of it not happening at all.

So suppose we are worried that a die-off in industrial nations will actually make it impossible. Isn't this risk (that there is a major catastrophic disaster for industrialized nations and this disaster prevents a singularity from ever taking place) much smaller than the risk that the singularity goes badly when it happens? If so, then this supports the spirit of my original comment: worrying about the singularity going well is more pressing than worrying about major shocks to civilization which do not completely wipe out humanity.


The badness of a delayed singularity is insignificant compared to the badness of it never happening. So let's consider the latter.

Isn't the risk that a large industrial collapse happens and it prevents a singularity from ever taking place much smaller than the risk that the singularity goes badly? If so, then this agrees with the spirit of my comment: worrying about the singularity going well is much more pressing than worrying about catastrophes that do not immediately wipe out humanity.


Have you contacted Nassim Taleb yet? 'Robustifying' society against such unpredictables is his speciality.

http://fooledbyrandomness.com/


> There are any number of threats to our species far more urgent and reality-based than xenocide by a weakly godlike post-singularity AI.

Is this belief based on your familiarity with and assessment of the best(1) arguments(2) for and against Singularity/AI risk? If not, then what? Do you think the idea is absurd enough to confidently dismiss without engaging, and if so, why?

(1) http://lesswrong.com/lw/lw/reversed_stupidity_is_not_intelli...

(2) e.g. http://singinst.org/AIRisk.pdf, http://wiki.lesswrong.com/wiki/The_Hanson-Yudkowsky_AI-Foom_...


I took issue with the claim that shaping the Singularity is "the world's most important task" when there are so many extinction-level threats that already exist. It seems arrogant and presumptuous to claim the number one spot for a hypothetical threat that one happens to be deeply involved with.

Debating the likelihood of recursively self-improving AI is tedious, usually pointless, and often goes the way of political and religious debate (which could hint at its inclusion in the participants' self-definition [1]). I don't have the time or energy to get involved right now.

I apologize if I came off as dismissive -- I was frustrated over my initial point. If it matters to you, I certainly haven't completely written off recursively self-improving AI. I just find it extremely unlikely in the near-to-mid term, and think there are other, more pressing issues that have the distinction of currently existing.

[1] http://www.paulgraham.com/identity.html


Note that shaping the Singularity was taken as the same task as averting human extinction. The talk of AI is secondary; it comes up because many of us think that AI can offer good solutions. In looking at AI, it also seems apparent that AI is itself an existential threat.

Is shaping the singularity the same as averting human extinction? If the singularity itself poses existential risks (this seems obvious), and if extinction is inevitable without a singularity (this seems likely), then we need a singularity that is 'safe and soon'. (Though the initial wording does seem to gloss over the need to not go extinct in modern ways.)

Put another way: We can do the most work with the most powerful tools. There is a great need to use very powerful tools correctly. If we can even maybe influence the work done by highly powerful mid-singularity tools, we can accomplish far more 'correct work' than we could just using our modern tools. (Again, this does require that we survive long enough for the singularity to happen, and that you don't have a strong discount for future vs. present utility.)

Personally, I think creating a super-tool which can only be used correctly is the best approach.


> Is shaping the singularity the same as averting human extinction? If the singularity itself poses existential risks (this seems obvious), and if extinction is inevitable without a singularity (this seems likely), ...

The answer is trivially "no". It's easy to imagine a future in which we continue to exist and the Singularity never arrives. Perhaps the rate of progress will continue to increase at a decreasing rate. Perhaps it will go linear, or even sigmoid. Perhaps we will achieve a Kardashev Type I civilization without ever going properly singular. It's not a given and perhaps not even likely that singularity is synonymous with the survival of our species.

We have exactly one existence proof for general intelligence, embodied in roughly 3 lbs of protein. No more, no less. Being certain of the inevitability of the Singularity is closer to religious than to skeptical thinking.


On Less Wrong and at the SIAI house, we quickly learn to stop saying "certain" or "100%". Do you think a Singularity within the next 50 years is 95% likely? 80% likely?

Also, for certain definitions of inevitable, extinction is inevitable without a Singularity. A Type I civilization could at absolute best colonize a few surrounding solar systems, but eventually the stars will burn out. Though a universe where we hit Type I but never Type II would be rather interesting.


You and Clippy should put up an Ask HN AMA post.

Though I suspect you'll have to lure Clippy here with the promise of persuading angel investors into funding nanotechnological paperclip foundries.


I did not say that the Singularity was certain or inevitable. I do kind of take it for granted but if you're saying it's not going to happen, that's a whole different argument.

I'm not saying "If humans continue to exist, we will certainly do the singularity thing." I'm saying "We are likely going to go extinct unless things change in a really really big way." And of course those big changes also have to not kill us.


> extinction is inevitable without a singularity (this seems likely)

Can you explain this point further? If we can manage to, say, genetically engineer ourselves into healthy immortals that are otherwise pretty much as we are today, invent FTL drives to spread ourselves across the universe, and discover a way to reverse entropy to avoid the big crunch—and after doing all that we still haven't turned ourselves into para-omniscient universe-scale entities—then do we really need any sort of technological singularity, or can we just decide to hold off on it indefinitely?


Even just decent sublight space colonization would probably be enough to make human survival fairly reliable for the foreseeable future, but I don't think ANY space colonization is likely in the foreseeable future. On that count, I think we're stuck on this rock for a while (unless the singularity makes it easy) and will sooner or later screw up our rock. (war, probably)

There probably are ways we could steer into a stable, human-populated, reasonably pleasant universe, but we're in a car filled with guns, liquor, and dynamite on a winding mountain road in the dark.


> It seems arrogant and presumptuous to claim the number one spot for a hypothetical threat that one happens to be deeply involved with.

You're inverting the causation. They're working on it because they think it's important, not vice-versa.


I might be a start-up owner who's convinced that the world desperately needs another social networking site. Just because I think that's an important thing for me to be working on doesn't mean that it actually is to the rest of the world. Equally, just because some people think that working on this singularity stuff is vitally important doesn't make it so. So in short, I disagree; I don't think causation has much to do with it.


It seems incredibly irresponsible to think something is vitally important and not be working on it. Causation should have much to do with it.

It may be reasonable to think some other task is more important than this one, or that there is no task you can identify as the most important, but it is in no way arrogant for someone to conclude that there is a most important task, or that some particular task is it, even by a large margin.

I mean, is the counterargument that there is no most important task? Or that we can't know it? Or that it's arrogant to think we have it right?


While I think SIAI is borderline crackpot, everything on your list is of much lower probability over the next century than uFAI (unfriendly AI).


Yeah, they were straw men. My bad.


FWIW, this is the best assessment I've found of singularity/AI risk: http://www.aaai.org/Organization/presidential-panel.php


Here are some arguments for why AI is different that I find convincing:

http://lesswrong.com/lw/qk/that_alien_message/

http://lesswrong.com/lw/rk/optimization_and_the_singularity/


The nice thing about a post-singularity AI is that it also handles all of those other threats.


[deleted]


The core problems with the singularity hypothesis make the whole issue moot.

A) An infinitely intelligent machine is not capable of breaking the laws of physics, so it doesn't get god-like powers.

B) Intelligence and creativity are always bounded by insanity. (You can always overfit any data set.)

C) Sensors limit the acquisition of new information. (You can't deduce the existence of quarks from looking at a web cam pointed at a brick.)

D) While processing power has grown exponentially, the cost of fabricating that technology has continued to increase even faster, outpacing the growth of world GDP.


A) But it could discover new laws of physics. E.g., a nuclear bomb would have looked like "god-like powers" before Einstein and relativity.

B) This is nonsensical ...

C) You can deduce a ludicrous amount of information from relatively little sensor input. Humans don't extract even a fraction of a percent of the maximum (in the information-theoretic sense) information from our environment. See http://lesswrong.com/lw/qk/that_alien_message/

D) This is trivially, factually incorrect. A 3GHz P4 is cheaper today (in absolute or relative terms) than it was when it was brand new. A new Core 2 Duo will give you better performance/dollar than a P4 when it was new (which was better than a P3, which was better than a PII, etc).


> D) This is trivially, factually incorrect. A 3GHz P4 is cheaper today (in absolute or relative terms) than it was when it was brand new. A Core 2 Duo will give you better performance/dollar than a P4 (which was better than a P3, which was better than a PII, etc).

The parent mentioned the cost of "fabricating that technology", not of processors themselves; this is vague, but AFAIK, fabs are getting more expensive quickly.


The cost of the fab is reflected in the cost of the chip, since it's necessarily amortized over the fab's production output (for a profitable company/fab).
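
A back-of-the-envelope sketch of that amortization, with entirely hypothetical numbers (none of these figures come from the thread):

    # Hypothetical amortization: a fab's construction cost, spread over the
    # chips it produces during its useful life, shows up as a per-chip cost
    # (and hence in the chip's price).
    fab_construction_cost = 3e9   # dollars, hypothetical
    wafers_per_month = 50_000     # hypothetical capacity
    good_dies_per_wafer = 400     # hypothetical yield-adjusted count
    useful_life_months = 60       # ~5 years before the process node is dated

    total_chips = wafers_per_month * good_dies_per_wafer * useful_life_months
    print(f"amortized fab cost per chip: ${fab_construction_cost / total_chips:.2f}")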


FYI: The cost of building the fab is already a significant percentage of the cost of building chips.

"The increase in cost and complexity is encapsulated in Rock's Law, which dictates that fab construction costs double every four years. That law is itself tightly connected to the better-known Moore's Law, which dictates that the number of transistors on a chip doubles every two years." (http://news.cnet.com/A-fab-construction-job/2100-1001_3-9810...)

"By 2007, the price of building a fab is expected to reach $6 billion."

$6 billion * 2^10 ≈ $6 trillion after ten doublings, which is 40 years at that doubling rate.
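
For scale, a rough Python extrapolation of that trajectory under Rock's Law as quoted above (the $6 billion starting figure is from the article; the ~$55T world GDP baseline and 3% growth rate are my own rough assumptions, used only for comparison):

    # Rock's Law extrapolation: fab construction cost doubling every four
    # years from the quoted $6B (2007), compared against world GDP growing
    # ~3% per year from an assumed ~$55T 2007 baseline.
    fab_cost_2007 = 6e9      # dollars (quoted above)
    world_gdp_2007 = 55e12   # dollars (rough assumption)

    for years_out in range(0, 41, 4):
        fab = fab_cost_2007 * 2 ** (years_out / 4)
        gdp = world_gdp_2007 * 1.03 ** years_out
        print(f"{2007 + years_out}: fab ${fab / 1e9:,.0f}B, "
              f"world GDP ${gdp / 1e12:,.0f}T, fab/GDP = {fab / gdp:.2%}")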


A) "Could discover new laws of physics": not really, it can only discover the laws that actually exist, or a close approximation of them. Humans can already kill most people on the planet with bombs or disease; to be "god-like" you need to step above that, and the laws of physics would need to support that jump. So it might not think of the universe using the same laws we do, but it does not get to make them up as it goes along.

B) Not really, it's a well-known problem with AI: over-train a learning system and you get worse behavior on new data (see the sketch after this list).

C) Nope. Information theory puts strict limits on how much you can zoom in with a web cam. If an AI wants to learn about subatomic physics it needs to look at the output of a particle accelerator. Can it make a better theory with existing data? Well, see B.

D) From http://en.wikipedia.org/wiki/Rock%27s_law : "The semiconductor industry has always been extremely capital-intensive, with very low unit manufacturing costs. Thus, the ultimate limits to growth of the industry will constrain the maximum amount of capital that can be invested in new products; at some point, Rock's Law will collide with Moore's Law."
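
A minimal sketch of the over-training point in B, fitting polynomials of increasing degree to noisy data (the data and model choices here are mine, purely for illustration):

    # Overfitting in miniature: as the polynomial degree grows, training
    # error keeps falling, but error on held-out data eventually gets worse.
    import numpy as np

    rng = np.random.default_rng(0)
    x_train = np.linspace(0, 1, 10)
    x_test = np.linspace(0, 1, 200)

    def true_f(x):
        return np.sin(2 * np.pi * x)

    y_train = true_f(x_train) + rng.normal(0, 0.2, x_train.shape)
    y_test = true_f(x_test)

    for degree in (1, 3, 9):
        coeffs = np.polyfit(x_train, y_train, degree)
        train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
        test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
        print(f"degree {degree}: train MSE {train_mse:.4f}, test MSE {test_mse:.4f}")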


> While processing power has grown exponentially the cost of fabricating that technology has continued to increase faster than world GDP.

Note that "Singularity" doesn't always refer to exponential growth (http://yudkowsky.net/singularity/schools); SIAI in particular focuses on the "intelligence explosion" definition.



I spent three weeks at the SIAI house in April/early May. Feel free to Ask Me Anything about the Visiting Fellows program. It will be several hours before I check this thread again.


I spent a few weeks there. They kept me locked in a room, and there was a slot through which papers with Chinese writing were shoved in. I found a book of instructions, which mostly involved changing the writing around and pushing it back out the slot.


Are you self-aware yet? If not, you should come back and stay longer. Though someone else may be sleeping in the Chinese room (really, it's more of a Chinese closet) now.


Apologies for such an open ended question, but could you generally describe the experience?


Sure. I may not have been the typical visitor, but I'm not sure that there is a Typical Visiting Fellow.

I went there with low expectations of actually accomplishing anything. I'm not yet at the point where I can write the sort of hardcore sciencey analytic philosophy papers that are the main output of the Singularity Institute, but I think with 6 months of learning I will be able to meaningfully contribute to that goal.

I tried to share my lessons of entrepreneurship with the housemates that wanted to learn. I sat for an afternoon with a Brazilian entrepreneur living at the SIAI house and we went through all of the logistical details of opening a Delaware corporation as a foreign national. I did a short workshop on SEO. I went to see Thom Yorke twice in Oakland, saw Orbital in San Francisco, and Blue Scholars in Berkeley.

I graduated from school with an engineering degree in December, and because of the success of my kratom business I didn't need to get a real job, so I had a lot of time to surf the internet. I started reading Less Wrong while trying to actively comment as much as possible. In 4 months of posting on Less Wrong, I learned more than I did in two years of college. In 3 weeks of living at the SIAI house, I learned much more rapidly than I did while just posting on Less Wrong.

So, to answer your question, what I mostly did was sit around and talk with all of the wonderful people who live at the SIAI house or occasionally pass through or hang out. My favorite conversations are those where you push the barriers of the kinds of thoughts humans are capable of having, the kind that often stop making sense because there are too many different infinities or recursive loops. At the SIAI house, I could spend hours a day talking about that kind of thing. It sometimes felt like I was living at the center of the universe.

Also, the food is good. Lots of delicious healthy things from Trader Joe's and Costco.


Travel diaries linked from the original post might help: http://xuenay.livejournal.com/332512.html


Manage to sell anyone Kratom while you were there?


Not exactly, but I got business cards from a couple of head shop owners who were interested in wholesale orders. In aggregate, the head shops pay too much for their wholesale kratom.

For anyone that didn't get the reference, my primary business at this point is selling legal drugs (sometimes known as "herbal medicine" depending on the conversational context). My website is http://getkratom.com and in the early days of Hacker News (Startup News?) I started an enormous flame war while trying to start a discussion about my business. I have since tried to stop mentioning it here, though I certainly don't mind talking about it to anyone that may be interested.


I never understood why they couldn't buy in bulk from Ebay.


They don't like doing their own packaging -- head shops aren't exactly apothecaries.


@rms: What concrete things did you work on? Were you writing papers, or coding AI, or...? The Singularity seems like such a broad area of study, how do you gauge progress or even decide what to work on?


See http://singinst.org/accomplishments2009 for some of the stuff that we work on.


I think my answer here answers your question: http://news.ycombinator.com/item?id=1363449

Usually the way of gauging progress is that people come up with specific goals and then either meet those goals or don't. I didn't have a whole lot of goals, which means that whatever I did accomplish exceeded my expectations.

I would recommend that other people go in with high expectations of accomplishing things.


I did this during the summer of 2009, and it was utterly awesome. It was like doing a startup, but less stressful.


"It was like doing a startup, but less stressful."

How was it like a startup?



