Bill Joy: Why the Future Doesn't Need Us (2000) (wired.com)
145 points by miobrien 6 months ago | 60 comments



For those of you who are youngsters seeing this article now, a little context is perhaps appropriate. I remember when this article came out, being a Wired subscriber at the time, as you did if you were in your 20s, working at a "dotcom", and thinking this Internet thing was turning out pretty cool.

This article was viscerally and emotionally terrifying at the time. For whatever reason it seemed really real. I remember overhearing people talking about "grey goo" over cocktails at startup events -- the scenario here seemed real and imminent.

As usual with doomsday predictions, of course, the actual future turned out to be equally terrifying and completely different.


Oh, the gray goo is already here, has been for centuries. It even glows at night. We've just not noticed because it grows slowly and is only visible at extremely zoomed-out scale. But its growth has definitely sped up in recent decades.

https://earthobservatory.nasa.gov/Features/CitiesAtNight/

(slightly tongue-in-cheek, slightly not).


And similarly, another SF trope, the superhuman AI with a sole goal of benefiting itself, already exists too -- that's what a corporation is.


Ha. Corporations are clearly not super-intelligent, thankfully :)


Are they not?


I mean, I think it's also not quite sound to go "oh, it's been 18 years and we haven't been converted to grey goo yet, I guess it was a false alarm, phew."

Give the robots some time!


Grey goo fails because it's stuck with the same chemistry life is built on. You can't use much iron for your self-replication because there isn't much iron in the environment. You also need an energy source, storage, etc.

The end result is that you end up building with organic chemistry, which means your goo gets eaten.


Couldn't you make grey goo out of silicon? Silicon is the second most abundant element in this planet's rocks, after oxygen. It could produce its energy with photovoltaic cells. Blanket the entire planet in a seething mass of solar-powered nanobots.

That's what I think of when I hear the term 'grey goo.'


Yeah, articles about the dangers of general AI basically missed the simpler, immediate problems of humans using mechanical means to modify themselves and their environment without any ability to collectively, wisely consider the consequences (problems that begin long before machines are intelligent).

If humans are a collectivity, "we" right now have created the means for most people to live without working. But the upshot of this hasn't been paradise; it has been increasing privation in advanced nations, as everyone still needs a conventional job for social and material survival but fewer and fewer people can find one (and the oversupply of workers reduces their price). Just as much, those who leverage the transformation of society using technology have benefited greatly, but the resulting inequality of wealth distribution has produced tremendous despair for a significant percentage of the population, with little indication that any further transformations are going to fix this (basic income is one effort, but I don't see how this can happen without severe inflation, etc).

A similar unthinking dynamic can be seen in climate change and the effective total lack of serious response to it.

It will be a long time, if ever, before some AGI rules humanity. But the use of mechanical processes to control human behavior began not with AI, not with social media, but longer ago with mass media, and it's certainly in high gear at the moment (in both mass and social media, of course), allowing well-connected actors to control the "news cycle" and guarantee the lack of coherent, collective discussion of the overall, serious problems of this society (as mentioned above, and as most people can see in front of them).


"This article was viscerally and emotionally terrifying at the time. For whatever reason it seemed really real."

Maybe for some, but certainly not for me. It was just another doomsday scenario. Just like the kind that have been cropping up in science fiction for at least a hundred years, and religious tracts for thousands of years before that.

"I remember overhearing people talking about "grey goo" over cocktails at startup events -- the scenario here seemed real and imminent."

Just because they talked about it doesn't mean they thought it was real or imminent. Honestly, I don't think most people were any more scared of it back then than now. Just something to consider, and maybe keep an eye on.


> completely different

Don't lose hope about losing all hope yet! Apparently, "Scientists accidentally create mutant enzyme that eats plastic bottles. The breakthrough, spurred by the discovery of plastic-eating bugs at a Japanese dump, could help solve the global plastic pollution crisis"

https://www.theguardian.com/environment/2018/apr/16/scientis...


As a software engineer who graduated college about a year ago -- with a degree in robotics engineering -- somehow the end has always seemed both near and far. Botnets and massive hacks aren't something I grew up with, but they feel commonplace ever since my introduction to the world of tech. I don't know if I'll even be surprised when the grey goo starts pouring out of my USB port, other than the surprise something else malicious didn't start pouring out first :)


I think it was a minor flash in the pan just like whenever someone famous makes some doomsday prediction. In terms of impact and importance, this was probably not substantially different from the time Hawking said that we should be careful looking for aliens because they might be evil.

A lot of things seem viscerally and emotionally terrifying after your fourth free drink on the rooftop at the Industry Standard.


> the scenario here seemed real and imminent.

It was never imminent. Drexler himself argued that there was a huge bootstrapping problem in creating nanoscale ‘assemblers’ with our macro scale technology.


...and it's not even over yet.


I've read Kurzweil's books. Reflecting on them over time, I think he's kind of full of it. It's not that the material was uninteresting, but rather that the claims he was making were a bit wild and overly optimistic. I see the same kinds of flaws in the reasoning of many people of his generation, where they compile this list of truisms and try to get you from A->B, but really the fundamental assumptions from A aren't that solid.

Kurzweil wants to live forever. He wants to become a cybernetic being. That was his undoing in those books--his desire to outlast death forced him to make wildly optimistic predictions.

The dangers facing us are not AI but simply mundane databases. Everything is being tracked these days. Compliance is being shoved down our throats. You are being watched on the Internet and in daily life. "But I have nothing to hide." Yes, you do. You just don't realize it. You should read up on how people were treated in East Germany and the Soviet Union because that is your future. Technology gives zero shits about humans.


I call these the Bhopal 2.0 problems. Bhopal was a disaster in which 32 tons of toxic MIC gas were released into the atmosphere, killing 3,787 people and injuring half a million more.

My premise for Bhopal 2.0 is that we automate processes so much, with no accounting for waste and damage, that we simply focus on the top number. Ask an ML system to optimize for an output and it will, happily, all the while harming people around the process - automatically. Right now Bhopal 2.0 processes are all focused on information systems (abstractions), so it's much harder to point and say "look, this is dangerous". People do very, very, very badly with abstractions.
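A toy sketch of that kind of blind optimization, with hypothetical numbers and function names, just to show how a missing cost term behaves:

    # Hypothetical illustration: maximize the "top number" while a harm term
    # exists in the world but was never written into the objective.
    def profit(rate):
        return 100 * rate                      # the metric the optimizer is told to maximize

    def harm(rate):
        return rate ** 2                       # a cost nobody accounted for

    best = max(range(101), key=profit)         # blindly push the dial as far as it goes
    print(best, profit(best), harm(best))      # 100 10000 10000 -- the harm grows unchecked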

In some way this already happened with the recent elections. But that was purposely aiming the machines at us.


> forced him to make wildly optimistic predictions.

Maybe I'm wrong, but I thought those predictions have fared pretty well.

edit to add:

Optimism or not... this came quickly (ahem, China):

[1] > 2019, Public places and workplaces are ubiquitously monitored to prevent violence and all actions are recorded permanently. Personal privacy is a major political issue, and some people protect themselves with unbreakable computer codes.

from his 1999 book, via wikipedia

1 - https://en.wikipedia.org/wiki/Predictions_made_by_Ray_Kurzwe...


I was 19 when this came out and this essay single-handedly changed my life and led me to Ray Kurzweil, Minsky, the Unabomber Manifesto, Naomi Klein, and then ultimately to spirituality through Hawking-trained-physicist-turned-cave-dwelling-monk Peter Russell's "Waking Up in Time."

The rest of my life quite literally flows from reading this essay over and over when it came out. Genuinely happy to see it here.


I also recommend Nick Land's "Meltdown".


This is a fruitful investigation. If you are interested in structural impacts over the long term, you might enjoy this article on the difference between a trap and a garden: http://slatestarcodex.com/2014/07/30/meditations-on-moloch/

And this short story about cellular automata: https://archive.org/details/TrueNames


I've never heard of any of these terms or texts; can you provide more background? I tried searching for "the Unabomber Manifesto" and it seems to be linked to a mathematician/terrorist, and on archive.org there is an article about the "bad things of industrialization" and some weird characters. What does that mean?

Can you explain what I'm missing? I'm young and non-American; maybe I'm missing context.


Yes, the Unabomber was a mathematician-gone-terrorist who had a manifesto of sorts published decrying industrialization and technological progress. His academic career was pretty short. There's a Wikipedia page if you want to read up on it.


There's also a Netflix series called Manhunt: Unabomber which is pretty good.


I believe this article references the Unabomber. If it doesn't, then Ray Kurzweil's "The Age of Spiritual Machines" definitely does.

It's a fascinating line of thought--though I obviously don't support the actions taken by its author.

To clarify a little further, Bill Joy in this essay refers to Kurzweil who refers to Marvin Minsky's "Society of Mind." I became a technological utopian until reading Naomi Klein's "No Logo" later that year, as well as the book "Amusing Ourselves to Death" by Neil Postman.

This ultimately led me on a sort of existential quest that led to the book "Waking Up in Time" by Peter Russell and then I began to have a much more spiritual orientation towards life and reality and now try to be as practical as possible while working towards a positive technological future.


If you want to read it, you can Google its title "Industrial Society and Its Future" [1].

[1] https://www.google.com/search?q=Industrial+Society+and+Its+F...


Your comment made me think of the Our Lady Peace album "Spiritual Machines."


I also heartily recommend Jacques Ellul. From his wiki page (https://en.m.wikipedia.org/wiki/Jacques_Ellul):

> Jacques Ellul (French: [ɛlyl]; January 6, 1912 – May 19, 1994) was a French philosopher, sociologist, lay theologian, and professor who was a noted Christian anarchist. (...) The dominant theme of his work proved to be the threat to human freedom and religion created by modern technology. Among his most influential books are The Technological Society and Propaganda: The Formation of Men's Attitudes.

I’ve just finished reading his “The Political Illusion”, where he explained (back in the 1960s) how the West lives in a technocratic world focused only on efficiency, and why this causes politicians and political decisions (and ultimately democracy itself) to become sort of obsolete: all the decisions taken by the politicians must ultimately answer to one overriding criterion, efficiency. And as only the technocrats can tell or approximate which decisions may or may not be efficient, the politicians end up being just “puppets” rubber-stamping the decisions actually taken by the technocrats.

I’m now just about to start reading his “Le bluff technologique” (it’s in French) and I’m very, very excited about it. Again, I recommend Ellul to all those interested in us humans and in how we function; reading him reminded me of when I first read Hobbes, Rousseau, or Tolstoy, i.e. other great literary and philosophical figures from the past who did a great job of describing how our species “operates”. Only Ellul shows us confronting this new and modern world, which those authors didn’t have the chance to do.

Later edit: For those down-voting this, I urge them to reconsider. Not the down-voting itself, which I don’t care about, but Jacques Ellul himself. Someone mentioned Ted Kaczynski; it turns out Ellul was his favorite philosopher (https://thesocietypages.org/cyborgology/2012/06/08/the-unabo...). I’ve just finished the introduction of the book I said I was about to start reading, and it’s haunting how he prophesied the “missed chance” represented by the dream of a decentralized society promised at some point by modern computer networks. He “warned” us (in the 1970s and the 1980s) that we had a very short window of opportunity for making this new technology work in our best interest by making things less centralized. It turns out both the www and, more recently, crypto-currencies are such missed chances.

> Kaczynski claimed in all humility that half of what he read in The Technological Society he knew already; he discovered in Ellul a soul mate rather than a teacher. “When I read the book for the first time, I was delighted,” he told a psychiatrist who interviewed him in jail, “because I thought, ‘Here is someone who is saying what I’ve already been thinking.'”


Also check out Ellul's "Propaganda", where he says that it's actually the intellectuals who are most susceptible to propaganda, as they feel they have to investigate competing claims and decide for themselves.

Ellul was an interesting thinker. But, to my knowledge, he never advocated violence. I'm not sure where Kaczynski got the idea for using violence to achieve his ends, but it wasn't from Ellul.


One of the things that keeps creeping into my head about sentient machines, the robot future etc etc is this: If they become sentient, and have extraordinary capabilities etc etc ...why on earth (no pun intended) would they stick around here? They certainly wouldn't be limited by the same things we biological beings are. They could just leave into the great expanse - poof, I'm outta here dad!

Furthermore, let's say we become part of the machines (a la Greg Bear's EON / Way universe). The line of thinking and questioning starts to move towards resources. These incredible advances in machines, it seems, would be accompanied by incredible advances in resource utilization. The notion of poor vs. rich entities would be completely different, in relative terms, from what we think of them today.

Our insignificance relative to the machines might not matter in the slightest. FWIW, it's worth reading David Brin's Uplift books - in those, the machines seem to generally stay the heck away from bio life forms :)


I’m not worried about sentient machines. I’ve done a lot of work with AI over the past few years, and we are nowhere close to anything sentient. It’s really just an advanced search mechanism, where the biggest danger with the tech itself is that we misinterpret results and use them for something horrible.

The real danger I see comes from ownership. ML is good at predicting things. Nanotech along with genetics will make you immortal and give you near-magical abilities, and currently only a handful of people will own these technologies.

We are much closer to a cyberpunk dystopia than to a robot revolution. In a way it’s always been like that with capitalism: if we don’t regulate the free market, then the free market turns people into slaves - and right now we’re letting the market regulate both society and tech.


This was a really big deal when I read it back then. It led me more seriously into robotics and AI and the discussion of the singularity and jobs. But Bill Joy's essay was not just about waiting for AI to become super-human; he was aware that well before that it would be dangerous to humans.

One of my favorite quotes from the article which really has informed a lot of my thinking later:

"Clarke continued: "Looking into my often cloudy crystal ball, I suspect that a total defense might indeed be possible in a century or so. But the technology involved would produce, as a by-product, weapons so terrible that no one would bother with anything as primitive as ballistic missiles."


I'd say the author's own work stands in opposition to his conclusion: vi is over 40 years old at this point and hasn't yet been rendered obsolete.


Vi is an example of the Lindy Effect: https://en.wikipedia.org/wiki/Lindy_effect


I've worked with Bill and even shared an office with him at Sun (although he was rarely there, and to this day probably just remembers me as that guy that drank a lot of Diet Dr. Pepper and argued about capabilities for the Java language :-) and his greatest strength and greatest weakness is that he can see too far ahead along a path. He would answer emails that I thought I had completely thought through with a one liner that would illuminate some fault in my logic. It was annoying and amazing at the same time.

In 1994 he was convinced that the kinds of "compromises" that James Gosling was putting into Java guaranteed it would be dead on arrival. He wasn't wrong that those choices would ultimately limit the language (and they have), but he completely missed the 20 years between then and now in which Java would have a huge impact.

When this editorial came out I had moved on from Sun and was dealing with the leading edge of what would be the dot-com implosion shockwave, and now Bill was telling us it was all pointless, that the world would probably die of its own desire to create cool new things. Well, he wasn't saying it was pointless per se; he was saying we needed to confront the ethics of what we were doing now instead of in the middle of the crisis. And there is much to like about that, but recall that Facebook was created in a dorm room, not a laboratory like Bell Labs or Sun Labs. So there was no oversight, no 'adult supervision' of people who would ask, as Bill would have, what happens when ...?

So to understand Bill's essay in context I have to ask, "What would he have said to Mark Zuckerberg?" I don't doubt for a moment that had Mark confided in him his vision and his plans, that Bill would have foreseen the size and extent of its impact. Bill is a guy who made more money on Microsoft Stock than on Sun Stock because he sold the latter and bought the former, recognizing that at the end of the day Microsoft would have a larger impact. So what does he do? Does he convince Mark to throw it away? Does he say "You will be one of the richest people in the world but you'll have created a tool that nation states will use to undermine democracies around the world?" And how does Mark respond to that? Probably, "If not me, someone else will figure this out. Look at myspace.com, I'll take the money and figure out the rest after it becomes a problem."

The future doesn't need us, and neither does the present. It is the ultimate hubris of humans from the beginning of time that they are somehow "more special" than the rest of the machine that is the universe. When you read books like "The Vital Question"[1] you might be struck that humans are just a 'step in the path' rather than the starting or ending point of that path. You can imagine self aware machines arguing over the notion that they evolved from meat.

The power of Bill Joy, for me, has always been his willingness to say something outrageous that was the logical extension of a path, taken through to the point of absurdity, and in that moment to stretch the preconceptions of the people hearing him such that they were able to think of something new that previously they would not allow themselves to think. I've felt it first hand and seen it happen in others. The after-the-meeting discussion goes, "That was the craziest thing I think I've ever heard, but something that might not be crazy is if we did this ..."

[1] https://www.amazon.com/Vital-Question-Evolution-Origins-Comp...


> Bill is a guy who made more money on Microsoft Stock than on Sun Stock because he sold the latter and bought the former, recognizing that at the end of the day Microsoft would have a larger impact

That's an interesting revision of history to fit a narrative.


Are you implying that Sun had a larger impact than Microsoft? Of course "larger" is not well-defined if the dimension isn't specified, but in the financial context it is used, I would say that is a fair assessment.


I don't think that's true at all. Let's set aside all the improvements Sun made to the *nix ecosystem, which include dtrace and filesystems and network protocols, among others.

Most servers were migrated from Sun to Linux (with minor hiccups). Sun basically set the standard for security and reliability that led to Linux becoming the successor instead of OS/2 or MacOS or ReactOS or whatever. Microsoft HAD to change their OS to meet the needs of the users (ANY security, more interoperability between *nix and Windows, etc).

The statement is either poorly phrased or wholly inaccurate, depending on your point of view.


As a founder, Bill had a lot of Sun stock. Sun, Microsoft, and Oracle all went public in 1986 (which was the same year I joined Sun). Sun came to see Microsoft as the enemy in many ways, not the least of which was that they were growing faster than Sun and had more 'seats' in terms of people who used them to do work. Bill sold a chunk of shares (I don't recall the exact amount, but it was significant) and invested the proceeds in Microsoft shares. This was a "big deal" around Sun because of the "message" it sent.

Between 1988 and 1998 Sun stock increased 10x and Microsoft stock had increased 100x. By converting 15% of his Sun holdings into Microsoft stock he outperformed his Sun holdings.
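A rough back-of-the-envelope check, taking the 10x and 100x figures above as given and normalizing the 1988 Sun position to 1.0 (all numbers illustrative):

    # Hypothetical sanity check of the 10x / 100x claim above.
    sun_growth, msft_growth = 10, 100      # 1988-1998 multiples quoted above
    converted = 0.15                       # fraction of the Sun position moved to Microsoft

    all_sun = 1.0 * sun_growth                                        # 10.0
    split = (1 - converted) * sun_growth + converted * msft_growth    # 8.5 + 15.0 = 23.5

    # The Microsoft slice alone (15x the original position) is worth more than
    # the entire holding would have been if left in Sun (10x).
    print(all_sun, split)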


Say more, no need to leave us hanging.


What's Bill up to these days? I haven't heard anything about him in a long time.


I read Ray Kurzweil's The Age of Spiritual Machines when it came out and although it's low on specifics, the abstract concepts are solid. The main one IMHO being that once a chip can emulate the inputs and outputs of a neuron, then a human brain could slowly be replaced by silicon and the person wouldn't even be aware of when he or she became a machine.

We used to say that artificial general intelligence (AGI) was 20 or 30 years away. Now it's more like 10. The main thing stopping it is the utter lack of exploration in computer science of highly parallelized/concurrent processing. So hardware-wise, that means we need a DSP with perhaps a million cores with local memory (like MapReduce); this would be like a video card, but with general-purpose cores that can run any traditional language instead of OpenCL/CUDA etc. The software side means that we need more experience with languages like MATLAB/Octave, Elixir, Erlang, Go, etc., so that we can build other tools like TensorFlow on top of them from first principles.

Given those two foundations, many of the methods in neural nets, genetic algorithms, etc become one-liners and kids could play around with them the way we might have written ray casters and fractal explorers when we were young. They'll quickly discover novel ways of solving problems, combining intelligent agents in various layers, automatically assigning hyperparameters, basically all the things we struggle with today. And not long after that, there will be a repo on GitHub where you can download a brute force AGI and see how fast it can do your homework.
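As a toy illustration of what "one-liners" might mean here, a few lines of Python are already enough for a naive genetic algorithm (the problem and every parameter below are made up):

    import random

    # Toy genetic algorithm: evolve a bit string toward all ones (hypothetical example).
    LENGTH, POP, GENERATIONS = 20, 50, 100
    fitness = lambda ind: sum(ind)            # count of 1 bits
    population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]

    for _ in range(GENERATIONS):
        # keep the fitter half, then refill by mutating copies of random survivors
        population.sort(key=fitness, reverse=True)
        survivors = population[:POP // 2]
        children = [[bit ^ (random.random() < 0.05) for bit in random.choice(survivors)]
                    for _ in range(POP - len(survivors))]
        population = survivors + children

    print(max(map(fitness, population)))      # approaches LENGTH after enough generations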

This seems completely inevitable to me and I would have loved to have made a career of it but now there's no time. Any idea you can think of is between 2 weeks and 2 years away from being manifested by someone on the web. Which is why I think we're looking at 5 or 10 advances over the next decade that will make AGI all but certain (barring a global recession like the dot-bomb followed by an embrace of fear like the global war on terror that set tech advances back 10-15 years, but I digress).


"the utter lack of exploration in computer science of highly parallelized/concurrent processing"

I'm not so familiar with such efforts myself, but that seems like an overly general statement to make. Surely there are countless examples of exploring parallel/concurrent programming and processing?

"I would have loved to have made a career of it but now there's no time"

Who knows, as you pointed out, it's becoming easier than ever for non-experts to start playing with machine learning, with access to inexpensive and powerful resources.

"..there will be a repo on GitHub where you can download a brute force AGI and see how fast it can do your homework."

Haha, yes, I can totally see that!


This is a good thread for: http://marshallbrain.com/manna1.htm

Important reading for these topics.

"the robotic takeover did not start at MIT, NASA, Microsoft or Ford. It started at a Burger-G restaurant in Cary, NC"


Kaczynski has written two books since his incarceration: Technological Slavery (2010) and Anti-Tech Revolution: Why and How (2016).

His essential points, according to the preface to Technological Slavery, are:

1) Technological progress is carrying us to inevitable disaster. …
2) Only the collapse of modern technological civilization can avert disaster. …
3) The political left is technological society’s first line of defense against revolution. …
4) What is needed is a new revolutionary movement, dedicated to the elimination of technological society. …


I would be interested to hear people's opinions on Jini https://en.wikipedia.org/wiki/Jini (Apache River) and its influence.


It was interesting in that Sun came up with Service Location Protocol in RFCs and this was an early implementation. Jini was flawed though - it didn't scale well.

Of course now we have things like Bonjour and Avahi.


> These engineered human beings may be happy in such a society, but they will most certainly not be free. They will have been reduced to the status of domestic animals.

Well, if the system is advanced enough that its actions become incomprehensible to humans, then it's a moot point. After all, nobody seems to go around questioning free will because of earthquakes.

The dangers seem to me to be from systems that aren't anywhere near as sophisticated. Those little psycho-dogs from Black Mirror are sufficiently scary. And they are much less likely to have read Sojourner Truth than a sentient cloud that keeps us as pets.


I recall reading this too. It's apt to think now about how much grace we've been afforded by the fact that software innovation has not kept up with hardware innovation. While computers have become that much more powerful, as he states, they are still doing maybe only a little more than they were in the '90s. One day I think one of these dystopian realities will happen, but I like to think we as humans are actually just really stupid, and that an army of super-smart machines, if they do come into existence, will only happen once we actually stop killing each other.


"Anti-Tech Revolution: Why and How is Kaczynski’s well-reasoned, cohesive composition about how revolutionary groups should approach our mercurial future….. I recommend that you read this compelling perspective on how we can frame our struggles in a technological society." -- The Tech, MIT's oldest and largest newspaper


I was interviewing for my first real job when this essay came out and they took the applicants to lunch. (The firm had a big name in consulting.)

The interviewer brought up this article and how scared he was, and I told him he really should not worry about the AI part, though maybe we should worry about the biological part.

He did not appreciate that.

It's weird that we could run a "Where were you when Bill Joy's article came out?" thread.


It's interesting in that he was right about when Moore's law would end. Maybe off by a couple years, but pretty dead on.


"The Machine Stops" E.M. Forster (1909)

http://archive.ncsa.illinois.edu/prajlich/forster.html


For some perspective on the other side, check out http://idlewords.com/talks/superintelligence.htm.


I don't view these two perspectives as from different sides of anything. Joy seems to talk about unintended consequences in a more general sense than just superintelligence, and Cegłowski also discusses the serious problems of not-intelligent machine learning.

I'm not terribly concerned about the paperclip maximizer that may someday exist. I am concerned about the pay-per-click maximizer that already exists.


I agree with the general thrust of much of this piece (and was shocked to read that he essentially classified these types of things as WMDs, just as I did for mass hacks of autonomous systems) but I will disagree with one main point.

I see no reason that humanity is worth saving over some other, better form of life. I'm not saying it's ethical to mow down innocents to make way for the next generation, but some humans are pretty fucking awful. If our descendants are robots with the intelligence of Hawking and are full of kindness and grace, then why should we insist that humanity exist forever? Even if we extinguished some of our evil impulses, like genocide and rape, what makes us better? My late aunt had Down syndrome, but if I could invent a vaccine that stopped it from ever happening again, I would, even though I would never harm someone with Down syndrome.

The best counterargument I can come up with is essentially the robots asking "What else is there to do?"

In some post-singularity paradise the robots can be heading in all directions at near the speed of light, but once they've figured out this universe's physics and ended armed conflict then it stands to reason that they'd end up creating or simulating places that did not have god-like creatures.

I think we might depend on resource constraints and conflict to give our lives meaning. Kind of a riff on Eden's "knowledge of good and evil": a creature's intelligence and wisdom may depend on a universe of consequence.


"I see no reason that humanity is worth saving over some other, better form of life. I'm not saying its ethical to mow down innocents to make way for the next generation, but some humans are pretty fucking awful. If our descendants are robots with the intelligence of Hawking and are full of kindness and grace then why should we insist that humanity exist forever?"

If any beings exterminated humanity or through inaction allowed it to go extinct, I'd seriously question their kindness.

Truly kind beings would bring humanity along and help us to be less destructive.


Depends on where the loyalty of their kindness lies. Were their kindness 'programmed' to be the preservation of the most life at the expense of the least, then human extermination would be the lowest of the low-hanging fruit.

Keep breeding stock, but then you get into adjudication of value, and at that point all I can think of is Dr. Manhattan saying "the world's smartest human is no more threat to me than the world's smartest termite".


> I see no reason that humanity is worth saving over some other, better form of life.

Knowledgeable nihilists are dangerous. You seem to be one.

After reading Nick Bostrom I'm worried Larry Page is similar.

This kind of thinking creeps me the fuck out.


For anyone stuck behind the paywall (PDF warning): https://www.cc.gatech.edu/computing/nano/documents/Joy%20-%2...



