The first empirical study of the real-world economic effects of new AI systems (npr.org)
115 points by SirLJ on May 3, 2023 | 76 comments



"Deflection" in the customer service world means when you prevent a customer from talking to a human. It's a critical metric for CSR orgs. Often you improve deflection rates by putting hurdles between a customer and a rep. If you're being generous those hurdles are helpful because they consist of "Did you try this? Have you read these related articles?" and can solve people's basic problems without taking up valuable human time.

I am highly skeptical of this article because it doesn't mention deflection at all. There are other GPT powered B2B2C companies (e.g. https://yellow.ai/) which are entirely focused on deflection. It's not a huge logical leap to ask, if you can use a chatbot to turn an unskilled worker into a skilled worker, what exactly is the human adding? In many cases the answer is 'not much' or 'nothing.' You can completely replace a low skilled worker with a chatbot some economically viable percentage of the time.

Because of this the claim that "the less experienced and less skilled workers [will] benefit the most" is suspect.


TIL about deflection.

I must be the worst nightmare of some middle manager's KPIs, because the first thing I do when I get the usual idiotic chatbot is write "let me speak to a human".

With the advent of the next generation of chat bots, being unable to speak to a human is reason enough for me to delete my account and go elsewhere.

These systems optimise for the 80% of dumb questions, and if you have an honest-to-God problem, they just irritate you further. I thoroughly appreciated it the other day when I contacted my ISP about having lost my IPv6 prefix and immediately got connected to someone who told me "sorry, we broke the Radius server configuration with a recent update. Try now." (Zen in the UK, if you're curious.)

None of that "have you tried turning it off and on again?" I get that it's helpful for grandma, but we're not all metaphorical grandmas.


I’m a little confused by your skepticism. The underlying research is studying what happens given that an interaction reaches a human. Whatever deflection happens before that is irrelevant to the interaction with people afterwards. The paper studies the latter and the results are specific to that set of interactions, which are presumably more complex or ill-suited to deflection.


I'm saying that's a false premise.

Scenario 1:

Hire 100 new people, give them AI -> they do better with AI

Scenario 2:

Hire 10 new people (because AI has reduced demand for humans), give them AI -> they do better with AI

Both of these scenarios are in line with the research, but they are dramatically different when evaluating the proposition that AI helps low-skill workers in general.


Why does the interaction need to reach a human, though? I'm hoping that once the chatbots get good enough, we can give them all the powers of a human representative and get rid of the human.


The study is about an AI support system (a "recent version of GPT", sources link to GPT-4, fine-tuned on internal company data) supporting customer support agents at a software company that provides business process software. It increases productivity (measured in number of chats resolved per employee) by 13.8%; the lower the performance before, the higher the impact. The performance increase mostly comes from being able to handle more chats in parallel and handling each chat quicker. For reference, an agent resolved about 2.12 chats/h before, and this increased by 0.47 chats/h (I was surprised that it's "only" 2.12 chats/h).

Interestingly, but also not surprisingly, this increase in productivity at the lower end is atypical for tech innovation, which has tended to favour the higher-end positions. However, this type of customer support seems like a good candidate to get automated away completely. The rollout can happen in phases, since it already increases productivity as a support system. First, the productivity will be uplifted to a certain level for all workers, where the skill difference matters less and less, and finally the work will be done completely by AI.


> First, the productivity will be uplifted to a certain level for all workers, where the skill difference matters less and less, and finally the work will be done completely by AI.

First we will build a ladder, climb on it and be uplifted to a certain level until the distance matters less and less, and finally we will reach the moon! There is no single AI task that works on its own without needing human help at all. We are still on earth. The max speed of GPT is still the human reading speed.

It's kind of a leap of faith to think we will close the gap, solve that last 1% which is exponentially harder than the first 99%. Where are the L5 cars after 14 years of research? Can we even predict when they will arrive? That's how hard it is to estimate the last 1%. Don't gloss over it. It will surely come some day, but we have no idea how. How do you go from 0 to 1?

The mere fact that we are simultaneously stupefied by AI advancements and still not yet there - this shows how hard it is to estimate progress. We are glossing over future problems and imagining we're almost there.

Slowly we are learning new things about AI limitations - hallucinations, regurgitated content, sycophancy, context size limits, complex planning. I bet there are many more unknown unknowns ahead.


> Where are the L5 cars after 14 years of research?

Uber and Lyft.

Turning humans into automatons works better right now than trying to turn computers into humans.


I have seen demos of the Azure GPT service and how easy it is to configure and use.

It's production ready. Companies only need to start using and fine-tuning it.

This should get rid of plenty of manual service requests.

And in my opinion the biggest leap is that it's now much easier than ever to fine-tune something like this.

The last 1% needs much, much less personnel.


That’s the difference between AI in cars and AI for business processes. L5 must be >99.99% right. Business processes have a much lower bar.

Even moderate accuracy is sufficient to have a dramatic impact on all sorts of jobs whose only function is maintaining business processes, and it can be done gradually. Eventually, you’ll have 2 workers doing the job of 10.


Oh, let me tell you something. If you want to extract the fields from an invoice, there is no service in this world with more than 90-95% accuracy. OCR itself makes errors on each page. So you might send 1 thousand dollars instead of 1 dollar. Or a million. In the end you need a human to double-check it, so the automation will only increase productivity by 50%, not 50x. But it sure is pleasant to have the AI pre-fill the fields.
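
A minimal sketch of that compromise, assuming the extraction service returns a confidence score per field (the field names and the threshold are made up):

    # Hypothetical triage: accept high-confidence fields, but route anything
    # low-confidence (and monetary amounts, always) to a human reviewer.
    REVIEW_THRESHOLD = 0.98  # made-up cutoff

    def triage(extracted_fields):
        auto, needs_review = {}, {}
        for name, (value, confidence) in extracted_fields.items():
            if confidence >= REVIEW_THRESHOLD and name != "amount":
                auto[name] = value
            else:
                needs_review[name] = value
        return auto, needs_review

    fields = {"vendor": ("ACME GmbH", 0.99), "amount": ("1000.00", 0.95)}
    auto, review = triage(fields)  # vendor is pre-filled, amount goes to a human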


> solve that last 1% which is exponentially harder than the first 99%.

We just don't know that yet!

But given how GPT-3 took everybody by surprise and how GPT-4 is so much better, I would not be surprised if we're shocked by what GPT-5 or 6 can do.

In fact, I'd be surprised if nothing revolutionary comes out of something like GPT-5 + AutoGPT.


“…and finally the work will be done completely by AI.”

How will the AI be trained on novel patterns of interaction, for example new products or previously unknown problems?

It seems to me that (as the article suggests) high-level human support capabilities will be more valuable in this scenario, not less - which is rather different from a complete AI solution.


It will be interesting to see how this company is doing in 5 years. It seems that, thanks to the AI system, novice employees are able to cash in on skilled employees' knowledge. But this system might also hinder the novices' chances of gaining knowledge and becoming skilled employees themselves. And those skilled employees are essential to maintaining this system so it can respond to evolving needs. So the effect on the job market might be the opposite of what the researchers wish: recruits will need to be smarter than before, and the gap between low-skilled and high-skilled employees will keep widening.

So in that case, introducing AI to this process would have exactly the same outcome as automation has had in manufacturing.


If your on-the-job experience and your personal efforts for career advancement have been your capital investment in yourself, now these can be replicated for lesser-paid workers, with the surplus going to the business owner. Take a pay cut or advance your career on the front line at McD's. Fun times!


Seems like the first jobs to be impacted will be the ones with narrow, domain specific knowledge. Like a customer support rep for a specific product line or a QA tester.

The safer jobs will be the ones where you have to collate knowledge from multiple fields. Like a product creator who needs to know a bit of marketing, code, and design to come up with a new idea.

The orchestra conductors, it seems, are safer than the pianists and cellists - for now.


Revenge of the Generalists..


I do agree with you. My feeling is that there will be a rise of so-called super experienced users - those who didn't just get all the answers from the prompt but actually learned - and hordes of people who won't even know how to do basic stuff without the AI helper. The former group will be small but extremely well rewarded, while the rest... well, not so much.


It feels somewhat analogous to the calculator, except a million times more impactful because of the breadth of fields it can apply to. There are times I go to a minimart-type place and the cashier laboriously, and slowly, whips out a calculator to determine how much change to give when paid 100 for a 32-cost item. Now imagine that analogy generalized to basic knowledge and competence itself.

It's kind of ironic because each one of these steps forward towards making knowledge egalitarian tends to do the exact opposite. Same thing with the internet which was going to democratize knowledge, information, education and more. And indeed it absolutely has, but 99% of people don't take advantage of it - so the 1% just pull that many orders of magnitude further ahead.


This perspective or line of argumentation does strike an intuitive chord with me.

There is a difference between being able to leverage AI to accomplish more than without it, and merely outsourcing intelligence and knowledge to the AI and effectively becoming complacent (which as you say, the 99% will end up doing).

My bet is on AI causing equality to worsen, unless there is a major political will to counter this with policy.

IMO such policies will need to provide both carrot (equalizing measures) and stick (lessening the options for the 99% to become complacent).

EDIT: There is a third route that could play out. Leveraging AI or “teaming up” could be equivalent to a Faustian pact, because of the nonzero possibility of AI developing its own goals and agency.


We are at the start of a new S-curve. The current skilled employees will use their expertise to bridge the adoption of AI and bring in the current use cases. The next wave of AI-native employees will build on that expertise, which we don't have widely yet, to develop new ways of doing things that we haven't envisioned yet, and later to ease the next big paradigm shift.


This is a U.S. Fortune 500 company, but the workers in the sample are mostly (83%) outsourced customer support chat employees working in the Philippines. My guess would be that there's no real career path for these employees and the goal is to eventually automate them away.


Automating the workers away is always the goal. But the irony is that automation makes the few humans that are left in the loop more important than before. So it is possible to end up in a situation where thousands of easily replaceable employees are replaced by tens of very-difficult-to-replace employees and a very complex system. So, all in all, even a 10x productivity increase per employee might not be enough to justify the overall costs.

So I guess we have to look at this case by case, as in manufacturing and other industries.


I'd bet they're just using them as the RLHF phase of the AI chatbot customer service training, and they'll be gone within a few months after that's successful enough.


It would be interesting to see how a general-purpose AI like GPT-4, trained on an internal corporate knowledgebase, would perform.

Theoretically, such an AI agent would know your internal policies, biases, guidelines. It would also have more knowledge about the company than any individual worker, collating data from across divisions.

In large corps, the siloing of knowledge is a real problem. But an AI agent trained across the company's functions would not have this issue.
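
A toy sketch of the retrieval half of that idea - rank internal snippets by overlap with the question and hand the best ones to the model. The documents, the scoring, and the prompt shape are all invented; a real system would use embeddings and a fine-tuned or retrieval-augmented model:

    from collections import Counter

    internal_docs = [
        "Refund policy: refunds within 30 days require manager approval.",
        "Escalation guideline: security incidents go to the on-call engineer.",
    ]

    def score(question, doc):
        # Crude relevance: shared-word count, a stand-in for embedding similarity.
        q, d = Counter(question.lower().split()), Counter(doc.lower().split())
        return sum((q & d).values())

    def build_prompt(question, k=1):
        # The top-k snippets become the context the model answers from.
        context = sorted(internal_docs, key=lambda doc: score(question, doc), reverse=True)[:k]
        return "Context:\n" + "\n".join(context) + "\n\nQuestion: " + question

    print(build_prompt("What is the refund policy?"))  # in practice, send this to the model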


>Theoretically, such an AI agent would know your internal... biases,

I think it would follow your internal biases, but would not know them. It is an interesting question as to how to train a model to identify your internal biases.


Especially with all of the meetings switching to virtual spaces with transcripts, this would be so powerful: you could ask questions (or create documentation) based on all of those meetings, then enrich that with all of the chat messages and the company's code. Best thing, it doesn't even have to run in the cloud, as it looks like it can be an on-premise inference machine. The future is bright for surveillance.


Stretching that further, it is entirely possible to track every single interaction you have with your work computer. You can track your mouse movements, slack messages, work email, keystrokes...

With this data, it is possible to create a digital version of you, the employee - a digital version who writes like you, moves his mouse like you, and even makes the same coding errors as you.

I can totally see a near future where companies create digital avatars of star employees that stay with the company in perpetuity.


That sounds like an amazing sci-fi plot, and somehow it doesn't seem so far off from the future. Only, in Germany, where I work, I am sure this will be blocked by workers' regulations. But how long can they do that if you already have synth workers from all around the world? We could start a recruiting agency for synths. The more I work with LLMs, the more I am aware of the synthetic potential.


I can totally see people making digital clones of themselves and sending them out there to work multiple jobs.

The future is going to be very...interesting.


It sure looks like a beautiful future. I'll do my best to make it a reality as soon as possible.


I know this is cynical, but it’s honestly quite rare that anywhere I’ve worked had anything truly important written down that’s current.

I know that sounds bad, but it’s true, at least in my experience.

I think the issue here is: how do you keep this model up to date? Maybe that’s a trivial thing to do?


Ingest Slack, Gmail, Docs, Jira, Meet, etc. on a constant, always-on basis.

Associate a timestamp with documents and use that to decay the importance. Assign higher importance to key policy and onboarding docs, company-wide emails, etc.

The PM/EM for a team will become responsible for curating a team's additional training data.

Let the system itself regularly suggest information to prune. Have an internal team analyze this and reach out to the relevant stakeholder(s) to confirm when unclear.

If Slack, Microsoft, and Google aren't re-shuffling their priorities to build this right now, they're crazy.
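
A rough sketch of the decay-plus-boost weighting described above; the half-life and boost factors are invented for illustration:

    from datetime import datetime, timezone

    HALF_LIFE_DAYS = 90  # invented: a document's weight halves every ~3 months
    TYPE_BOOST = {"policy": 3.0, "onboarding": 2.0, "chat": 1.0}

    def importance(doc_type, last_updated, now=None):
        now = now or datetime.now(timezone.utc)
        age_days = (now - last_updated).total_seconds() / 86400
        decay = 0.5 ** (age_days / HALF_LIFE_DAYS)  # exponential recency decay
        return TYPE_BOOST.get(doc_type, 1.0) * decay

    # A fresh policy doc outweighs a months-old chat thread.
    print(importance("policy", datetime(2023, 5, 1, tzinfo=timezone.utc)))
    print(importance("chat", datetime(2022, 11, 1, tzinfo=timezone.utc)))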


I haven't been able to tell sarcasm from tech enthusiasts' dystopian ideas since ~2015, but this is getting to another level.


Only concern here is prompt hijacking. People voicing privacy/ethical concerns are ignoring that the first to do this will win.


Right, sounds awesome. Wire me up for pure surveillance in the workplace?

I’m honestly not too excited about where tech is going lately.

I guess we’ll soon be expected to wear mind-reading devices to keep the models up to date?


All of those things are already indexed at your workplace. But the indexes and search suck.

I know that I've wasted hours searching Gmail, docs, Slack, and Jira, or just given up entirely before finding what I want.

Think of an LLM for biz docs as just being incredibly good search. Where you don't have to know the search terms anymore.

Nothing to freak out about here.


>> how do you keep this model up to date?

You could place mics everywhere. The machine will hear everything and know everything, unless you whisper.


Better yet, force employees to have a brain implant that records speech and even thoughts; with Neuralink it should get there soon. The future is so bright!


Well, seeing how speech recognition is improving, I think that the whispering issue is going to be fixed sooner than later.


What about stuff I'm only thinking about? Don't think the machine can handle that!


Actually, it looks like there is tech which is improving that can read minds. So you might be expected to wear a mind reader at work. Legitimately.


So this is why OpenAI named it Whisper.


Whisper 2.0 will also record your whispers


Whisper 3.0 will whisper to you.


SadTalker already does.


It actually works great on whispers already.


Who trains it? How often? All those internal procedures are finally going to have to make sense because they are going to be interpreted by a computer, not half ignored by humans.


This is an interesting empirical study indeed. Direct link to the study: https://www.nber.org/papers/w31161


“And what this system did was it took people with just two months of experience and had them performing at the level of people with six months of experience," Brynjolfsson says. "So it got them up the learning curve a lot faster — and that led to very positive benefits for the company."

Literally the same story as every industrialization ever. Buckle up, we get to live through another industrial revolution.


This wasn't even GPT-4.

Terrific news.

It would be super cool if the people who are currently fretting about paperclips could focus a little more on bias, security, data, privacy, openness, and all the stuff we actually are going to have to deal with over the next five years.


I really don't understand this antagonism between different branches of the "AI safety" community. In the 1960's, some people were concerned about preventing a nuclear apocalypse, and some people were concerned about lead emissions and CFCs and exploding cars. But did the nuclear apocalypse people tell the exploding cars people they should stop their work on exploding cars because nuclear apocalypse was more important long-term? Did the exploding car people tell the nuclear apocalypse people to stop raving about nuclear apocalypse because there were more pressing problems affecting people right now, like exploding cars and lead poisoning and a hole in the ozone layer?

I mean, maybe they did; but if they did then I think it was kind of silly. Both kinds of problems are important, then and now.


I'm sorry it sounds antagonistic, but the negativity and "it's too late and we're all going to die no matter what we do" nihilism is having the effect of sensationalizing the issue in ways that have been incredibly counterproductive.

Here's what happened the last time technology was going to destroy us all:

https://www.popularmechanics.com/science/a11217/what-stephen...

"The Higgs potential has the worrisome feature that it might become metastable at energies above 100 [billion] gigaelectronvolts (GeV). This could mean that the universe could undergo catastrophic vacuum decay, with a bubble of the true vacuum expanding at the speed of light. This could happen at any time and we wouldn't see it coming." —Stephen Hawking

I'm not trying to make fun of Eliezer and co., but we have to have some proper perspective. We don't do that very well these days, because end-of-the-world scenarios are good for clicks.


Hawking's statement was made after the confirmation of the Higgs, and is still valid. A metastability vacuum event could be a problem (that's putting it lightly, hah) at high enough energies. It might have already happened, elsewhere.

Eliezer and co. should be made fun of. They engage with journalists and lean into statements like this: https://www.lesswrong.com/posts/FKNtgZrGYwgsz3nHT/bankless-p...

They're absolutely hysterical. Conveniently for them, and for the companies, the hysteria is to their benefit: more money to fund MIRI for Yudkowsky, and regulatory capture for the companies who believe they're ahead, if you can convince some dementia-ridden Senator to pass a bill regulating some vague notion of "AI safety."

There are some people in these companies who understand that they have no moat, that you can't make GPUs illegal (as demonstrated by the recent story about the leaked Google post), that people can outperform their models in an hour with a LoRA on a workstation laptop for a given task, and that the cat is out of the bag. But with the hysteria of the media, where interns straight out of the shithole of liberal arts believe that a post on Reddit or Twitter where someone was slighted at the dinner table is breaking news, we're now in a situation where we have calls for regulation of something people simply don't understand, no thanks to the useless media.


Some people think the "apocalypse" AI safety people consist largely of individuals engaged in magical/religious thinking based on a love of fiction, so if we were to pick a 1960s analogy, it would be like some people concerned with exploding cars, and some people concerned that a lunar apocalypse will unleash Cthulhu, as was foretold by H.P. Lovecraft.


"Some people" probably thought the same kind of thing about nuclear apocalypse people (or the CFC people). If there were, my honest assessment of those people is that they were fools. One of the main reasons we didn't have a nuclear apocalypse is that people knew that we could, and took active steps to try to prevent it.

There are a wide range of people concerned about the long-term issues in AI. I agree that some of the logic of people on the extreme end doesn't seem very sound. But there are loads of people who have a more measured take on things.

Essentially, I think there are two questions of fact, which we don't have good answers for:

1. Is it possible for intelligence to go forward without limit? Is it physically possible for an AI to be as much smarter than us as we are smarter than chimpanzees, or ants?

2. How likely is it that if we made such an AI, that we could also induce it to leave humans in control?

If the answer to 1 is "yes", then consider what it would be like to interact with such an intelligence: it would continually be doing things of which we have no conception, but which cause the universe itself to turn against us.

The analogy for #2 I've heard is, "It may be that at some point we reach a situation where our relationship to AI is like a herd of cows' relationship to a farmer. Maybe the farmer slaughters us, or maybe it keeps us around but on its terms, not ours. Either way we want to stop things before they get to that point."

Now maybe the answer to #1 is "no"; or maybe the answer to #2 is "very high". But I don't think we have rock-solid reasons for believing either answer; and so I think it only makes sense to proceed with caution.


The threat of nuclear weapons was an empirical fact, not a thought experiment.

The issue isn't whether we can imagine alarming thought experiments but rather why would we take seriously someone whose field of "work" is "inventing thought experiments?"


While I reckon there's at least 5 (and probably 10+) years before it's even possible for a sufficiently independent AI to start literally paperclipping us (mostly because of the speed/power needs of good image processing AI), the people who are trying to solve that don't know if we can actually do so in even double that time frame.

But the paperclip scenario was only ever an illustration that the AI doesn't need to hate us to destroy us, merely to want to use our atoms for something incompatible with our own desires.

For disasters to happen, there are many options which may come sooner, varying from not paying attention to what the AI is doing when it makes mistakes — the default, as we can imagine even today someone putting a Boston Dynamics Spot to work in a virology lab and it walking out with a sample because it doesn't know any better — to the malicious, for example one idiot running a more competent version of ChaosGPT: https://youtu.be/g7YJIpkk7KM

Those other, lesser, disasters are sometimes called "fire alarms for AI".


The cynic in me says that the #1 AI-related risk by far is that corporations will immediately apply it to the task of listening in on everything we say and write.

This all reminds me of "A Fire Upon the Deep" by Vernor Vinge, where the Slavers enslave humans by essentially constantly monitoring them for misbehaviour. If I remember correctly, a plot element is that they use yet more slaves to keep the bulk of the slaves in line.

With ChatGPT, you no longer need to do "sci fi things" like turning squishy meat brains into cyborg slavedrivers. You can simply ask the AI to do it, and it will. It will do it 24/7, without pity or remorse. It will do it vigilantly and faithfully.

It will be a checkbox in Office 365 that any psychotic manager can turn on.

And I guarantee they will turn it on.

All of them.


I've not had time to read/listen to A Fire Upon the Deep yet, only its sequel A Deepness in the Sky, but that certainly sounds like one of the many bad endings we might get.

That said:

> With ChatGPT, you no longer need to do "sci fi things" like turning squishy meat brains into cyborg slavedrivers. You can simply ask the AI to do it, and it will. It will do it 24/7, without pity or remorse. It will do it vigilantly and faithfully.

ChatGPT has all the vigilance and faithfulness of my late mother's mid-stage Alzheimer's, so if someone tried this today it would rapidly result in everyone getting million-dollar-no-work-required contracts the company isn't allowed to break.

That LLMs can be socially engineered just by asking nicely is a pleasant surprise twist in AI alignment, but unfortunately I think it illustrates how primitive our current state-of-the-art AI safety and alignment is.


What I think people who work in tech fail to realise is that non-tech people are learning about these dangers and I don’t think they’re impressed or care if you have a computer to play with or robots playing soccer.

They’re worried about their families’ safety and their livelihoods (which used to be important, you know - children? Other biological life, etc.?).

To a regular, nontechnical person, these “talking computers causing damage” scenarios sounded like some far-off nerd revenge dreams. But now people know maybe they’re not just dreams; maybe they are realities.

My honest opinion, and I don’t care what people think about this: shit is going to get wild regarding surveillance and regulation of computers, networks, software, and the internet. I mean, what else is the alternative if these systems grow more powerful?

I mean, what are we to do? Just let people make tsar bombs in their garage and detonate them as they see fit? Have an autonomous AI agent take down a power grid and trigger a war for fun?

I really don’t understand how this is supposed to work going forwards.


> What I think people who work in tech fail to realise is that non-tech people are learning about these dangers and I don’t think they’re impressed or care if you have a computer to play with or robots playing soccer.

Doubtless.

That said, when it comes to AI risk, normal people have no reason to know the boundaries between reality and sci-fi for the same reason that I, as a non-lawyer, have no reason to know the boundaries between law and Hollywood tropes.

By extension, this means that I expect the general public to be aware of countless fictional examples of AI takeover scenarios: some where the AI was given a goal and blind logic made it bad for everyone (Days of Future Past, Colossus: The Forbin Project), some where the AI is actually evil (Terminator, the character Lore in TNG/PIC), some where the AI is a tool used by malevolent humans (many Black Mirror episodes, some TOS), and some that mix several of the above (the Westworld TV series).

> I really don’t understand how this is supposed to work going forwards ?

Does anyone?


Well, by going forwards, I mean practically by the way :)

So here is where I don't even know then why it's being pursued so aggressively, especially in a business context.

There is absolutely no way that Google or Microsoft can just put an ultra-intelligent "AGI" on the internet for hire with a credit card, to start with. It would likely be an existential threat to their own business, people, and or civilization.

So yeah, even for a business like Microsoft, there is a limit to how good you can let this thing get before it actually destroys their own business or worse.

Then, how would it work from a national security standpoint? Would it not just immediately provoke a war if America got an AGI first and China found out? Even if someone just started leaking rumors that the Pentagon has an AI war general with an IQ of 900, that could possibly make things escalate immediately?


Why don't you focus on that instead?


> It would be super cool if the people who were currently fretting about paperclips [...]

That's textbook trivializing of an issue that has been extensively analyzed by experts and found to have civilization-ending consequences with near certainty, unless extensive countermeasures are taken well in advance. Your comment is the equivalent of climate change denial.


Have any countermeasures actually been implemented besides blog posts and podcast appearances? Climate activists regularly disrupt traffic, glue themselves to things, and protest pipelines. By contrast folks who believe AI will end the world have a propensity to start AI companies.


At the moment, we're still trying to turn the question from an easy and obvious natural language form ("give me what I meant to ask for") into a form that is amenable to being solved in the world of weights and biases.[0]

So several of the AI labs have published work on this topic, but it looks like e.g. "interpretable models" or philosophy of ethics, etc.

So what we're doing today with AI alignment research is the equivalent of asking Ada Lovelace how to prevent the Therac-25 deaths, where the best response is "Why would you even do that, just don't write the wrong commands on the punched cards".

[0] Some, myself included, are optimistic that early AI can help us do that. Yudkowsky appears to be dismissive of all options, but I don't see why any near-term AI would care to insert errors into later types of AI, so we've only got our own blind spots to look out for, and we had those already…

…but all the AI doomers I hang out with seem to consider me a terrible optimist, so YMMV.


I’ve always thought that in general, societies wants tend to expand to the limits of productivity.

The real danger seems to be of leadership using AI as a cudgel to threaten employees into doing what they want.


Threatening is one of the tools humans use to align other humans, so why wouldn't this happen?


Doesn't take complex AI to benefit new workers (or even middle-experience ones). A company I talked to did some simple sentiment analysis on emails and reduced the closing time for a real estate office by 40%. Which means they can do that many more deals per month, etc.

Maybe the real change here, is people are beginning to consider using computer aids to do their job. Instead of insisting their experience is all that matters.


While I get that people are more concerned with productivity insofar as it pertains to their job, I think the bigger economic concern is when competing companies begin using the same models for navigating the competitive landscape. I highly suspect that we will begin to see accidental trusts form as competitors sharing models begin to optimize around each other.


I'd be A LOT more interested to see what happens to the whole set of workers who dabble in lower-skilled jobs and are now being rendered redundant by tools such as ChatGPT.


I'm not sure what you mean by dabble - ignoring that that's exactly what this study is about.

They're looking at call center employees. The author calls employees who've worked there for more than six months experienced.


I guess that what will happen to them is more or less the same as what happened to phone operators.


The future is not that scary. Our path to it is, though.


Isn't this a bit early, and probably distorted by hype fads?



