If I Had a Hammer (nytimes.com)



If "average is over," why is Thomas L. Friedman still writing for the NY Times?

A year ago, TLF wrote this about MOOCs:

Nothing has more potential to lift more people out of poverty — by providing them an affordable education to get a job or improve in the job they have. Nothing has more potential to unlock a billion more brains to solve the world’s biggest problems. And nothing has more potential to enable us to reimagine higher education than the massive open online course, or MOOC, platforms that are being developed by the likes of Stanford and the Massachusetts Institute of Technology and companies like Coursera and Udacity.

Then Sebastian Thrun admitted that Udacity had a "lousy product" in this interview:

http://www.fastcompany.com/3021473/udacity-sebastian-thrun-u...

Friedman is a cheerleader for employers. Employees are warned that average isn't good enough. One stupendously glaring omission in this kind of talk is that good managers are a dime a dozen. If anyone should be warned, it should be managers--if there is no reason to hire employees displaced by automation, then there is absolutely no excuse not to hire the very best managers.


Shocking! A former Stanford professor doesn't see the same results and progress with remedial state students. The entire concept of MOOCs must be doomed.

Basing an entire industry's future on the shortcomings of one man's idea and expectations is a bit silly.


I won’t try to defend Friedman, but I believe you misunderstood what Thrun was trying to say: he is a very conceptually precise guy, and his saying Udacity had a bad ‘product’ doesn't mean the general concept is flawed, but that his ambition to package the whole of higher education as a video class has, in testing, proven to have problems, with engagement in particular. He is far from abandoning the idea, but reacted by steering Udacity toward a for-pay, vocational certification platform.

Having the best professors use video (or radio) to broaden the audience of their abstract teachings is a concept that has been widely tried since, well, I’d say 1530 with the Collège de France, though the recordings were a bit different then — anyway: it’s the principle of almost every high-brow show. Those tend to have less of an audience than we’d like, or to need considerable adaptation — but Udacity was one attempt that was remarkable in mainly one way: how little the class was adapted for anyone but highly intelligent and motivated twenty-somethings with fresh brains.

Look at what Dan Ariely is doing for a great example of how to keep the reach of the internet while achieving engagement comparable to a physical class.


Thrun's criticism was pretty general--it went beyond his product. Thrun was sufficiently critical of his own vision of education for the masses as to cast doubt on Friedman's sweeping implicature that, among all the approaches to poverty that have been and that might be tried, this technology has the greatest potential. Given the 1% retention rate, MOOCs might "unlock a billion more brains" if there were 100 billion of them to unlock.


With one billion brains interested, he can still train 50 million new people every semester (actual retention numbers are closer to 5%). That’s still a lot more than the Ivy League.

As to Friedman… Well, this is HN, a polite society, so let me skip that part.


OK, 50 million. Friedman still stands corrected. (Me too, but at least my percentages are within the same order of magnitude.)
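
Since we're both waving percentages around, here's the arithmetic spelled out (a minimal sketch in Python; the enrollment and retention figures are the ones quoted in this thread, not measured data):

    # Back-of-the-envelope MOOC throughput, using the figures quoted above:
    # Friedman's "billion more brains", 1% vs. 5% retention.
    enrolled = 1_000_000_000

    for retention in (0.01, 0.05):
        completers = int(enrolled * retention)
        print(f"{retention:.0%} retention of {enrolled:,} -> {completers:,} completers")

    # 1% retention of 1,000,000,000 -> 10,000,000 completers
    # 5% retention of 1,000,000,000 -> 50,000,000 completers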


>>> One stupendously glaring omission in this kind of talk is that good managers are a dime a dozen. If anyone should be warned, it should be managers--if there is no reason to hire employees displaced by automation, then there is absolutely no excuse not to hire the very best managers.

It has occurred to me that management is basically an information processing task. Why shouldn't it be automated? I think a reason is that managers are kept around to manage information that can't be committed to a stored record. An example would be details of the decision making process for who gets laid off during a downturn.


The hardest part about management is not the rote processes of information collection and distribution. It's all the human elements of the team that you're managing. A good manager is a cheerleader, coach, and scorekeeper all in one. This requires individually tailored approaches to each member of the team and doesn't strike me as something easily automated.


An example would be details of the decision making process for who gets laid off during a downturn.

We know something about the hypothetical manager's decision-making process: the typical manager would include the constraint that he himself be excluded from the set of employees to be laid off. An automated improvement on this algorithm would unlock tremendous benefits for principals and, in the case of publicly owned companies, for shareholders.
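
To make the joke explicit, here are the two "algorithms" side by side (a tongue-in-cheek sketch in Python; the value_per_dollar scoring and function names are invented purely for illustration):

    # Tongue-in-cheek sketch: layoff selection with and without the
    # manager's customary self-exclusion constraint. Purely illustrative.
    def layoffs_by_manager(employees, manager, n):
        # Step 1 of the typical implementation: remove the decision maker
        # from the candidate pool before optimizing anything.
        pool = [e for e in employees if e is not manager]
        return sorted(pool, key=lambda e: e["value_per_dollar"])[:n]

    def layoffs_automated(employees, n):
        # The "improvement": same objective, no self-exclusion clause.
        return sorted(employees, key=lambda e: e["value_per_dollar"])[:n]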


It's strange that he uses the phrase "Average is Over" without mentioning Tyler Cowen, the economist (and occasional NYT contributor) who published a book with that title just a couple of months ago.


I always liked Thomas Friedman's writing on foreign policy issues back in the day, but his work on globalization and technology always sounds incomplete and amateurish. I think Mr. Friedman would do himself a favor if he actually spent some time learning to code and hacking on an Arduino board or something so that he could start to develop more intuition on what can and cannot easily be done with technology, and where globalization fits in and where it doesn't.


His writing on foreign policy and economics is also incompetent garbage. One data point is his support for the Iraq war. But really, why he still has a column in the NYT is a mystery.


I do not want to be in the business of defending Thomas Friedman, but I think his opinions on the Iraq war might have been more nuanced than he is usually given credit for. For example:

NEW YORK TIMES COLUMNIST THOMAS FRIEDMAN

THEN: "Let's start with one simple fact: Iraq is a black box that has been sealed shut since Saddam came to dominate Iraqi politics in the late 1960s. Therefore, one needs to have a great deal of humility when it comes to predicting what sorts of bats and demons may fly out if the U. S. and its allies remove the lid. Think of it this way: If and when we take the lid off Iraq, we will find an envelope inside. It will tell us what we have won, and it will say one of two things.

"It could say, 'Congratulations! You've just won the Arab Germany--a country with enormous human talent, enormous natural resources, but with an evil dictator, whom you've just removed.'

"Or the envelope could say, 'You've just won the Arab Yugoslavia--an artificial country congenitally divided among Kurds, Shiites, Sunnis, Nasserites, leftists, and a host of tribes and clans that can only be held together with a Saddam-like iron fist. Congratulations, you're the new Saddam.'

"In the first scenario, Iraq is the way it is today because Saddam is the way he is. In the second scenario, Saddam is the way he is because Iraq is what it is. Those are two very different problems. And we will know which we've won only when we take off the lid. The conservatives and neocons, who have been pounding the table for war, should be a lot more humble about this question, because they don't know, either."

--The New York Times, January 26, 2003

(See http://www.esquire.com/features/ESQ0306IRAQQUOTES_220)


How do basic coding skills or simplistic hardware projects lend themselves to an improved understanding of the global effects of technology? For a writer concerned with the big picture, I imagine understanding the effect is more important than the cause (of code). Learning how to write a bit of code may improve his appreciation for technology, but I can't imagine it changing his (broad) opinions.


Because some grounding (I don't mean of the EE kind :-) would remove some of the techno-mysticism from his decision process. At the moment he considers the iPhone indistinguishable from magic.


Automation bashing has become so boring. "Us vs. them" is a go-to media cliché. Machines, automated or otherwise, are made to help us. We want everything to be automated. The purpose of business and technology is to improve lives, not to employ people. No one starts a company with the intention of creating jobs. People want to solve problems, make money, or bring useful new ideas to life. Startup founders want to do whatever has to be done in order to execute their vision, with humans, machines, trained animals, or whatever. Automation improves our position as a species, and we need to get behind it instead of viewing it as a competitive threat. As they say in pickup basketball, "Same team!"

It's true that automation has a disproportionate effect on different levels of society. But that's more about humans than it is about machines. There are a lot of people working on technical solutions now, and sooner or later people will get to work on social solutions that address some of Friedman's concerns. We're a long way from a fully automated society, and we have time to prepare for the inevitable changes.


No one starts a company with the intention of creating jobs.

Sure they do. Many founders would like their companies to pay them at least a working wage.


> we have time to prepare for the inevitable changes.

I would argue, having lived in countries that lead in automation rates, that we are already at a point where large groups of the population cannot find work with their qualifications, and never will: farmers, miners, and forest workers had to convert to become factory workers, but most of those jobs are now outsourced or automated; at the moment, there isn’t a job in Western countries for someone who has difficulty learning how to read.

Some people genuinely can’t grasp any abstract concepts: I taught in jail for a while, to people who couldn’t for the life of them generalise “Two apples plus three apples equals five apples; two sheets of paper plus three…” and remained baffled by ‘addition’ as an abstraction. Keep in mind: some of those guys fashioned working radios and distilleries from scratch.

Similarly, I have talked to taxi drivers who couldn’t understand that it was in their interest to switch to Uber, or how the trust mechanism worked with the app (they thought the dispatcher could better tell on the phone if a client was shifty), to say nothing of choosing a path based on a congestion map. I love my Roomba and my washing machine, but without the cleaning jobs, what are immigrants to Western countries going to do? Take care of elderly people to whom they can’t speak?

I fully agree with your point that automation is good for all, but before the end of our careers we might have to face a point where a vast majority of the population has to be considered handicapped, “mentally” for lack of a better word, out of the workforce and entirely subsidised.

At the moment, even a hard-working coder of average skill cannot find a job where I am: I know a handful of guys who can make functional, useful apps all by themselves (say: scan a barcode, query a database to tell if a product is ethical, handle a shopping cart, all with a great design; or sports tracking). They make a hundred euros from those in the first month, but that’s it. I tell them to treat those as portfolios to get interviews, and they do. I have no idea what goes wrong during those interviews, but their repeated failure to sign anything is bigger than one HR person being in a bad mood.

What I see is a world in 15-20 years where 3% of the population designs things; 12% have menial jobs, mainly manning security and healthcare bot-based systems; and 85% cannot work. Spain, or even Berlin, already has a modern economy with 30% unemployment: subsidies, black-market service jobs, low rents… it looks somewhat stable, if shocking.

Clay Shirky had an interesting point about that, in an essay called “Gin, Television, and Social Surplus” or something like that — where he assumed there is a sacrificed generation, and the next one learns. I have a hard time seeing millennials be anything but creatives with a YouTube channel. Maybe that’s worth it.


In the Second Machine Age, though, argues Brynjolfsson, “we are beginning to automate a lot more cognitive tasks, a lot more of the control systems that determine what to use that power for.”

Automation is what I love most about programming. Most of the time, when you encounter a tedious, dull process, you can come up with creative ways to make your machine do the work.

It's one of the few occupations in which boredom represents opportunity.
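
A trivial example of the kind of thing I mean: an afternoon of mind-numbing file renaming turned into a few lines (a minimal sketch; the folder name and filename pattern are made up):

    # Rename "report 3-1-2014.pdf" style scans to sortable
    # "2014-03-01-report.pdf" names: classic boring-task automation.
    import re
    from pathlib import Path

    for f in Path("scans").glob("report *.pdf"):
        m = re.match(r"report (\d{1,2})-(\d{1,2})-(\d{4})\.pdf", f.name)
        if m:
            month, day, year = (int(g) for g in m.groups())
            f.rename(f.with_name(f"{year}-{month:02d}-{day:02d}-report.pdf"))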



That app is pretty clever, and it plays off valid criticisms of the author. :)

Friedman has a talent for writing as well as glueing together disparate concepts to support his arguments. It's fair to say that in some articles, he says very little, very well.


I'm going to vote this up, but Friedman to me always seemed like a bit of a hack. The kind of guy who would go outside, see that it was raining, then write a 3-part book series on the coming flood, the power of water, and the wondrous new water-world awaiting all of us. A bit breathless, a bit over-done, a bit over-cooked, and a bit over-hyped.

But he has a point. We are on the verge of some new golden age, perhaps a new machine age. I tend to think we're seeing the beginnings of a transition to a machine-lifeform-dominated Earth. It will probably take 200-400 years to fully play out. People aren't just going to become obsolete -- they're going to become irrelevant and non-existent. Seeing pictures of an old-fashioned plain human like us, in another 400 years, will be as quaint to future man as seeing pictures of Neanderthals in a science magazine is to us.

The interesting question is just how awake the average person is to the amount of change happening. When the Industrial Age arrived, there were huge factories, great cities being built, and all sorts of physically giant things to look at and note that yes, something new was happening. With the new age, we're looking at changes in silicon, machine learning, maybe robotics. There aren't going to be giant robots leveling cities in the near future (thankfully!) or anti-gravity machines taking us to Mars in a few hours (sadly!). Instead we're seeing massive changes, on a tiny scale, in what it means to be human.

This article is for the cheerleaders. HN is for the cheerleaders, like us. But I wonder if the average person is even going to notice.


A bit of a hack? I find that this article has "self-irony," if there is such a thing: the fact that the writer works for the preeminent American newspaper completely refutes the premise of the article. Mediocrity will forever have a place in this world if Thomas Friedman still has a job.


People will notice eventually because your relationship to advanced AI is going to determine your relevance. And 400 years is wishful thinking. The new lifeforms may arrive well within two or three decades.


> Dutch chess grandmaster Jan Hein Donner was asked how he’d prepare for a chess match against a computer, like I.B.M.’s Deep Blue. Donner replied: “I would bring a hammer.”

Okay, but why not bring a gun to a chess match against a human? That would be an excellent winning strategy (cf. the Gordian Knot).


I think a hammer would be sufficient in that case as well, unless of course your opponent brings the gun.


I read the article and skimmed the comments, and all of the economic comments almost intentionally avoided discussing monetary velocity as a concept. Oh, they'd dance around it, but never directly discuss it. It's not terribly controversial, so I don't see why. The modern numbers are utterly dismal; maybe that's why? Too much of a downer?

During an economic depression, a growing fraction of the population is removed from the "real" economic system and market; they start to use Tide detergent, lotto tickets, drugs, and cases of cola as their new economic system and currency.

In a similar way, as the fraction of the population still involved in the interchange of information decreases, despite all the article's hand-wringing, it's pretty simple: they'll just drop out. It's hard, maybe impossible, to say what the new system and currency will be. People will just stop. The future of the "knowledge worker" looks a lot like ... travel agent.

So, put it together: in a post-knowledge-worker economy, there will always be a small number of elite knowledge workers, and always some jobs that are not financially viable to automate away (not many, of course), but overall society will just wander on over to something else and look at knowledge work and service work as something from the past ... that's what grandpa used to do before they all got downsized, or whatever.

The article is much ado about nothing, in the long run. Currently there exists a service/knowledge economy, at least for a small segment of the population, and it's going away, just like every other segment eventually mostly goes away. OK.

That, and watching the political ax-grinding, was moderately entertaining. "If we all just held hands around the campfire and sang, then ..."


Golly gee, things are changing too fast for Thomas Friedman and his readers!


because employers now have so much easier, cheaper access to above-average software

Yet they consistently don't choose it. Much as in the following case...

All this data means we can instantly discover and analyze patterns, instantly replicate what is working on a global scale and instantly improve what isn’t working — whether it is eye surgery techniques...

That's an excellent example of another thing that won't happen, mainly because it already doesn't happen; medicine is a field with many, many examples of hard data demonstrating better techniques, well known, that nonetheless don't spread. Atul Gawande discusses this, amongst other interesting things, in his book "Better"; I name-checked that book mainly because, as well as discussing this, it's quite a fun read and a good starting point for principles that carry across all fields, but it's by no means the only source.


> ...labor is so important to a person’s identity and dignity and to societal stability

> ...lowering taxes on human labor to make it cheaper relative to digital labor

> ...guaranteeing every American a basic income

What all these say to me is an unwillingness to actually move beyond the social dynamic that has been in place for hundreds of years: the hierarchy of serfdom.

Why, when you have the possibility of removing drudge-like work with automation, would you actually want to actively retain it?

Why, when a society of plenty is possible, would you want to limit its benefits to a few and give only a basic income to the rest?

Why would you try to define someone by a menial task when you could aim to help them bring more worth to society and themselves with a more fulfilling and beneficial activity?

I'm not saying there are easy answers to any of these questions, but trying to maintain the status quo is not an approach that benefits everyone.


I often wonder when the tipping point for white-collar specialists being replaced by machines will occur. Watson replacing doctors in making diagnoses has received a lot of attention, but there are many other fields. The big one for me: when will we trust software to analyze contracts instead of lawyers? We have already moved to replacing lawyers with computers for discovery; in negotiations I would be even more pleased to only have to change items flagged by machines rather than by the sometimes incompetent lawyers I've had to deal with.


I was recently informed by an acquaintance that "lawyer work" decades ago was mostly about being a low- to mid-level manager over a herd of paralegals who did all the work by hand. As the market has collapsed, lawyers do all their own electronic searching, and often their own data entry... The market has crushed the profession.

Most likely that will eventually play out with doctors, probably economically enforced. Sure, doc, do whatever your medical judgment thinks best, if you want to work for free... but we're only paying on claims if your medical judgment matches "Watson". So overproduce new docs for a while to crater salaries, only pay claims if their work matches a computer model, and what we used to call a "nurse" will be the new "doctor". This also helps with doc-in-a-box facilities, which could now literally be a doc-in-a-box instead of the nurse-in-a-box or PA as implemented now.

Eventually, being a "pharmacist" will mean being the minimum-wage drone who stocks a semi-smart vending machine, or maybe being the minimum-wage drone answering the 800-number support line outsourced to the Philippines.

I could see the same thing happening with accountants and HR and perhaps marketing.


I find this sort of bravado about replacing other professions really unconvincing: if you think you can replace lawyers, doctors, pharmacists, accountants, marketing, and HR with software, why not the programmers themselves?

If you think about why you can't replace programmers with software written by experts who know nothing about the domain (or even by those who do), it will tell you why the professions you've listed would be very hard to automate.

I think software, and the hardware it controls, is going to be pervasive and take over a lot of formerly human functions in our society in the next century, but the professions you've listed above, along with programming, will be some of the last work to be completely automated, after manual labour, farming, cleaning, data entry, call centres, technical support, etc. It would require strong AI to replace those professions; expert systems don't come close. And if you think your profession is uniquely immune to these changes, why is that?


I'm not the OP you are responding to, but:

1) I do think software engineers are in danger of suffering the same fate, just later than some. We won't even be the last profession to go; that'll probably be comedians.

2) People often underestimate the impact of automation by treating it as a binary state, where the problem only comes when you automate away everyone, but that's not a great way to look at it. Partial automation on a quick enough timeline is more than enough to destabilize the economy of a particular industry.

We are a long way from computers replacing all lawyers or doctors or software engineers, but if they can replace just 25% of them by allowing the remaining 75% to be as productive as the entire 100% was previously, that's already a large economic impact. If automation can replace 75% of them, that's a massive world-altering shift, economically.
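
The arithmetic behind that claim, spelled out (a minimal sketch; the displacement shares are the hypothetical ones above):

    # If a displaced share of workers is replaced while total output stays
    # constant, per-remaining-worker productivity follows directly.
    for displaced in (0.25, 0.75):
        remaining = 1.0 - displaced
        per_worker = 1.0 / remaining  # old total output / remaining workers
        print(f"{displaced:.0%} displaced -> remaining workers each "
              f"{per_worker:.2f}x as productive as before")

    # 25% displaced -> remaining workers each 1.33x as productive as before
    # 75% displaced -> remaining workers each 4.00x as productive as before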

We really should be worrying about the economic impact of all this automation now, because at least in the USA (where I live) we are a very long way from being able, sociopolitically, to deal with this quickly approaching new reality of having far more people than are needed for what would traditionally be considered "productive" work.


I agree with you. I also like to think that we have replaced, several times over, the original "programmers" with smarter and smarter silicon, and the result is not fewer programmers but orders of magnitude more. I think this will happen with other professions. Things which once required years of education and (often physical) practice, say delivering certain kinds of healthcare, will become heavily computer- and robot-assisted, and we'll see some of them pushed from the expensive, limited-availability hospital to the much more widely available pharmacy -- or even to the shopping mall.


Interesting point... you're saying that we might end up with more doctors and lawyers, because with the assistance of technology, anyone can be one?


"why you can't replace programmers"

Oh, I'd disagree with that... If you'd like an example, note how Excel is the industry-wide corporate standard database management system. It's buzzword-compliant, being NoSQL, and scales to tens of thousands of records, which, despite claims to the contrary, is almost always enough for most real-world problems. The world of the future is not a DBA and a CRUD developer cooperating to write your shopping list; it's one untrained dude with a spreadsheet.

Look at how many graphic artists using paper and pencil have been replaced by one marketing guy with Photoshop. It's already happened. It's not bravado but observation.

Rows and rows of desks of junior accountants, replaced by one accountant with QuickBooks. Another observation, not bravado.

And what, exactly, are pharmacists doing 99% of the time beyond being a very high-item-value, high-risk vending machine? Bear in mind, my mom worked as a pharmacy tech part time when I was a kid, so you're not going to fool me... Very highly priced customer service and team leadership, and that's about it.

The mechanical steam shovel never quite eliminated the use or sale of hand shovels. It's just that "digging ditches" isn't a viable career path for almost anyone anymore. Nor is "manual metal lathe operator", or any number of other tasks. I'm not saying there will never be another accountant, doctor, or lawyer hired. Just a whole heck of a lot fewer of them. At least one, two, maybe three orders of magnitude fewer.

Much like in the first industrial revolution... there are people right now, out there, being paid to use a hand shovel. The story is that it's now probably hundreds of people at this moment, not 5% of the population or whatever like in 1850.


For certain areas, you can replace or drastically lower the entry barrier to programming, sure.

Replace it totally? Doubtful. There are still inherent limits to computation that we have not resolved and, if our current hunches are correct, likely won't.

COBOL was meant to make programming a negligible and easily doable task with its rigid English-like syntax, but how did that turn out?

Sloppy code generators that can pop out some generic, half-useful spaghetti abomination have been around for decades, but if you want anything above total crap, you have to do it yourself.

Things like databases? Sure. CRUD apps? Absolutely. Even if you don't have a GUI to generate them, there will likely be some extremely abstracted library where it's a matter of pasting in a few procedures.

Yet a lot of things are still too vast and niche. That, and reaching the bug-free equilibrium: removing state and side effects deals with a wide class of bugs, but it obviously does not get rid of them entirely. There will still be some just barely competent (or, by hypothetical far-future standards, wizardlike) people in charge of maintenance, and those will be our programmers.

While programming is certainly peculiar in that doing your job well means furthering your own extinction, I don't think it will be anywhere near as drastic as it's made out to be. An ever-breaking technological dystopia sounds much more likely than some clockwork ubiquitous-computing society.


The examples you cite are all enabled by (a) cheap, powerful hardware and (b) software which is feasible because of (a). But the actual process of building software, a developer sitting at a keyboard writing code line by line, has not really advanced since we moved away from plugboards[1], or at least since the advent of high-level languages. The tools have gotten incrementally better and are more accessible to more people, but this is all because cheaper, faster hardware made it possible, not because of any exponential leaps in the process of building software. We haven't gotten rid of hand shovels in this case; we've just made them much cheaper and much more efficient, supporting a larger, not a smaller, number of hand-shovel workers.

[1] https://en.wikipedia.org/wiki/Plugboard


> but this is all because cheaper, faster hardware made it possible, not because of any exponential leaps in the process of building software.

The exponential curve has no leaps. This looks like pedantry, but it's important to keep in mind. Exponentials are completely smooth and boring all the way up.


"Excel" is the industry wide corporate standard database management system

Excel is not a database (I don't mean that as a trivial correction; it's quite distinct); many of the jobs done with it could conceivably have been done with databases but were not, so in a way it has opened up a whole market for storing and manipulating more data than ever before on a small scale in businesses, and I suppose you could conceivably replace dbs with it in some cases. However, I don't see it replacing databases in the businesses I work with — in my experience, at least, I see more data moving from Excel over to databases and apps (with internal dbs) as it becomes complex or people want to share it than I see moving the other way. For example, imagine a CRM built in Excel, scaling up to thousands of customers and several users — eventually there comes a point where it is far easier to use a db and front-end (as in the sketch below) than to try to manage that sort of data in Excel by hand and share it amongst several people.
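
To make that tipping point concrete, the first step off the spreadsheet usually looks something like this (a minimal sketch; the CSV export and column names are hypothetical):

    # Load a CRM that outgrew its spreadsheet into SQLite: the smallest
    # step from "Excel as database" to an actual database.
    import csv, sqlite3

    conn = sqlite3.connect("crm.db")
    conn.execute("""CREATE TABLE IF NOT EXISTS customers
                    (name TEXT, email TEXT UNIQUE, last_contact TEXT)""")

    with open("customers.csv", newline="") as f:  # exported from Excel
        rows = [(r["name"], r["email"], r["last_contact"])
                for r in csv.DictReader(f)]
    conn.executemany("INSERT OR IGNORE INTO customers VALUES (?, ?, ?)", rows)
    conn.commit()

    # Several users can now query the same store instead of emailing copies:
    for name, email in conn.execute(
            "SELECT name, email FROM customers WHERE last_contact < '2013-06-01'"):
        print(name, email)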

Data manipulation is an excellent example of the complex changes wrought by technical progress — there is an argument that as technology progresses, we discover more work we can do with it, and data and analysis become more complex, not less — more jobs are created which simply didn't exist before. Automation doesn't always lead to jobs becoming simpler or humans becoming redundant; quite the reverse in our recent experience.

Look at how many graphics artists using paper and pencil have been replaced by one marketing guy with photoshop.

Answer: zero. Graphic artists have become quicker, can do more work, and can produce work of higher quality in terms of resolution, type, etc. than before, but theirs is still quite a distinct job from marketing. One related example of a profession which has gone is the typesetter, though — typesetters have been entirely supplanted by technology. Just to be clear, I'm not saying technology never supplants anyone, but the professions you listed are the farthest from being supplanted and are not even on the horizon in most cases.

Customer service is still really important in many jobs; interfacing with humans is extremely hard, and expert systems are not an adequate replacement for most humans yet, especially in creative or customer-facing roles — that will require a qualitative change in the nature of our software. Perhaps that will happen eventually (in some ways I hope it will), but we simply haven't made that leap yet.


"Excel is not a database"

It most certainly is used as one in the corporate world. That doesn't mean it's a very good one, in the sense that a hammer isn't a very good substitute for a screwdriver.

Nonetheless, where a businessman would have called IT to store a couple hundred records in 1970, now the businessman uses a spreadsheet as a database.

I agree. I LOL when I see people doing the equivalent of an SQL JOIN by hand (see the sketch below). To some extent, the older I get, the more I see people using computers as a somewhat different form of manual labor rather than as a replacement for manual labor. Rather than typing this form by hand on the manual typewriter, you use Word and call it progress.
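
For the record, the hand-collation I'm laughing at, matching rows across two sheets, is this much code once the data lives in a database (a minimal sketch; the database, table, and column names are made up):

    # The manual "cross-reference two sheets row by row" ritual is a
    # single JOIN once the sheets are tables. Illustrative schema only.
    import sqlite3

    conn = sqlite3.connect("example.db")
    query = """SELECT o.order_id, o.total, c.name
               FROM orders o
               JOIN customers c ON c.customer_id = o.customer_id"""
    for order_id, total, name in conn.execute(query):
        print(order_id, total, name)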


None the less, where a businessman would have called IT to store a couple hundred records in 1970, now the businessman uses a spreadsheet as a database.

In 1970 the businessman would probably have used a rather more old-fashioned method: a piece of paper and an assistant to write down the figures on a page which looked very like an Excel spreadsheet!


Well, that's kind of how things work now, which is why your physician sometimes has to check with your insurance carrier to see if they'll authorize work. For example, I recently saw a sports medicine doctor for what we feared was a supraspinatus tear. To diagnose that, we needed to order an MRI. My carrier signed off on the imaging, but it turns out that one of the other big carriers will refuse it as a matter of course until an x-ray has been performed.

Now, my doctor has forty years of experience, and if she orders an MRI, it's because she already knows an x-ray won't return any useful information and will just waste time, money, and her patient's radiation exposure limit. But, statistically speaking, the carrier in question has evidently determined that they'll save money if they force providers to route through x-rays regardless. In other words, my provider's judgement was overridden by some Excel spreadsheet somewhere -- not even a Watson or a Big Blue.

Meanwhile, doctors are being forced to handle more data entry functions, something they understandably hate, and which has probably turned more providers against the Affordable Care Act than anything else. (The provisions are, IMHO, unduly onerous even as I understand the anti-fraud rationale behind their introduction, but that's another issue for another time.)

Having said that, given the stringent requirements the FDA places on med-tech, I don't anticipate AI providing more than a support role over the next few decades. The FDA is always going to want a credentialed human in the critical chain between patient and procedure.


As an aside, I wonder how many doctors will simply start to opt out of the ACA/health-insurance racket. I know of a few who work on a cash-only basis, and they claim they are able to run their practices with vastly smaller staffs, spend more time with each patient, and still be profitable.


> The big one for me is when will we trust software to analyze contracts instead of lawyers?

When the cost-risk equation comes out in favor of the computers.


The flaw in this argument is that while advances in the power of machines are arriving ever faster and making humans less important, the power of humans to use those advances for ill is increasing at the same rate. Thus Google knows when you are going on a trip, but someone hacks Target for millions of credit cards in a way no one notices for weeks. Technology is a two-edged sword.


I don't think this point contradicts anything Friedman said:

Put all these advances together, say the authors, and you can see that our generation will have more power to improve (or destroy) the world than any before ...


I think he makes some interesting points. I do not think they are that amateurish. Like it or not, we're rapidly sliding into an era where human productivity is going off the charts. Unintended consequences are unavoidable. Nobody really knows what to do about it.


Over 60 years ago, Kurt Vonnegut wrote about the "man being replaced by machine" problem in his *1952* novel, "Player Piano."


The quote from Erik Brynjolfsson in this article is also right out of the Unabomber manifesto's vision of a dystopian future where machines make our decisions for us and the masses are ruled by a handful of elites who have access to the control of these systems.


I think Ned Ludd had a few things to say as well, back in 1779 or thereabouts[1].

[1]: http://en.wikipedia.org/wiki/Luddite



