The media are unwittingly selling us an AI fantasy (theguardian.com)
92 points by chobeat 33 days ago | 72 comments

I agree with the sentiment that the discussion about AI and Machine Learning should not be entirely driven by industry.

At the same time, this article seems to be a bit down on AI itself, and part of its message is that AI doesn't provide "relevant and competent" solutions to problems. It also sounds like it is writing off real work (both in industry and academia) focused on real ethical concerns with AI and what can be done to address them (e.g., the FAT* conference, a growing number of sessions on fairness, privacy, and other related topics at NeurIPS and ICML, etc.).

I think the most important issue is educating the general public about AI: giving them familiarity with what types of things are automatable, what can be learned from their data, where and how AI is being used in the real world today, and so on. A big part of this is having the mainstream media be a bit more self-guided.

A final thought: one of the suggestions from the Reuters article is that we should hear more from scientists and activists in the media. This seems a bit troubling to me, since in ML and AI research there are very strong ties between academia and industry (and people often move fairly freely between the two). I'm not sure we would hear a significantly different narrative if we talked to researchers in academia...

There is this humor show (The Fix, on Netflix) where they were supposed to "fix" AI.

I think it’s quite telling that the entire show instead ended up talking about robotics. I don’t think any of the participants reflected at all on how AI is used today in the systems they interact with.

So that’s probably the first step in educating the public: find a basic mental framework for thinking about AI that doesn’t involve robots or Skynet.

That's a really interesting point - it does seem like most people immediately jump to Skynet or robot overlords whenever the topic of AI comes up. I don't have any really solid suggestions for what a good mental framework for thinking about AI would be, but I think giving people an alternative to these unrealistic versions would be super helpful.

I always explain it using recommendation engines. You buy a vacuum and you get suggestions for 10 vacuums you might want to buy next. Not so good. You buy a film with Tom Cruise, and you see 10 other Tom Cruise movies suggested; that's not so bad.
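The core of such a recommender really is that simple, which is part of why it works as an explanation. A minimal sketch of item-to-item co-occurrence recommendation (the data and item names below are invented for illustration, not from any real product):

```python
from collections import Counter

# Toy purchase histories, invented for illustration.
baskets = [
    {"vacuum", "dust bags"},
    {"vacuum", "dust bags", "mop"},
    {"top gun", "mission impossible"},
    {"top gun", "jerry maguire"},
]

def recommend(item, baskets, k=2):
    """Recommend the k items most often bought alongside `item`."""
    co = Counter()
    for basket in baskets:
        if item in basket:
            co.update(basket - {item})  # count co-purchased items
    return [other for other, _ in co.most_common(k)]

print(recommend("vacuum", baskets))  # dust bags co-occurs twice, mop once
```

This also illustrates the failure mode above: the system has no notion of *why* items co-occur, so "people who bought a vacuum bought another vacuum" and "people who liked one Tom Cruise film liked another" look identical to it.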

Related question: Does anyone have a good, accessible summary article of where AI is at, what can/can't be automated, etc?

I don't actually know of anything like this, though I think it should probably exist somewhere. If anyone else knows, I'd be interested to see it too!

All the research in fairness, privacy, and other related topics at academic conferences is irrelevant if the industry continues to recklessly deploy unethical machine learning in production.

In fact, it can be argued that industry funds such research only to appear to be ethical and to give themselves cover to continue to use unethical machine learning.

I don't know if I'd say it's irrelevant if it's not used by the current industry. It's always helpful to understand what is possible and how to achieve it. And beyond actual application, formalizations of fairness notions and thinking about their implications for computing gives us another lens to think about fairness more broadly (as in, these kinds of technical discussions can give specificity and clarity to discussions of fairness even beyond machine learning and computing).
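As an example of what such a formalization looks like: one of the simplest fairness notions studied in this literature is demographic parity, which asks that a classifier's positive-prediction rate be similar across groups. A toy sketch, with invented numbers:

```python
# Demographic parity: compare a classifier's positive-prediction rate
# across groups. All data below is made up for illustration.

def positive_rate(predictions, groups, group):
    """Fraction of positive predictions within one group."""
    selected = [p for p, g in zip(predictions, groups) if g == group]
    return sum(selected) / len(selected)

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive rates between any two groups."""
    rates = {g: positive_rate(predictions, groups, g) for g in set(groups)}
    return max(rates.values()) - min(rates.values())

preds  = [1, 0, 1, 1, 0, 0, 1, 0]   # 1 = approved, 0 = denied
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(preds, groups))  # 0.75 for "a", 0.25 for "b": gap 0.5
```

The value of the formalization isn't the arithmetic, which is trivial; it's that it forces you to state precisely what "fair" means, and the literature shows that several equally plausible definitions can be mutually incompatible.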

To your second point, it's very difficult to attribute a single intention to large companies. There are certainly employees and researchers at large tech companies that work on fairness and privacy preservation because they think it's important and want to make their companies more responsible (I know some!). It's also certainly true that exploring this type of research looks good for the company, and some other employees are surely aware of that and promoting it for those reasons.

I suggest this reading to anybody interested in this topic: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3078224

It tracks the history of this narrative and explains who profits from it.

This phenomenon has a clear political and ideological root and both journalists, politicians and engineers should fight against it together.

If you want to explore more related topics there's plenty of related content in a reading list I've been developing in the last few months: https://github.com/chobeat/awesome-critical-tech-reading-lis...

From the abstract:

> In practice, the confusion around AI’s capacities serves as a pretext for imposing more metrics upon human endeavors and advancing traditional neoliberal policies. The revived AI, like its predecessors, seeks intelligence with a “view from nowhere” (disregarding race, gender, and class) — which can also be used to mask institutional power in visions of AI-based governance.


> The manufactured AI revolution has created the false impression that current systems have surpassed human abilities to the point where many areas of life, from scientific inquiry to the court system, might be best run by machines. However, these claims are predicated on a narrow and radically empiricist view of human intelligence. It’s a view that lends itself to solving profitable data analysis tasks but leaves no place for the politics of race, gender, or class. Meanwhile, the confusion over AI’s capabilities serves to dilute critiques of institutional power. If AI runs society, then grievances with society’s institutions can get reframed as questions of “algorithmic accountability.” This move paves the way for AI experts and entrepreneurs to present themselves as the architects of society.

This is implausible, to say the least. Isn't this basically a conspiracy theory? Not to mention, huge tech companies were already huge before the AI hype. I'm 4 pages in and I still don't know what the author's point is, other than promoting conspiracy theories and spreading FUD about AI.

They are not conspiracy theories, because they don't assume a concerted effort from these entities, just a common interest, and their actions converge to reinforce the same phenomenon.

The author is a systems biologist writing about sociology, so the paper should be read with the vocabulary of sociology, not colloquial language.

> They are not conspiracy theories

Agreed, I've read a bit more:

> The pragmatic interest on the part of industry is natural, since the behaviorist approach that has appealed to many AI researchers aligns with the profit motives of surveillance capitalism.

Still, the language is loaded, the examples of claims (of AI proponents) are cherry-picked, the limitations of the technology are misrepresented, and the results are dismissed because they don't take "historical context" or "emotions visible clearly on the faces" into account.

It doesn't read like a scientific paper at all - or is this what papers in non-STEM fields look like in general?

The language is loaded because it's part of an ongoing discourse that, by

>is this what papers in non-STEM fields look like in general?

you're not familiar with. And yes, this is quite a good paper by sociology standards. It comes from a STEM guy, so I think that's why I like his style of writing.

> the examples of claims (of AI proponents) are cherry-picked,

This is not hard science, where you have to find hard rules and a single counter-example breaks your argument. He's commenting on a trend that we can all relate to. Is it a vocal minority, or is it actually the vast majority of the industry/media? For that you can go back to the data, but that's not the goal of the paper, and it doesn't invalidate his thesis anyway, as long as the narrative dominates the public discourse.

> you're not familiar with.

Agreed - unfortunately, I'm not familiar with the field at all :(

> it doesn't invalidate his thesis anyway

But, but, his thesis is that we're inevitably heading toward a dystopia with "robo judges" and scientific pursuit being judged on "metrics"... And that "surveillance capitalism" companies and AI proponents are power-hungry demons who plan to use AI to force some unspecified "psychology model" on society as a whole!

Well, maybe that's how it is - I suspect it's not like that, but I can't know for sure. My objection is that, whatever the plans of evil corporations and traitors-to-humanity scientists, our current technology is nowhere near enabling any of the "changes to society" the author fears. The "superhuman intelligence" is not going to surface for a long time, and the "bots" which "write quality editorials and replace journalists" will probably be realized something like 5 years before that "superhuman intelligence", so also in the (very) far future. As for funding science, it's not AI (nor AI proponents) who come up with "metrics", but people, and they have done so for the past two centuries at the very least. Yes, it's dangerous, but it has nothing to do with AI. The paper makes it sound like the hype around AI is an imminent danger to society as we know it, but isn't it infinitely more probable that this hype will follow thousands of others and simply die out?

I guess what I want to say is that the gap in complexity between current ML-based solutions - also impressive in their own right - and any kind of real understanding is so vast that worrying about what will happen when AI becomes capable of the latter in no way justifies the sensationalist tone of the article.

Well, I could be misinterpreting the author due to my unfamiliarity, so maybe it's not particularly sensationalist for the field and I just misinterpreted it.

> But, but, his thesis is that we're inevitably heading toward a dystopia with "robo judges" and scientific pursuit being judged on "metrics"... And that "surveillance capitalism" companies and AI proponents are power-hungry demons who plan to use AI to force some unspecified "psychology model" on society as a whole!

I would argue that this is the present, not the future.

> Well, maybe that's how it is - I suspect it's not like that, but I can't know for sure. My objection is that, whatever the plans of evil corporations and traitors-to-humanity scientists, our current technology is nowhere near enabling any of the "changes to society" the author fears.

You don't need advanced technology for that. The existing technology is more than enough, and we're already seeing the devastating effects on society. I don't think the actual progress of the technology is relevant to the author's thesis: if it looks intelligent, they will apply the narrative and profit from it.

The author is talking about how the promise of a yet-to-come AGI helps build a narrative today that is used to exploit people. This is one thing. The dystopia is a critique of the narrative itself, which would lead to even further deterioration of the social fabric if it keeps being pursued. This is completely independent of whether the promise of AGI or something similar is ever fulfilled. As long as the narrative is believable, it will be used.

I feel the list misses a lot of the literature in "software studies", e.g.



Can you please create a PR? I will gladly go through the content

I'm actually not strong on those topics. It's on my neverending list, but...

Continental philosophy and "theory" are time-consuming in that one needs to grep for the truffles in the mud. I've spent >10 years reading Gilles Deleuze in my free time and now I see him as the harbinger of surprisingly useful ideas (many of which the "technical community" has discovered independently, from computer viruses to the tension between decentralization and centralization, but still really novel for 1970-1980, when his core works were published) -- but it was difficult getting there! I got into Deleuze because I fell in love with the prose, but you will be forgiven if you group him with Derrida and Foucault and even the French Marxists, because he's (1) a creature of his time and friends with many of those now-irrelevant figures and (2) fluid and ambivalent, almost as if required to be consistent with his own teachings.

So... I wish I had time for "software studies", but I'm currently working on a method for betting on horses^H^H^Hmarathon runners using the gamma process and the idea of risk-free measures from finance.

interesting list on github - thanks for sharing that here.

I'm looking at the "Coding is not fun" article right now... will also check out the "Manufacturing an Artificial Intelligence Revolution" paper that you linked above.

you're welcome ^^

The problem is that researchers and engineers rarely engage in the public debate except when they need to drive corporate interests.

The debate sees corporations and the tech elite on one side and artists, activists, and philosophers on the other, with journalists split between the two. Engineers are used by either side but don't have their own voice, mostly because they don't have informed opinions or the cultural means and interest to join a non-technical debate as a cohesive force.

The result is that the whole debate is conducted mostly by people who have no idea how this stuff really works and cannot separate marketing mumbo jumbo from the actual practice of building "AI" systems.

It's therefore very refreshing to see the work of artists like Hito Steyerl or ssbkyh, who actually learn how to work with deep learning and other techniques in order to create critiques of the existing narrative.

> mostly because they don't have informed opinions or the cultural means and interest to join a non-technical debate

This is somewhere between slander and trying to explain away other social groups bullying engineers.

The truth is much simpler: engineers have responded to being bullied out of the debate by simply ignoring it, and implementing AI without concern for the results of that debate. Engineers have that power -- they can unilaterally change society by implementing a new kind of mind without approval or buy-in from other parties.

So I think my position is the reverse of yours: if business leaders, activists, etc want to meaningfully impact the AI debate, they should engage with the engineers actually building it -- rather than having a debate among themselves.

> The truth is much simpler: engineers have responded to being bullied out of the debate by simply ignoring it, and implementing AI without concern for the results of that debate. Engineers have that power -- they can unilaterally change society by implementing a new kind of mind without approval or buy-in from other parties.

Truth is never simple. Engineers have been "bullied" out of the debate (this is not really how I see it, but ok) because they often hold beliefs that render the debate impossible. The narratives that "engineering is a pure discipline" or that "tools have no political color" are still strong despite countless counter-examples. In the case of software engineers left alone in the hands of managers and their interests, the social devastation that follows is very evident just by looking at the news.

> So I think my position is the reverse of yours: if business leaders, activists, etc want to meaningfully impact the AI debate, they should engage with the engineers actually building it -- rather than having a debate among themselves.

This is a real and pressing concern for many of them, but it's not easy. I'm an engineer and I'm trying to do just that, or at least bring the existing discourse to the engineers if I can't bring the engineers to the discourse. But believe me, the cultural and personal resistance is extremely strong, first of all because it forces them to re-evaluate their belief system and take responsibility for what they are doing and what they did in the past. Staying in an ethical comfort zone where you can ignore the consequences of your actions is much easier.

Well, but tools do have no political color. That's why The Open Source Definition has "No Discrimination Against Fields of Endeavor", period.

I can't be the only person who has seen FOSS people complaining bitterly that their work has been used in projects they object to.

"No Discrimination Against Fields of Endeavour" - like FOSS itself - is an absolutely huge win for the corporates.

Related: I used to know someone who designed missile guidance systems. His work was a purely theoretical problem solving exercise for him until he saw a missile he had worked on being used in news footage of a war.

That was when he realised that even though he wasn't discriminating against some fields of endeavour, the technology he was building most certainly did.

I have sympathy for anyone who regrets his mistakes, as I certainly regret my own. However, it can't really have been a total surprise that a missile guidance system was used in a war?

Depending on the circumstances it could absolutely be.

For example, many defense research labs have “red team” projects where new capabilities are developed strictly to understand adversary capabilities, feasibility / cost to extend a legacy system with modern tech, and many similar things. Kind of like Myth Busters but applied to questions about an adversary’s capabilities.

Some of these research labs are even joined with academic institutions that carry with them a strict ethics mandate that any and all such work can only be theoretical or defensive in nature, to assess and defend against threats, anticipate new threats or debunk claimed capabilities from existing adversaries, but absolutely never to carry out an offensive agenda, enhance existing attack capability or anything similar.

In a situation like that, you absolutely could be greatly surprised & upset to learn your defensive “myth busters” research is turned around and repurposed by another team or something to enhance attack capabilities.

It could be similar with computer security as well. Imagine putting your best effort into developing an attack, exploit or malware because you think it’s purely to determine if something could be done by an adversary, or to highlight a weakness purely for defensive purposes... only to find that it’s used to directly attack someone else after the research leaves your hands.

Again, this isn't meant as harsh criticism, since I know how easy it is to fool oneself. However, we're talking about another level of cognitive dissonance entirely when defense research lab staff discover only too late the purpose of "defense research". Maybe there was a sign on the door that said "for peaceful purposes only", but the main thing to remember about the war industry is that they lie.

I think you are imagining that it’s some small, throw-away comment, but it’s not necessarily.

MIT’s Lincoln Laboratory for example was originally opposed by the university president at the time with huge community backlash against the university becoming connected to a military research lab.

Part of the original charter of the lab was that its scope of operation was very, very strictly restricted to US air defense, and very strictly not the development of offensive capabilities. There was even a huge report commissioned by the university to detail exactly the defense needs and set boundaries around them in terms of what projects could possibly be approved for funding at the lab. The “no attack” component of this was a giant, first-order constraint of the whole multi-million dollar endeavor to even create the facility at all.

On the other hand, I do agree many other cases could be like what you suggested. Just saying not all.. and some of them would be directly, loudly predicated on “defense only” mandates where it would be a huge surprise if the research was subverted later.

The thread's hypothesis is that, regardless of the marketing, work in weapons-oriented research will create weapons. Does Lincoln undertake such research? Could it? You introduced LL into the conversation, presumably because you wouldn't be shocked to see weapons come from there too. I don't know much about LL, but I wouldn't be shocked either. I do believe you when you tell me that some people working there would be shocked.

Within the first 5-15 years of LL’s existence though, I would have been dramatically shocked if it engaged in projects that created offensive capabilities.

The Definition deliberately enforces non-discrimination. The point of that is to make sure that licensors don't "color" the tool by putting restrictions in their license (thus excluding some potential licensees).

Surely you can see that this has no logical bearing on whether any given tool has inherent "political color". Some tools, like some foodstuffs, could conceivably be inherently incompatible with particular religions, for example. The OSD doesn't take any position on that possibility because it's not regulating the nature of the tool, just what the license contains.

This is rather like "bits have no colour", which is true if you only look at the bits. It relies on not looking at the context, and then declaring the context irrelevant.

Whether something is relevant context or not is a personal and political decision.

Bits don't have color. Nor does arithmetic. The people who use them do, and the things those people do with bits and arithmetic might. It depends on who you ask... and what their politics suggest the appropriate color-carrying context is.

Aaaaaaaaaand here we go again.

But the leading researchers are very much on the hype train. I agree that lack of knowledge only makes it worse, but I feel like there's a lot more going on.

Hype in and of itself is a very attractive thing. For a practitioner it's hard to pass on a narrative that puts you in the center of the universe. And world-is-about-to-end/change alarmism is very attractive too.

And on the other side, skepticism and cynicism generally aren't things that spread easily, unless the hype is so absurd that the criticism itself becomes sensational. Which might be happening at the moment.

>Engineers are used by either side but they don't have their own voice and mostly because they don't have informed opinions or the cultural means and interest to join a non-technical debate as a cohesive force.

Not strictly true. Engineers do concern themselves with ethics, and these issues are discussed in professional journals and elsewhere (for example, look at what the IEEE has to say about the ethics of AI). It may not be public debate, but it is not ignored.

One thing you are missing is that engineers work for companies, which tend to place limits on what you can communicate publicly. Pretty much everything has to be released via official channels, which go through all the PR spin.

I can think of at least two reasons for this: 1. the information could be considered stock-market sensitive, and 2. it's very easy for something you say informally, 'off the cuff', to be taken as a company position by lazy media who can't distinguish between "engineer who works for company x says..." and "company x says...".

I am an engineer (I don't work in AI), but my company places strict limits on what we can say on social media, to reporters, at conferences, etc., so yes, that does exclude participation in 'the public debate'.

I don't know about the software world (I am the non-software type of engineer), but we had courses on professional conduct at university (which covered professional communication, alongside topics such as ethics, regulatory compliance, and liability).

I whole-heartedly agree. This is one of the reasons I think that education about AI and ML is important (and, on the flip side, I also think that it wouldn't hurt to educate AI and ML researchers/engineers about humanist issues so that they can participate in non-technical debate).

I found there's not much to learn about "humanist" issues, in the sense of a body of background knowledge the field agrees on. The United States and North Korea agree on nuclear physics: they use exactly the same numbers for cross sections. They don't agree much on humanist issues.

> The United States and North Korea agree on nuclear physics: they use exactly the same numbers for cross sections. They don't agree much on humanist issues.

This is true but concluding that therefore there's "not much to learn" seems like the worst possible way of resolving this conflict.

"Humanist issues" are about the management (not the solution) of unsolvable disagreements. The knowledge is different in nature from scientific knowledge, but for no reason should it be considered inferior. By thinking this way you just exclude yourself from the discourse and hurt your own position.

Replace "AI" with "Science" and the article's points are all pretty much the same. The media report on cancer cures, new batteries and energy tech, etc. all the time. It's the same thing: grand visions, money to be made, stories to sell.

Everyone reading here knows there is no 'AI', but everyone is happy to peddle, or sit back and watch, as pattern matching used to identify nudity is wildly hyped up as 'AI'. This is the same kind of fraud Theranos was guilty of.

There is nothing in the current programming paradigm to create an 'AI' as the 'world understands the term' but individuals and groups blinded by profit and greed are 'growth hacking' redefining and blurring the meaning of words to suit their commercial agendas.

The technical community at large would come out against this kind of widespread misinformation and abuse but they don't as everyone is looking for a gig, job, contract or more ominously some 'illusory power'. But that doesn't make it any less abusive of discourse.

How can we make an AI when we don't even understand human intelligence properly? What in current programming languages allows you to create anything that can 'think' and make decisions? How is processing data and matching patterns 'thinking' or AI in any way or form? A culture of hubris sees programmers constantly engaged in hype, overestimating their tools while underestimating the basic human intelligence required for even mundane tasks like driving.

But once you redefine and twist words, they lose all meaning for communication, and in the end you lose all credibility. But this seems to be a gold rush, and as long as some have made money along the way, who cares? If AI happens, it will come from decades of hard research in the scientific community, like all breakthroughs do, not from the trade.

>> Why do people believe so much nonsense about AI?

That is the central question. Why is it that people are prepared to believe in all the wild fantasies that the marketing arms of the tech industry come up with? Shouldn't the expectation be that the majority would instead be more cautious and avoid believing all the overblown promises, especially when they repeatedly fail to be fulfilled?

Agreed. My pet theory is that it boils down to how superficially accessible AI is. A conversation about AI is kind of like bikeshedding, except no one realizes it's actually a conversation about the nuclear power plant. Everyone's got practical, direct experience with sentience, with thinking, with speaking a language, with seeing things. Everyone can introspect for a little bit and make genuine claims to having contemplated the fundamental problems of AI.

And since so little genuine progress has been made on the really fundamental parts of the problem (the Hard Problem, as Chalmers would say, versus the somewhat more tractable 'parsing problems' that deep learning has made such great gains on), the most frustrating part is that it's really hard to prove some amateur thinker's conclusions wrong. You can bury them with the technical jargon of three thousand years of philosophy, but no one's been able to build a formally verifiable model of the whole thing so far, and so amateurs and experts are in some sense separated only by how impressively technical they sound when talking about the matter.

This problem is amplified by the technical jargon being painfully close to the technical jargon of philosophy of mind, and to plain lay terminology. So a person with technical knowledge can talk about, say, 'training', and a person with no technical knowledge can think they understand, because the word is so ill-separated from their own internal, non-technical definition.

It's so easy to have an opinion about AI, the fundamental ideas are so prevalent in day-to-day discourse, and it's so hard to prove any given opinion wrong.

The materialist philosophy of mainstream science and academia makes this palatable.

Today we believe in SCIENCE (whatever that means). Hence, we believe what any of its priests says unless countered by another (more famous) priest.

I wish I could say that people believed in science. Priests (literal ones, not metaphorical ones like your strawmen) seem to have more influence over public policy in much of the world though, including America, and more often than not to the detriment of all.

The benefit of being more honest about their role in society: they get to exploit it more thoroughly.

Speak for yourself. I personally prefer to evaluate the statements of scientists on a claim by claim basis.

That seems counter to the narrative that led to the current holder of the highest office in the United States, no? Because if his fans unquestioningly believe in science, then why do they disbelieve some tapas selection of the following: climate change, evolution, vaccines, gravity, a round Earth, and/or whether the moon landing happened?

It's a hot topic.

One very good argument I've read recently is that most of what pro-science people label as "anti-scientific" statements, like anti-vax stuff, are made by people who don't reject science as a whole, just science coming from what they perceive as a corrupted establishment. Clearly this involves cherry-picking that validates their ideological positions, or just following the twisted narrative of some political group.

Science, with a capital S, is still society's reference framework for truth (and this is a huge problem, but not the one we are talking about), but these people reject the priests and the religion, not the God. New priests are bending the same God to their interests, and the times favor them over the previous priests.

>Why is it that people are prepared to believe in all the wild fantasies that the marketing arms of the tech industry come up with?

Because very influential people have thrown their weight into the discussion I would say. You have Kurzweil who basically predicts that the singularity is imminent every five years, and you had Musk and Hawking painting apocalyptic scenarios about AI. That dominated the AI pop news headlines for weeks if not months.

On the other hand people like Andrew Ng or Kai-Fu Lee who actually work in the industry and draw attention to the socio-economic impact of AI, which is very real, seem to only find an audience within the community itself, if at all.

The reason is probably twofold. Terminator-esque stories are more interesting than talking about the economic impact of AI, and the industry has a lot more to gain by distracting people with grandiose scenarios than by talking about the negative impact they have on the lives of working people and society at large.

>You have Kurzweil who basically predicts that the singularity is imminent every five years

Kurzweil has always predicted the singularity for 2045. He may be wrong, but I don't find that prediction, or Musk and Hawking saying AI may be dangerous, to be wild fantasies. You can make reasoned arguments for all that stuff.

They want to believe.

Rule of thumb: if someone is talking about "AI" and not "Machine Learning", "Pattern Matching and Recognition", "Search heuristics" or "Statistics", it is most likely going to be bullshit.

On the other hand, OpenAI is not bullshit, so...

There are occasional readable pieces in the press...


TLDR: ML is like a credit score


(Granted the Register is not msm).

TLDR: AI is oversold and surprisingly fragile.


TLDR: The progression from math to AI.

I'm just going to leave this picture of the elephant in the living-room:


Eh, this week Scott Adams' Dilbert has been going that way:



Is this a real issue though? The article skips past what the problem is and focuses on why it exists by accusing the tech industry of writing articles about tech.

Besides the philosophical argument of whether pattern matching counts as AI, is there really a problem here? I have not heard of anyone affected by AI becoming a buzzword. Products are still evaluated on what they can actually do, so who cares?

Siri and Google can now take voice commands. I don't know of anyone who actually expected an intellectual conversation. They were never marketed as such.

Actually it is somewhat dangerous in that once politicians buy in, they start pursuing policies built around the assumption that this sort of thing is real. Billions will be wasted on military projects pursuing AI-enabled features that are nowhere close to being ready; police are blowing money on facial recognition systems based on misleading stats; and we already see local and national governments salivating over self-driving cars as a solution to public transportation woes. Tax money is and will continue to be given away to corporations who are selling the snake oil of self driving cars rather than that money being invested in real solutions for public transit, or even basic maintenance.

>...it goes like this...on balance AI will be good for humanity. Oh – and by the way – its progress is unstoppable ... The truly extraordinary thing, therefore, is how many apparently sane people seem to take the narrative as a credible version of humanity’s future.

I don't see what's extraordinary at all. AI's coming and will probably be net good. There may be some inaccuracies in how it's reported and corporations are not all angels but for what field is that not true?

Also, he says Theresa May has drunk the Kool-Aid by agreeing to fund AI, but what she's funding is machine learning for cancer detection, which is getting results like this:

>In tests, it achieved an area under the receiver operating characteristic (AUC) — a measure of detection accuracy — of 99 percent. That’s superior to human pathologists, who according to one recent assessment miss small metastases on individual slides as much as 62 percent of the time when under time constraints. https://venturebeat.com/2018/10/12/google-ai-claims-99-accur...

That's for metastatic breast cancer, which kills 500,000 people a year. Overall, Naughton's arguments seem a bit silly.
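Worth noting: the "99 percent" figure quoted above is AUC (area under the ROC curve), not accuracy. AUC is the probability that a randomly chosen positive case gets a higher score than a randomly chosen negative case. A minimal sketch of the metric, with made-up labels and scores purely for illustration:

```python
# Toy illustration of AUC (area under the ROC curve), computed directly
# as the Mann-Whitney U statistic: the fraction of positive/negative
# pairs in which the positive case is scored higher (ties count 0.5).

def auc(labels, scores):
    """AUC from binary labels (1 = positive) and classifier scores."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# One positive (0.4) is out-ranked by one negative (0.5), so 8 of the
# 9 positive/negative pairs are ordered correctly.
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.2, 0.1]
print(auc(labels, scores))  # 0.888...
```

A 99% AUC means the model ranks cases extremely well, but it says nothing directly about false-positive or false-negative rates at whatever threshold a clinic would actually operate at.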

To be fair though, those numbers are the results of a single evaluation of the algorithm against a single group of human experts:

> In the setting of a challenge competition, some deep learning algorithms achieved better diagnostic performance than a panel of 11 pathologists participating in a simulation exercise designed to mimic routine pathology workflow; algorithm performance was comparable with an expert pathologist interpreting whole-slide images without time constraints. Whether this approach has clinical utility will require evaluation in a clinical setting.

Certainly an impressive feat of image recognition, but far from revolutionising cancer diagnosis. It's also not clear to me that this would actually diagnose cancer more accurately if you factor in the ability of experts to consider other things than just the scans.

I guess so, though it seems promising. Not quite sure how it'd work in real life. I've got two friends who went to the doctor with a headache and sore throat and were told it was nothing to worry about, and then a year or two later found out it was cancer - one died rapidly, one is presently having a bunch of surgery. It would be good if there were a better way of screening that sort of stuff.

What a stupid article. It attacks Theresa May for saying AI will be good for healthcare; it will be.

A database parser is not AI; it is a parser. When a script compares real-world cause-effect relationships to database parses, then rewrites the database to accurately resemble real life, that is the beginning of AI. Learning is an aspect of intelligence, and the two feed back into each other in a less than simple manner. I think machine learning is a glimmer of the holy grail being glimpsed.

I guess such "counter-hype" feels like an attractive position to many. Just like general cynicism seems to be the mindset of our times. Maybe because people think being contrarian and/or negative makes them appear smart?

I don’t necessarily buy into any predictions of what machine learning will accomplish in the future. But just the examples already available today are quite stunning, especially in images and language.

In any case, I don't see how "the media" is at fault here. I see far more "hype" of AI among the tech community than in the larger media outlets. The possibility that AI could transform our economies certainly exists, and it would seem prudent to nurture the debate about the future of work even in the absence of certainty.

What is the tech community? The CEOs or the engineers/researchers/data scientists? Because they are not by any means the same entity.

You're exactly right. This is just contrarian click-bait leveraging general cynicism.

This sort of nonsense bothers me. So the Guardian, a member of the media, is telling us that the media are unwittingly selling an AI fantasy? And the fantasy is that it's positive? Most of the news I see about AI is handwringingly pessimistic, about AI being a disaster that will wipe out jobs and maybe the human race, which is the opposite of the thesis of the article. This sort of article is just trying to grab attention; it has no substance and took no thought, research, or deep insight to write.

And the idea that somehow we can get law before we work out the ethics of AI? You can't just pull fiat law out of a hat and expect it to work. How can you possibly expect the law to proactively regulate AI before we know what it is? Because at this point we don't know what it is; it's an evolving thing, just like the internet was when it first came out.

Frankly, it's disappointing that drivel like this can make it onto the front page of Hacker News.

> So the guardian, a member of the media, is telling us that the media are unwittingly selling an AI fantasy?

No, it is an opinion piece written by an academic that summarizes the findings of a research study of how media outlets covered AI in their articles.
