At the same time, this article seems to be a bit down on AI itself, and part of its message is that AI doesn't provide "relevant and competent" solutions to problems. It also sounds like they are writing off real work (both in industry and academia) on real ethical concerns with AI and what can be done to address them (e.g., the FAT* conference, a growing number of sessions on fairness, privacy, and related topics at NeurIPS and ICML, and so on).
I think the most important issue is educating the general public about AI and giving them familiarity with what types of things are automatable, what things can be learned from their data, where and how AI is being used in the real world currently, etc. A big part of this is to have the mainstream media be a bit more self-guided.
A final thought: one of the suggestions from the Reuters article is that we should hear more from scientists and activists in the media. This seems a bit troubling to me, since in ML and AI research there are very strong ties between academia and industry (and people often move fairly freely between the two). I'm not sure we would hear a significantly different narrative if we talked to researchers in academia...
I think it’s quite telling that the entire show instead ended up talking about robotics. I don’t think any of the participants reflected at all on how AI is used today in the systems they interact with.
So that’s probably the first step in educating the public: find a basic mental framework for thinking about AI that doesn’t involve robots or Skynet.
In fact, it can be argued that industry funds such research only to appear to be ethical and to give themselves cover to continue to use unethical machine learning.
To your second point, it's very difficult to attribute a single intention to large companies. There are certainly employees and researchers at large tech companies that work on fairness and privacy preservation because they think it's important and want to make their companies more responsible (I know some!). It's also certainly true that exploring this type of research looks good for the company, and some other employees are surely aware of that and promoting it for those reasons.
It tracks the history of this narrative and explains who profits from it.
This phenomenon has clear political and ideological roots, and journalists, politicians, and engineers should fight against it together.
If you want to explore more, there's plenty of related content in a reading list I've been developing over the last few months: https://github.com/chobeat/awesome-critical-tech-reading-lis...
> In practice, the confusion around AI’s capacities serves as a pretext for imposing more metrics upon human endeavors and advancing traditional neoliberal policies. The revived AI, like its predecessors, seeks intelligence with a “view from nowhere” (disregarding race, gender, and class) — which can also be used to mask institutional power in visions of AI-based governance.
> The manufactured AI revolution has created the false impression that current systems have surpassed human abilities to the point where many areas of life, from scientific inquiry to the court system, might be best run by machines. However, these claims are predicated on a narrow and radically empiricist view of human intelligence. It’s a view that lends itself to solving profitable data analysis tasks but leaves no place for the politics of race, gender, or class. Meanwhile, the confusion over AI’s capabilities serves to dilute critiques of institutional power. If AI runs society, then grievances with society’s institutions can get reframed as questions of “algorithmic accountability.” This move paves the way for AI experts and entrepreneurs to present themselves as the architects of society.
This is implausible, to say the least. Isn't this basically a conspiracy theory? Not to mention, huge tech companies were already huge before the AI hype. I'm 4 pages in and I still don't know what the author's point is, other than promoting conspiracy theories and spreading FUD about AI.
The author is a systems biologist writing about sociology, so the paper should be read with the vocabulary of sociology, not of colloquial language.
Agreed, I've read a bit more:
> The pragmatic interest on the part of industry is natural, since the behaviorist approach that has appealed to many AI researchers aligns with the profit motives of surveillance capitalism.
Still, the language is loaded, the examples of claims (of AI proponents) are cherry-picked, the limitations of technology are misrepresented and the results are dismissed because they don't take "historical context" or "emotions visible clearly on the faces" into account.
It doesn't read like a scientific paper at all - or is this how non-STEM papers look in general?
>is this how papers in non-STEM look like in general?
It probably reads that way because it's a style of writing you're not familiar with. And yes, this is quite a good paper by sociology standards. It comes from a STEM guy, which may be why I like his style of writing.
> the examples of claims (of AI proponents) are cherry-picked,
This is not hard science, where you have to find hard rules and a counter-example breaks your argument. He's commenting on a trend that we can all relate to. Is it a vocal minority, or is it actually the vast majority of the industry/media? For that you could go back to the data, but that's not the goal of the paper, and it doesn't invalidate his thesis anyway, as long as the narrative dominates the public discourse.
Agreed - unfortunately, I'm not familiar with the field at all :(
> it doesn't invalidate his thesis anyway
But, but, his thesis is that we're inevitably heading in the direction of a dystopia with "robo judges" and scientific pursuit being judged based on "metrics"... And that "surveillance capitalism" companies and the proponents of AI are power-hungry demons who plan to use AI to force some unspecified "psychology model" on the society as a whole!
Well, maybe that's how it is - I suspect it's not like that, but I can't know for sure. My objection is that, whatever the plans of evil corporations and traitors-to-humanity scientists, our current technology is nowhere near enabling any of the "changes to society" the author fears. The "superhuman intelligence" is not going to surface for a long time; the "bots" which "write quality editorials and replace journalists" will probably be realized something like 5 years before that "superhuman intelligence", so also in the (very) far future. As for funding science, it's not AI (nor AI proponents) who come up with "metrics" but people, and they have done so for at least the past two centuries. Yes, it's dangerous, but it has nothing to do with AI. The paper makes it sound like the hype around AI is an imminent danger to society as we know it, but isn't it infinitely more probable that this hype will follow thousands of others and simply die out?
I guess what I want to say is that the difference in complexity between current ML-based solutions (impressive in their own right) and any kind of real understanding is so vast that worrying about what will happen when AI becomes capable of the latter in no way justifies the sensationalist tone of the article.
Well, I could be misinterpreting the author due to my unfamiliarity, so maybe it's not particularly sensationalist for the field and I just misinterpreted it.
I would argue that this is the present, not the future.
> Well, maybe that's how it is - I suspect it's not like that, but I can't know for sure. My objection is that, whatever the plans of evil corporations and traitors-to-humanity scientists, our current technology is nowhere near enabling any of the "changes to society" the author fears.
You don't need advanced technology for that. The existing technology is more than enough, and we're already seeing its devastating effects on society. I don't think the actual progress of the technology is relevant to the author's thesis: if it looks intelligent, they will apply the narrative and profit from it.
The author is talking about how the promise of a yet-to-come AGI helps build a narrative today that is used to exploit people. That is one thing. The dystopia is a critique of the narrative itself, which would lead to even further deterioration of the social fabric if it keeps being pursued. This is completely independent of whether the promise of AGI is ever fulfilled. As long as the narrative is believable, it will be used.
Continental philosophy and "theory" are time-consuming in that one needs to grep for the truffles in the mud. I've spent >10 years reading Gilles Deleuze in my free time, and now I see him as the harbinger of surprisingly useful ideas (many of which the "technical community" has discovered independently, from computer viruses to the tension between decentralization and centralization, but still really novel for 1970-1980, when his core works were published) -- but it was difficult getting there! I got into Deleuze because I fell in love with the prose, but you will be forgiven if you group him with Derrida and Foucault and even the French Marxists, because he's (1) a creature of his time and friends with many of those now-irrelevant figures, and (2) fluid and ambivalent, almost as if required to be consistent with his own teachings.
So... I wish I had time for "software studies", but I'm currently working on a method for betting on horses^H^H^Hmarathon runners using the gamma process and the idea of risk-free measures from finance.
I'm looking at the "Coding is not fun" article right now ..will also check out the "Manufacturing an Artificial Intelligence Revolution" that you linked above.
The debate sees corporations and the tech elite on one side and artists, activists, and philosophers on the other, with journalists split between the two. Engineers are used by either side but don't have their own voice, mostly because they don't have informed opinions, or the cultural means and interest to join a non-technical debate as a cohesive force.
The result is that the whole debate is conducted mostly by people who have no idea how this stuff really works and who cannot separate marketing mumbo jumbo from the actual practice of building "AI" systems.
It's therefore very refreshing to see the work of artists like Hito Steyerl or ssbkyh, who actually learn to work with deep learning and other techniques in order to create critiques of the existing narrative.
This is somewhere between slander and trying to explain away other social groups bullying engineers.
The truth is much simpler: engineers have responded to being bullied out of the debate by simply ignoring it, and implementing AI without concern for the results of that debate. Engineers have that power -- they can unilaterally change society by implementing a new kind of mind without approval or buy-in from other parties.
So I think my position is the reverse of yours: if business leaders, activists, etc want to meaningfully impact the AI debate, they should engage with the engineers actually building it -- rather than having a debate among themselves.
Truth is never simple. Engineers have been "bullied" out of the debate (this is not really how I see it, but ok) because they often hold beliefs that render the debate impossible. The narratives that "engineering is a pure discipline" or that "tools have no political color" are still strong despite countless counter-examples. In the case of software engineers left alone in the hands of managers and their interests, the social devastation that follows is evident just by looking at the news.
> So I think my position is the reverse of yours: if business leaders, activists, etc want to meaningfully impact the AI debate, they should engage with the engineers actually building it -- rather than having a debate among themselves.
This is a real and pressing concern for many of them, but it's not easy. I'm an engineer and I'm trying to do just that, or at least bring the existing discourse to the engineers if I can't bring the engineers to the discourse. But believe me, the cultural and personal resistance is extremely strong, first of all because it forces them to re-evaluate their belief system and take responsibility for what they are doing and what they did in the past. Staying in an ethical comfort zone where you can ignore the consequences of your actions is much easier.
"No Discrimination Against Fields of Endeavour" - like FOSS itself - is an absolutely huge win for the corporates.
Related: I used to know someone who designed missile guidance systems. His work was a purely theoretical problem solving exercise for him until he saw a missile he had worked on being used in news footage of a war.
That was when he realised that even though he wasn't discriminating against some fields of endeavour, the technology he was building most certainly did.
For example, many defense research labs have “red team” projects where new capabilities are developed strictly to understand adversary capabilities, feasibility / cost to extend a legacy system with modern tech, and many similar things. Kind of like Myth Busters but applied to questions about an adversary’s capabilities.
Some of these research labs are even joined with academic institutions that carry with them a strict ethics mandate that any and all such work can only be theoretical or defensive in nature, to assess and defend against threats, anticipate new threats or debunk claimed capabilities from existing adversaries, but absolutely never to carry out an offensive agenda, enhance existing attack capability or anything similar.
In a situation like that, you absolutely could be greatly surprised & upset to learn your defensive “myth busters” research is turned around and repurposed by another team or something to enhance attack capabilities.
It could be similar with computer security as well. Imagine putting your best effort into developing an attack, exploit or malware because you think it’s purely to determine if something could be done by an adversary, or to highlight a weakness purely for defensive purposes... only to find that it’s used to directly attack someone else after the research leaves your hands.
MIT’s Lincoln Laboratory for example was originally opposed by the university president at the time with huge community backlash against the university becoming connected to a military research lab.
Part of the original charter of the lab was that its scope of operation was very, very strictly restricted to US air defense, and very strictly not the development of offensive capabilities. There was even a huge report commissioned by the university to detail exactly the defense needs and set boundaries around them in terms of what projects could possibly be approved for funding at the lab. The “no attack” component of this was a giant, first-order constraint of the whole multi-million dollar endeavor to even create the facility at all.
On the other hand, I do agree many other cases could be like what you suggested. Just saying not all.. and some of them would be directly, loudly predicated on “defense only” mandates where it would be a huge surprise if the research was subverted later.
Surely you can see that this has no logical bearing on whether any given tool has inherent "political color". Some tools, like some foodstuffs, could conceivably be inherently incompatible with particular religions, for example. The OSD doesn't take any position on that possibility because it's not regulating the nature of the tool, just what the license contains.
Bits don't have color. Nor does arithmetic. The people who use them do, and the things those people do with bits and arithmetic might. It depends on who you ask... and what their politics suggest the appropriate color-carrying context is.
Hype in and of itself is a very attractive thing. For a practitioner it's hard to pass on a narrative that puts you at the center of the universe.
Then world-is-about-to-end/change alarmism is very attractive too.
And on the other side skepticism and cynicism generally isn't something that spreads easily. Unless the hype is so absurd that the criticism itself becomes sensational. Which might be happening at the moment.
Not strictly true. Engineers do concern themselves with ethics, and these issues are discussed, including in professional journals (for example, look at what the IEEE has to say about the ethics of AI). It may not be public debate, but it is not ignored.
One thing you are missing is that engineers work for companies, which tend to place limits on what you can communicate publicly. Pretty much everything has to be released via official channels, which go through all the PR spin.
I can think of at least two reasons for this: (1) the information could be considered stock-market sensitive, and (2) it's very easy for something you say informally "off the cuff" to be taken as a company position by lazy media who can't distinguish between "engineer who works for company x says..." and "company x says...".
I am an engineer (I don't work in AI), but my company places strict limits on what we can say on social media, to reporters, at conferences, etc., so yes, that does exclude participation in "the public debate".
I don't know about the software world (I am the non-software type of engineer), but we had courses on professional conduct at university (which covered professional communication, alongside topics such as ethics, regulatory compliance, and liability).
This is true but concluding that therefore there's "not much to learn" seems like the worst possible way of resolving this conflict.
There is nothing in the current programming paradigm that can create an "AI" as the world understands the term, but individuals and groups blinded by profit and greed are "growth hacking" the term, redefining and blurring the meaning of words to suit their commercial agendas.
The technical community at large should come out against this kind of widespread misinformation and abuse, but they don't, as everyone is looking for a gig, job, contract or, more ominously, some "illusory power". That doesn't make it any less an abuse of discourse.
How can we make an AI when we don't even understand human intelligence properly? What in current programming languages allows you to create anything that can "think" and make decisions? How is processing data and matching patterns "thinking" or AI in any way, shape or form? A culture of hubris has programmers constantly engaged in hype, overestimating their tools while underestimating the basic human intelligence required for even mundane tasks like driving.
But once you redefine and twist words, they lose all meaning for communication, and in the end you lose all credibility. This seems to be a gold rush, though, and as long as some have made money along the way, who cares? If AI happens, it will be from decades of hard research in the scientific community, the way all breakthroughs happen, not in the trade.
That is the central question. Why is it that people are prepared to believe in all the wild fantasies that the marketing arms of the tech industry come up with? Shouldn't the expectation be that the majority would instead be more cautious and avoid believing all the overblown promises, especially when they repeatedly fail to be fulfilled?
It's so easy to have an opinion about AI, the fundamental ideas are so prevalent in day-to-day discourse, and it's so hard to prove any given opinion wrong.
One very good argument I've read recently is that most of what pro-science people label as "anti-scientific" statements, like anti-vax stuff, are made by people who don't reject science as a whole, but only science coming from what they perceive as a corrupted establishment. Clearly this involves cherry-picking that validates their ideological positions, or it's just following the twisted narrative of some political group.
Science, with a capital S, is still society's reference framework for truth (and this is a huge problem, but not the one we are talking about), but these people reject the priests and the religion, not the God. New priests are bending the same God to their interests, and the times favor them over the previous priests.
Because very influential people have thrown their weight into the discussion I would say. You have Kurzweil who basically predicts that the singularity is imminent every five years, and you had Musk and Hawking painting apocalyptic scenarios about AI. That dominated the AI pop news headlines for weeks if not months.
On the other hand people like Andrew Ng or Kai-Fu Lee who actually work in the industry and draw attention to the socio-economic impact of AI, which is very real, seem to only find an audience within the community itself, if at all.
The reason is probably twofold. Terminator-esque stories are more interesting than talking about the economic impact of AI, and the industry has a lot more to gain by distracting people with grandiose scenarios than by talking about the negative impact they have on the lives of working people and society at large.
Kurzweil has always predicted the singularity for 2045. He may be wrong, but I don't find that prediction, or Musk and Hawking saying AI may be dangerous, to be wild fantasies. You can make reasoned arguments for all that stuff.
TLDR: ML is like a credit score
(Granted the Register is not msm).
TLDR: AI is oversold and surprisingly fragile.
TLDR: The progression from math to AI.
Besides the philosophical argument of whether pattern matching counts as AI, is there really a problem here? I have not heard of anyone affected by AI becoming a buzzword. Products are still evaluated on what they can actually do, so who cares?
Siri and Google can now take voice commands. I don't know of anyone who actually expected an intellectual conversation. They were never marketed as such.
I don't see what's extraordinary at all. AI's coming and will probably be net good. There may be some inaccuracies in how it's reported and corporations are not all angels but for what field is that not true?
Also, he says Theresa May has drunk the Kool-Aid by agreeing to fund AI, but what she's funding is machine learning for cancer detection, which is getting results like this:
>In tests, it achieved an area under the receiver operating characteristic (AUC) — a measure of detection accuracy — of 99 percent. That’s superior to human pathologists, who according to one recent assessment miss small metastases on individual slides as much as 62 percent of the time when under time constraints. https://venturebeat.com/2018/10/12/google-ai-claims-99-accur...
That's for metastatic breast cancer, which kills 500,000 people a year. Overall, Naughton's arguments seem a bit silly.
> In the setting of a challenge competition, some deep learning algorithms achieved better diagnostic performance than a panel of 11 pathologists participating in a simulation exercise designed to mimic routine pathology workflow; algorithm performance was comparable with an expert pathologist interpreting whole-slide images without time constraints. Whether this approach has clinical utility will require evaluation in a clinical setting.
Certainly an impressive feat of image recognition, but far from revolutionising cancer diagnosis. It's also not clear to me that this would actually diagnose cancer more accurately if you factor in the ability of experts to consider things other than the scans alone.
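For anyone unfamiliar with the AUC figure quoted above: it is the probability that the classifier ranks a randomly chosen positive case above a randomly chosen negative one, so 0.5 is chance and 1.0 is perfect separation. Here's a minimal, dependency-free Python sketch of the idea (illustrative only; the function name and toy data are mine, not from the study):

```python
def roc_auc(labels, scores):
    """Area under the ROC curve via the rank (Mann-Whitney) formulation.

    labels: iterable of 0/1 ground truth
    scores: iterable of classifier scores (higher = more likely positive)
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    # AUC = P(score of a positive > score of a negative), ties count as 1/2
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: one positive outranked by a negative, so AUC < 1
print(roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # → 0.75
```

Note that a 99% AUC on a benchmark says nothing by itself about clinical utility; it measures ranking quality on that dataset, not real-world diagnostic workflow.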
I don’t necessarily buy into any predictions of what machine learning will accomplish in the future. But just the examples already available today are quite stunning, especially in images and language.
In any case, I don’t see how "the media" is at fault here. I see far more "hype" of AI within the tech community than in the larger media outlets. The possibility that AI could transform our economies certainly exists, and it would seem prudent to nurture the debate about the future of work even in the absence of certainty.
And the idea that somehow we can get law before we work out the ethics of AI? You can't just pull fiat law out of a hat and expect it to work. How can you possibly expect the law to proactively regulate AI before we know what it is? Because at this point we don't know what it is; it's an evolving thing, just like the internet was when it first came out.
Frankly it's disappointing that drivel like this can make it onto the front page of HackerNews.
No, it is an opinion piece written by an academic that summarizes the findings of a research study of how media outlets covered AI in their articles.