- there is no explanation of what the "AI philosophy" or "AI way of thinking" is according to the authors
- according to the authors this undefined philosophy has caused economic and social damage
- the authors propose "focusing on the human" as an alternative but the two aren't mutually exclusive at all
It's just a cheap formula - keep the target vague, blame the vague target for real problems without proving causation, propose an alternative that's good on its own but doesn't contradict any real form of the vaguely defined target.
I haven't read a lot of Wired but this just comes across as poorly thought out at best. Or worse, it's just manipulative.
There's a famous paper by Dijkstra where he claims that using anthropomorphic terms for computers is a sign of immaturity of the discipline. I used to think that was a bit extreme, but the more people keep talking about fucking AI, the more I'm convinced he was actually right.
Machine learning is linear algebra, nothing more nothing less. Making a model, using it wrong, and then complaining that "the model failed" is a unique kind of stupidity that's becoming more and more popular with "hoi polloi" due to garbage articles like this.
I hate to be that guy, but most ML is definitely not linear algebra. The most popular and useful ML algorithms are boosted trees and random forests; they're more or less geometric approximators (which are all nonlinear). And as ANN nerds like to say, there's plenty of nonlinearity in neural approaches.
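To make the nonlinearity point concrete, here's a toy sketch (scikit-learn, made-up data, purely illustrative): a random forest happily fits a curved target that no single straight-line fit can.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import LinearRegression

    X = np.linspace(-3, 3, 200).reshape(-1, 1)
    y = X.ravel() ** 2  # a plainly nonlinear target

    forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
    line = LinearRegression().fit(X, y)

    print(forest.predict([[0.0]]))  # close to the true value, 0
    print(line.predict([[0.0]]))    # around 3: the best straight line misses badly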
But mostly I agree with you: modeling is horse shit, worshipping models (linear and otherwise) is childish idolatry and Dijkstra was completely correct about just about everything (speaking as an APL bum).
> I hate to be that guy, but most ML is definitely not linear algebra...
Yes, I'm not disagreeing with any of that (see also my other reply), it was a misuse of terminology on my part.
> But mostly I agree with you: modeling is horse shit, worshipping models...
Absolutely. The way I see it is that models are like theorems: they are useful and worthy of being studied, but if you forget to validate that your hypotheses are correct, you get wrong results: garbage in, garbage out.
An alarming amount of stuff these days is the equivalent of people trying to apply the Pythagorean theorem to a non-right triangle and then complaining that the results don't match.
Good points, you guys. I think it would be fair to say that "AI" is nothing but algebra run by an automaton. The rest is a mix of hype and nerds taking advantage to manipulate people for whom this concept is out of intellectual reach. Depending on the case, this can enable evil at global scale. As always with humanity, the problem is ideology.
This kind of argument is common in this sort of discussion, and more than a bit reductionistic. Extreme complexity often comes from very simple processes at scale.
Your computer's abilities are the result of little switches flipping between two states.
> Your computer's abilities are the result of little switches flipping between two states.
I don't think this is a reductionist view of a computer. That's exactly what it is, and with very little imagination it describes what it has the capacity to do. What computers do now wouldn't be a surprise to people in the 1940s.
It is reductionistic, because so much comes from how you arrange those switches, the current state of the switches, and the sequence in which you tell those switches to switch. And I think most in the 1940s would be very surprised at what computers can do now. It only takes little imagination because we’ve already seen it in action.
Similarly, you can say that deep learning is just a series of matrix multiplications with nonlinearities applied, which is true, but it certainly wasn’t obvious to most that scaling that way up would lead to computers being able to interpret images, sound, and text. For example, my AI course professor described neural nets as something of interest mostly for historical reasons, something that was tried but turned out to be an evolutionary dead end. And he was no slouch.
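To spell out what "a series of matrix multiplications with nonlinearities" looks like, here's a toy forward pass, just a sketch in NumPy with random placeholder weights (illustrative only, not any particular trained model):

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal(4)        # input vector
    W1 = rng.standard_normal((8, 4))  # first layer's weights
    W2 = rng.standard_normal((3, 8))  # second layer's weights

    h = np.maximum(0, W1 @ x)         # matrix multiply, then ReLU nonlinearity
    y = W2 @ h                        # another matrix multiply
    print(y)                          # raw scores for three outputs

Scale the widths and depth up by several orders of magnitude and fit the weights to data, and that is structurally all that's going on.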
The hype has gotten ahead of its current abilities, but I simultaneously think that it will be one of the most impactful inventions we’ve ever created.
> The hype has gotten ahead of its current abilities, but I simultaneously think that it will be one of the most impactful inventions we’ve ever created.
This statement is one of the things I find most fascinating about the current state of AI. Do I think deep learning is incredibly useful? Absolutely and I look forward to seeing more awesome stuff done with it in the future.
Do I think the singularity or "true" AI will be based on deep learning? Not really. I strongly suspect anything capable of reaching scifi levels of intelligence (which a lot of hype says or strongly implies is imminent) will have to be a difference of kind, not of degree.
I could be wrong, of course, but it will be interesting to wait and see.
Yeah, I tend to be more in the difference in degree camp, with the caveat that it's pretty clear we're at least a few structural/mechanistic discoveries away from mimicking human-style learning. We need to make advances in semi-supervised learning, as well as increasing sample efficiency and reducing catastrophic forgetting. But my intuition says that the solutions will be pretty simple, and seem obvious in hindsight.
But I'm also guessing we'll come up with an endless number of complex tweaks to get the current stuff closer before someone figures out the simple mechanics and sweeps most of that away.
It manipulates people who can't grasp it. It sets up implementors for failure when the hype settles. To an extent, it's responsible for the peaks and valleys you inevitably see in funding.
I wouldn't say modeling is horse shit. It's really marketing surrounding modeling that's horse shit.
Modeling is actually very, very useful but is too often oversold, which drives me bananas. It's especially bad in the world of data-driven or data-intensive models because that's trendy and sells software, hardware, and services. The problem is that various forms of modeling are complex and therefore expensive, and convincing people to invest in furthering the domain results in these poor public misrepresentations and overselling.
Modeling gives us ways to abstract reality and attempt to predict the future or find patterns we might otherwise miss. Sometimes it works, a lot of times it doesn't, but when it does it's great. Every time, though, it's expensive, and people need to feel they didn't waste their money.
And here's a famous old paper by McDermott that makes the same point but specifically for AI. Using terms like "understanding" to mean the effect of a certain algorithm is fooling yourself into thinking you've accomplished something you haven't.
> There's a famous paper by Dijkstra where he claims that using anthropomorphic terms for computers is a sign of immaturity of the discipline.
If you take the word "computer" out of the sentence this statement is likely backwards. Putting the word computer back in doesn't change that.
Look, for example, at Dennett's seminal book, "The Intentional Stance" for a philosophical discussion of the expressive and reasoning power behind statements like "The thermostat tries to keep the temperature at 60 degrees".
Exactly. Anthropomorphic language is such a useful and normal way of thinking and expressing ourselves.
If you say "the classifier thinks this roadsign is a truck" everyone knows you mean "the classifier has labeled this roadsign as probably a truck with high confidence". No-one would seriously take you to mean the classifier thinks in the same way a person thinks, any more than if you say "The toaster wants to pop up but it's jammed," they would take you to mean the toaster has a yearning like unrequited love.
If we remove our ability to use language in this way we give up on a mental model that is incredibly useful for simplifying lots of explanations and reduce ourselves to a very clumsy literalism instead. As long as we understand the limitations of this model and are clear that it's not literally what is going on, it's very useful.
My problem is not with natural language in and of itself; it's obvious that natural language is extremely useful. I'm arguing one should be extremely careful, however. Natural language doesn't have deduction. It doesn't have definitions. It doesn't have ways to restrict us from making vague statements. It is subject to non-unique interpretations.
This is particularly relevant when technicians and scientists speak to the general public. Let me quote one of your examples:
> "the classifier thinks this roadsign is a truck" everyone knows you mean "the classifier has labeled this roadsign as a probably a truck with a high confidence".
The fact that the classification is probabilistic is obvious to me and to you, but nothing in the natural wording suggests that, and it's a pretty important caveat.
It's not enough for us to understand the limitations of natural language, everyone in the conversation has to, for it to be a useful tool.
"Thinks" in that context is inherently probabilistic. No one uses the phrase "I think..." to literally state that they have a thought process going through their head, they do it to clarify that the following statement is based on their own conclusions rather than objective reality.
No one except Descartes: "I think therefore I am."
But all this does is underline that natural language is a vague mesh of concept categories anchored to words with multiple overloaded meanings.
One of the reasons philosophy exists is because sometimes it attempts to untangle the overloading.
The argument about AI is no different, but with the complication that the marketing concept mesh has different requirements to the research concept mesh, and both are different to "Make this work by next Tuesday" mesh.
Google used to have an ancestor of word2vec which tried - very crudely - to map some of these meshes. Predictably it was much better at mapping word meshes than concept meshes, but it was still an interesting approach.
The underlying problem is that we literally don't have a clear representation system for these concept meshes. Words are overloaded, code and math are too specific in their different ways. The various shades of ML are still too concrete.
So we're going to keep having these debates until someone invents a representation system for natural language concept maps that isn't based on verbal approximations.
Not arguing, just suggesting an alternative phrasing that might better convey the intent. When people say thinks in this sense they mean determines to a certain level of confidence, rather than modelling consciousness.
I think ML is a bit too complex (different approaches: trees, DNNs, ...) to oversimplify it like that. One could say it's just math, but even experts can't really explain some of why backpropagation in DNNs works as well as it does. A lot of research is empirical, like the switch to ReLU...
ML is a commodity now, even popular with artists to create art. Being a commodity means you don't have to understand the inner workings to benefit from it. I don't see anything wrong with that, but a lack of proof for biases/backdoors/bugs in models is an issue that I'm worried about when it's applied too enthusiastically in critical areas.
The article is weak though, and its conclusion is hard to follow.
I knew an older programmer who used the phrase "parasitic interpretation" to describe the human tendency to assume a thing's name relates to its function. This is the idea that something called `loop_index` is the index for a loop, when it could actually be anything. He strongly felt that semantic content in variable names should be minimized.
I haven't run into the phrase anywhere else, and Google doesn't turn up any programming-related results that I can tell, but I've found the idea endlessly useful. In talking about programs and the goal of programs, we often use language that describes things we have not yet been able to construct. The language has a parasitic relationship to the thing it names - misdirecting our understanding of the machine it sits on.
I try to practice a lot of skepticism around language and computers. It is very easy to name a thing and much harder to make the machine behave in the way the language suggests.
You may like the article "Troubling Trends in Machine Learning Scholarship", Lipton & Steinhardt, 2018 https://arxiv.org/abs/1807.03341
Especially the trend of "misuse of language, e.g., by choosing terms of art with colloquial connotations or by overloading established technical terms"
Yeah, this is an interesting take. Broadly speaking I agree, but you have to be pragmatic about it. There exist some accepted conventions it's just senseless to break merely to make a point. For example, if you are defining a ring, making + distribute over \* rather than the usual opposite is not inherently wrong, but I would classify it as bad notation.
Similarly in OOP, you could technically name getters in any way you want, but why would you not conform to the accepted "getValue()" standard?
Of course fooling yourself into thinking you have achieved things you actually haven't by clever usage of natural language is the opposite end of the spectrum, and it's exactly what I was ranting against.
I'm not claiming it's literally a linear model, obviously. The point is that they are relatively simple mathematical models (that do use linear algebra inside: https://en.wikipedia.org/wiki/Artificial_neuron, does the output formula remind you of anything?), different from linear regressions in scope and use but not conceptually.
Attaching (meta)physical meanings ("neurons", "intelligence") to any of that is dubious at best.
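To spell out that rhetorical question, here's a minimal sketch (NumPy, made-up weights): an artificial neuron's output is just a linear-regression-style weighted sum pushed through an activation function.

    import numpy as np

    def linear_regression_predict(w, b, x):
        return w @ x + b       # plain weighted sum plus intercept

    def artificial_neuron(w, b, x, phi=np.tanh):
        return phi(w @ x + b)  # the same sum, wrapped in a nonlinearity

    x = np.array([1.0, 2.0, 3.0])
    w = np.array([0.5, -0.2, 0.1])
    print(linear_regression_predict(w, 0.3, x))  # 0.7
    print(artificial_neuron(w, 0.3, x))          # tanh(0.7), roughly 0.604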
How is this different from saying that a brain is "just" a bunch of atoms, which individually are quite simple? It's when you put the individually simple pieces together and run an optimiser that you get something clever. The result of that optimiser is a complex and very large array of weights.
The difference is that we have neither the scientific knowledge nor the computational power to make an exact model of the human brain (nor are we anywhere close to either, AFAIK).
In other words, we cannot pinpoint the exact sequence of operations and events that produce a certain behavior, while we absolutely can do that with ML models (you could in principle run machine learning models with pen and paper, obviously).
If you could show me a program that emulates my brain to a certain degree of precision (it's unlikely that such a program would be a deterministic one, by the way), I would have no problem accepting that my brain is not meaningfully different from that program.
"Intelligence" is a word that, etymologically and semantically, is related to human or human-like capabilities. You wouldn't say that a leaf floating on a lake is swimming, and likewise, claiming that computers are "learning" or "intelligent" is at best a thin analogy and at worst a mischaracterization of the process.
What's happening in my brain is something we don't have full scientific knowledge of, but we know it's not x86 machine code. While the two processes may be in many ways similar, conflating the two into this ill-defined concept of "intelligence" is a discussion about semantics more than anything else.
> "Intelligence" is a word that, etymologically and semantically, is related to human or human-like capabilities. You wouldn't say that a leaf floating on a lake is swimming.
The definition of words changes in response to increasing knowledge - just take 'energy' for example. One cannot establish truths about the world by arguments from usage. (On the other hand, to be clear, I do not think that the current state of AI merits being called "intelligence". What happens in the future is speculation.)
> What's happening in my brain is something we don't have full scientific knowledge of, but we know it's not x86 machine code.
The introduction of x86 machine code at this point seems to be moving away from your original claims about "AI" being "just" relatively simple (though not simply linear) mathematical models, which are not "just" machine code either. The interesting (and very much open) question is how much of intelligence can be modeled in this way, and what else, if anything, is necessary.
The more you stress the simplicity of these models, the more intriguing their achievements seem.
> The definition of words changes in response to increasing knowledge
The usual process in mathematics and science is that you have a phenomenon that everyone agrees exists but nobody can quite put their finger on, so someone proposes a formal definition, and if that definition turns out to be adequate, people work on the formal definition. That's much easier because you can now use math, statistics, formal methods, etc.; a prime example of this is the notion of "computability".
I don't believe that we are seeing the same thing with the concept of "intelligence", this is probably in part because it's much harder to capture the concept in a formal definition. Computers do computable stuff. Overlapping that notion with "intelligence" serves no purpose in my opinion: it explains nothing, it doesn't clarify anything, and it's certainly not obvious that the two are related.
> which are not "just" machine code either
I'm using "machine code" as proxy for "instructions/lambdas/whatever for a computational model of your choice", which they certainly are.
> The more you stress the simplicity of these models, the more intriguing their achievements seem.
It's not my intention to downplay any of the achievements of "AI". They are certainly not less intriguing when viewed from my perspective, the same way a compiler is not less intriguing if you think it's "just code".
My point is that any association of a formal concept (math, models, etc.) with philosophical concepts (intelligence, "truths about the world", consciousness, etc.) is always on thin ice, because natural language and formal concepts are hard to mix. Especially so when the concepts at play are so ephemeral.
In the past, when a construct like 'intelligence' has been hard to pin down, science moves on--leaves it to 'philosophy' and works with formal definitions.
Would you say that's one part of your point? Just to clarify, I am only responding to that part of your point.
By way of example, behaviorists declared a strict subset of the human experience to be in the purview of scientific study--and that may even have been just fine for that era.
You seem to be sweeping something important under the rug because it's hard to pin down, and saying that this is what science has done in the past--and if so, you're right.
But there's an assumption--an assumption that it is safe to sweep things under the rug like that. That assumption may prove false, and if it does, we're screwed.
Convinced that AGI is not something to worry about? Fine--but surely you agree there's such a thing as an information hazard? That is, information that can be deadly in the wrong hands: like how to create the next COVID, or how to make a nuclear bomb. In past eras of human history, knowledge was not as powerful. Today and ever more so in the future, whether humanity can Get Things Right will matter.
So from my perspective, it doesn't matter that whatever 'intelligence' is, is hard to pin down: it's still got to be figured out, whether or not it's difficult.
> In the past, when a construct like 'intelligence' has been hard to pin down, science moves on--leaves it to 'philosophy' and works with formal definitions.
Yes, that's part of the point, your wording is better than mine. If we're sticking to a purely historical perspective, this is definitely what happened. Most disciplines that today we (rightfully) regard as fully independent, originally splintered off philosophy (the most obvious examples are mathematics and physics, but even something like economics, in spite of having become more formalized recently, undoubtedly originates from moral philosophy).
> You seem to be sweeping something important under the rug because it's hard to pin down, and saying that this is what science has done in the past--and if so, you're right.
I won't deny that strictly speaking there is a bit of inductivism at play here. Historically, the scientific approach of limiting the domain of discourse to a tractable subset has been so much more productive and successful than any alternative that my, as it were, "bayesian prior", is that we should replicate the same approach if possible at all.
> So from my perspective, it doesn't matter that whatever 'intelligence' is, is hard to pin down: it's still got to be figured out, whether or not it's difficult.
This is a reasonable position, but wouldn't you agree that it's more of a "moral intuition" (not that there's anything wrong with that!) than a position regarding how ML results ought to be interpreted? As such I have no real counterpoints to offer, except perhaps a utilitarian point of view: are you really sure that banging your head against this very specific wall is the most productive thing to do?
Most problems I see with AI arise from either flat out using the models incorrectly (i.e. mathematically wrong, not ethically wrong, which is what I was pointing out in my previous comments) or from already familiar "political" problems, i.e. incentives, transparency, privacy, openness of the decision-making process.
The good news is that none of that is new. The bad news is that our track record as a species on problems of that kind is abysmal. I doubt that, in any case, trying to halt scientific progress makes sense.
People learn to hit a target by changing the structure of their brain to fit the task. Computers become better at hitting a target by changing a data structure. That seems directly analogous to me. Critically, learning doesn’t imply the ability to perfectly execute the task.
Your response on definitions actually supports my point on the matter: definitions follow from knowledge ("a phenomenon that everyone agrees exists") and are modified in response to new knowledge ("if that definition turns out to be adequate..." - and if not?) As before, "energy" stands as an example of how it works, and "computability" did not enter the lexicon until there was a use for it.
Nevertheless, I agree that in the specific case of current AI, using the word "intelligence" is misleading. I do not, however, think this misuse has any serious consequences, as, to reverse how I put it before, usage does not establish truths about the world.
>> which are not "just" machine code either
> I'm using "machine code" as proxy for "instructions/lambdas/whatever for a computational model of your choice", which they certainly are.
Then that is an unfortunate choice of proxy, unless, perhaps, you intended to imply that it is a priori impossible for intelligence to be created by running x86 machine code. It was not clear to me whether, by introducing machine code into the discussion, you were making some sort of argument from incredulity against the possibility of AI.
> My point is that any association of a formal concept (math, models, etc.) with philosophical concepts (intelligence, "truths about the world", consciousness, etc.) is always on thin ice, because natural language and formal concepts are hard to mix. Especially so when the concepts at play are so ephemeral.
At least since Newton, mathematical models have proved very useful in discerning truths about the world. Are we to just assume they will not work for the biological phenomena of intelligence and consciousness?
> Your response on definitions actually supports my point on the matter
I'm afraid I failed to understand your point, then. I don't have a problem with what you said there.
> using the word "intelligence" is misleading. I do not, however, think this misuse has any serious consequences
This is where I fundamentally differ. Its misuse implies a connection between a formal model (algorithm expressed in a computational model) and a philosophical concept (intelligence) that's dubious at best. On a conceptual level, this makes it harder to reason clearly about those fundamentally mathematical and abstract concepts, and on a concrete level, it misleads the public at large, implying that certain goals have been reached when that's plainly untrue. That's pretty "serious" in my book.
> Then that is an unfortunate choice of proxy, unless, perhaps, you intended to imply that it is a priori impossible for intelligence to be created by running x86 machine code.
Again, I'm afraid I don't understand your objection. It's widely accepted that all reasonable computational models are equivalent. Citing x86 was colorful language; it clearly has no bearing on the point at large. Machine learning algorithms are clearly computable, which means they are expressible as Turing machines, terms of a classical untyped lambda calculus, Python scripts, C++ template metaprograms, or anything else. They are literally just programs.
> At least since Newton, mathematical models have proved very useful in discerning "truths about the world." Are we to just assume they will not work for the biological phenomena of intelligence and consciousness?
I certainly believe mathematical models to be useful; you would be hard pressed to say otherwise.
The ontological status of scientific theories is, however, at the very least a debatable topic. One need not believe Newtonian mechanics is ontologically true; it's a tenable position to claim it's just a model, and we accept that model because it's useful.
Specifically, one could easily argue that Newtonian mechanics is false, because, for example, it fails to accurately predict Mercury's orbit.
Similarly, one need not believe ML is anything more than relatively simple math to find it useful.
> I'm afraid I failed to understand your point, then. I don't have a problem with what you said there.
You have to go back a couple of posts to see the point. There, you wrote "'Intelligence' is a word that, etymologically and semantically, is related to human or human-like capabilities. You wouldn't say that a leaf floating on a lake is swimming." As we are now agreed that definitions follow from knowledge and are modified in response to new knowledge, it would not be somehow wrong to extend the concept of intelligence to a certain class of machines, if it turns out to be useful and informative to do so.
> They are literally just programs.
I take it, then, that you don't agree with the sort-of Platonist view that algorithms have an existence independently of any implementation? I'm on the fence, myself, but lean towards the Platonist side.
Regardless, it follows from your position here that your original statement "What's happening in my brain is something we don't have full scientific knowledge of, but we know it's not x86 machine code" can be rewritten as "What's happening in my brain is something we don't have full scientific knowledge of, but we know it's not computable." - but while the former is true, the status of the latter is not yet decided, so they are not identical propositions.
> One needs not believe Newtonian mechanics is ontologically true, it's a tenable position to claim it's just a model, and we accept that model because it's useful.
One could say the same about specific ontologies - they are as subject to revision in the face of increasing knowledge as are both mathematical models and individual words - and if it turns out that a mathematical model of biological intelligence or consciousness is effective and useful, it would be tendentious to imagine an ontological line between that model and intelligence.
> it would not be somehow wrong to extend the concept of intelligence to a certain class of machines, if it turns out to be useful and informative to do so.
No objections, but it's a pretty big "if". You could restate my point as "there is no evidence it is in fact useful".
> I take it, then, that you don't agree with the sort-of Platonist view that algorithms have an existence independently of any implementation? I'm on the fence, myself, but lean towards the Platonist side.
I don't; my position is essentially formalist. While I believe most research mathematicians would side with me here, your position is absolutely valid; famously, Kurt Gödel was a Platonist, as are many others. My only observation here is that even from a Platonist point of view you are not really rejecting formalism, at least in the sense that while you don't agree it's the only view, I find it impossible to argue that one can't view mathematical objects as formal constructs.
> "What's happening in my brain is something we don't have full scientific knowledge of, but we know it's not computable." - but while the former is true, the status of the latter is not yet decided, so they are not identical propositions.
No, I don't really hold that view. I probably worded that badly. My position is that the latter is unknown, and I would be content to accept that my brain is not fundamentally different from an algorithm if you showed me an algorithm that can effectively emulate my brain within an acceptable margin of error. Ironically, if it were possible to do that, it would be proof there is no "intelligence", only "computability", making the former entirely redundant.
> One could say the same about specific ontologies -
Again, this is really thin ice. I don't really know what to think about this because it's getting too abstract for my monkey brain, but it's certainly not obvious that a better model implies an ontological line between itself and reality. To put it bluntly, is a better model really "more true", i.e. qualitatively different from a worse one?
Can you specify what you mean by "emulate my brain within an acceptable margin of error"? What would this mean as an actual experiment? Depending on your answer, I think we can actually test your implicit proposition that no such algorithm exists.
I'm not actually claiming no such algorithm exists, as I have already stated, but simply that such a thing is not known to exist.
I'm not an expert in neurosciences so I can only give an informal description. Let's also remove "me" from the equation, let's talk about a randomly chosen human H. We know for a fact that there is nothing physically special about human brains, they are just ordinary organic matter. This matter forms a system subject to the laws of physics. With enough computational power and scientific knowledge (we have neither as of now, AFAIK), we could write a program for a quantum Turing machine that runs a 1:1 simulation of H's brain in software. Any quantum program can be emulated by a Turing machine equipped with sufficient random numbers with at most an exponential slowdown, making this program computable in exactly the classical sense.
My questions are (1) is it possible, even in principle, to make a program of this kind? (2) Would such a program be sufficiently predictive (with any statistical notion of that concept you prefer) of H's behavior?
If there exists a program that satisfies both (1) and (2), then I'm content with the notion that I am, myself, not significantly different from such a program.
Hmm, this still doesn't quite answer my question at the level of concreteness I was looking for. But thank you for clarifying.
The thing is, you can only define predictive accuracy relative to some experimental design. Otherwise you can always claim that there is some unknown, unperformed experiment where the predictions of the model and your actual behaviour would diverge to a greater degree than is permissible by your accuracy threshold, no matter how many successful experiments have already been done in constrained conditions.
Imagine a task where you have to classify images as being of dogs or non-dogs. We can already train a model that can almost perfectly predict the choices you would make during the runs of such an experiment. But we obviously wouldn't call such a model a "model of your brain"!
My question is this: what would be a sufficient experimental design or empirical criterion to decide that some program is a model of you? The loosest criterion I could imagine would be something like "can successfully deceive your loved ones into believing they are you in a single text chat of unbounded duration with some extremely high success rate." Recent advances in NLP lead me to believe that we'll be able to reach at least this level of fidelity quite soon.
To be fair, qsort is not insisting on seeing an algorithm that is metaphysically identical with a human mind, nor claiming that such an algorithm would be a p-zombie, devoid of subjective experiences, which are both positions that you might find from dualist philosophers.
> You could restate my point as "there is no evidence it is in fact useful"
And if you had originally stated your point that way, I would probably have pointed out that there is equally no evidence that it will not be useful, if it turns out to be the case.
> ...but it's certainly not obvious that a better model implies an ontological line between itself and reality.
Clearly, I failed to get my point across, to the point where I cannot guess where this question is coming from. Let's see if I can be clearer...
My position on the definitions of words is that they are contingent on our knowledge and that new knowledge can change our definitions (I gave 'energy' as an example, and you appear to have accepted this point a couple of posts back.)
I take the same view of ontologies; the categories we see are contingent on what we know and may change as our knowledge increases. This should not be surprising, given that ontologies are specific cases of words with meanings that nominally pertain to how the world is. There is no implication here that a better model implies an ontological line between itself and reality; rather, the point here is effectively a "so what" reply to your statement, "it's a tenable position to claim [Newtonian mechanics] is just a model, and we accept that model because it's useful." Mutatis mutandis, as they say, and consequently, there is no justification for holding on to old ontologies if new facts suggest a better alternative, any more than there is for models or theories.
> To put it bluntly, is a better model really "more true", i.e. qualitatively different from a worse one?
Did you mean to write that, especially given that, in your previous post, you offered an argument for the proposition that "Newtonian mechanics is false, because, for example, it fails to accurately predict Mercury's orbit"? If there is a relevant point here, I think it is that "more true" models are qualitatively better (and quantitatively better, also).
> Ironically, if it were possible to [show an algorithm that can effectively emulate your brain within an acceptable margin of error], it would be proof there is no "intelligence", only "computability", making the first entirely redundant.
How so? If "intelligence" is a useful concept now (and your objection to "artificial intelligence" seems to be predicated on it being so), when we do not know if the mind is computationally modelable, why would this usefulness necessarily vanish if this turns out to be the case?
> And if you had originally stated your point that way, I would probably pointed out that there is equally no evidence that it will not be useful, if it turns out to be the case.
No objections, but isn't it a bit weird to argue that we should do that just in case it might be useful someday? We'll deal with it when it comes up.
> Clearly, I failed to get my point across
I'm sorry, I'm pretty sure there's an argument but I just don't get it. I'm not really following the train of thought anymore.
>> I'll take the seventh.
>? - I'm not familiar with this expression.
> No objections, but isn't it a bit weird to argue that we should do that just in case it might be useful someday?
That is not an argument that we should do that just in case it might be useful someday, it is just a response to the quoted statement. As you know, I am not in favor of the current usage of "intelligence" in AI, and the only thing we differ on in that regard is whether it matters much.
> I'm sorry, I'm pretty sure there's an argument but I just don't get it.
It is an argument that ontologies are not privileged, canonical or fixed ways of representing the world; they have to conform to current knowledge as it evolves, or be replaced, and they are only interesting if they can "earn their keep" by being useful. Consequently, I do not think your argument from ontology, that this abusage of "intelligence" is a big deal, is definitive.
It’s weird that you compare AI to the human brain. I think the goal here in the long term is to surpass the mere human brain. For that reason it’s stupid to try and model it at all. Maybe some future neural architecture will make the human brain look like a computer built using vacuum tubes.
I don't disagree with this. It was a misuse of terminology on my part, my broader point has absolutely nothing to do with the mathematical details of neural networks.
The reason for the nitpicking is that for me - as a trained mathematician - and seemingly some other commenters, the gap between linearity and non-linearity is quite substantial even within mathematics. This is also true in general. Non-linearity might not be sufficient, but it is certainly necessary for any definition and/or serious discussion of AGI.
Just insert the words "these days" and "mostly", and you're pretty much covered. Also an honorable mention for calculus, perhaps. Even decision-tree methods very commonly rely on gradient boosting.
I searched for the "famous paper" in question and could not find it - can you point us to a link?
So far I've found two of Dijkstra's talks/writings on anthropomorphism.
I thought gradient descent was mostly calculus, not linear algebra. I was under the impression linear algebra was used to frame calculations so that GPUs could be utilized (since GPUs are very good at LA operations)
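Concretely, I mean something like this toy least-squares loop (NumPy, made-up data, just a sketch): the gradient formula comes from calculus, but it's written as matrix operations so the heavy lifting maps onto GPU-friendly linear algebra.

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.standard_normal((100, 3))      # data matrix
    y = X @ np.array([2.0, -1.0, 0.5])     # targets from a known linear rule
    w = np.zeros(3)                        # parameters to learn
    lr = 0.1                               # learning rate

    for _ in range(200):
        grad = X.T @ (X @ w - y) / len(y)  # calculus: gradient of mean squared error
        w -= lr * grad                     # gradient descent step
    print(w)                               # approaches [2, -1, 0.5]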
> "AI" is best understood as a political and social ideology rather than as a basket of algorithms. The core of the ideology is that a suite of technologies, designed by a small technical elite, can and should become autonomous from and eventually replace, rather than complement, not just individual humans but much of humanity.
On economic and social damage: The article mentions that less than 10 percent of the US workforce is counted as employed by the technology sector, that essential contributions of others aren't counted as "work" or compensated, that this has "hollowed out" the economy and contributed to the concentration of wealth in an elite, and that this has contributed to concentrating power as well.
On mutual exclusivity: The article proposes paying people for contributions, rather than writing them off as non-work. It also mentions humans with "AI resources" outperforming AI alone.
> - there is no explanation of what the "AI philosophy" or "AI way of thinking" is according to the authors
I think this point was implied here: "A clear alternative to “AI” is to focus on the people present in the system. If a program is able to distinguish cats from dogs, don’t talk about how a machine is learning to see. Instead talk about how people contributed examples in order to define the visual qualities distinguishing “cats” from “dogs” in a rigorous way for the first time. There's always a second way to conceive of any situation in which AI is purported. "
It's basically like saying it's all really just "curve-fitting", a mathematical tool which requires smart mathematicians and programmers to implement successfully, not anything genuinely intelligent about the software.
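In the most literal sense, something like this (NumPy, toy data, purely to illustrate the "curve-fitting" framing):

    import numpy as np

    rng = np.random.default_rng(2)
    xs = np.linspace(0, np.pi, 50)
    ys = np.sin(xs) + 0.05 * rng.standard_normal(50)  # noisy observations

    coeffs = np.polyfit(xs, ys, 3)        # least-squares cubic fit
    print(np.polyval(coeffs, np.pi / 2))  # roughly 1.0, i.e. sin(pi/2)

Everything beyond that is engineering to make the fitting work at scale, which takes real skill, but none of it requires the software to be intelligent.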
Now we're talking about humans doing data labor to help enable computer systems to tell cats from dogs in images in a rigorous way. Now that we've rejected the AI philosophy and refuse to use the AI way of thinking and remember and center the human, what have we gained?
I feel like I'm talking to a college sophomore who has learned that framing and jargon are useful rhetorical tools, but hasn't learned that they serve useful purposes.
> Now that we've rejected the AI philosophy and refuse to use the AI way of thinking and remember and center the human, what have we gained?
Very simple. If we contextualize the contribution of the people, then why is it that Google captures most of the value of search ads, rather than the people who created the web pages that Google scraped? Why don’t they get a greater say in Google’s comment policies and ranking biases, including influencing which algorithms are appropriate for which context?
Because of the humans -- the content consumers, like me and you.
There were a number of websites which relied on humans (like the old Yahoo catalog and DMOZ). They still exist (I think), but humans did not choose them.
There are a whole bunch of search engines other than Google, and I am sure some of them give way more control to creators. Humans did not choose them either -- likely because other humans filled them with spam and SEO.
It is all about humans -- they are the real decisive force because they get to choose if AI lives or dies.
(That said, as a part of this "decisive force", I feel modern Google products have way too many ads, and that current developments are against my interest. I guess this does show that as long as your software is great, you can charge a great price, in $$$ or in attention to ads.)
I'm afraid I don't understand. In what way does contextualizing and centering human agency and humanity change how these questions are asked and considered in ways unavailable to us today?
Wouldn't we also need to weight the human work and labor that went into all that scraping, organizing, and making Google possible? Wouldn't we need to consider the humanity of the person making the query, who may not want what the human who controls a given website that was scraped by the humans at Google wants?
> I'm afraid I don't understand. In what way does contextualizing and centering human agency and humanity change how these questions are asked and considered in ways unavailable to us today?
Because today, Google’s search algorithm is assumed to have maximum value among the pieces, and Google gets to capture far more value than the people who generate the training data for search.
> Wouldn't we need to consider the humanity of the person making the query, who may not want what the human who controls a given website that was scraped by the humans at Google wants?
Oh, absolutely! The point is that would further lower the weight/importance/power we ought to allocate to Google’s algorithms. Again, prioritizing the human roles & needs, rather than fetishizing the “AI”.
I'd say the authors have impressive resumes, though the arguments about economics and society are too far out there for me. So I didn't read very far into that stuff to see what they have gained from the position.
But I see it more as using accurate descriptions rather than just alternatives, which does have value in not misleading people.
They didn't make the point very well, but I think they may have been trying to argue that algorithms, while pretending to be objective, have the biases of their creators built in. The ideology they are critical of being that algorithmic decision making is the best or most objective form of decision making.
I strongly suspect that the author of this article does not have much background or interest in AI, and is merely part of a push to politicize it.
I have spoken to people who are adamant that AI is racist and needs to be forced to change, but seem at a loss to go into even layman levels of technical details.
So I suspect that they are just parroting a received opinion.
To be clear, of course I know that there are all sorts of systematic forms of unfairness. But I think that AI is just an implementation of systems.
The AI umbrella extends as far as what are essentially spreadsheets. Are spreadsheets unfair, or are the policies unfair? Does it make sense to lobby Microsoft to do something about predatory lenders using Excel?
> According to the authors this undefined philosophy has caused economic and social damage
> The authors propose "focusing on the human" as an alternative but the two aren't mutually exclusive at all
Yes, it's rather vague.
What seems to be annoying the original author is simple. Most of the areas where machine learning is currently deployed are somewhat obnoxious. Ad targeting. Face recognition. Behavior recognition. Those are areas where some error is expected, so mediocre ML performance is acceptable. ML isn't yet good enough to drive, for example. That has a lower tolerance for errors.
All this is "focusing on the human". In the sense of "Big Brother is Watching You". Be careful of what you ask for.
The AI way of thinking is: AI is better than humans, and that it will replace our labor and our decisions. Therefore we must focus and invest in AI, as well as fear it, because it is taking over.
The damage done is obvious. We discount human agency and human labor in the face of this idea. Products are being sold based on this idea. Foreign and domestic policy, security and privacy, and private and public funding are being redirected by this idea.
If one were to simply replace "AI" with "machine learning", which is the name of the actual technology in most of these cases, in most of these contexts the hype and ideology would go away. And with it take away the attention and the money. We would then be able to focus on better things.
I wouldn't blame all that on the term "AI". There was quite a bit of hype before that under the label of "big data". In Germany it's trendy to talk about Industry 4.0 and "digitalization". You can generate quite a lot of buzz even without talking about "AI".
Exactly. "AI" used by the media, by marketing, and by science fiction has created its own concept, and that concept having consequences beyond the technologies associated with the label is what the article is about.
That's not what I meant. I mean that overusing the term "AI" is kinda bad and annoying but I think it's not doing the heavy lifting in the rapid adoption of these technologies and the buzz it creates.
Even if we just talked about large-scale high-performance statistical data processing, or "algorithmic automation" (AA) it would still be used everywhere, it would just be more difficult for laypeople to come to grips with it.
It's a bit as if people around 2000 were saying it's stupid to talk about "cyberspace", it's not some new space out there, it's just people sitting behind their computers connected by wires! But that term was not the reason for the hype.
Now, sure, the label AI does contribute its share to the hype, but this stuff is so good that whatever you call it (as long as it's not too many syllables), it will become a buzzword just for that reason. It's like the euphemism treadmill: you can rename the concept, but it will re-acquire the same meanings. I mean, "convolutional" managed to become quite a buzzword itself (even though it's a well-defined technical term), as it appeared so often alongside really sexy stories.
"the holy grail of AI, it's ultimate goal, is AGI"
I don't think it is an explicit goal for many people - most people seem to be focusing on using 'AI' techniques to solve narrowly defined practical problems that were difficult or impossible with other techniques.
Edit: AGI was perhaps more of an explicit goal during the phase of "good old fashioned" symbolic AI that peaked in the 1980s and early 1990s.
Full AGI has little economic use aside from a brief resurgence of slavery. Its real value lies in creating an alien, terrestrial, intelligence to which we can compare and learn about ourselves.
Philosophically AGI is ancient and has always been about humans.
>“AI” is best understood as a political and social ideology rather than as a basket of algorithms. The core of the ideology is that a suite of technologies, designed by a small technical elite, can and should become autonomous from and eventually replace, rather than complement, not just individual humans but much of humanity. Given that any such replacement is a mirage, this ideology has strong resonances with other historical ideologies, such as technocracy and central-planning-based forms of socialism, which viewed as desirable or inevitable the replacement of most human judgement/agency with systems created by a small technical elite. It is thus not all that surprising that the Chinese Communist Party would find AI to be a welcome technological formulation of its own ideology.
Jaron Lanier is a frustrating thinker. In general he's a pessimist and highly critical of AI and technology. I was at a talk of his and asked him about open source AI models and suggestions for ethical frameworks to guide AI research. He gave a rambling answer about how we need to pay all the human annotators ever involved in producing a model and that AI is fundamentally anti-human. His justification was an odd anecdote about an elementary school student asking what's the point of human life if robots will do everything in the future. Jaron doesn't provide a meaningful way to engage with his critique, and it seems the logical conclusion of his view is that we abandon technology altogether.
I do agree with the underlying claim that modern AI research erases the human effort (everything from Mturkers to exploited labor) in producing the annotations that most AI systems rely on. It's also fair to critique the proliferation of surveillance states and authoritarian governments that build upon and fund current AI research.
There isn't good ethical guidance or frameworks to help researchers navigate doing research and understanding the implications of their work. I'd prefer critiques of AI help guide some sort of ethical framework for understanding, developing, and deploying these technologies responsibly. I don't think we can just stick our heads in the sand and pretend the problem goes away, or that abandoning AI in liberal democracies somehow stops authoritarian regimes from building even worse things.
The usual narrative goes like this: Without the constraints on data collection that liberal democracies impose and with the capacity to centrally direct greater resource allocation, the Chinese will outstrip the West.
In surveillance, yes. But that's not where China's productivity comes from. China does have some centrally directed resource allocation, but it's at a very coarse level. See "Made in China 2025".[1] That's industrial policy. Japan became an industrial power in the 1970s and the 1980s through good industrial policy, run by the Ministry of Trade and Industry. It's a help when you're developing a country, if done well. Most of the "Asian tigers" did that. Done badly, it's a disaster.
What seems to make China go now is a large number of medium-sized companies aggressively competing. Like the US up to the 1980s.
Also, in AI the real innovation seems to come mostly from the West - Google, Facebook, OpenAI, Tesla, DeepMind, etc. We're not going to get AGI by having lots of people label images.
> Jaron doesn't provide a meaningful way to engage with his critique and it seems the logical conclusion of his view is that we abandon technology all together.
While it may appear that way, I wonder whether he might be speaking past many people involved in applying ML on human data.
As a starting point for establishing clarity, do you recognize/understand that one of the core ideas of his theme is human agency? And what he means by “humanity” (our sense of purpose/meaning tied closely with competence & judgement) and why he fears it might be overwhelmed by AI? Since human nature is “reflexive”, being infantilized or treated with certain biases by algorithms will push humans to become like that. The technical way to phrase this is that statistical modeling assumes static distributions, but the actual distribution of human behavior responds/adapts to these assumed models (“distribution shifts”). Pause and think about that for a moment.
Eg, the question he discusses in the talk you link to: if AI could (someday) do everything (some very broad range of things), then what is the point of human life?
If your answers are going to emphasize convenience & improvements and opportunities for better consumption, then you are ignoring the fundamental premise of the question. He’s pointing out a perspective that is deeply at odds with the assumptions which drive today’s computing/ML related industry. Are you saying you’d prefer he stops asking inconvenient questions?
> As a starting point for establishing clarity, do you recognize/understand that one of the core ideas of his theme is human agency? ... Eg, the question he discusses in the talk you link to: if AI could (someday) do everything (some very broad range of things), then what is the point of human life?
Yes, and I disagree with that framing of humanity. It is at best fundamentally nihilist and at worst reduces human value to its ability to do work and produce value (ironic given his Marxist interpretation of data annotators). AI is a threat in Jaron's framing because it displaces humans' ability to do work and therefore devalues human life. Existence in and of itself is meaningful, and I believe we define meaning for ourselves. I find his framing of humanity deeply problematic.
>If your answers are going to emphasize convenience & improvements and opportunities for better consumption, then you are ignoring the fundamental premise of the question... Are you saying you’d prefer he stops asking inconvenient questions?
I don't see why having a pragmatic critique of his position misses his point. I have a critical theory background and it's been valuable for framing the world and being self-reflexive in my work as an AI researcher.
The problem with his framing is that there is no space for engagement in the present. He seems resigned that the world will be overtaken by AI (he's literally said that both in the talk above and elsewhere). I live in the present and believe (perhaps naively) that I have some agency to impact the future. I'd rather see critical conversations converge towards helping us create a better future (specifically, have discussions on how to do AI research ethically).
> It is at best fundamentally nihilist and at worst reduces human value to its ability to do work and produce value
I see Jaron Lanier’s position as exactly the opposite, in an interesting accounting of second order effects.
“We shape our tools, and then our tools shape us.”
You are imagining that AI is going to be fantastically impactful in reducing human drudgery, which might lead to a flowering of human potential. JL argues that most AI applications today equate human development with increasing consumption (and people’s interaction with the products/services they use is very passive), which means that as AI gets better, the ambient environment discourages human development. Imagine you satisfied every whim of a child, making it so comfortable that it never has to “grow”; that’s how technology is infantilizing us (AI is just the latest and most impactful layer on top).
> I live the present and believe (perhaps naively) I have some agency to impact the future.
To give the lion’s share of the credit/benefits/power to AI/algorithms vests all the agency in the hands of those who can control/influence them, and condemns those who cannot to a life “below the API” (even if they might be crucial to generate training data) where their voices are drowned out.
As JL says multiple times, he is very excited about (the same) AI research when we perceive it as “just” code/models/linear-algebra, because then suddenly the questions we pose are from a position of much higher human agency (probing the system and how we might influence it), rather than a position of awestruck breathless learned helplessness to the machine.
Instagram and YouTube are examples of algorithms optimizing the wrong thing. Whatever you like, it gives you more of it. More moreness, to the point people find themselves in dark echo chambers. Eventually you stop seeing contrary opinions or viewpoints. You stop growing.
> I see Jaron Lanier’s position as exactly the opposite, in an interesting accounting of second order effects.
Interesting. I had the opportunity to talk to him after his talk in Boston, he was pretty pessimistic and struggled to define his conception of humanity without a negative referent.
> You are imagining that AI is going to be fantastically impactful in reducing human drudgery ...
Nope, I never said that, nor do I imagine humanity in a world where AI magically solves all problems. I call for more conversations about ethics because I do believe it has the potential for negative implications for society.
My definition of humanity lies outside of technology and is ontological. Here's a thought experiment where Jaron's definition of humanity fails: in a world where we have infinite resources and everything, what do you do? It's a thought experiment I run often to ground my values and try to define myself as more than my work. I think of those around me, my time with my loved ones and friends. Humanity and meaning become, for me, the ability to enjoy and witness the lives of those around me, the relationships I cultivate, and the shared experiences I have. If Jaron is perplexed by a kid asking what's the point of life if robots can do everything, his framing of humanity seems severely lacking.
> To give the lion’s share of the credit/benefits/power to AI/algorithms vests all the agency in the hands of those who can control/influence them, and condemns those who cannot to a life “below the API” (even if they might be crucial to generate training data) where their voices are drowned out.
This is not unique to AI. The digital divide existed before AI. It's a collective action problem with opportunities for public policy solutions.
> As JL says multiple times, he is very excited about (the same) AI research when we perceive it as “just” code/models/linear-algebra ...
Sure define AI research whatever way you want. That's not particularly helpful though to change the outcome he fears. It doesn't provide a way to predict unintended or intended consequences of fundamental research questions in context of societal impacts.
> I had the opportunity to talk to him after his talk in Boston, he was pretty pessimistic and struggled to define his conception of humanity without a negative referent
I thought the video you shared had a great 10-15 minute segment starting around ~50:00.
> Sure define AI research whatever way you want. That's not particularly helpful though to change the outcome he fears.
JL’s point is that it makes a huge sociological difference, because it (the contemporary mythology/theology of “AI”) massively changes how much we’re willing to question the status quo, rather than accepting it as “AI”.
> In a world where we have infinite resources and everything, what do you do [...] I think of those around me, my time with my loved ones and friends. Humanity and meaning becomes for me the ability to enjoy and witness the lives of those around me, the relationships I cultivate and shared experience I have.
There’s a subtle but crucial difference between infinite resources (not biasing us towards certain modes of being) versus a landscape promoting infinite (ever-increasing) consumption. Eg: imagine Google having every video on earth, but you have to search/ask, rather than having auto-recommendations.
> This is not unique to AI. The digital divide existed before AI.
Yes, but the aura of “AI” gives it a new legitimacy and authority in our ambient narrative.
—
PS: if you wish to continue this discussion, feel free to hit me up via email :-)
> He gave a rambling answer about we need to pay all the human involved in producing that model (even those that are consumed as opensource models) and that AI is fundamentally anti-human.
I do think this raises a very valid point in terms of intellectual property. If I train an AI to paint in the style of an artist whose art I scraped, does that artist have any claim as to its licensing? If not, should they not?
Are you saying that DeepMind should have to pay royalties to the Go players whose games were used as training material for AlphaGo? Should the same apply to human players who study others' games as training material?
(Obviously, it's a moot point for AlphaZero which was trained solely via self-play, and kind of puts a spanner in the works for Jaron Lanier's views particularly if unsupervised machine learning displaces supervised machine learning as the dominant tool)
There is something fundamentally different about an AI, though. Perhaps the ability to produce output at scale. Perhaps that its creator/owner may not themselves have the ability they train the AI to perform.
I'm not sure. But I feel that comparing AI to humans here is disingenuous or naive.
Derivative works are allowed under copyright law, though the interpretation is steadily being constrained, especially on sites like YouTube.
His claim is more extreme: essentially, any individual involved in producing the data (annotators, aggregators, that grad student, etc.) should be compensated for any downstream profit produced by any model that consumed that data. It's an interesting interpretation of the Marxist critique that workers are separated from the products they produce and alienated from the value produced by the fruits of their labor. Car factory workers don't get compensated any further for the sale, resale, or any other subsequent profit made from the use of the vehicles they helped produce.
While it sounds great in theory I have no idea how you'd even begin to implement that. Ironically the best means we have available to developing that solution is the technology that he critiques as anti-human.
I do like how his proposed solution does not even remotely solve what he thinks the problem is.
Say you pay image labelers $100k a year. Unfortunately, as soon as they produce a finished model, that algorithm replaces them, permanently.
If a model is profitable to create, it produces structural unemployment. If it's not profitable, then those jobs continue to exist. The only way his proposal functions is if it's a stealth ban on AI. There's no sustainable way for an image labeler to have anything like a middle class income for more than a few months.
And his proposal has to apply... globally? How much does Tencent pay its image labelers? I can't imagine the PRC version of mturk is any kinder and gentler.
The hostility here to this article is interesting. To me, a reasonable interpretation is that it criticizes the way we understand/conceptualize some areas of modern technology. Indeed, if for some legal or practical reason research labs didn't have access to the data generated by the public (i.e. humans), many breakthroughs wouldn't happen. This is the other side to emphasizing the progress in algorithms (which is of course hard to deny).
You can't fully extricate technology from the societies where it exists, nor from things like naming, branding, institutions, and of course ideologies. There is a tendency to treat some areas of technology like ancient gods or idols that have their own "needs" and "mandates" to force on their environment no matter what.
I happen to like[1] some of their political proposals, like forcing the "data for services" barter into some fully disclosed monetary form. Also, coming up with some method of using these technologies that is compatible with individual freedom is a big concern. Even from a purely practical standpoint, people trying and doing what they want has obvious value compared to everything having to be accepted by some (always to some extent self-interested and narrow-minded) authority. Shifting the language towards talking about how we, as humans and societies, can enable and shape "AIs" seems reasonable: it emphasizes that we have natural agency in all this.
[1] Liking doesn't necessarily mean supporting yet.
AI as “ideology” may be a stretch, but it tends that way, given all the mythos behind a word like “artificial intelligence”.
Naming things matters.
While the head of Apple’s ML/AI strategy likes “machine intelligence” [1], I prefer “machine learning”.
“Intelligence” is a loaded word. I think (hope) that “learning” can be understood as narrowly and crudely as the field actually requires.
A more sober term would help the entire world manage this better on average, IMO, than if we use the unnecessarily scary and confusing term “artificial intelligence”.
[1] “For this reason and others, many AI experts (Giannandrea included) have suggested alternative terms like "machine intelligence" that don't draw parallels to human intelligence.”
> this gets little attention from investors who believe “AI is the future,” encouraging further automation. This has contributed to the hollowing out of the economy
Automation == "hollowing out of the economy" does not follow. Companies can use automation to grow their output while keeping their workforce intact by taking on new projects that were not previously possible.
If automation were that dangerous, then being a software developer should be the worst job because it automates itself away. But no, in reality we do more and still need even more devs.
On the other hand, not automating is wasteful and a future time bomb. This anti-automation rhetoric is like children hating school while their parents force them to attend - preferring short term ease that comes with a much bigger long term penalty (temporal discounting at work).
Your argument doesn’t follow. How would being a software developer be the worst job because they “automate their own work away” when in fact they’d have a job until they automated everyone’s work away? It’s likely a given developer would reach retirement before that happens. Meanwhile tons of other people are out of a job.
I think the point is that software automation tools (automatic testing, nice IDEs) generally don't lead to companies hiring fewer devs, it leads to companies being able to do more with the same number of devs and consequently hiring more. Same thing in construction or any globally competitive industry
Has anyone actually convincingly argued that automation reduces the net number of jobs over time? My impression is that this belief entails a kind of parochialism about job loss, i.e., that it merely tracks the specific kinds or manifestations of jobs that automation has rendered obsolete while ignoring either the fact that particular occupations remain but now make use of new, more sophisticated methods, or the new jobs created by the needs and complexities introduced by new technologies.
Think of our ancestors. How many occupations were there? Arguably fewer than we have today. Certainly, your great grandmother had to wash the dishes herself, but we didn't have factory workers at dishwasher manufacturing plants, dishwasher repairmen, dishwasher dealers, and so on. With the introduction of the dishwasher, there is a reduced need for human dishwashers, but the technology also introduced a whole new industry in its place to support it.
I almost want to say that some law of conservation or even entropy is observed. We are exchanging one kind of burden for another or potentially many.
> It seems like there is no convincing argument to the contrary either.
That jobs are created seems to me to be more convincing, but I leave that as a matter of opinion substantiated only to the degree that I have in my OP.
> There is also something a bit odd in my mind about looking at human existence in terms of needed occupations.
> Isn’t the entire point of technology to eliminate work?
From a certain point of view, yes. Economically valuable work is only needed to the degree that it costs something to attain a (licit) good and to the degree that the good is desired by someone else. If those desires could be satisfied without cost, no real opportunity for a market exists because no need for exchange exists. But I cannot imagine that that situation could ever obtain. And therefore, because it could never obtain, the only thing most people can exchange for a desired good is something obtained through the exchange of labor for capital. So it's a bit moot since labor appears endemic to the human condition.
I just don't see why automation of one thing necessarily means a net loss of jobs. If anything, automation entails complex technology and complex technology, among other things, requires labor to produce and to maintain.
Firstly, this is contestable, and secondly it creates a tautology.
But also - consider that humans have cognitive and physical limits.
Even if there is in principle always new opportunity created by automation, which I do find plausible, I see no logic supporting the idea that these new opportunities would necessarily be widely addressable by the majority of humans.
> Firstly, this is contestable, and secondly it creates a tautology.
I didn't say it was incontestable. I only said that it seems to be the case and that the evidence seems to favor that opinion. It would be more interesting to hear why you think it is contestable. Also, I'm not sure how this is tautological.
> But also - consider that that humans have cognitive and physical limits.
If anything, that would seem to suggest that these impose a bound on how much we can automate. Arguably, this means an upper bound would be determined by the maintainability of such technologies.
> Even if there is in principle always new opportunity created by automation, which I do find plausible, I see no logic supporting the idea that these new opportunities would necessarily be widely addressable by the majority of humans.
If you are making that claim on the basis of cognitive limitations, then see my previous point. But by the same token, if cognitive limitations impose a limit, then it would seem to suggest that the problem space would require a greater division of labor in order to render it tractable.
> Companies can use automation to grow their output while keeping their workforce intact by taking on new projects that were not previously possible.
This idea does not scale. It assumes that there is no upper limit for demand/production, and assumes that a business can suddenly gain the expertise to enter new markets to sell new products. It also assumes that investors would prefer risky reinvestment strategies to cutting costs by reducing labor and increasing short term profits.
> If automation were that dangerous, then being a software developer should be the worst job because it automates itself away
Automation goes after the easiest targets first. IT is still growing because it is an immature profession, and a platform for automation itself. I remember IT in the late 90s - basically anyone who could operate a computer could get a job doing installs. In fact my first job was adding Trumpet Winsock to machines that didn't have any built-in way to access the internet. A few years later you could still make decent money installing and configuring operating systems. Those jobs are long gone and replaced by SCCM and other similar tools. They have been replaced by jobs that require much more technical skill. Entire teams are replaced every day by outsourcing to automation platforms like AWS, because those platforms can provide the same services with fewer employees.
A better example is farming or manufacturing. Take a look at productivity versus employment since 1900.[1]
> This anti-automation rhetoric is like children hating school while their parents force them to attend - preferring short term ease that comes with a much bigger long term penalty (temporal discounting at work).
No, it's a pretty straightforward recognition that retraining even a few percent of any given workforce every year is going to lead to massive inequality and social problems, especially when there is no infrastructure to provide for living costs while a worker gets retrained. A manufacturing worker can be retrained to a better job, but how long do you think it would take to educate that person so they could design and maintain the machines that replace them? How much would that cost? Who is going to pay for it? And if demand for the product is flat, does it make any sense?
It's also a recognition that we are nearing peak production of practically everything. Developed nations are leveling off and declining in population. Around a third of our food is discarded. Ride-sharing is reducing demand for cars. E-commerce is able to offer discounts because it uses less labor. We are now at the point where major technology vendors have resorted to designing addictive experiences to compete for attention. What's left after that is saturated? I'm almost afraid to ask.
Automation certainly doesn’t hollow out the “economy”, but it can contribute to wealth inequality by reducing the number of employed humans required to generate value. At the extreme it could permanently upend the concept of full employment in the economy.
But that's a bit of a simple perspective. Not so long ago, a third of the population of Europe were farmers. Do you think that farming automation concentrated all the wealth in the remaining farmers' hands, leaving nothing for anyone else? Or did the resulting plummet in food costs allow us to figure out other jobs that added previously inconceivable value in other areas?
It's a fallacy to assume that the consequences of future automation are going to be just like those of past automation. There's no reason to assume that there will be new jobs to replace the old jobs next time, because this time it's going to descend on us a lot faster.
This is not a "simple" perspective, and if I'm permitted to be equally offensive, I suggest yours is a naive perspective. After all, this is already beginning to play out in our economies today.
It used to be that being successful meant that you owned the most successful store in your town, and maybe also the three other closest towns. Now, success is judged on a national or even international level. Success is achieved by fewer people at bigger scales. Success then looked like Wal-Mart. And now it looks like Amazon.
Success used to look like a dozen taxi medallions in NYC. Now it looks like Uber.
Do you really think that the "American Dream" will withstand a large fraction of truck drivers, taxi drivers, manufacturing line workers and clerical workers being made forcefully unemployed? You're not seriously suggesting that the answer is to teach truck drivers to code?
There were more humans hired to perform simple calculations prior to the calculator, but we don't see people saying we should get rid of the calculator.
Machine Learning algorithms and Artificial Neural Networks are still mostly adequate terms for what's going on in the industry, so abusing the term "AI" (artificial intelligence) makes it just marketing BS.
> Machine Learning algorithms and Artificial Neural Networks are still mostly adequate terms for what's going on in the industry, so abusing the term "AI" (artificial intelligence) makes it just marketing BS.
It's true that the terms ML and ANN predate the current hype but they were introduced/kept in use for the same reason: talking in mathematical terms does not excite research grant decision makers or business customers. Neural is a buzzword, it's easy to interpret for laypeople. If you talk about Latent Dirichlet Allocation, Support Vector Regression, Projection Pursuit, Principal Component Analysis, Reproducing Kernel Hilbert Spaces, Reverse-Mode Automatic Differentiation etc etc, then people yawn.
Why do we call linear programming "programming"? It has nothing to do with machine instructions. The answer: the name was made up for the hype and for securing research grants, when math funding was dry, but CS research was popular.
Why call "dynamic programming" like that? What's dynamic about it? Because the inventor wanted a name nobody can object to and sounds buzzy enough.
People try to name things in sexy ways to gain an edge.
Recently the government of my country stated that "LGBT aren't people, they are an ideology" and used that as an argument to ban LGBT demonstrations, and to introduce local laws in some regions where they have a majority to create so-called "LGBT-free zones" where "propagation of LGBT ideology" is banned, or at least cannot have institutional support (like other forms of political activity enjoy).
This is obviously not as bad (because AI isn't people (yet?)), but it follows a similar pattern of calling something "an ideology" to show it in a bad light and draw absurd conclusions.
If you look that hard everything is an ideology. Stop playing with definitions and just say what you wanted to say in the first place.
For a long time, "AI" meant "unsolved hard problem" to hackers like us. Speech synthesis and recognition was AI. Text recognition was AI. Some compiler optimization was AI. Search was AI. Now, those things are, well, those things. They work.
Things are a little different now. The feasibility of neural networks trained up on vast data sets means we have non-human systems with hunches. That is, we have AIs capable of delivering results without explanations.
Take a look at Neal Stephenson's recent "Fall" for a social network that uses AI to generate "news" stories where the training metric is engagement. The consequence is the construction of an alternate social universe, and the kind of dystopia only Stephenson can dream up. https://www.worldcat.org/title/fall-or-dodge-in-hell/oclc/11...
Human hunches are often accompanied by a sense of ethics. The agricultural savant who sexes hatchling baby chickens knows the success of the farm depends on a low error rate. The judge sentencing a culprit knows the consequence of making mistakes.
I wonder how hard it would be to add a sense of ethics to neural network results? Maybe it's just a matter of managing the error rates. But this article suggests otherwise.
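One very literal reading of "managing the error rates" is a reject option: only act on high-confidence outputs and route everything else to a human. A minimal sketch, assuming a hypothetical classifier that returns class probabilities (the threshold and labels here are invented for illustration):

    # Reject-option sketch: treat low-confidence predictions as "I don't know"
    # and defer to a human instead of acting on every machine hunch.
    from typing import Sequence, Tuple

    def decide(probs: Sequence[float], labels: Sequence[str],
               threshold: float = 0.9) -> Tuple[str, bool]:
        """Return (predicted_label, needs_human_review)."""
        best = max(range(len(probs)), key=lambda i: probs[i])
        if probs[best] < threshold:
            return labels[best], True    # too uncertain: escalate to a human
        return labels[best], False       # confident enough to act automatically

    # A high-stakes call (sentencing, medical triage) would use a stricter
    # threshold than a cat/dog classifier; "ethics" shows up as that choice.
    print(decide([0.55, 0.45], ["high_risk", "low_risk"], threshold=0.95))
    # -> ('high_risk', True): flagged for review rather than acted on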
AI is a blanket term or a bucket term. It can mean Artificial General Intelligence - AGI or Artificial Narrow Intelligence - ANI. In my experience and opinion, as soon as something is well defined it becomes ANI - Vision systems, Natural Language Processing, Machine Learning, Neural Networks - it loses the name "AI" and gets its own specific name.
How AI is talked about depends on the audience. Nowadays, no one really thinks of rule-based engines or scripts as AI, neither as ANI nor AGI, but back in the day people thought maybe you could just have enough rules to replicate intelligence. So old ANI is just "an algorithm".
To researchers, as soon as an approach is well defined it becomes ANI; at least so far. In future there may be an approach to AI that becomes AGI, but ANI is not considered AI to most researchers.
For industry, AI generally denotes a handful of ANI approaches - Machine Learning and Neural Nets. If you say "AI" in a corporate environment, this is what people will think of.
I personally think of AI more as computer-assisted cultural production. The contents of these productions are numerical, quantitative stuff, so they look like the result of a pure intelligence. But if you take some distance from what is actually done with AI, you realize that AI is mostly an outgrowth of our current cultural environment. So AI is not an ideology, but our current ideology leads us to see "AI" as a pure intelligence rather than as automated cultural production. What has to be questioned is why AI-produced stuff is seen as pure intelligence in our cultural context. What is called AI is "just" a cultural fact (but a fascinating one!).
This is a full reversion of where we were at the start of the century. Back then, the AI paradox was in full effect - if no reliable implementation had been found, it was AI. If you could just A* your way through a maze, it wasn't intelligent routefinding anymore, it was just a simple algorithm.
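For concreteness, here is roughly what "just A*-ing your way through a maze" amounts to; this is a generic textbook sketch with a made-up grid, not anyone's production routefinder:

    # A* over a grid maze: the kind of thing that once counted as "AI" and now
    # reads as "just a simple algorithm". '#' marks a wall.
    import heapq

    def astar(grid, start, goal):
        rows, cols = len(grid), len(grid[0])
        h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
        frontier = [(h(start), 0, start)]
        best = {start: 0}
        while frontier:
            _, cost, (r, c) = heapq.heappop(frontier)
            if (r, c) == goal:
                return cost
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != '#':
                    ncost = cost + 1
                    if ncost < best.get((nr, nc), float('inf')):
                        best[(nr, nc)] = ncost
                        heapq.heappush(frontier, (ncost + h((nr, nc)), ncost, (nr, nc)))
        return None  # unreachable

    maze = ["....#...",
            ".##.#.#.",
            ".#....#.",
            "...##..."]
    print(astar(maze, (0, 0), (3, 7)))  # length of the shortest route, or None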
The change is mostly in terms of money. The promise of AI has become a draw for investment rather than a repellant. Additionally, there is an interest in turning the mountains of surveillance data that exist now from a situationally useful tool to a tradable commodity.
There are private intelligence agencies, like Google and Facebook, that make money from extracting all the information they can about individuals in society.
There is a huge interest on their part in ending privacy restrictions, of course. There is a lot of money in it too, and there are people in charge of governments who are interested in using private security agencies like public security agencies, because the private sector is usually far more efficient than the public one. That also applies to spies.
This is prone to abuse, and society should react to protect itself. You can easily turn a democracy into a "dictatorship in practice" with it.
Now, AI is just the technology that makes global surveillance possible; machines are cheap and easy to control. The alternative, humans, is expensive and prone to whistleblowers denouncing your abuses.
But AI is not an ideology; it is just a set of techniques, and you can use it for very good things, like counting blood cells, driving cars, or watching your house while you are away.
It is a good thing that it exists, only it must be controlled by its users and not be the controller itself.
FWIW, when referring to specifics, I'd be more comfortable with "techniques" instead of "technology". So a book title could be "Weaponizing AI Techniques to Maximize Dopamine Addiction via Social Mediums" instead of "Recognize Kitten Pictures With Awesome AI Technology for the LOLs".
Of course Jaron Lanier is a coauthor. He's the frequent go-to mildly provocative but still hip contrarian for Wired, the historically full-throated Big Tech apologist, booster, and primary paid-placement press-release conduit.
> the adornment and “deepfake” transformation of the human face, now common on social media platforms like Snapchat and Instagram, was introduced in a startup sold to Google by one of the authors;
Uhh, excuse me, but deepfakes were not introduced at this company. It was a mobile QR and image recognition technology, and that is all. I should know, because I was the lead tester of this technology at Neven Vision. They must be talking about DeepDream, a Google project by the company's founder many years later.
AI is a buzzword. A fresh label slapped on old technology that became much more usable in the last years and allows us to tell apart cats and dogs with 98.6% accuracy.
In a nutshell, the term "AI" refers to any situation when software is contrived to massage data in some complicated way that produces useful answers, without any clear, rational explanation why.
(The associated "ideology" consists mainly of the position that this is just terrific, and we should be doing more of it.)
Broadly understood, this thesis could be properly developed: AI approaches are based on aggregated data, not the particular user, and in a conflict of interests will always go against the particular user. There is a lot of ideology present in the way platforms operate, but mostly it's there to embellish the fact that they are profit-driven.
There isn't as much difference between the West and East as one might imagine. Westerners claim they want privacy, but see no problem giving it away for shinier toys. If you want privacy, it can be had, both in the East and West, you simply have to pay for it. Or learn how technology works and prioritize it. I moved off of Windows decades ago, OSX years ago, and am slowly tightening up all aspects of my digital life, for power, convenience, and yes, privacy.
Privacy is a luxury good like everything else humans want. Available for the masses only if you're willing to spend the time and energy learning how to DIY, or can just be bought if you're rich.
It was a weird experience today watching Apple's announcement video. Everything there is designed to keep you locked into their platform, and it was amusing when I saw how machine learning on your phone was given such significant space. They didn't even try to claim that this would help out ordinary people, jumping immediately to reducing costs of very very deep-pocketed customers, medical device manufacturers. Oh but an AR bird will know where your hand is maybe a bit better too. :rolleyes:
I mean, that's Apple's market, along with "pro" users, with the HomePod tech being a sad afterthought, a bone tossed to the less-than-extremely-well-heeled users as an alternative to mass-market Amazon devices.
America has a cult of the big and flashy, and the engine runs on our data so companies can predict and influence the behavior of the masses. Predictable revenue flows are the name of the game. China's government just has an added incentive to bake it all right into the social fabric. That's all.
This is one of those weird articles where the author's intent becomes clearer the moment you replace "AI" with "capitalism". Anthropomorphizing technology goes well beyond AI and has been a thing pretty much ever since we've had gizmos. When we talk about a car or a fridge or a calculator that we own, we don't talk about the 300+ people that were complicit in its design and production, so I'm not sure how this has got anything to do with AI specifically. When you buy groceries from your supermarket, are you thankful to every single person it took to get them there?
It's also strange how the article references GPT-3 and in the same breath talks about how there's a need to provide "actively engaged" and "high-quality" training data, even though GPT-3 was a success specifically because it relied on neither.
I think “actively engaged” here means “curated”, where a human had to provide labels or guidance.
Obviously the rise of big data and deep models that can induce probabilities is the source of pretty much all advancement in AI since heavy reliance on curation and logic were abandoned. As such, AI really is as much a technology now as it has ever been. It seems that the central revelation in the field since the demise of symbolic AI (aka GOFAI) has been that so much useful pattern matching can be achieved by 1) the inducing of sufficient probability from large data to enable high levels of (narrow) intelligent behavior and 2) doing it using so little human curation.
Perhaps the author is suggesting that recent advances in AI are not really based in intelligence or cognition, and thus is only a trivially small subset of the greater endeavor of building machines that can think. If so, then the misnomer she objects to isn't “AI”, it's “intelligence”.
I have no clue what "actively engaged" data actually means, since the author doesn't provide any links to the studies asserting its superiority. GPT-3 was trained on 45TB of text, little of which was written with AI training in mind.
AI is best defined as whatever computers can’t quite do yet but we think they can do soon. It’s a moving target. Lots of things used to be “AI” that are commonplace now.
Maybe not necessarily, if in the long term the end result is fewer people making a living and instead relying on a centralized system that props them up economically or otherwise “provides” for them. In that scenario, agency is removed from people. Historical or modern slaves also lack some aspects of agency. In that sense you can see that to allow AI you have to be okay with people having less agency. To fib that it’s easier when you can ignore their natural agency.
this article makes many valid points but I would have loved to read a version that was written without use of the terms "AI" or "artificial intelligence", as I think it would have been clearer and more interesting.
> Glen Weyl is Founder and Chair of the RadicalxChange Foundation and Microsoft’s Office of the Chief Technology Officer Political Economist and Social Technologist (OCTOPEST). Jaron Lanier is the author of Ten Arguments for Deleting Your Social Media Accounts Right Now and Dawn of the New Everything. He (and Glen) are researchers at Microsoft but do not speak for the company.
The statement that Lanier and Weyl "do not speak for the company" includes an unacknowledged assumption that their position is not tacitly influenced by Microsoft. I don't think this assumption is demonstrably true for these two, nor for anyone who works for a company. Rather, the general maxim applies: "It's difficult to make a man understand something when his sustenance depends on his not understanding it."
That said, what are Lanier and Weyl incentivized not to understand?
Let's look deeper at (what I believe is) the central thesis of the article. There is a problem:
> When people provide data, behavioral examples, and even active problem solving online, it is not considered “work” but is instead treated as part of an off-the-books barter for certain free internet services.
And the solution:
> Active engagement is possible only if, unlike in the usual AI attitude, all contributors, not just elite engineers, are considered crucial role players and are financially compensated.
AI is presented as a one-sided type of commerce where "contributors" (the vast majority of people) are unwittingly exploited by "elite engineers" -- Google, Facebook, Twitter, Amazon, TikTok, etc. M$ plays in this space, but they lack a competitive advantage and would probably prefer the whole idea disappear. I'll call these two groups "AI losers" and "AI winners" for clarity.
Here's how I interpret this thesis:
The AI losers consume, engage with, and create content, cluelessly leaving behind a trail of golden breadcrumbs ("data") for the AI winners to profit from. To change this, AI losers just need to wake up to the fact that they're all pooping out gold and start selling these nuggets to AI winners on some kind of open market.
The question looming above all of this is, "Why haven't they already done so?" No one wants to ask this question, much less answer it.
Why is that? In the mindset of Lanier, Weyl and the entire "humanist" camp (including Tristan Harris and his team), the AI losers aren't just being exploited, but they are too stupid to realize they're being exploited. Hence, the injustice will only stop if brilliant ethicists like themselves step in to intervene. At present, they are addressing the general public, but inevitably their crusade will take them to Washington, where either they will team up with the same people they're crusading against or fizzle out.
What they are incentivized not to understand is that the AI losers are not stupid. I don't even agree that they're necessarily being exploited. They're getting world-class online services from giant tech companies without paying a cent. They're also engaging in collective bargaining with these giants to get their way (via the social lever of brand safety, which is flawed, but that's another issue). This is an equation that actually works in their favor; if the equation changes, you can't assume that their behavior will remain the same.
Lanier and Weyl don't consider that perhaps the AI losers pay $0.00 for these services not because they de-value their own privacy, but because these services are actually worth $0.00 to them. They don't see usage of technology as a choice on the part of consumers, but as a compulsion that must be satisfied at any price. And they don't know how many users a $5/year Google would have, or a $5/month Facebook, if any at all.
Another question that remains to be seen is what the legal environment is going to look like for these companies in the post-Snowden, post-Techlash environment. Is this "data" a trail of golden breadcrumbs (like all big tech spokespeople including Lanier and Weyl believe it is -- and in fact, their salary depends on that being so), or a trail of nuclear waste? Only time will tell, really.
I agree that there is a form of commerce going on, but the commerce is three-sided, not two-sided. It's actually between the AI winners, the AI losers, and the government, and it's much more complicated than the authors let on. I submit that the AI "winners" are in a more tenuous position (financially and legally) than we are being led to believe.
Still waiting for AI to predict, or at least help with, a solution for COVID.
Jokes aside, AI has done a great job for chatbots, deepfakes, profiling customers, and analysing stats about data. Beyond that it's mostly sensationalism.
Controversial... How about Oncology? It appears to be making a big difference there. Plus there was AlphaGo, which was thought to be impossible, up until the very moment it actually worked, right?
Not yet it isn't. Nobody trusts it enough to not require a trained oncologist to look at the samples. Not to mention the machines or software are unavailable for end user doctors. It's in research.
>How about Oncology? It appears to be making a big difference there
It doesn't really. I'm not aware of any oncology work that has yet been replaced by a machine, a few hype articles every year or so aside.
>Plus there was AlphaGo, which was thought to be impossible, up until the very moment it actually worked, right?
Also no. After the AI winter, people were more conservative about when computers would be able to compete in Go, and generally thought it was five more years away or so. But nobody believed playing Go was impossible or even out of reach; why would anyone have thought that?
Just because there's a huge amount of sensationalism doesn't change the fact that AI has created a huge amount of economic value in a vast number of applications over its ~6 decades.
Voice recognition, text translation, OCR, image compression? "Analysing stats about data" alone is something that can be applied to everything from antiviruses to climate models.
There are plenty of indirect applications of AI in COVID but the core problem is not learning, it's searching and testing solutions. So a simulator would serve better. Learning comes after you already have the training data and want a model to solve similar cases.
Pieces like this remind me of soviet propaganda when they literally claimed that “cybernetics is capitalists’ whore”. Some people are just incapable of seeing past their political beliefs (or unwilling to)
It's a self-serving ideology dreamed up by a group of elite psychopaths who think of themselves as value creators... When in fact they never created a single ounce of value in their entire lives; they only know how to trick and coerce others and capture the value which others have produced (thereby concentrating that value into their own pockets at the expense of everyone else).
I generally don't think of researchers as psychopaths though I'm sure a small number are. On the whole, I would think that they are more motivated by a thirst for knowledge and wanting to make something cool and gain social approval from other smart people.
I'm more referring to the people at the top who make the decisions about the direction of AI innovation and use AI as an excuse for receiving tons of funding from the government (or from industry) that they don't deserve and could be better allocated to more productive things which will actually help people instead of trying to control people.
Controlling people does not help them. AI tech related to productive industries like farming, construction, etc... That's great. But clearly that's not the kind of AI which Facebook is working on (there may be general overlap, but the goal is wrong).
None of this has anything to do with AI. It’s just a barely tangential critique of... society? Capitalism? Either way it has nothing to do with the OP.
Man you didn't spend much time in academia. The number of psychopaths I encountered in academia quite easily exceeded those in Business, and I've been in business a lot longer. Maybe I was just unlucky/lucky; don't know.
Anyway, "AI" is an actual subject, but as used in the marketing and press releases, and at this point on HN, is a sort of grandiose malapropism meaning "statistics and machine learning."
The ideology spelled out by the author is ill presented, but fairly real.
1) "AI" as presented is a continuation of the program of the technocratic managerial "elite." The social class of people who more or less started in the time of Herbert Hoover, and while having some mild successes (the creation of the highway system, public health initiatives, WW-2 production), have mostly discredited themselves for decades (aka the shitty roads, a nation of fat diabetics, deindustrialization because muh free market reasons). Similar social classes also failed spectacularly in the Soviet Union.
2) Data collection is mostly useless. In intelligence work, in marketing, in political work: most of it is completely useless, and collecting it and acting on it is a sort of cargo cult for technocratic managerial nerds, economists, and other such human refuse. Oh, it pays off sometimes: it doesn't matter though; collecting it and performing actions based on it becomes its own reward. There's an entire social class of "muh science" nerds who think it a sort of moral imperative to collect and act on data even if it is obviously useless. The very concept that their KPIs and databases might be filled with the sheerest gorp... or that you might not be able to achieve marketing uplift no matter what you do... doesn't compute for some people. But it's real. I encounter bullshit like this every day.
3) "AI" as labor replacement is a wish, rather than a fact. They really mean your job will be replaced by an Alien or Immigrant, not a computer. As the article points out; the human proctored stuff on the web is the valuable stuff. They didn't point out that "AI" produces a ton of web (and email) spam content as well: and it's almost entirely shit. Something like a search engine is somewhat a somewhat valuable linear algebra way of organizing data in an automated fashion, which ding-dongs continue to conflate with "AI." Human proctored data is almost infinitely more valuable. I don't like Wikipedia, but if I had to pick between it and existing search engines; I'll take Wikipedia all day.
Most of this is a discussion about semantics, which is silly.
In 99% of industry, AI is used as a synonym for Machine Learning. The remaining 1% refers to Artificial General Intelligence, which is a long term vision for what might be achieved using, again, Machine Learning.
I think a lot of it is ANN based now, but it worked pretty well when it was just KNN. Calling KNN "AI" is pretty cope; it's just a database query. Of course, the large ANN networks are also more or less just a database query refactored as a function transformer, but nobody likes to talk about that.
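To the parent's point, a k-NN "model" really is just a lookup over stored rows; here's a minimal sketch with toy data (no real library, labels and points invented):

    # k-NN stripped to its essence: "training" is storing rows, "inference" is
    # a distance-sorted query over those rows. Toy data only.
    from math import dist
    from collections import Counter

    table = [((1.0, 1.0), "cat"), ((1.2, 0.9), "cat"),
             ((5.0, 5.2), "dog"), ((4.8, 5.1), "dog")]  # the "model" is the data

    def knn_predict(x, k=3):
        nearest = sorted(table, key=lambda row: dist(row[0], x))[:k]
        return Counter(label for _, label in nearest).most_common(1)[0][0]

    print(knn_predict((1.1, 1.0)))  # 'cat': nothing but a nearest-row query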
Recommendation systems are probably one of the most significant advances; YouTube, for a user who actually cares about discovering wonderful content/music, is a marvel thanks to machine learning. I also believe that Google search uses ML to some extent.
Text-to-speech and speech-to-text give a voice to the disabled.
Translation makes knowledge and communication accessible.
But the most important technologies, like dependency parsing, are enablers of a future AGI that could transcend human intelligence on some revolutionary metrics.
Just look at this non exhaustive list of what AI can do and how well:
https://paperswithcode.com/sota