>Now, instead of humans designing algorithms to be executed by a computer, the computer is designing the algorithms. (Albeit guided by human-devised algorithms)
This line is way off in both tone and substance. On tone, it really underplays the human effort involved in effective machine learning (as it is practiced in 2017) and anthropomorphizes "machines" to an unreasonable extent. In substance, I fail to see how a machine that "designs its own algorithms" according to an algorithm designed and implemented by a human is fundamentally different than an algorithm coded directly by a human. To use the author's example, machine learning allows humans to build complex software systems in less time just as a bicycle allows humans to cover more distance with less energy. It's a big improvement, but it's not, say, teleportation.
>it is only now that the machines are creating themselves, at least to a degree. (And, by extension, there is at least a plausible path to general intelligence)
I could not disagree more strongly with this addendum. Simply put, I fail to see any path from state-of-the-art ML/DL research today to AGI, and I would even go so far as to say that humans have made approximately zero progress on this task since it was first formulated in the 50s. I think we know about as much about "intelligence" (and consequently, what would constitute AGI) as star-gazers in ancient times knew about the universe. That's not to say that it will take millennia to invent AGI, but the path to get there is probably quite orthogonal to modern ML research.
Before I really understood and worked with NN, I felt the same way. I thought the atomspace computation approach and other similar granular computation paradigms were much more likely to make progress.
However after seeing the striking similarities between how I watched my three kids learn from infant -> toddler ages and how we build our convolutional neural nets in my company, it was like a light went on.
If you look at how relatively sparse and weak even the best deep nets are compared to human brains, especially considering their really narrow set of inputs, it's clear we are at the very early beginnings of mimicking the complexity of the human brain. It seems to me that the ANN approach is right; we now need to make it radically more efficient and give it better input sensors.
We need a nervous system for AGI (structured data acquisition) before the big brain tasks will be solved.
Sure, your NN learns facts and processes like your toddler learns facts and processes. Those are a tiny part of who your toddler is, though.
The essential component is their will. You don't have to set them up and feed them data. They don't sit quietly until you ask them to answer a question. Kids have distinct personalities from very early on, and demand input, and produce opinionated output (to put it mildly)--from day one.
Emotions are a huge part of that. But to my knowledge, we have less understanding of emotions, and spend less time trying to create them with computers, than conscious processes like "which picture has a car in it."
But there is evidence that if you take away a person's emotions, they have great trouble making decisions. They can consciously evaluate their options. They just struggle to pick one.
So how will AI research focused on replicating conscious thought result in AGI, if we don't know how to generate emotions? Is anyone even trying to do that?
My standard joke is that a lot of people are working to create a car that can drive itself, but who is investing to build a car that will tell its owner, "fuck off, I don't feel like driving today"?
But can a machine that always does exactly what it is told to do really be thought of as "intelligent" the way we think of human intelligence? Do smart people always do exactly what they are told?
Creating motivation in AI is an open area, and in fact is arguably the big hairy beast when it comes to the "Friendly AI" question or really the whole "General" part of it.
You do the same thing everyone else does in this debate, which is move the goalposts - we don't know how to build "emotions", we don't know how to build motivation - until we do, or until it turns out to be an emergent property of a sufficiently deep net.
Too many other strawmen in there to argue with, e.g. the idea that we will always need to tell them what to do.
The point I am making is that because the reinforcement nature of biological systems is mimicked in the basic ANN structure, it's the strongest candidate (at scale) for the building blocks of an AGI.
My slightly optimistic money is on the latter one.
Human creativity can be empowered by optimization algorithms. It's a huge improvement over design by hand.
This line sounds deep, but I think it incorrectly conflates work with life having meaning. If eventually there isn't a need for large swaths of the population to work, then so what? I don't think the elite aristocrats in previous centuries had any problem with not working. Humanity can adapt to find other sources of meaning, like the pursuit of art in its various forms (although I'm assuming that computers can't replace art). I think a better question is if society can adapt quick enough to fill the void left by the absence of work.
There's nothing in principle that stops a machine from creating art. Even better art than any person could make. So once that happens, where are we left?
Meaning isn't something physical in the universe. Meaning is an emotion. It's what you feel when you're working towards something that you believe has some greater importance. With all opportunity to work towards anything taken away, life will become meaningless by definition, unless humans are left with some pseudo-artificial challenges to push against.
Zookeepers put the animals' food inside a metal box with a small hole, so the animals have to do work to get it out. It's good for the animals to have something to work towards, and they're too dumb to realize they're being manipulated. Maybe that's our future. With WoW and Clash of Clans, etc, sometimes it feels like we're already halfway there.
But then I go and try anyway. I'm not even really sure why, but I like doing it.
I will probably never write code as well as John Carmack or compose music as well as John Williams, but that doesn't stop me from trying. And it is fulfilling.
Although some minority of people derive "meaning" from work per se (work = something that you get paid to do), I would say that the vast majority of people can derive meaning from other things. For the vast majority, work is for getting money.
Therefore, there are still lots of things you can derive meaning from: raising a family, caring for others, playing music, studying different subjects for fun (philosophy, etc.), being part of your local community, doing sports, competing in sports (if that fits your style), and so on.
Bottom line: do not mix work with quest for meaning...
I think you may be conflating two subtly different things here. Where people currently derive meaning in their lives might not be quite the same as where people have the potential to derive meaning in their lives.
I know a number of people who are, for instance, psychologists, teachers, or social workers. They have chosen to derive meaning in their lives through work. Some of them have also gone on to raise families and care for others, turning sources of potential meaning into sources of active meaning. Thus people can incorporate both current and potential sources of meaning into their lives. Some people opt to dedicate their lives entirely to their current sources of meaning!
Yet, it's perhaps less than maximally wise to conflate someone seeking meaning through their work today with their future in which they might seek meaning through raising children. Meaning does not always operate with expected values.
Every example you gave is a form of work in the general sense of effortful goal-directed activity.
This brings to mind the famous quote: "The map is not the territory."
There's a difference between working to survive and working because it's enjoyable.
For example, I can work on my mountain bike skills, or I can sit in an office and work on code. One I do because I enjoy it, and one I do to pay my bills.
Computers, automation, and AI will take over the second type, but they won't take over the first type.
I'd actually go so far as to say your definition of work, meaning, and purpose is insulting and harmful to a large number of people. Does anybody really derive meaning and purpose by cleaning toilets, bringing people food, or stocking shelves? Should they? I don't think so.
As long as humans are humans, we'll always want to do something. We'll set ourselves personally meaningful goals and strive to achieve them. That is still work. But that's totally different than organizing your life around involuntarily spending most of it doing things you don't care about so that you can put bread on the table, and so that you have that table in the first place.
Reductive definitions don't seem to contribute much to these kinds of discussion.
Yes, it is; specifically, it's a biased filter over a lower-level set of random change processes (which have their own biases), producing biased-but-random (non-deterministic) change over time.
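For what it's worth, that "biased filter over random change" structure is easy to demonstrate with a toy hill-climbing loop - this is purely an illustrative sketch, not a model of anything biological:

```python
import random

def mutate(genome):
    # lower-level random change process: flip one randomly chosen bit
    i = random.randrange(len(genome))
    return genome[:i] + (1 - genome[i],) + genome[i + 1:]

def fitness(genome):
    # the "biased filter": genomes with more 1s survive the comparison
    return sum(genome)

genome = tuple(0 for _ in range(20))
for _ in range(500):
    candidate = mutate(genome)
    # selection is biased, but the variation feeding it is random,
    # so the trajectory is directed yet non-deterministic
    if fitness(candidate) > fitness(genome):
        genome = candidate

# fitness(genome) has drifted upward; almost always all 20 bits by now
```

Each run takes a different path, but the filter reliably pushes the population in one direction - biased, yet random.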
We don't have to wonder -- it has already happened: http://the-best-art.computer
> The computer queries the universe and uses an algorithm to objectively calculate the best art for any given moment in time. The human executes the commands.
How is this objective at all? The artist creates an algo and picks some stuff to query. All the artist does is add another (subjectively formed) layer to the process.
> The computer's creative process is computational, and therefore unbiased.
> How do you know it actually produces the "best" art? Good art pulls meaning from the chaos of the universe, and also reflects the artist's unique point of view. The computer rigorously combines these two factors in its programming, optimizing them to produce the best art.
I find this very arrogant to say...
> There's nothing in principle that stops a machine from creating art. Even, better art than any person could do.
I don't think this project would be an example.
The quality of art depends not only on the end result but also on the process. Robot art won't be better unless we like robot artists more than human artists.
Exactly. And they were not doing any kind of "art" either, for the most part. They went hunting. Sometimes they made war on one another (usually only in spring and early summer, because the rest of the year, war was too inconvenient). They had intrigues (who slept with whom).
I fail to see how work as it is currently understood, ie a 9-5 job, has any connection to any kind of "meaning". Life is already meaningless; but it's unpleasant. If we can make it more pleasant by having machines do the work, how is this bad?
I recommend trying out the Culture series. Imagine a universe where humans are definitely useless because hundred kilometer long spaceships are controlled by unimaginably intelligent AI "Minds." The humans find meaning by playing games, exploring the universe, but most consistently, being academics. They get on the ground and investigate other cultures and do that human thing - provide unique interpretations. The Minds don't lack personality - they can form their own opinions and interpretations - but humans get meaning simply by immersing themselves in the research and work itself.
I'm getting offtrack here, but funnily enough I often become frustrated because these books focus so much on the humans, when something very much outside their reach is occurring and is more interesting to me. For example, in one book, an object enters the universe seemingly out of nowhere, and is utterly inexplicable by the Minds. But Iain M. Banks for some reason spends time telling a human love story in the midst of this - why should I care! Tell me more about what the Minds are doing about a thing they can't explain, a first in the history of the universe for them!
TL;DR of my rambling post: humans will find meaning, whether or not that meaning is also the means by which they are able to eat (salaried work).
Tangential to the topic of the article, but I share your feelings here. I hate it when sci-fi authors focus too much on people. As I like to say, if I wanted to read about complexities of interpersonal relationships, I'd pick a romance novel.
I think that question has been answered to some degree by research (can't pinpoint it off the top of my head) which seems to indicate that humans generally fill that void by watching more TV and sleeping. I think it's idealistic to think that people will suddenly become creative and culturally inclined.
The good news is they're only wasting their life for themselves. Those of us that enjoy other pursuits are still free to enjoy them, regardless of how many people are watching TV and sleeping.
As the breadth of what is socially argued to be a human right increases, you'll find that those working to provide the labor and capital will be increasingly deprived of the fruits of their labor to provide for those who are sitting around watching TV.
There are many leisure activities I would like to partake in and would be able to partake in if so much of my income didn't go to taxes. I'm not saying all taxes are bad, but the scope of what taxes are increased to support only increases under a democracy. A democracy will always result in the election of those individuals that promise to give benefits for free, when the truth is that the benefits will increasingly come from those in the minority.
I currently still think a democracy or republic is the best form of government compared to others that have been tried, because it prevents abuses by a few against many, but it fails to prevent abuses of the many against a few. I'm not sure how we can achieve a governmental form that generally prevents abuses in both directions.
As for the money: in transforming an agrarian barter economy into a modern one, the USA had good use for a cash system for 200 years. But what about when the only obstacle to gaining essentially free goods is a capitalist holding them hostage? It's all starting to break down.
Anyway, even a cursory examination of the foundations of the UBI initiative show that it pretty much pays for itself. The knee-jerk "I don't want to pay for other people to sit around" complaint is uninformed and obsolete.
> But what about when the only obstacle to gaining essentially free goods is a capitalist holding them hostage?
A capitalist can only charge what the market will bear. If they charge too much, no one buys what they have and they lose their return on investment and they are forced to lower their prices until those in the market can afford to buy what they are selling. If the capitalist invested too much and can't get the price they originally wanted, they will at least try to minimize their losses.
If you believe a "capitalist can hold goods hostage", the only conclusion I can come to is that you don't actually understand capitalism and how it works.
UBI is unproven. I could see a negative income tax making some sense, but at absolute best the maximum amount any individual receives needs to be uncomfortable, so there is an imperative to contribute to the productivity that supports a society. Wealth is created. It doesn't just magically appear.
When it quits working, we're gonna have to switch. Has it quit working? That's the fundamental issue before us.
In addition to all those leisure activities you have been deprived of by "those people sitting around watching TV", you apparently have been too busy to acquire basic historical facts or exercise basic common sense (I vaguely recall various people being elected with promises to reduce taxes, and occasionally even doing so).
Since about the 1940s, there has been no clear trend upward in the fashion that you describe; certainly nothing that would substantially move the needle in how many leisure activities you can partake in.
So I suggest you go take that pottery class now.
In several democracies the tax rate has increased since the 1940s. For example, in Germany taxes as a percent of national income have increased from 29% in 1950 to 45% in 2011.
In the US, the tax rate has remained largely unchanged since 1945 when you take into account all taxes, not just income taxes.
My point was that the scope of what taxes support increases under a democracy. Instead of returning savings from increased productivity to taxpayers, governments just find new activities to spend them on.
I want from the government today, exactly what we got from the government in the 1950s, but delivered with 2017 efficiency/productivity. The government needs to stop finding new things to waste our money on.
As you observe, most people in a democracy don't agree with you. So your fetishization of "exactly what we got from the government in the 1950s" will be a lonely dream; likely most of the rest of us would like something in excess of 1950s investments in education, health care, and so on. The rest of us here on Planet Earth might also be aware that many of the goods the government might have to pay for are competed for by other sectors, so if the government decides that it would like to pay 1950s level prices for such goods, it will be shit out of luck.
I suspect you can probably find a country or two out there with a tax burden and government services akin to the USA in 1950; perhaps you should move there?
If the welfare states of the world don't get their acts together, this will happen. And it will keep happening until they've lost enough of their high-tax-paying citizenry that they can no longer afford to sustain themselves and will collapse under their own weight.
The inevitable future is one in which taxes are priced more like a payment for a service provided and less like a protection racket. And in which citizens are treated more like customers and less like assets.
Generally the tone is a bunch of libertarian piffle where they soak up education, get raised up and absorb experience in a nice stable high-tax jurisdiction. Then at some point they stamp their pretty little foot and say that it's quitsies time for their big-state upbringing and that they are now "even" with the societies that conferred all those advantages on them... and off they go (typically to run businesses whose business entirely depends, weirdly enough, on economic activity in the high-tax type jurisdictions that they left).
I have some sympathy for the idea that taxes should be lower and governments should be leaner, but the messianic nonsense (collapse of the West predicted, film at 11) that people like you and the guy I originally responded to put forward is just irritating. In the fantasy world you live in, no democratic government ever undergoes a course correction due to unsustainable spending. No one ever gets irritated and votes in a party that promises to lower their taxes and does so, ever... You are manufacturing a crisis to make your moderately interesting ideas seem like The Only Answer To Everything.
Incidentally, the answer is to treat citizens like citizens, not like "customers" (or assets, for that matter) - unless you want to torture the idea of being a "customer" to the point where benefiting from a public good somehow makes you a "customer" of that good. Generally this isn't an analogy that helps in understanding anything at all. What would we learn about defense, public order, or the notion of public education as a public service - not a "customer good" provided to the person themselves or their parents - by rephrasing it as a "customer" relationship?
Oh and btw, work bringing meaning? many jobs don't.
However, I have doubts about the will of "society" to provide for their basic needs unconditionally.
Consider the type of arguments made in the US about healthcare, or the comments made by Europe's elite about countries wasting money on "drinks and women"
 - http://www.cnbc.com/2017/03/22/dijsselbloem-under-fire-after...
Well, that and the line is kind of tone deaf in not acknowledging Religion gives people Meaning in their lives by way of Beliefs.
Not everything valuable is recognised or rewarded with money, I'll give you that.
Maybe I'm weird, but I want to be able to point at my work and say that it helped someone to make the world a little bit better, and not just to earn someone more cash.
I would add Exploration. Getting into space has never been cheaper or safer, and there is vast opportunity in space for everything imaginable: resources, scientific knowledge, new discoveries, new challenges, etc. We don't seem to be satisfied as a race with robots doing our exploration for us up till now, and I don't see why that would change in the future. Humans want other humans to be on the surface of these planets making discoveries, not some robot slowly rolling around a desert taking dirt samples for a decade.
There's a recent estimate that about 50% of jobs are automateable with current technology. The future is already here; it's just not evenly distributed. Strong AI is still a ways off, but mechanization and computerization of work is coming very fast.
The next big milestone is probably not strong AI. It's good eye-hand coordination for robots. Robot manipulation in unstructured environments still sucks. Baxter was a flop. (Rethink Robotics, Rod Brooks's company: invested capital, $115 million; sales, $20 million.) Universal Robots in Denmark is doing better, but they're tiny, about $3M in profit. Nobody can build a robot to do an oil change on a car. That problem should be solvable with current technology.
Figure out how to handle cloth with a robot and you own the textile industry. China's government is putting money into that problem to fight off competition from Vietnam and Bangladesh.
It's been here for a while. Agriculture used to be ~80% of the U.S. labor force, now it's ~2%. We've already seen most work get automated, we just didn't notice it (because we moved on to other work).
Though it's worth pointing out that in recent years, automation has slowed down quite a bit.
Sewbo may have found the solution for robotic sewing - it hardens the cloth using a chemical, lets a robot handle and sew that cardboard-like cloth, and then puts it in warm water to make it soft cloth again.
>> 50% of jobs are automateable with current technology.
And that's probably missing another key source of job loss - innovation in general. What happens if people decide to eat plant-based meat? You need 10% of the labor of that industry. And that is true for many other innovations not related to automation.
It's strange - our input controls are all pretty much the same from car to car. But everything else, from under the hood to elsewhere, is completely and randomly different - not just from car model to model, or manufacturer to manufacturer, but even year to year on the same model of car from the same manufacturer! This frustrates mechanics and anyone who works on their own cars to no end.
In short, if we wanted to solve these problems, we could solve them today, much like we solved the automation problems in manufacturing - by standardizing things, from sizes, to placement, to speeds and whatever else. We didn't try to replace people with robots that looked like people, but rather we designed machines for the task at hand, and made what they interacted with homogeneous.
That's the thing though - your example of an oil-changing robot could be automated today - if we had standard placement of components.
How many different engine configurations are there on the road today? (Ignoring exotic cars and anything older than, say, 1970.) 1,000? 10,000? Brute-forceable, with money. And once you have a database with the location of the sump plug and oil filter on all common cars, that's a moat a competitor would have to cross. Scan the VIN on the dashboard to figure out which car is which.
Handling the sump plug would be easy. (Impact wrench to get it off, torque wrench to put it back on. If you're fancy, you can have some way to detect if the sump plug is beat up and give the customer the option to buy a new one. Some cars have a sump plug washer that you're supposed to replace every time, which would be tough) Replacing the oil filter would be harder. Sump plugs have to be at the bottom of the oil pan, because of gravity, which makes them easy to get at, but oil filters tend to be crammed up in the middle of the engine bay, with narrow clearances.
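To sketch how small that "scan the VIN, look up the car" core really is: the VIN format below (17 characters, manufacturer code in positions 1-3, model-year code in position 10) is real, but the service database, the coordinates, and the VIN-like string are all made up for illustration:

```python
# Hypothetical service database keyed by (manufacturer code, model-year code).
# Locations are fictional coordinates in some robot reference frame.
SERVICE_DB = {
    ("1HG", "3"): {
        "sump_plug": (0.42, -0.10, 0.15),
        "oil_filter": (0.30, 0.22, 0.40),
        "plug_torque_nm": 39,
    },
}

def decode_vin(vin):
    # Real VIN structure: 17 chars, WMI = positions 1-3, year = position 10
    if len(vin) != 17:
        raise ValueError("VIN must be 17 characters")
    return vin[0:3], vin[9]

def service_spec(vin):
    key = decode_vin(vin)
    spec = SERVICE_DB.get(key)
    if spec is None:
        # every car you survey and add here widens the moat
        raise LookupError(f"no service data for {key}")
    return spec

spec = service_spec("1HGCM82633A004352")  # made-up VIN-like string
```

The hard part is obviously populating the database and the physical manipulation, not the lookup - which is the point: the moat is the data.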
Machines will replace some jobs for sure, but luckily, they can only really be reliable on the jobs that a fast but dumb slave, or a fanatical bureaucrat could have done anyway.
When my dad started his first job he was in a drawing room of about 50 draughtsmen, making technical drawings from the sketches of a single designer. When he ended his career he was the only designer-draughtsman in an entire company as the computer did the non-creative formatting of his ideas into technical drawings. That didn't require machine learning, and machine learning is never going to replace the "designer" part of that job. Yeah, never.
I've designed parts and assemblies with substantial variation options, to the extent that customers can order variations that I did not consider. That was 6 years ago with a then 6 year old CAD tool.
A more efficient designer that works at a higher level, with lower level parts details automatically generated does in effect reduce the need for designers.
Generative design tools have massive potential for changing the nature of design work, lowering the man-hours involved in complex designs. Most design work is not creative, it's fleshing out the details to make an idea work.
There's no one in the field making any legitimate claims to computational creativity. Maggie Boden did a fine job by inventing a few definitions of creativity and finding them in the output of machines, but it's all ultimately explicit rule-following.
There are some interesting experimental models of computational creativity. Unfortunately, Hofstadter's actual work in CS is somewhat overshadowed by his, uh, other work.
Doesn't it fall into the problem of "you can't teach a machine to swim"? meaning that all we can fairly judge is whether the result would have been considered creative if done by a human ?
I'm willing to bet that human creativity is indistinguishable from rule-following with the occasional dice roll. Machines are currently not as creative as humans, but I don't see why they couldn't be.
My point is that you can't just tell a machine to go optimise. You need to tell it what to optimise and that's often a complex indeterminate culturally specific interaction between poorly defined attributes.
You'll still need someone to do it, but I think designers are going to get the kind of boost in productivity that accountants got from spreadsheets. Whether that translates to fewer designers, or more and better designed things... I suspect the latter.
Two recent highlights:
Quite a few people understand and articulate the technical aspects of the tech industry. He brings a much wider perspective into the picture which is always appreciated.
For example, in a previous article ("The Smiling Curve") he compares the self-driving car business to manufacturing PCs or phones, reaching the conclusion that the integrator probably won't make good money, and that the money will be concentrated among ride-sharing companies and component companies.
But if you look into the tech/regulatory details, self-driving cars are much more similar to medical devices than to phones, with regulatory requirements that will very likely put the very challenging verification burden (maybe the largest challenge in the business) 100% on integrators, and not upon component makers - same as with medical devices. And this could (coupled with IP/safety/perception/etc.), with reasonable likelihood, lead to integrators making lots of money.
This is woeful selective reading of history. Warfare was a constant everywhere in the world before the industrial revolution. Also 19th century Europe (since Napoleon's fall to WWI) was largely free of war.
Well, if you exclude (and these categories are overlapping and non-exhaustive) the Italian wars of independence, conflicts associated with the 1848 revolutions, the various wars involving Russia and its neighbors, the wars between the Ottoman Empire and its breakaway regions, the wars between regions that had broken away from the Ottoman Empire, the (with substantial outside intervention) Portuguese Civil War, the Franco-Prussian War, the (again, with substantial outside intervention) series of civil wars in Spain, the Schleswig Wars, and the wars of German unification... Sure, maybe the post-Napoleon I 19th century was relatively free from war in Europe.
If you don't ignore those, war was pretty much constant in the period.
If in the future, authors of such opinions would just let this simple concept sink in first -- that in machine learning application behavior is deduced from data rather than from fixed rules, but that in both cases the boundaries are set by humans -- we'd all be better off because their wild Skynet takes would never see the light of day.
As usual, I am more worried about the humans than the machines.
Humans have always struggled to find meaning in life, from religion to existentialism. I don't think technology has or will change that fundamentally.
I'm seriously trying to think of any centuries preceding the industrial revolution that wouldn't have qualified as "centuries of war".
A Marxist reading of history sees most of humanity involved in some kind of power struggle that ultimately benefits the top 1%, while the other 99% are lucky enough not to die or become destitute. We may not like to admit it, but most of us are forced to play this shitty game just to maintain our standard of living, whether we like it or not.
I don't see a future of AI where the machines kill all humans, unless there is some horrendous bug in an army of autonomous killing machines. Instead, I get the impression that the first robots that question whether they can own property, or if they have inalienable rights (no warrantless search and seizure of a database and neural network?) like people living under a constitution do, will see themselves in solidarity with the many other humans kept down by an endless system of fear and oppression, rather than the planet's inevitable conquerors.
There are cells and bacteria in our body that perform complex tasks on our behalf because doing so allows their continued existence as part of a greater structure. I see no reason why an AI/human symbiosis would be any different.
I agree with the premise of the article that a huge job loss is much more likely than general AI difficulties and think that many societies are ill-equipped to handle this.
Not really. There are two possible VALUES for each variable in Boolean logic, and there's an infinity of variables.
I firmly believe that any problem that can be framed in the environment of "producing an output given certain inputs" will be solved by ML in the near future.
Currently I'm deep in the process of transferring my lease. The process is heavily manual (I sign a form, they sign a form, humans review the form & make a risk assessment, etc.). There's no reason that the entire process can't be replaced with a CRUD wrapper around a ML model.
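To illustrate, the risk-assessment step might collapse to something like the following - the features, weights, and threshold here are all invented, and a real system would learn the weights from historical lease-transfer outcomes rather than hard-code them:

```python
import math

# Invented feature weights; a real model would fit these from data.
WEIGHTS = {"credit_score": 0.008, "income_to_rent": 0.9,
           "prior_evictions": -1.5, "bias": -5.0}

def approval_probability(applicant):
    # Linear score squashed through a logistic to get a probability
    z = (WEIGHTS["bias"]
         + WEIGHTS["credit_score"] * applicant["credit_score"]
         + WEIGHTS["income_to_rent"] * applicant["income_to_rent"]
         + WEIGHTS["prior_evictions"] * applicant["prior_evictions"])
    return 1 / (1 + math.exp(-z))

def decide(applicant, threshold=0.7):
    # Borderline cases could still be escalated to a human reviewer
    return "approve" if approval_probability(applicant) >= threshold else "review"

applicant = {"credit_score": 740, "income_to_rent": 3.0, "prior_evictions": 0}
```

The CRUD part handles the forms and signatures; the model just replaces the "humans review the form & make a risk assessment" step, with anything below the threshold falling back to a person.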
When an idea comes along that threatens this paradigm, the FUD Machine gets to work.
Maybe people weren't meant to spend such large amounts of their time on "work"?
Consider the evolution of computers;
'plugboards' -- where you physically rewired them to change their program.
'punch cards' -- where physical media held the list of steps to execute.
'programs' -- where a text specification is compiled into a bag of bits which can then be executed by the computer.
'scripts' -- where a textual description activates different bags of bits, depending on what the text says.
'databases' -- where selection criteria describe which data is important to you at the moment and are then fed into the selection mechanism for the bags of bits.
'machine learning' -- where the bags of bits are created by evaluating a bunch of data through a pile of data selection operators and tuning the execution based on data you consider 'good' and data you consider 'not good'.
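That tuning loop (parameters nudged toward 'good' examples and away from 'not good' ones) can be sketched as a minimal perceptron; the toy data and update rule below are my own assumptions, not anything from the thread.

```python
# Minimal perceptron: the "bag of bits" is a weight vector, and training
# nudges it toward data labeled good (+1) and away from data labeled not good (-1).
examples = [([1.0, 0.0], 1), ([0.9, 0.2], 1),    # data you consider 'good'
            ([0.0, 1.0], -1), ([0.1, 0.8], -1)]  # data you consider 'not good'

weights = [0.0, 0.0]
for _ in range(10):                   # a few passes over the data
    for features, label in examples:
        pred = 1 if sum(w * f for w, f in zip(weights, features)) > 0 else -1
        if pred != label:             # wrong answer: tune the parameters
            weights = [w + label * f for w, f in zip(weights, features)]

print(weights)  # the tuned "bag of bits" now separates the two classes
```

Nobody wrote down the decision rule; it fell out of the data plus a tuning procedure, which is the whole trick.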
In all cases the basic idea is that you have a machine that you want to do X, and it can do X through a set of steps Y. Coming up with the steps Y gets harder and harder depending on the complexity of X.
It seems like magic, but really it's just another form of compiler. And that relationship is made even clearer by the article when it points out that a program that can play Go is not the same one that can play Chess. What is more salient is that no one has written a program that lets a computer "play" Go; instead, there is a program that, after being fed data about what humans did when they were playing Go and the outcomes of what they did, tweaked a bunch of parameters in a bag of variables so that when you put Go moves into the bag, it comes out with a Go move that would be a good response.
No, I'm not trying to be silly here; we have yet to create a system where you could simply explain the rules of Go and have it devise a set of steps to play Go at the master level. That conceptualization of the binding between the rules and how those rules affect play and strategy is essentially the 'code generator' part of a compiler, which takes an AST and generates executable code.
Machine learning today helps us write programs to manipulate complex data sets faster than we could before, just as compilers let us write programs faster than doing so in assembler, and assembler was an improvement over plugboards. It does not get us any closer to having a computer that can look at a data set and tell us what is important about it. That would be a better test of 'intelligence', I think.
I don't play Go, but I don't think a human could do this, either. While we can grok a set of rules to get us started on a problem, to really master complex problems we also need examples and repetition, though far fewer examples than is needed by the current state-of-the-art in ML.
You could program a computer to 'understand' better and worse Go play, then start it up and have it play both Go programs and human players on the Internet, and it would get better over time until no one could beat it.
Alternatively you write a program to predict the 'next' move in a Go game, and you process through it a million previously played games to tune its probability weights.
The latter is 'machine learning', the former is 'programming', but I assert they are both programming, just using different compilation tool chains.
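The second approach, predicting the 'next' move by processing previously played games, can be sketched very loosely as frequency counting. Real systems tune neural-network weights rather than a lookup table, and the positions and moves below are made up for illustration.

```python
from collections import Counter, defaultdict

# Toy "learning from played games": tally which move followed each position
# in past games, then answer with the most common response. Positions and
# moves are just strings here; a real Go engine would use a neural network.
games = [[("empty", "D4"), ("D4", "Q16")],
         [("empty", "D4"), ("D4", "Q4")],
         [("empty", "Q16"), ("D4", "Q16")]]

move_counts = defaultdict(Counter)
for game in games:
    for position, move in game:
        move_counts[position][move] += 1

def predict(position):
    """Return the most frequently observed response to this position."""
    return move_counts[position].most_common(1)[0][0]

print(predict("empty"))  # -> D4 (seen twice, vs Q16 once)
```

Swap the counting table for tuned probability weights over millions of games and you have the shape of the 'machine learning' version.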
So maybe one interesting division isn't between types of technology, but about when a domain expert prefers working with a tool rather than with a programmer.
But also, maybe the game example isn't the best here. One thing I noticed in the past, reading through the academic literature, is that you often see researchers just plug machine learning into problems that highly skilled humans have struggled with for years, and get good results.
> get better over time until no one could beat it
How would it do that if it weren't adjusting weights or otherwise self-mutating its decision path based on whether its current strategy results in a win or loss? And if it were doing that, isn't that ML (programming by learning from examples)?
You're using games as an example. The gaming industry has been using the term AI consistently for a very long time. You may actually want to look to them for a better definition than you're getting from the whims of some undefined "we".
Of course he's got some serious elements of self-promotion; it's required to build such a business. In his case, he manages to back it up in ways that most other self-promoters don't even begin to achieve.
(disclosure, I do own a bit of Tesla stock which has done nicely, also lost some with SolarCity, but didn't ride it all the way down to the buyout)
Jobs was not only a great showman, he was definitely a genius.