RT is state-supported, but it is only one of a number of sources that all reported essentially the same thing. Guccifer, who was an unemployed taxi driver when arrested, was the one whose hacking led to the disclosure of HRC's server, and he claims to have hacked it "like, twice."
^^^ There. Mainstream, American-as-apple-pie sources.
Some people voted for him for those reasons; presumably others voted for his opponent(s?).
Sometimes I just feel like the punchline to all of this will be something like:
"it's the patriarchy, stupid"
This doesn't mean he doesn't know what he is talking about, as you seem to be implying (I don't want to read too much into your comment).
Leaders don't always have to understand the tactical details that are required for executing their strategy. Obama led as a technologist from day one.
The recent shenanigans with Russia are incredibly telling. Perhaps you remember in 2012 how Obama and his surrogates viciously mocked Romney for claiming that Russia was the biggest geopolitical threat faced by the US. Obama was caught on tape telling the Russians he could be "more flexible after the election". They took his flexibility and marched right into Ukraine with it. They took Obama's cancellation of a European missile defense shield, and reciprocated by positioning nukes on the border. They saw Obama's incoherent Middle East strategy, and took the Iranian-Syrian-Russian bloc to new heights. Now, they're apparently hacking our election infrastructure and have released incredibly embarrassing emails about the corrupt machinery of Obama's chosen successor. The humiliation is total. Obama owes Mitt Romney an apology.
There's another recent President who faced a hostile Congress: Bill Clinton. He was able to achieve a lot by being willing to compromise.
On the other hand, had he won, Obama would not have become president because we would not have had to "react" to the Bush years.
"After this interview, Gore became the subject of controversy and ridicule when his statement "I took the initiative in creating the Internet" was widely quoted out of context. It was often misquoted by comedians and figures in American popular media who framed this statement as a claim that Gore believed he had personally invented the Internet. Gore's actual words, however, were widely reaffirmed by notable Internet pioneers, such as Vint Cerf and Bob Kahn, who stated, "No one in public life has been more intellectually engaged in helping to create the climate for a thriving Internet than the Vice President.""
I blame the education system for this kind of thinking. It is so heavily influenced by big government that many people have come to believe that demagogues actually have a role in the advancement of civilization.
These folks (demagogues) really should get real jobs. They are mostly good at the production of words instead of the production of goods and services, or, as Hans-Hermann Hoppe likes to say, "Government specializes in the production of 'bads.'" I'd like to recommend his book, Democracy: The God That Failed.
Or, in Gore's own words, "That's how it has worked in America. Government has supplied the initial flicker -- and individuals and companies have provided the creativity and innovation that kindled that spark into a blaze of progress and productivity that's the envy of the world."
That's not how it worked when America had the most growth. In the late 19th century, when there was less government intervention, America saw the most rapid rate of economic growth. I don't want to get into a debate about what led to the emergence of the Great Depression, but I strongly believe that it was due to government involvement in the economic affairs of America.
That aside, when you talk about funding, two assumptions emerge:
1. That government is the only way to get funding for revolutionary projects. It is not. Read up on IBM's and GE's research labs and you'll see what private research labs can accomplish.
2. That government has money. Government has no money. It only has money insofar as you and I allow it to take our hard-earned money in the form of taxes. You have to get out of the mindset that demagogues and bureaucrats actually do valuable work. They only get in the way of productive people in a bid to placate non-producers, thereby granting themselves power and undue privilege once in office.
As a side note:
The 20th century was the most murderous period humanity has gone through. Why? Governments comprised of demagogues made terrible decisions after convincing the demos that they could make the best decisions on their behalf.
At least during the aristocratic periods, it was well known that wars were the affairs of kings, and the common folk would not tolerate anyone who forced war upon them. Moreover, you couldn't be conscripted; the only people who went to war were the soldiers the king paid out of his own pocket. You may say that the king still taxed people, but at least it was just one parasite with very low taxes, as opposed to today, when it is many, many corrupt parasites with very high taxes.
Oh boy. Anti-government extremists like yourself love to grandstand about the merits of unfettered capitalism without understanding that government creates the conditions for market capitalism to exist.
There would be no property rights without the police, no legal contracts without the courts, no medium of exchange without the Treasury. Governments provide minimal standards of worker safety, public health, and public education - all of which are necessary for a productive workforce.
Additionally, government is one of the only entities that can correct negative externalities (i.e., side effects of business, the classic example being air pollution).
Your position is ignorant of both economics and history.
I would venture that the only extremism, so to speak, is in how government has grown in both size and ineffectiveness. When you combine those two features, things inevitably get worse.
> Oh boy. Anti-government extremists like yourself love to grandstand about the merits of unfettered capitalism without understanding that government creates the conditions for market capitalism to exist.
> There would be no property rights without the police, no legal contracts without the courts, no medium of exchange without the Treasury. Governments provide minimal standards of worker safety, public health, and public education - all of which are necessary for a productive workforce.
Just because monkeys can ride bikes doesn't mean that only monkeys can ride bikes. Due to government's inherent inefficiencies and its tendency to grow and encompass ever more aspects of life, two things happen: you lose your liberty, and it becomes very expensive to sustain government.
I really don't see how you can't see that government is bad for you and that there's always a better way. I don't like the fact that, as the human race, we've resigned ourselves to thinking we can innovate/disrupt most other things except governance. When I hear statements like "democracy is the worst form of government except for all the others," I cringe. Here's an idea: how about less government, or, where possible, none at all. These demagogues and bureaucrats really aren't as important as you think they are.
Let John Galt be. Let the markets be. Obama and the rest of them have no place dictating how innovation and businesses should be run.
> In the late 19th century, when there was less government intervention, America saw the most rapid rate of economic growth.
The late 19th century of American history featured a large number of one-time-only economic improvements and the wholesale pillaging of vast amounts of natural resources.
The completion of a transcontinental railroad (finished in 1869 with government sponsorship via the Pacific Railroad Acts), the Pennsylvania oil rush (beginning in 1859), the settling and harvesting of the West (1850-1900), and the implementation of manufacturing economies of scale on the back of the new rail system.
Additionally, unrestrained consolidation of competitors into cooperative trusts gave rise to monopolies that Theodore Roosevelt spent considerable political capital resolving in the early 20th century via lawsuits under the Sherman Antitrust Act (1890). See Northern Securities Co. v. United States (1904) and Standard Oil Co. of New Jersey v. United States (1911).
I think you just don't like that I'm right about this. If I'm making a valid argument, should it matter that I'm presenting it in a manner that is inconsistent with some form of political correctness?
> Additionally, unrestrained consolidation of competitors into cooperative trusts gave rise to monopolies that Theodore Roosevelt spent considerable political capital resolving in the early 20th century via lawsuits under the Sherman Antitrust Act (1890). See Northern Securities Co. v. United States (1904) and Standard Oil Co. of New Jersey v. United States (1911).
No one is saying that monopolies are good. In fact, one could argue that they aren't very capitalistic: capitalism requires voluntary exchange, and when you as a consumer have only one supplier to buy a basic necessity from, that looks like the very opposite of voluntary exchange.
As a side note, it is curious that when these facts demonizing the likes of Morgan come up, the fact that he single-handedly led the financing of bailouts of America during the economic crises of 1907 and 1893 never does. During the gold-reserve crisis that followed the Panic of 1893, then-President Cleveland borrowed $65 million in gold through a syndicate led by J.P. Morgan to support the gold standard, ending the panic.
When the economy does well:
Pro-government: government fueling/supporting industry into prosperity
Anti-government: markets making everyone richer, as expected
When the economy does poorly:
Pro-government: capitalist greed/exploitation leading to ruin
Anti-government: government regulation/strangulation leading to ruin
No positive or negative turn in the economy can be isolated to a single cause, so neither side will ever be convinced by the other's arguments.
A third point, which I think is often lost in these arguments, is that governments have interests beyond improving the financial standing of their citizens. A pure market capitalist probably wouldn't think subsidizing farming or shipbuilding a particularly good idea, but a government might be willing to accept some market inefficiency in exchange for food security, or for having an established shipbuilding industry in times of war. Similarly, high income inequality may cause social unrest; it's in the government's own interest to prevent this, so it may be willing to accept lower total national wealth in exchange for more evenly distributed wealth among its citizens, by imposing progressive taxes and creating welfare programs.
Everybody agrees that markets work; the main questions are: are they optimizing for the thing you want, and are there cultural/political externalities that the market doesn't care about but a government might?
What I do know is that he's suddenly talking a big game about public/private partnerships for Mars when he undermined the previous public program, and with very convenient timing, so that he won't be responsible for any of the tough decisions. It also seems like he's trying to steal some headlines away from SpaceX and Boeing and make sure that the public sector, which he idolizes so much, doesn't look impotent by comparison.
However, I guess props are due for the effort even if he doesn't personally understand all of it.
“ITO: I feel like this is the year that artificial intelligence becomes more than just a computer science problem. [...] the question is, how do we build societal values into AI?”
The point is that the subject is far broader than the CS department. If you think Obama doesn't have relevant expertise, your view of the subject is too narrow.
I think it's a stretch to say he has "no particular credentials" or understanding of technological trends.
Given the enormous amount of press, tweets, blog posts, conferences, degree programs, seminars and interviews popping up, it seems like there has to be something more than just hot air here. Still, the most outrageous predictions hinge on breakthroughs in unsupervised learning happening. Taking the pessimistic view on science, what if we don't get there?
Specialized AI (and I hate calling it AI) is coming along really quickly. We're getting better at it in existing fields and learning to apply it to new ones. More than anything, we just have so much data on everything now, and computers are so powerful, that even old-school models are finding tons of new applications.
Generalized AI is a different story. We are a few really major breakthroughs away. We aren't even 100% sure they are possible, much less understand how to achieve them. These aren't the normal slowly-chip-away-at-it breakthroughs; these are things we have no clue about. With something like that, who can really say how far off we are? It could be 5 years, it could be never.
On the other hand, once someone cracks that, a huge number of low-end jobs will be automated.
Deep learning is still mere perception. It doesn't handle memory or processing; it just transforms input into output, typically trained on Big Data - way bigger than statistically necessary, given the world we live in.
AGI requires super aggressive unsupervised learning in recurrent networks, likely with specialized subsystems for episodic and procedural memory, as well as systems that condense knowledge down to layers of the network that are closer to the inputs. At a minimum. And nobody is really working on any of that yet (or at least succeeding) because it's really damn hard.
That's why everyone in "AI" is rebranding as a deep learning expert, even though deep learning is really just 1980s algos on 2016 hardware - you gotta sex up feed-forward backprop or you don't get paid.
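(For the curious, the quip is easy to make concrete. Below is a toy sketch of exactly that 1980s recipe - a two-layer feed-forward net trained by backprop in plain numpy. The target, dimensions, and learning rate are all made up for illustration.)

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(64, 2))                   # toy inputs
    y = (X[:, :1] * X[:, 1:] > 0).astype(float)    # made-up XOR-like target
    W1 = rng.normal(0, 0.5, (2, 8))                # hidden layer weights
    W2 = rng.normal(0, 0.5, (8, 1))                # output layer weights

    for step in range(2000):
        h = np.tanh(X @ W1)                        # forward pass
        out = 1.0 / (1.0 + np.exp(-(h @ W2)))      # sigmoid output
        grad_logit = (out - y) / len(X)            # grad of mean cross-entropy
        grad_h = (grad_logit @ W2.T) * (1 - h**2)  # chain rule through tanh
        W2 -= 0.5 * (h.T @ grad_logit)             # plain gradient descent
        W1 -= 0.5 * (X.T @ grad_h)

Everything since is, in this view, scale and tricks layered on top of that loop.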
Edit: to be fair, robot control is much simpler than AGI, and might be mostly solved with deep learning somewhat soon, I forgot the context of your post.
There's more going on than convolutional neural nets. Architectures with memory and attention mechanisms do exist.
What really doesn't exist is any meaningful stab at unsupervised (or self-supervised) training on completely unstructured inputs or any sort of knowledge condensation/compression, at least for time dependent problems. These are of paramount importance to the way we think, and to what we can do.
There's a lot of trivially low hanging fruit, too - I still have yet to see even a grad school thesis that starts with an N+M node recurrent network and trains an N node subnetwork to match the outputs based on fuzzed ins, and then backs that out into an unsupervised learning rule that's applicable to multiple problems. Or better, a layered network that is recurrent but striated, that tries to push weights towards the lower layers while reproducing the same outputs (hell, even with a FF network this would be an interesting problem to solve if it was unsupervised). These are straightforward problems that would open up new avenues of research if good methods were found, but are mostly unexplored right now.
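(A rough sketch of the first experiment described above, just to make it concrete - this is ordinary distillation on random inputs, not a published method, and every size here is made up.)

    import torch
    import torch.nn as nn

    # "N+M node" teacher and "N node" student, both recurrent.
    teacher = nn.RNN(input_size=10, hidden_size=48, batch_first=True)
    t_head = nn.Linear(48, 10)
    student = nn.RNN(input_size=10, hidden_size=32, batch_first=True)
    s_head = nn.Linear(32, 10)
    opt = torch.optim.Adam(
        list(student.parameters()) + list(s_head.parameters()), lr=1e-3)

    for step in range(1000):
        x = torch.randn(4, 20, 10)          # "fuzzed ins": random sequences
        with torch.no_grad():
            t_out, _ = teacher(x)
            target = t_head(t_out)          # teacher outputs to match
        s_out, _ = student(x)
        loss = ((s_head(s_out) - target) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    # The open question in the comment: can you back a general, unsupervised
    # learning rule out of this, rather than matching one teacher at a time?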
I could be wrong, if I had real confidence that we were close I'd be working on this stuff, but I'm collecting a paycheck doing web dev instead...
Differentiable neural computers - https://deepmind.com/blog/differentiable-neural-computers/
> we introduce a form of memory-augmented neural network called a differentiable neural computer, and show that it can learn to use its memory to answer questions about complex, structured data
So it seems that deep neural nets can have memory mechanisms and be trained to solve symbolic operations.
A few weeks ago there was a paper posted on "Synthetic Gradients," which should make it much more practical to train RNNs for games. Before, training required saving every single computation the computer makes to memory, which uses a huge amount of memory and computation. Using synthetic gradients, they need only store a few steps of history. And it can learn online.
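(Here's roughly what that bookkeeping looks like in a PyTorch-style sketch - not the paper's code, just an illustration with made-up sizes. A small "synthesizer" predicts the gradient at each truncation boundary, so only one short segment of history is ever stored.)

    import torch
    import torch.nn as nn

    T = 5                                    # truncation length (stored steps)
    rnn, head = nn.RNNCell(8, 16), nn.Linear(16, 4)
    synth = nn.Linear(16, 16)                # predicts dLoss_future/dh
    opt = torch.optim.SGD(
        list(rnn.parameters()) + list(head.parameters()), lr=0.01)
    opt_s = torch.optim.SGD(synth.parameters(), lr=0.01)

    h = torch.zeros(1, 16)
    for segment in range(50):
        h0 = h.detach().requires_grad_(True)     # boundary into this segment
        h, loss = h0, 0.0
        for t in range(T):
            x, y = torch.randn(1, 8), torch.randn(1, 4)  # stand-in stream
            h = rnn(x, h)
            loss = loss + ((head(h) - y) ** 2).mean()
        # Inject the synthesizer's gradient estimate at the last state, so
        # credit from the unseen future flows in without storing it.
        loss = loss + (synth(h).detach() * h).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Train the synthesizer: its prediction at the previous boundary
        # should match the gradient that actually arrived there.
        target = h0.grad.detach()
        opt_s.zero_grad()
        ((synth(h0.detach()) - target) ** 2).mean().backward()
        opt_s.step()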
And it has some problems with Pac-Man, as the system can't plan: https://www.technologyreview.com/s/535446/googles-ai-masters...
Maybe those should be the other way around?
AI problems can be characterised as those where there's no clear path to a solution (otherwise we just call it "programming"); tackling them necessarily involves trial-and-error, backtracking, etc.
Since there are far too many possibilities to enumerate, solving such problems requires reasoning about the domain, e.g. finding representations which are smooth enough to allow gradient descent (or even exact derivatives); finding general patterns which will apply to unseen data; finding rules which facilitate long chains of deduction; etc.
The difficulty is that there's usually a tradeoff between the capability/expressiveness of a system and how much it can be reasoned about. If we choose a domain powerful enough to represent "the field of specialised AI generation," for example Turing machines or neural networks, methods like deduction, pattern-finding, gradient following, etc. get less and less applicable, and we end up relying more on brute force.
To me, this is where the AI breakthroughs are lurking. For example, discovering a representation for arbitrary programs which allows a meaningful form of gradient descent to be used, without degenerating into million-dimensional white noise; or to take deductive knowledge regarding one program and cheaply "patch" it to apply to another; and so on.
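(A toy contrast of those two regimes, purely illustrative with a made-up objective: the same discrete problem attacked by brute-force enumeration versus gradient descent on a smooth relaxation of it.)

    import itertools
    import numpy as np

    w = np.array([0.3, -1.2, 0.7, 2.1, -0.5])

    def f(bits):                         # discrete objective over {0,1}^5
        return float(((w * np.asarray(bits)).sum() - 1.0) ** 2)

    # Regime 1: no structure exploited; 2^n evaluations.
    best = min(itertools.product([0, 1], repeat=5), key=f)

    # Regime 2: relax to [0,1]^5, where an exact derivative is available.
    x = np.full(5, 0.5)
    for _ in range(200):
        grad = 2.0 * ((w * x).sum() - 1.0) * w   # derivative of relaxed f
        x = np.clip(x - 0.1 * grad, 0.0, 1.0)
    rounded = tuple((x > 0.5).astype(int))
    print(best, f(best), rounded, f(rounded))

The hard part, per the comment above, is finding relaxations like this for spaces as expressive as programs without the gradient turning into noise.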
There was a great article recently on HN that highlights the current problems:
Just because we may acquire the processing power estimated to be used in the brain (in operations per second) doesn't mean we know how to write the software to accomplish the task. It is very clear current algorithms won't cut it.
Also, I think we are a few orders of magnitude off on raw processing requirements because I think it is a bandwidth issue as much as an operations per second issue.
TL;DR - you could throw as much processing power and data as you want at any current deep NN or their derivatives and you wouldn't get general intelligence.
That said I don't think the winter will be as bad as before because, like OP says, specialized AI is useful.
I think a lot of the early AI research (not my specialty) had the idea that if we made a bunch of systems that were each good at their own piece of the puzzle, then we could just tack them together and get real intelligence. It just didn't turn out that way. Something I'm more familiar with is graphical models, and while they in principle could do amazing things when you stick little expert components together, we've proved the complexity grows pretty badly in exactly the most general cases that would have been really amazing. I'd bet similar things happened in other "let's put a bunch of specialized systems together" tracks. Maybe we can do it, but not the naive way that would have been great.
Then you can get interesting and philosophical about it, where you might even say that emulating intelligence and intelligence are different things. Like the Chinese room argument, or a character in a story vs. a physical person. I'd rather not weigh in on that right now, but there are good, interesting arguments both ways.
This would be a very surprising result. For example, if I can make a TSP-solver-emulator... I have a TSP solver.
Specialized AI - thinking machines. General AI - machines that think about thinking machines.
We can do the former; we have no clue how to do the latter.
This theory compactly explains why no one knows how to do general AI.
I don't believe there is a special sauce waiting to be discovered.
Many humans go their whole lives without doing that, so I'm aware it's a high bar. But it's a bar that some humans do pass, and if AI is to be more than just a helpful gimmick, it'll have to do that as well, since I'd like to believe all humans have that potential, even if not always realized.
(Obviously a helpful gimmick still does have value.)
Looking at the brain, it often does look like a bunch of interconnected specialized neural networks.
And computers are fast now. And RAM is infinite. And GPUs are fast and plentiful. And all of this is cheap.
AI stuff is already more real than most of us realize on a day-to-day basis. While machines might not be "intelligent" per se, their cleverness has definitely started impacting desk jobs.
The desk job stuff is what most of those alarmist news items are worried about. Nobody seems to worry [too much] about reducing blue collar jobs through automation. Can you imagine how many people you'd need to unload a modern container ship without computers tracking stuff and optimizing storage? Or how much work it would take to harvest modern crops where automated harvesters are used?
Imagine a human who lacks empathy entirely. That's a disability. They may be able to do some amount of destruction, but at some scale they simply lack the social intelligence necessary to compete with the entire species.
This is the most common mistake I think people make when reasoning about AI: they think human limitations are weaknesses. But they're not weaknesses they're tradeoffs. Natural selection has had a chance to reward all kinds of variations, including more cortex, and less empathy. But we ended up where we are because of tradeoffs.
Any AI which is intelligent in the same way humans are will also have our limitations. Any AI which doesn't have our limitations won't be as smart as us in those respects.
You have to really ask yourself what the difference is between a human with an AI simulator and an AI with a human simulator. In the limit of simulator quality there is none.
So long as AI remains tasked with categorizing human-taken photos or playing human-created games (no matter how "complex"), AI will remain just that -- artificial -- and not "real".
Given the rate of hype that you point out, I suspect a winter of some kind will hit before enough folks realize this.
By the way, the above requirement of real world data probably even applies to building chatbots. (A successful chatbot will need some understanding of the world it is talking about; this is what researchers mean when they say "grounding" is important for NLP).
I suspect that you'll encounter limits to the techniques before they do everything a person can do.
If there's a potential problem with machine learning that could sink the enthusiasm over time, I suspect it would come because machine learning applications are black-box systems, the product of training on huge datasets with the very finely tuned expertise of their creators (a common joke calls this the "graduate student descent" algorithm: get enough grad students to tune your app till it works). The deployers of these applications may find, when they have to retrain them in a year's time, that the geniuses have moved on to other things, or that the geniuses now charge rates that look excessive for an application that works for just a year.
But that's just spinning possibilities. Currently things seem to be going great.
The other idea I have toyed with is that perhaps universes are the manufacturing tool some parent universe uses to create super AI.
> Taking the pessimistic view on science, what if we don't get there?
I have no interest in living forever, but I really, really wish I could be told what will happen, or what did happen, or what all is. I'm sure specialized intelligence will continue to improve, but my gut says general intelligence is probably not in our lifetime (or is limited or capped, for the above-mentioned, probably-wrong pop-culture reasons).
I ask because I wonder, if we did meet or make a creature which could learn everything, whether it wouldn't say some variation on "99.9999% of everything there is to know is right there in your ecology and visible to the naked eye. Go look."
I also wonder if one of the first things the sentient AIs will teach us is that yes, we are committing an egregious, ongoing informational holocaust through habitat destruction.
And perhaps our creators are long dead but the simulation keeps going...
That is the hot air.
For example, say you are driving along a road, passing a bicyclist. You'd like to give the cyclist more room, but there's an oncoming car in the lane next to you. How much room do you give? At exactly what threshold do you decide to wait to pass the cyclist? What if the cyclist is a kid?
All the time, the driver has to make decisions that trade off the safety of multiple parties, from the car's occupants to other drivers, to bicyclists and pedestrians. In reality, these will almost always be statistical tradeoffs, and usually comparing very small probabilities of accidents, but they are still real ethical decisions that have to be made.
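(To make "statistical tradeoffs" concrete, here's the kind of expected-harm comparison a planner might encode - every number below is invented for illustration, including the extra weight on a child's safety, which is itself an ethical choice someone has to make.)

    # Expected harm = probability of an accident x severity weight.
    def expected_harm(p_accident, severity):
        return p_accident * severity

    close_pass_now = expected_harm(1e-5, 100.0)   # squeeze past the cyclist
    wait_then_pass = expected_harm(1e-7, 100.0)   # brake, pass when clear
    rear_end_risk = expected_harm(2e-6, 30.0)     # braking has its own risk

    # A child cyclist might carry a higher severity weight:
    close_pass_child = expected_harm(1e-5, 500.0)

    print(close_pass_now, wait_then_pass + rear_end_risk, close_pass_child)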
The interesting case is in between.
If we find ourselves faced with "pulling the lever," everyone has already failed miserably.
Choosing who should be "unlucky" doesn't solve the problem. By assuming unavoidable failure you effectively give up solving the underlying problem. The solution is to avoid the failure in the first place.
Armchair scenarios do not help; we need actual research in simulators under high-stress conditions.
Did you miss the obvious problem with your statement?
Suppose if you don't swerve, you hit 30 school children, and if you do swerve, you hit one terminally ill child molester.
...That's why you don't get to abstain. You have to choose. That's the definition of the trolley problem.
As long as the computer makes a decision where we can say "that was very reasonable, even if better decisions existed" then it seems ok to me.
What the two possible next presidents think is more interesting simply because they may affect policy, regardless of whether they are more or less insightful than B.O.
No doubt advances in technology have always done away with jobs. We're almost at the point where the biggest blue-collar occupation (truck driving) is wiped out by self-driving trucks. What I'm concerned about is the government stifling innovation such as driverless trucks to retain those jobs, or some sort of regulation that stifles the technology's potential. What is the alternative?
The alternatives are that tech companies start to pay their taxes and humans start to use tech to care for one another, genuinely - not just wealthy Silicon Valley types trying to make buckets of money because they're "entitled" to do so, pretending it's all in the name of progress.
The ideas of the Internet and automation were exciting to me because they were about liberation and decentralisation; from where I'm sitting it's turning into a bit of a joke - it's just empowering those who own the tech, and they're not giving a whole lot back right now.
Not attacking you personally, but this attitude of "don't worry, those people who will be made redundant will be fine" is a myth; they will suffer, and so will their families.
Automating textile production has "put more people out of work" than computers ever will. Do you think for a second we would ever want to go back to how it was? Do you ask the inventor of the sewing machine how they will start giving back?
These inventions free humanity for more worthwhile endeavors. Nobody will look back after the next great transition and wish we just had more humans driving trucks around the country. It would be utter lunacy. And the people who created the self-driving machines will be seen as liberators of human potential and ushers of a new era of productivity.
So it is, same as it ever was.
But still, the industrial revolution didn't really replace laborers. The machines were still quite limited, and humans were still needed to do the jobs machines couldn't. What's different this time is that soon the machines will be able to do everything humans can do. Or at least everything an unskilled worker can do. Operating a machine in a factory, driving a truck, entering data into a computer, making phone calls - these are all things machine learning is capable of.
Lastly, look at horses. The invention of trains would seem to have competed with them and taken many of their jobs. But instead the horse population vastly increased, because trains couldn't do everything horses could do. Then cars were invented, and the horse population crashed over just a decade.
Why did this happen? Didn't the invention of the train prove that automation doesn't take horse jobs? Shouldn't there always be new jobs for horses? Can't horses specialize in the 1% of tasks that cars can't do, like transportation in places without roads?
But that didn't happen. The cost of just feeding the horses was much higher than the cost of buying an automobile. There were some obscure jobs for horses left, but nowhere near enough.
The industrial revolution dramatically increased both production and consumption. We now own different outfits for every day, dozens of special occasion fashions which must be regularly replaced and updated, etc. Total production, and total wages paid by the industry has dramatically improved.
A shirt a couple hundred years ago would have cost $4,500 in labor, valued at minimum wage, to produce. But no one paid the equivalent of $4,500 for a shirt. What actually happened is that a lot of the work was virtually or literally unpaid.
The machines drove a vast increase in productivity and GDP and provide a standard of living today which 200 years ago would have been bad science fiction. The machines drove down cost of production dramatically, increasing consumption and increasing overall employment and wages.
People didn't destroy the machines because they caused poverty, they destroyed them out of fear.
Your analogy with horses is deeply flawed. Horse use declined just like spinning wheel use declined - because they were obsolete.
The latest round of automation does not by any stretch of the imagination make humans obsolete. It will actually make humans more productive and actually more valuable.
As Steve Jobs said, computers are a bicycle for the mind. AI is a motorcycle. Get on, go faster, reach higher, achieve more, live better.
What does it matter if there are more clothes if there aren't any consumers to buy them? We are looking at a world where humans are obsolete, just like horses. There is nothing an unskilled worker can do that a machine can't, at least in the near future. And many skilled workers do jobs vulnerable to automation as well. The vast majority of the human population will be unnecessary, just as horses were after the invention of cars.
It is truly an insult to humanity to think that all these people doing menial jobs which could be automated are now obsolete - that they are somehow incapable of higher thought and reasoning and cannot add value beyond the machines.
In fact, while there is certainly a range of inherent potential between humans, my understanding is that the nominal human capacity for creative thought is orders of magnitude beyond obsolescence by any kind of "artificial intelligence" we expect to be able to create, at least within the next century.
No, certainly we have not yet created anything even remotely like the machine that will be our master.
The average human, as each generation of "average" humans before us has, will use technology to reach farther than you can imagine they would ever be able to reach.
It's also worth considering how much we tend to underestimate the intelligence of historical man from our lofty perch of technological superiority, just as we underestimate our own future potential.
Most humans aren't that special. Long before AlphaGo beat one of the best Go players in the world, simple Go programs could destroy the majority of players. Sure, AI probably won't be able to do computer programming for a long time. But the average person with an IQ of 100 is not going to retrain as a computer programmer. AI doesn't need to be as intelligent as the best humans; it just needs to be as intelligent as the average person. Probably much less than that, because the average job is boring, repetitive work that doesn't necessarily require much intelligence.
I mean seriously, where do you predict all the unemployed people will go? What jobs do you think are invulnerable to automation and can absorb 90% of the population? What jobs have such great economic value, require lots of unskilled workers, and can't be replaced by machines?
Guess who is getting an appetite for wool socks?
I often hear that not everyone is capable of getting a PhD, but then again, if we took the same energy and dedication we put into a career and pushed it over into education, I think plenty have the capability. I often consider myself exhibit A, so to speak, since I got my GED and was working three low-end jobs (line cook, gas station attendant and construction worker) until I bounced out of the workforce and pursued my PhD in Public Policy. I'm playing around with undergrad-level math now as I prepare for a second PhD. I seriously think that if I could manage at least one PhD, anyone else could easily manage 2 or 3.
This makes sense if there is some sort of government sponsorship or grants, but providing these to all people would mean drastically higher taxes, and companies are opposed to those.
I think the funding issue can be figured out without higher taxes, or at the very least without absurdly higher taxes. One idea would be to teach people to invest through grades 9-12, and to have some sort of basic minimum income - not riches, but not below the poverty line either - then encourage people to invest those funds and use the profits to pay for college, etc. I also somewhat align with Jaron Lanier's concept of paying people for the use of their data, perhaps extended so that it covers usage by both government and private companies; individuals wouldn't see those funds until 18, at which point they'd be encouraged to go to school. While those funds are held, they could be invested, like Social Security, maybe? None of these ideas are really all that worked out just yet; I'm just responding to you is all.
If you want to push widespread welfare or UBI for those in academia, that's an entirely different option. But that'd require a serious revision in the current tax codes or some very wealthy benefactors to bootstrap.
See http://truecostmovie.com for more information.
This common narrative that "automation" will let us finally get to tackle the real issues really is a myth; it's just going to create a larger wealth-distribution problem, because most companies leading the charge won't pay appropriate taxes.
If truck driving could've been offshored, it would've been.
Side note: I have a blanket in my family woven from hemp by an ancestor. It's generations old, still in really good condition, and we still use it all the time on the couch. 500 hours? Maybe, but it's over a hundred years old!
For that shirt, you work an hour to earn $150, pay $50 in payroll/income taxes, and pay $100 for the shirt, of which maybe another $20 will also be taxes (sales tax at the point of sale, then income tax on the corporation, not even counting taxes on the materials and wages that they buy with the remaining $80).
Figure around half of every dollar you spend is either taxes on the way in, or taxes on the way out. The problem is decidedly not a lack of taxes.
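(The arithmetic behind "around half," using the illustrative numbers above:)

    wage = 150           # earned in the hour
    income_tax = 50      # payroll/income taxes withheld
    shirt_price = 100    # what's left buys the shirt
    embedded_tax = 20    # sales tax plus taxes inside the price

    total_tax = income_tax + embedded_tax
    print(total_tax / wage)   # 70 / 150 ~= 0.47, i.e. roughly half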
Let's suppose I've founded booking.com and put a LOT of travel agents out of business. At what point did I assume the responsibility of giving back to those travel agents?
I agree that they are miserable because of my actions. However, in real life, nothing happens for a single reason. They are also miserable because of the free choice of their former customers, who abandoned them. Those customers profited from the change too: after all, they chose me for lower prices and better service. Why do I, and not the customers that left them, have this responsibility?
At the point when you started living in a peaceful and organized society - one that depends, in order to stay peaceful and organized, on not many people facing a disastrous future.
So let me ask you a question — when we discuss this "back" direction, would you include someone who's been living on this welfare net all his life? Someone who purposefully declined to contribute anything to society? The very phrase "to give back" implies that we're giving to someone who gave us something before — and this hypothetical Joe sure as hell didn't give anything to us. *
Since my question is purely rhetorical, let me answer it to get to the point. I don't think you would include him in "giving back." And yet we keep him fed and alive to some degree (or at least, you and I agree that we should try).
So, this whole notion about "giving back" is something different from keeping this hypothetical Joe alive. What is it, exactly?
* This paragraph may sound similar to typical right-wing propaganda about lazy people living on welfare. However, it is only a hypothetical example to prove a point. I'm not making any statements about real people in it.
Not hypothetically, and noting that you explicitly stated you didn't want to discuss factual reality, nonetheless: such hypothetical Joes are rare to nonexistent in the US post Clinton welfare reform.
You won't have to. If it gets bad enough, they'll come take it.
It's like talking about Euclidean geometry and then objecting that no physical object is actually a single point, that straight lines do not exist, and that parallel lines would collide due to space-time curvature.
Can you name every event that would have enabled you to be ready to create the business, and all aspects of the world that would allow for its success? It's impossible. This is why we have to give back.
We could obviously go back further, but it becomes less meaningful in the context of this discussion.
OK, so how do you determine where "back" is?
Why is "back" in pockets of people being laid off and not on an altar of Cthulhu? By your logic, we never would be able to know the reason anyway, so giving to the Dark One seems to be just as reasonable.
Determining where, how, and to what extent is a much harder question. I don't have any easy answers there.
At the very least, recognizing how the indifferent hand of fortune plays an immense role in all of our lives suggests that helping those who have less is a great place to start.
Of course, we should do so in a way that helps everyone. However, even if we distribute help in a totally blind fashion, a single "unit of help" given to those who have less will comprise a larger percentage of their "life capital" (opportunity, potential, resources, etc., for lack of a better term) than a unit of help given to someone in a better position. So both the perceived and the real impact will be higher for the less fortunate.
Once we have managed to gather a shared set of resources (of course, there will be much grumbling and immense disagreements about the exact amounts), how should it be utilized? Certainly, you aren't suggesting we give a larger share to those who already have more.
If you are worried about the disincentive to contribute upon receiving help, remember, this whole sub-thread started in the context of businesses making use of new, advanced automation technologies and placing large swaths of people out of work.
What if the value of labor for a large percentage of the citizenry really does fall to unemployable levels due to technology? Should we "make up" jobs for them? Let them starve?
Why does the person who can afford to buy an army of robots deserve all of the proceeds? They didn't invent the robots, it took thousands of years and billions of human lives toiling in the dirt for such incredible technology to enter the world.
It isn't impossible to strike a balance between helping those who have less and allowing those who are skilled and make large contributions to be richly rewarded.
And even if many people become somewhat unemployable, society can choose to encourage living productive, engaged lives. We don't need to become zombies hiding in our houses playing video games all day or binge-watching Netflix non-stop. People will still want to be fulfilled. There is much we can do to promote fulfilling, productive lifestyles and vibrant communities.
If clothes used to cost $60 and now cost $20 that is equivalent to giving back $40.
The problem starts when their wages are falling faster than the cost of the goods.
If, as an employee, you earned $20 per hour, you needed to work 3 hours to afford your own clothes; if you now earn only $5 per hour, you need to work 4 hours.
Many of these seem to have been victims of cultural and political change, but there are quite a few (e.g., Knocker-up, Lamplighter, and more recently, Switchboard operator) that became obsolete due to technological advancements.
I'd expect the latter to come from the right in the US, ironically enough.
Of course, replacing the current social safety net (food stamps and the like) with a UBI would likely result in the trimming of government bureaucratic jobs. Everything's a trade-off.
So those who are doing jobs that AI can do better and more cheaply than any human are going to somehow gain the skills and/or ability to perform the new, good, high-paying jobs created by automation? I don't buy it. Analogues of those hypothetical future high-quality jobs already exist, so why aren't the people in the soon-to-be-obsolete jobs already doing them? Do they not like money? Training for these jobs has never been cheaper!
I think we'll eventually have work for everyone who wants it (though hopefully you won't have to work to live). The future will probably have a lot of demand for "artisanal" products (i.e., produced by humans).
Inequality is not a problem. Poverty is. If everyone has everything they need and most things they want, but a few super rich people own entire planets, I fail to see how that's not a utopian future.
Imagine a farmer working his entire life on his farm. Now that he is reaching the end of his lifespan, he wants his children to benefit from his work. He built the house and everything on it himself.
Why should the government take a chunk out of that? One might argue that the children did not work for it, and that this "unearned" income is unfair because it's not possible to choose whether you are born into a wealthy family.
However, it is both earned by the work of the father and already taxed by the government through his income.
By the time the father dies, the children are likely already 40 to 50 years old, but they benefited from his modest wealth while he was still alive. The inheritance tax does not affect this. It only discourages using the inheritance efficiently across multiple generations.
The question is, when do you start implementing UBI? When only 80% of the population still have jobs? 60%? 40%?
Because when you do implement it, the money that goes to the unemployed people is going to have to come from those that still make money. And they will probably be pretty pissed off about it, too.
Also, in a two-party country like the U.S., the party you vote for may literally decide this outcome. Say 70% of the people still have jobs and don't want UBI, and the Democrats support switching to UBI after the election. It's very likely that the Democrats will not win an election again until a majority of the population supports implementing UBI, no matter what other terrible things the Republican party promises to do while also promising not to implement UBI.
Regardless, you do make good points.
More likely, the government comes up with "innovative" ways to hide these people from the official unemployment numbers. Somehow unemployment is calculated at only 4.9% today... add 3.5 million newly unemployed commercial drivers tomorrow (over 2% of the US labor force) and the new unemployment rate = 4.9%.
Do you count those that are not actively looking for work, and don't want to look for work? e.g. House(wife|husband)s, semi-retired people.
Do you count those that have given up looking for work?
Do you count those that are working part time, but looking for full time work (i.e. the underemployed)?
Do you count those that aren't really looking for work (for whatever reason) but occasionally look online at job listings (since it's a very low-effort thing to do)? This is an interesting one, because Statistics New Zealand recently decided that this doesn't count as actively seeking work, which seems right to me. I look at online job listings all the time, but I am happily employed and not seeking other employment.
Do you count the self employed? Contractors? Are Uber drivers considered employed? What about Airbnb hosts?
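(A toy illustration of how much the headline number moves with the definition - the figures below are invented, and the formulas are only loosely modeled on the official U-3 vs. U-6 distinction, not actual BLS methodology.)

    employed = 150_000_000
    unemployed_seeking = 7_700_000       # actively looked in last 4 weeks
    discouraged = 2_000_000              # want work, stopped looking
    part_time_want_full = 6_000_000      # underemployed

    narrow = unemployed_seeking / (employed + unemployed_seeking)
    broad = (unemployed_seeking + discouraged + part_time_want_full) / (
        employed + unemployed_seeking + discouraged)
    print(f"narrow: {narrow:.1%}  broad: {broad:.1%}")   # ~4.9% vs ~9.8%

Same labor market, two very different headlines, depending entirely on who gets counted.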
That is my point entirely to OP. If the concern is that the government is willing to block new technology adoption because of fears of mass unemployment, then it isn't unreasonable to think the government would allow the technology and hide some of the unemployment by "redefining" unemployment.
I am not sure why I would be downvoted, as these moving goalposts are a well-known and controversial issue, including, but not limited to, the many variables you highlighted.
Just count those that want work or should have work (e.g., those who need to support a family or are receiving government aid).
As I just commented, that's easier said than done.
I wonder if President Obama has seen it yet, because while his wording obviously has to be steeped in establishment rhetoric, it's not a matter of "if" automation comes for our jobs. For decades now, automation of varying degrees has been eroding the market for human mental labor; we are already dealing with that reality, will continue to deal with it, and must decide how we will deal with it in the future.
It is exactly like climate change. It is not a future problem. It is a now problem, but the progress is so slow and the symptoms variable enough that you don't obviously see the underlying trend already taking place, so nobody regards it with the urgency it deserves. Social stratification, growing wealth inequality, growing partisanship, growing radicalism, and growing unemployability are already expanding globally in response to the ongoing obsolescence of the human mind. The first step is to recognize that it is happening.
> Obama: And you just have to have somebody close to the power cord. [Laughs.] Right when you see it about to happen, you gotta yank that electricity out of the wall, man.
Do we seriously think that it would be that easy? I think a "generalized" AI, if aware of the ability of a human to remove its power, and if it saw that as a bad thing, would not be stopped by unplugging it. By the point you realized you needed to unplug it, it would have already convinced a human to help it spread, and it would have found other sources of power.
If you say that the AI will be super-persuasive - persuasive enough to make humans behave irrationally - I say: maybe. But it's possible to simply use already-irrationally-fearful humans as guards to prevent the AI from escaping.
Why is it a problem that they're mostly white? The interviewer doesn't bother to elaborate; it's sort of mentioned in passing as if it were something obvious. To me (a non-American, mind you) it isn't. Would the difficulties they mention be alleviated if that weren't the case? Why? Couldn't - say - an Asian student fall into the misconception that machines will come up with answers to all questions? Is it less likely? What substantiates such a claim?
So Ito's concern was probably that a group of people of similar characteristics (i.e., white male geeks) is not diverse enough to address the world's social problems, especially when this group of people is not very social; they're more comfortable talking to computers than to human beings.
This fits with what he says at the end - "Because the question is, how do we build societal values into AI?" - he's more concerned about the builders themselves.
Care to share any examples of technologies that a white software engineer is, in turn, more likely to be able to design? Or does this philosophy only work one way : )
Is designing medical software better left to software engineers who have themselves, e.g., battled cancer?
2. White software engineers are better at designing racist software (mostly joking). Another half-joke: white people are better at designing technology that gets taken seriously by the government and the public (i.e., their technology will be taken more seriously because they are white, not because of any special ability). But seriously, white engineers would probably be better at designing technology for teaching other white people about race.
3. Yes, there are absolutely some parts of designing medical software that engineers who have battled cancer would be better at. Imagine you are making one of those medical devices that sits next to a cancer patient's bed post-chemo and shows a bunch of numbers. If you fought cancer, you've probably had lots of experience lying in that bed next to those screens, and you could have a much better intuition about how they should look and how they should present their visualizations in ways that make a patient more confident. Or imagine the software engineer wants to, you know, talk with some patients or doctors to understand what to make: the engineer who battled cancer will probably be much better at understanding what the patients (and doctors) want.
White people are guilty of colonialism much like Catholic children inherit the Original Sin, that is, it's a means of control by anti-intellectuals.
I wouldn't support Trump, but the American Left is especially dangerous because it relies on gender / racial divide to get the most votes. In some twisted sense, the problem is not supposed to be solved.
Skin color does not make Obama an expert in race relations. If he had said "...but one of my concerns is that it’s been a predominately male gang of kids, mostly black, who are building the core computer science around AI..." there would have been riots in the streets. Tim Hunt's joke was taken out of context and almost ruined his career.
EDIT: To be clear, his white remark was fully intentional.
On 8 June 2015, during the 2015 World Conference of Science Journalists in Seoul, at a lunch for female journalists and scientists, Hunt was asked on short notice to give a toast (https://en.wikipedia.org/wiki/Tim_Hunt). He made a self-deprecating comment, and the media vultures ran with it. He was shamed out of the Royal Society and UCL.
Nor does your observation mean he's NOT an expert in race relations. You're making the same mistake as the people you criticize: applying generalities to specific cases.
I'm sorry, but this is pseudo-reasoning to me. These are logically linked on the surface, yet extremely vague truisms that can only pass for some form of a coherent argument because they're so full of weasel words.
Based on the same principle you could argue that in order to improve car safety, to have a better chance at it, one needs to have had a life-threatening accident. Why not? After all, people generally are able to solve a problem when they understand it... and direct experience is a valuable form of learning... etc.
It really depends on the nature of the problem though, and the nature of this direct experience, and so forth. Painting the situation with such an overly broad brush doesn't lead to any meaningful conclusions.
For starters, first-hand experience typically exposes you only to the symptoms of a problem; the underlying nature of a complex problem isn't readily apparent, or else it wouldn't be complex.
For instance, getting sick from air pollution doesn't do anything to help you understand the nature of the pollutants, how they're emitted, what the economic context is, and therefore what the possible countermeasures are. It just reassures you that the symptoms of such pollution are a bad thing, which isn't much of a discovery by itself.
Not everything is as simple as an itchy-scratchy situation, and we shouldn't pretend that it is, especially when it leads to racially biased claims.
Part of what makes us human are the kinks. They’re the mutations, the outliers, the flaws that create art or the new invention, right? We have to assume that if a system is perfect, then it’s static. And part of what makes us who we are, and part of what makes us alive, is that we’re dynamic and we’re surprised.
And then he goes on: "One of the challenges that we'll have to think about is, where and when is it appropriate for us to have things work exactly the way they're supposed to, without surprises?"
One might argue that in software development things never work exactly the way they're supposed to.
I think HN is at its best when the topic is money or pure tech; it tends to dismiss or diminish the other stuff, and you are left with a sterile, one-dimensional discussion.
It's like the gold rush: they can generate the wealth, but the harder questions around politics, economics, social structures and humanity will have to be worked out in more 'distant' surroundings, less touched by the frenzy of greed and profit.
Not much to say on the general AI question, but that's understandable.
I gotta say, I thought he'd be more familiar with that metaphor.
At least I have heard this any number of times in relation to technology innovation, entrepreneurship, "lean startups", etc. I bet a lot of the people who use the term have no idea of its origins in Mao's China.
See for example, this article from Inc magazine in 1984, which begins as follows:
"Not too long ago, fortune 500 companies looked at small-scale entrepreneurial companies as fodder for acquisition -- if they were big enough to make the effort worthwhile -- and little more. The executive who dared to suggest that Goliath might learn from David was likely to be trampled by a herd of MBAs waving printouts on the economies of scale that flowed from a centralized and rationally managed organization...
Then something happened...The micromillenium was born in a Cupertino, Calif., garage."
[excerpted from "Let a Thousand Flowers Bloom" http://www.inc.com/magazine/19840401/2895.html]
How else do you think they managed to convince so many people to be communists?
Once they had momentum? With rifles.
First you give people the liberty to explore and then you crush most of them with thousands of pages of legislation.
A little dark perhaps, but it fits.
Here's Ai Weiwei's interpretation of the phrase, from a recent temporary installation at Alcatraz: http://www.fubiz.net/wp-content/uploads/2014/11/blossomaiwei... (Cite: http://stefany-cordedoce.blogspot.com/2014/11/porcelain-bouq... )
-- Ronald Reagan
EDIT: Actually, looking into the history of this particular phrase and its use in the West, I may be mistaken here.
Language is great at evolving and incorporating colorful and evocative phrases while stripping out the original connotations.
Let a billion flowers bloom!
I mean, that's the reasonable application of the metaphor, right?