Algorithms Have Already Gone Rogue (wired.com)



My only objection to this is a semantic one -- the word "algorithm" is not being well-served here. The correct word for this sort of thing is "heuristic". The concern isn't that algorithms themselves are incorrect, the concern is that the problem they are trying to solve is a heuristic one, not a formal one.

Saying "let's write an algorithm to improve search results" is meaningless; "let's design and implement a heuristic that improves search results". The algorithmic part of this is how to efficiently implement that heuristic.

I can usually get through articles like this by silently replacing "algorithm" with "heuristic"; the problem arises when some articles attempt to draw equivalencies between "algorithmic" concepts, like running time and space, and "heuristic" concepts, like optimizing for the wrong thing.
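
To make the distinction concrete, here is a toy sketch (mine, not from the article) in Python: the heuristic is the choice of scoring rule; the algorithm is the machinery that computes and sorts the scores.

    # Toy sketch: the HEURISTIC is the scoring rule we chose (term frequency);
    # the ALGORITHM is the well-defined procedure that computes and sorts scores.
    def score(document, query_terms):
        # Heuristic choice: "relevance" ~ total occurrences of query terms.
        words = document.lower().split()
        return sum(words.count(term) for term in query_terms)

    def rank(documents, query_terms):
        # Algorithmic part: compute scores and sort. This part has well-defined
        # running time and space; whether the results are "good" is on the heuristic.
        return sorted(documents, key=lambda d: score(d, query_terms), reverse=True)

    docs = ["the cat sat", "cat cat cat", "dogs bark"]
    print(rank(docs, ["cat"]))  # ['cat cat cat', 'the cat sat', 'dogs bark']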


Algorithms running on "social hardware" can be surprisingly formal. A famously well-documented example is early modern witch hunts. The humorous depiction in Monty Python and the Holy Grail does a surprisingly good job at conveying the algorithmic nature.


Many aspects of the law are algorithmic. Even though there is no one, settled formal definition for algorithm, statutory and common law meet many informal definitions. Laws usually lay out an ordered, (theoretically) unambiguous set of steps for deciding a legal issue. When lawyers talk about "elements of a test," they are referring to this structured logic.

For example, the elements required to prove a negligence claim are:

1. Duty

2. Breach of Duty

3. Cause in Fact

4. Proximate Cause

5. Damages

When evaluating a negligence claim, a lawyer first tries to determine if the defendant owed the plaintiff any duty of care, then whether the defendant breached that duty, then if that breach was the factual cause of a harm suffered by the plaintiff, then whether the causal relationship was close enough to be considered legally proximate, and then, finally, whether the plaintiff actually suffered measurable damages.
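
As a toy sketch (hypothetical field names, obviously not real legal software), the elements test looks like ordered, conjunctive checks:

    # Toy sketch of an "elements of a test" as ordered, fail-fast checks.
    NEGLIGENCE_ELEMENTS = ["duty", "breach", "cause_in_fact",
                           "proximate_cause", "damages"]

    def negligence_claim_viable(findings):
        # Every element must be satisfied, evaluated in the specified order.
        for element in NEGLIGENCE_ELEMENTS:
            if not findings.get(element, False):
                return False, element  # report which element the claim fails on
        return True, None

    print(negligence_claim_viable(
        {"duty": True, "breach": True, "cause_in_fact": True,
         "proximate_cause": False, "damages": True}))
    # (False, 'proximate_cause')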

Arguably, that superficially algorithmic process frequently breaks down in practice. For example, it's often easier to start with the damages suffered by a plaintiff and work backwards by identifying the causes, then who was responsible for those causes and any duties they may have owed to the plaintiff. However, regardless of how the lawyer and plaintiff identify whom to sue, they must frame their pleadings to allege the elements of the tort in the order specified by their jurisdiction's law, so the actual practice of law in court amounts to an algorithmic exercise.


Along those lines, here are some of my comments on this general topic from an email I posted to the Doug Engelbart Unfinished Revolution II Colloquium in 2000: http://www.dougengelbart.org/colloquium/forum/discussion/012...

===

... I personally think machine evolution is unstoppable, and the best hope for humanity is the noble cowardice of creating refugia and trying, like the duckweed, to create human (and other) life faster than other forces can destroy it. [Although in 2017 I'd add other possibilities like symbiosis or trying to create friendlier AI as a partner (or at least AIs with a sense of humor -- see James P. Hogan's AIs, or ones like Libby in the EarthCent Ambassador series, or the Old Guy Cybertank series), improved sensemaking through better intelligence-augmenting tools, and trying to help human society be more compassionate in the hopes our path out of a singularity will somehow reflect our path going in...]

Note, I'm not saying machine evolution won't have a human component -- in that sense, a corporation or any bureaucracy is already a separate machine intelligence, just not a very smart or resilient one. This sense of the corporation comes out of Langdon Winner's book "Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought".

You may have a tough time believing this, but Winner makes a convincing case. He suggests that all successful organizations "reverse-adapt" their goals and their environment to ensure their continued survival. These corporate machine intelligences are already driving for better machine intelligences -- faster, more efficient, cheaper, and more resilient.

People forget that corporate charters used to be routinely revoked for behavior outside the immediate public good, and that corporations were not considered persons until around 1886 (that decision perhaps being the first major example of a machine using the political/social process for its own ends).

Corporate charters are granted supposedly because society believes it is in the best interest of society for corporations to exist. But when was the last time people were able to pull the "charter" plug on a corporation not acting in the public interest? It's hard, and it will get harder when corporations don't need people to run themselves.

I'm not saying the people in corporations are evil -- just that they often have very limited choices of actions. If a corporate CEO does not deliver short-term profits, they are removed, no matter what they were trying to do. Obviously there are exceptions for a while -- William C. Norris of Control Data was one of them -- but in general, the exception proves the rule. Fortunately though, even in the worst machines (like in WWII Germany) there were individuals who did what they could to make them more humane ("Schindler's List" being an example).

Look at how much William C. Norris of Control Data got ridiculed in the 1970s for suggesting the then-radical notion that "business exists to meet society's unmet needs". Yet his pioneering efforts in education, employee assistance plans, on-site daycare, urban renewal, and socially-responsible investing are in part what made Minneapolis/St. Paul the great area it is today. Such efforts are now being duplicated to an extent by other companies. Even the company that squashed CDC in the mid-1980s (IBM) has adopted some of those policies and directions. So corporations can adapt when they feel the need.

Obviously, corporations are not all powerful. The world still has some individuals who have wealth to equal major corporations. There are several governments that are as powerful or more so than major corporations. Individuals in corporations can make persuasive pitches about their future directions, and individuals with controlling shares may be able to influence what a corporation does (as far as the market allows).

In the long run, many corporations are trying to coexist with people to the extent they need to. But it is not clear what corporations (especially large ones) will do as we approach this singularity -- where AIs and robots are cheaper to employ than people. Today's corporation, like any intelligent machine, is more than the sum of its parts (equipment, goodwill, IP, cash, credit, and people). Its "plug" is not easy to pull, and it can't be easily controlled against its short term interests.

What sort of laws and rules will be needed then? If the threat of corporate charter revocation is still possible by governments and collaborations of individuals, in what new directions will corporations have to be prodded? What should a "smart" corporation do if it sees this coming? (Hopefully adapt to be nicer more quickly. :-) What can individuals and governments do to ensure corporations "help meet society's unmet needs"?

Evolution can be made to work in positive ways, by selective breeding, the same way we got so many breeds of dogs and cats. How can we intentionally breed "nice" corporations that are symbiotic with the humans that inhabit them? To what extent is this happening already as talented individuals leave various dysfunctional, misguided, or rogue corporations (or act as "whistle blowers")? I don't say here the individual directs the corporation against its short term interest. I say that individuals affect the selective survival rates of corporations with various goals (and thus corporate evolution) by where they choose to work, what they do there, and how they interact with groups that monitor corporations. To that extent, individuals have some limited control over corporations even when they are not shareholders. Someday, thousands of years from now, corporations may finally have been bred to take the long term view and play an "infinite game".

However, if preparations fail, and if we otherwise cannot preserve our humanity as is (physicality and all), we must at least adapt with grace whatever of our best values we can preserve or somehow embody in future systems. So, an OHS/DKR [Open Hyperdocument System / Dynamic Knowledge Repository] to that end (determining our best values, and strategies to preserve them) would be of value as well.

When aluminum was first discovered around 1827, and for decades afterward, it was worth more than platinum, and now just under two centuries later we throw it away. In perhaps only two decades from now, children may play "marbles" using diamonds, and a child won't bother to pick a diamond up from the street unless it is exceptionally pretty (although you or I probably would out of habit -- "see a diamond, pick it up, and all the day you have good luck").

This long essay is my own current perspective on this developing situation, and part of the process of my formulating my own thinking on these trends and how I as an individual will respond to them.

To conclude, I think all the "classical" problems like food, energy, water, education, and materials will be technically solvable by 2050 even if we don't do much specifically about them (and, like hunger, are solved today except for politics). The dynamics of technology and economics are just taking us there whether we like it or not. Those goods may all essentially be "free" or "extremely cheap" by 2050. Obviously the complex politics of these issues need to be resolved, and the solutions need to be actually implemented. If they are "extremely cheap", people still need a tiny amount of income to buy them.

Still, I think Doug [Engelbart] is right. We face huge problems that only collaborative efforts can solve -- especially the problems of intelligent machines, technology-amplified conflict, and a complete disruption of our scarcity-based materialistic economic and social systems. These problems dwarf technical issues like energy, food, goods, education, and water.

The problem has always been, and will always be, "survival with style" (to amplify Jerry Pournelle). The next twenty years will fundamentally change what the survival issues are: environment, threats, and allies. They may also very well change what we value as "style" -- when diamonds are as cheap as glass [perhaps from nanotechnology], what will one give to impress?

===


Just sayin'... point me to the person who has the habit of seeing diamonds on the ground and picking them up (and doesn't work in a strip mine). Habits aren't something we want -- they're something we do.

Talking Machines had an episode on the difference between algorithms and models, and how the general public understands the meaning of the word "algorithm". In general these are conflated terms, which makes it hard to be absolutist about usage, at the very least.

The general public (& journalists) use the word 'algorithm' to mean any computerized process that "does things to them", such your facebook news feed, or what a credit agency does.

This is a different meaning from how social scientists use these words.

http://www.thetalkingmachines.com/blog/2017/9/22/the-long-vi...

In the episode, he talks about how even something like Principal Component Analysis (PCA) -- normally what we would call an algorithm, since it follows a discrete sequence of steps -- can also be thought of as resting on something that resembles a model.
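
For instance, a minimal PCA sketch (my own, assuming NumPy): every step is a mechanical, discrete operation, yet the output coincides with fitting an implicit low-rank linear-Gaussian model.

    import numpy as np

    def pca(X, k):
        Xc = X - X.mean(axis=0)               # 1. center the data
        C = Xc.T @ Xc / (len(X) - 1)          # 2. sample covariance
        eigvals, eigvecs = np.linalg.eigh(C)  # 3. eigendecomposition
        order = np.argsort(eigvals)[::-1]     # 4. sort by variance explained
        W = eigvecs[:, order[:k]]             # 5. keep the top-k directions
        return Xc @ W                         # 6. project onto them

    X = np.random.default_rng(0).normal(size=(100, 5))
    print(pca(X, 2).shape)  # (100, 2)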


I don't think there's a correct word here. He's talking about widely differing things and using a vague word to try to relate those things. The correct thing is to reject the relation and treat each of the notions he's talking about (such as web search results, government policy, and economic models) as the distinct things that they are.


We already have a great word that exactly describes our approach to capitalism though: ideology.


Yes, a lot of people use "algorithm" to mean simply "procedure". I'm glad to see someone besides me pointing out the distinction between algorithms, in the strict sense, and heuristics. Complaining about the misuse of technical terms is unlikely to have an impact on usage in the popular press ([0]), but I think it's appropriate in a technical discussion.

One of the most egregious misuses of "algorithm", in my opinion, is the term "genetic algorithms". Not only are these not algorithms in the strict sense, but referring to the procedures as "genetic heuristics" or "genetic search" would be much clearer.

[0] https://news.ycombinator.com/item?id=10475884


Genetic algorithms are algorithms in that they are a description of a specific process or set of processes (which may be used in heuristics, or studied in isolation) at an implementation level of abstraction. "Genetic heuristics" suggests heuristic applications, rather than algorithms in isolation, and "genetic search" suggests a specific application, but a "genetic algorithm" can be extremely simple and isn't fundamentally different than an algorithm like quicksort. Either could be used as part of some heuristic, or not.
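
For example, a complete toy genetic algorithm fits in a few lines -- here maximizing the count of 1-bits, a stand-in fitness function chosen purely for illustration:

    import random

    def genetic_algorithm(fitness, length=20, pop_size=30, generations=100):
        # Implementation-level description: selection, crossover, mutation.
        pop = [[random.randint(0, 1) for _ in range(length)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[:pop_size // 2]             # truncation selection
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, length)     # one-point crossover
                child = a[:cut] + b[cut:]
                i = random.randrange(length)          # point mutation
                child[i] ^= 1
                children.append(child)
            pop = parents + children
        return max(pop, key=fitness)  # best found -- no optimality guarantee

    best = genetic_algorithm(fitness=sum)  # fitness: count the 1-bits
    print(sum(best))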


> a "genetic algorithm" can be extremely simple and isn't fundamentally different than an algorithm like quicksort

Ah, but it is. What do you get when you run quicksort? You get a sorted list. What do you get when you run a genetic "algorithm"? You get ... the result of performing that procedure. There's no other formally specifiable postcondition. You certainly aren't guaranteed to get a perfect solution to the problem you were trying to solve. That's why it's a heuristic: if you run this procedure a certain number of times, you might get a useful result. Maybe. And the way it works is by searching a space of possibilities in a certain way. That's not an application; that's the whole point.


I don't understand what you mean about formally specifiable postconditions. Genetic algorithms can have formally specifiable postconditions, they just aren't definable in terms of the problem space to which they're typically applied. The postconditions can be defined in terms of the state of the genetic system. I also am not able to find, anywhere, a formal definition of algorithm that unambiguously wouldn't include genetic algorithms as I outlined them in the above comment.

A heuristic explores a problem in a specific domain. An algorithm specifies a process at a level suitable for machine implementation. Genetic algorithms are, thus, algorithms, though they are typically applied in heuristics to explore real-world problem spaces.


Well, I'll grant you that lots of people use the word "algorithm" in a way that doesn't respect the distinction andrewla and I are trying to draw.

Nonetheless, I believe there is a real distinction here. As it happens, there is a discussion about algorithms textbooks on HN right now [0]. I contend that GAs are not the kind of thing that would ever be described in a textbook on algorithms, no matter how comprehensive.

> Genetic algorithms can have formally specifiable postconditions, they just aren't definable in terms of the problem space to which they're typically applied.

Actually, I think this would be a pretty good definition of the distinction, if we changed "genetic algorithms" to "heuristics". I think if you look at the procedures described in algorithms textbooks, you'll see they all have postconditions definable in terms of the problem space.

[0] https://news.ycombinator.com/item?id=15423045


> The concern isn't that algorithms themselves are incorrect, the concern is that the problem they are trying to solve is a heuristic one, not a formal one.

Heuristic problems are simply problems that aren't yet formally understood. I don't think it's meaningless to use "algorithm" in the examples you cite, as long as it's understood that a good algorithmic solution requires a good model of the actual problem being solved.


well, heuristics are made up of algorithms.


This is not true. A (bad) heuristic for search results might be "rank the documents by the total number of occurrences of each search term in that document".

That's not an algorithm -- that's a desired result. Similar to how "sorting a list" is a description of a class of algorithms; it gives no description of how a machine can accomplish that goal.

The difference between the heuristic above and "sort a list" is that the success criteria of the latter can be very well defined, whereas the heuristic presented is an attempt at approximating the desired result, which is something like "present the best search results first, for some meaning of best".


>"rank the documents by the total number of occurrences of each search term in that document"

I fail to see how this is not an algorithm. The heuristic (rank search results from most to least relevant) is backed by an algorithm (find occurrences of each word, sort documents by occurrences). I like to separate the two by thinking of heuristics as an approach to solving a given problem, while algorithms are the actions taken to get to the end result.


There's an algorithm to rank the documents by an arbitrary metric. And there's an algorithm to calculate the number of occurrences of each search term in that document.

However, those are insignificant implementation details - all the logic (and all the good and bad results) comes from the arbitrary decision to use the number of occurrences as meaningful for measuring the relevance, from the choice of heuristic.


That's just saying it's the wrong algorithm, not that it isn't an algorithm. Every computation of pi has truncated the result, which is a 'heuristic' decision, but that doesn't invalidate the fact that pi is computed using an algorithm.

All algorithms approximate things, after all. That's simply a consequence of abstraction.
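
As a concrete illustration (a toy sketch of mine, not the parent's code): the summation below is pure algorithm; choosing where to truncate is the "heuristic" part.

    # Truncated Leibniz series for pi: the summation procedure is an algorithm;
    # the decision of how many terms is "enough" is the heuristic judgment.
    def pi_approx(n_terms):
        return 4 * sum((-1) ** k / (2 * k + 1) for k in range(n_terms))

    print(pi_approx(1_000_000))  # ~3.1415916 -- close, never exact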


The financial system is a giant message passing algorithm. It is pretty much just a min-sum algorithm [1] whose sole purpose is to answer the question "what should we do?". Anyone who has played around with these algorithms for solving decoding problems knows that they are fabulously powerful.

But these message passing algorithms have two weaknesses:

(A) When there is more than one solution

(B) When there are small loops in the message network

By far the worst problem is (B): it's a kind of "corruption" of the network and causes it to pretty much go off the rails. I think people already do understand the consequences of these problems in the financial system, but we don't seem to see how we can just change the topology and/or the messages themselves, in particular, to try to fix up these self-reinforcing loops. Or: move away from min-sum towards sum-product (which often works an order of magnitude better) by perhaps implementing basic income. Etc. Etc.
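
For a concrete toy picture of min-sum (my sketch, not from [1]): on a loop-free chain it provably finds the minimum-cost assignment; the pathologies in (A) and (B) show up once you add ties or cycles.

    # Toy min-sum on a chain of binary decisions: message m[i][x] is the best
    # (minimum) total cost of everything up to node i, given node i picks x.
    unary = [[1.0, 3.0], [2.0, 0.5], [0.0, 4.0]]  # local cost of each choice
    def pair(x, y):                               # cost of disagreeing neighbors
        return 0.0 if x == y else 1.0

    m = [unary[0][:]]                             # forward message pass
    for i in range(1, len(unary)):
        m.append([unary[i][x] + min(m[-1][y] + pair(y, x) for y in (0, 1))
                  for x in (0, 1)])

    # Backward pass: read off one minimizing assignment.
    choice = [min((0, 1), key=lambda x: m[-1][x])]
    for i in range(len(unary) - 2, -1, -1):
        nxt = choice[0]
        choice.insert(0, min((0, 1), key=lambda y: m[i][y] + pair(y, nxt)))
    print(choice, min(m[-1]))  # [0, 0, 0] 3.0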

[1] https://en.wikipedia.org/wiki/Generalized_distributive_law


Having worked in the financial industry, I'd say your view of it is what we'd call the "50,000 foot" view.

I don't think it reflects reality on the ground, nor does it really reflect how humans in groups make decisions according to research.

The message network loop issue isn't even an issue for message passing systems except in the most pathological cases (where it results in unbounded growth in messages or an infinite delta on the values the algorithm calculates).


I'm not saying the finance system doesn't work, it obviously does, and amazingly well. I'm just pointing out the weaknesses of the system, and these weaknesses could easily cause humanity to walk off a cliff.

I also have a finance background. This idea that the system is a kind of message passing is not a "50,000 foot" idea; it actually came to me from writing algorithms to arbitrage markets. There you really are performing message passing (think of forex legs). It's Dijkstra's algorithm. So this is the view from 50,000 nanoseconds. But I believe it holds for many other scales. Plenty of people walk into the supermarket and buy the product with minimum cost. Yes? How is this a view from "50,000 feet"? But obviously we are not just min-sum automata: we are all free agents, and so on.
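
For the curious, the textbook reduction looks like this (a sketch with made-up rates; I use Bellman-Ford rather than Dijkstra because the weights -log(rate) can be negative, and a negative cycle is exactly an arbitrage loop):

    from math import log

    # rates[a][b] = units of b per unit of a (toy numbers)
    rates = {"USD": {"EUR": 0.9, "JPY": 110.0},
             "EUR": {"USD": 1.12, "JPY": 123.0},
             "JPY": {"USD": 0.0091, "EUR": 0.0081}}

    # Maximizing a product of rates = minimizing a sum of -log(rate) weights,
    # i.e. a shortest-path problem over the currency graph.
    edges = [(a, b, -log(r)) for a, nbrs in rates.items() for b, r in nbrs.items()]
    dist = {c: float("inf") for c in rates}
    dist["USD"] = 0.0

    for _ in range(len(rates) - 1):        # Bellman-Ford relaxation passes
        for a, b, w in edges:
            dist[b] = min(dist[b], dist[a] + w)

    # If any edge still relaxes, there is a negative cycle: an arbitrage loop.
    print(any(dist[a] + w < dist[b] - 1e-12 for a, b, w in edges))
    # True here, since USD -> EUR -> USD multiplies to 0.9 * 1.12 = 1.008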

> how humans in groups make decisions according to research

Well this is obviously a huge topic.

> message network loop issue isn't even an issue for message passing systems except in the most pathological cases

I totally disagree with this. Short loops completely screw up message passing.


> I'm not saying the finance system doesn't work, it obviously does, and amazingly well. I'm just pointing out the weaknesses of the system, and these weaknesses could easily cause humanity to walk off a cliff.

I actually think the system is fantastically broken and produces nonsensical outputs quite often. For example, it rewards speculation in the absence of measurable outcome but demands measurable value to speculate.

> There you really are performing message passing (think of forex legs.) It's Dijkstra's algorithm.

I think you're confusing a model you use for the reality at hand though.

> Short loops completely screwup message passing.

If short loops screw up your data in ways other than those I described, then your system lacks idempotency guarantees and has a much larger problem.

If you're that susceptible to replays then your architecture is somewhat antiquated. Even mass-market products like SQS and RabbitMQ make it fairly approachable to model a message passing system across a central linearized queue and correct these issues.

If you're not centralizing, then your system needs more sophistication but it is even more important that replays don't cause issues because you're essentially guaranteed to get them.
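
A minimal sketch of the kind of idempotency guard I mean (illustrative only; in production the seen-set would be a durable store, not process memory):

    processed_ids = set()   # stand-in for a durable dedupe store
    balance = 0

    def handle(message):
        global balance
        # Idempotency guard: a replayed message is a no-op, so a short loop
        # in the network cannot apply the same update repeatedly.
        if message["id"] in processed_ids:
            return
        processed_ids.add(message["id"])
        balance += message["amount"]

    msg = {"id": "tx-42", "amount": 100}
    for _ in range(3):      # simulate a loop replaying the same message
        handle(msg)
    print(balance)          # 100, not 300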

Don't get me wrong; bugs exist. But it sounds more like design features and flaws you're describing.


Great analysis.

(A) When there is more than one solution

And this is exactly why the concept of a "growing pie" is flawed. Markets generally pool around a single solution versus averaging around multiple, and there isn't enough capital (labor or dollars). I think it comes back to power laws driving behaviors here and everyone wanting a huge win. So in effect markets look like fixed pies in the short term, and the winner is the one that grows the pie.

Trouble with this scheme is, the "pie growth", when it happens, is distributed to a very small group of people who have the ability to make big bets - so it compounds.

In terms of (B) those are basically local maxima tied to (A) so that distribution of growth is chaotic and skewed. So while it might seem like corruption of the network, it actually is a function of the "winner take all" nature of any market in the absence of either consumer/user self regulation or some deus ex machina regulator (government etc...).

Bottom line, it's a problem with how humans act (or fail to act) collectively around information sharing.


> Trouble with this scheme is, the "pie growth", when it happens, is distributed to a very small group of people who have the ability to make big bets - so it compounds.

It seems like the real trouble is that we've set things up so that "big bets" are required.

Huge tomes of regulations that have one-time compliance costs, so the cost to a tiny shop is the same as it is to Comcast or Microsoft, and regulatory capture to keep it that way.

Tax laws that encourage profits to be kept within the corporation instead of returned to investors, which requires successful companies to become conglomerates.

Laws that allow Hollywood and Apple to control content distribution and reject anything from anyone who poses a competitive threat to them.

Fix things like that and you won't have to be so big to grow the pie.


> The financial system is a giant message passing algorithm. It is pretty much just a min-sum algorithm [1] whose sole purpose is to answer the question "what should we do?".

No, no, no. To even propose that the answer to the question of "what should we do?" can be solved by the financial system is laughable; that is pure free-market absolutism.

The answer to the question "what should we do?" is _political_.

Let us not confuse the market with politics.


I'm afraid you read "what should we do" in much more general sense than the original author. I think that algorithm answers a narrower question of "what should we do to optimize monetary resource allocation", and it only answers within the boundaries you set, for instance, trust and reputation play major roles.


Does "optimize monetary resource allocation" include policies such as basic income, that the author referenced?

Downvoters, explain your logic please


I didn't downvote you, but my best guess to why you're being downvoted is that HN skews capitalist, and confusing the market with politics is, in the capitalist p.o.v., how the game is in fact played.

Myself, I don't think that you can untangle (even in your mind) the mess that is the market (financial system, really) from the mess that is the government. They're called "voting dollars" for a reason.

I prefer to just refer to the whole big mess as "The System".

As for downvotes, I don't think they add any value to the site except to encourage groupthink, and you're better off ignoring them. Or take them as a hint that you might be on to something! :)


I assume you're being downvoted because you seem to have misread the author's statement and created a strawman argument. He posited that the financial system's algorithm's purpose is to answer, "what should we do?". You seem to have interpreted that as saying that the financial system should answer that problem for society as a whole.


> He posited that the financial system's algorithm's purpose is to answer, "what should we do?".

Yes, that is just re-stating what the author wrote, without providing any clarifying information.

> You seem to have interpreted that as saying that the financial system should answer that problem for society as a whole.

Maybe, yes, that is a maximalist interpretation; but what is the alternative interpretation? It doesn't seem to be present in the author's text.

> move away from min-sum towards sum-product (which often works an order of magnitude better) by perhaps implementing basic income. Etc. Etc.

Referencing 'basic income' here seems like a whole-society problem to me, which has nothing to do with the technical implementation of financial payment gateways, no?


"(B) When there are small loops in the message network"

Can you elaborate a real-world example of this for the layperson ? 50k foot view is fine :)


Here's the most important loop that's been driving up staggering wealth inequality in our lifetimes:

- Corporations and the ultra-rich tend to be the ones who try to profit-maximize the most (because people who care about other things don't work as hard to accumulate wealth, and because corporations that profit-maximize the best tend to survive and grow better)

- National governments since WW2 had done a decent job of redistributing wealth, but since there are increasing returns to scale on investing in tax avoidance/evasion, it is the richest individuals and corporations that are best able to avoid taxes and move wealth offshore

- It is cheaper and easier to cut a sweetheart tax deal with a corporation or a rich individual to temporarily attract capital to a nation-state than to generate wealth the hard way through education and infrastructure investment

So our global capitalist system for the past few decades has simultaneously selected for the most selfish, profit-maximizing institutions and individuals, while also setting up a race to the bottom between nations (and even cities and states -- see Amazon's bid for a 2nd headquarters or Tesla's Gigafactory) to see who can give the biggest tax breaks to those at the top.


That's not the result of 'loops in the network'. That's the result of the memory effect of wealth. See, this is the problem with handwavy gibberish.


> That's not the result of 'loops in the network'. That's the result of the memory effect of wealth. See, this is the problem with handwavy gibberish.

I'm not sure if you're intending to be ironic, but you've achieved a gold star for irony nonetheless. Google "memory effect of wealth" to see if you come up with any meaningful, non-handwavy definition if your post was actually sincere.


> "(B) When there are small loops in the message network"

> Can you elaborate a real-world example of this for the layperson ? 50k foot view is fine :)


A real-world example of such dysfunction happening right now is that most of the money that could be used by all people to signal demand via cash "messages" is now tied up in a small loop of messages sent between the wealthiest people -- a "Casino Economy". See for example "Money as Debt II": https://www.youtube.com/watch?v=6MwHgpFSQMo

Money can be seen as a form of kanban unit or ration token for signalling demand. Essentially, the richest 1% or less of investors now use their "messaging" tokens (cash) for speculative investments in games against other wealthy investors in the financial sector (foreign exchange, derivatives market, etc.). That starves the rest of the economy for kanban tokens (cash) so it can't function. It would be like you walked into a Toyota factory using Kanban tokens and randomly removed 90% of the tokens -- that would prevent each industrial operation from signalling its need for required parts from other operations, causing all operations to slow down as they wait on all their dependencies to arrive. https://en.wikipedia.org/wiki/Kanban

Almost any economist will agree that if 90% of the money supply suddenly disappeared we would suffer a great economic depression in the USA. But the same economists seem unable to accept that the same depression will happen for the 99% if the richest 1% take most of the money supply out of general circulation and just use it to play poker with each other. There are other complexities (including the velocity of money message passing), but it seems to me the big issue many people overlook -- it is not just the total amount of money supply but how it is distributed.

Unfortunately, most of the governmental solutions (to satisfy wealthy donors taking part in the legalized bribery of campaign donations) are based on supply-side "voodoo economics", like giving trillions more dollars to the wealthy via bank loans or tax cuts. This is done in the foolish, unfounded hope that the wealthy will use extra money differently than they already have in the Casino Economy disconnected from meeting the needs of the 99%.

Even the slightest amount of thought will show how absurd supply-side economics is compared to demand-side economics. Almost anyone who can show a predictable demand for a good or service (like booked orders) can get a bank loan to fulfill those orders -- and to get orders, the customers need to have cash to signal demand. It is demand that makes businesses successful -- not supply.

Markets can work well to meet needs and wants, but they only hear the needs and wants of people who have money. Thus the value of a basic income to ensure all people's needs and wants are heard by the market to at least a basic extent.

Other options for dealing with the cash crisis caused by the triumph of the Casino Economy include strengthening non-market parts of the overall society such as: subsistence production (home 3D printing, home robotics, home gardening, home power via solar panels); the gift economy with more volunteering, freecycling, and sharing knowledge via the internet; and better democratic planning.

Unfortunately, with the big move of women into the workforce in the USA over the past few decades, home production, volunteering, and civic participation have all been reduced. Expanded entertainment options as a form of "Supernormal Stimuli" also distracted many people from physical daily life, also reducing participation in those other three sectors of society. Thus, a growing percentage of total societal interactions take place via exchange in the market instead of via subsistence, gift exchange, or civic planning.

Ironically, the "Two Income Trap" means families have very little to show for the second income between extra expenses involved in working outside the home, two-income families bidding up the price of houses and other items, and an increased supply of workers leading to lower compensation and poorer working conditions for everyone. See Elizabeth Warren's book on that: http://www.motherjones.com/politics/2004/11/two-income-trap/

With an increased supply of workers, there was decreased power of workers to demand wage increases in parallel with ongoing productivity increases. This in turn created the situation where the owners of capital could take profits and lend them to workers instead of paying them in wages. Richard Wolff talks about that in "Capitalism Hits The Fan" (whatever one thinks about his proposals for reform): http://issuepedia.org/Capitalism_Hits_the_Fan

To be clear, I'm not saying women should not have a choice as to what they do with their time. This is just about the societal implications of certain trends given men did not leave the work force to stay at home and be subsistence producers, volunteers, and civic actors in the same numbers as women who joined the work force.

I'm also not saying these "supernormal stimuli" are all bad (see for pros and cons: http://www.paulgraham.com/addiction.html ).

How we deal with that situation is a political question -- but to deal with it, we first need to acknowledge and understand what happened. And beyond a decrease in activity in the non-market sectors of society, one of the consequences of multiple trends has been a concentration of wealth in a smaller number of hands which has made the shift to a Casino Economy more likely.

Automation also has a role to play in that concentration of wealth from a different direction. Marshall Brain talks about that in "Robotic Freedom": http://marshallbrain.com/robotic-freedom.htm

Better technology has also increased options for participation in those three non-market areas (via cheaper tools, cheaper robots, and cheaper communications), so it is hard to say the entire trend has been downward for those non-market sectors. We may yet see them rise again as those other costs continue to fall -- and perhaps if people learn to move beyond the supernormal stimuli of mass-produced entertainment and back to making their own fun and using their own creativity.

While this won't directly break the tight loop of the 1% and the Casino Economy, it may bypass it so it does not matter as much -- in which case all that vast amount of money controlled by the 1% may just become like Monopoly money -- meaningless to most people most of the time because their lives are built around non-market interactions.


The gist is right but overall this is some handwavy gibberish.


Is this his new meme-hustling? "Algorithm"?

https://thebaffler.com/salvos/the-meme-hustler

I've never liked how Tim O'Reilly frames his discussion around vague terms like "open", which could mean participatory, transparent, available, or any number of other vague, feel-good things. Now he seems to be applying "algorithm" to things like economic models and government policy.

These are widely disparate things, but by using vague terms in different contexts, he pushes discussion towards the direction he wants to steer it: in the case of "open", away from free software. In the case of "Web 2.0", towards anything that involved crowd participation.

With "algorithms", he seems to be wanting to push the notion that technology is both scary but liberating and we need tech messiahs like Bezos or Musk to bring it under control.


That's been bothering me, the word "algorithm" is slowly becoming known as this ambiguously scary thing.


It's kinda understandable, though. Whenever the services of Google, Facebook, etc. behave in an inscrutable, nonsensical, or offensive way, they blame it on their "algorithm" (for a recent example, see https://www.theguardian.com/us-news/2017/oct/06/youtube-alte...). That's really the only context where the term "algorithm" surfaces in mainstream discussion.

ML has had a lot of successes, but one of its failures has been increased unpredictability at the level of the individuals and events that people actually experience.


Maybe if humans took responsibility for the algorithms they created and didn't shrug and say, "It's the algorithm," then "the algorithm" wouldn't be the antagonist in this story.


Even in that light, algorithms aren't the enemy: models are. If I build a predictive model that harms people, then my model is the cause of harm, not the fact that I built it using random-forest or CNN.


Models are algorithms with parameters.


You should get used to that. When technology does harm, then the creators of this technology will point to the algorithm, the same way that Google talks about an algorithm as being a living thing. Of course, whenever they do good things, then the merit goes to the (human) creators of the algorithm -- in fact, to the founder(s) of the company that created the algorithm.


Yeah, 'algorithms' are the new 'chemicals'.


An "algorithm" is simply a way to do a thing based on a set of rules to be followed... of course it's ambiguous.

Getting scared of anything that generic is silly. Might as well fear the outside because anything can happen out there!


Throughout most of history rules and laws were enforced by a combination of "the letter of the law" and "the spirit of the law". The latter being shorthand for the role of human discretion.

Algorithms completely obliterate the latter - increasingly turning previously flexible systems into the equivalent of zero-tolerance-policy schools, where human discretion has no role to play. This is a problem because many laws on the books were written with the assumption that "the spirit of the law" would be a guiding principle when "the letter of the law" is unclear.

As a trivial example of this going terribly wrong, consider Youtube's copyright enforcement algorithms. Copyright was clearly designed with many loopholes for fair use to allow culture to move forward. Youtube's algorithms ignore all of this, changing the effective meaning of copyright on the site from one where the rights of the copyright owner are balanced with the rights of critics, commenters, and other creators to one where the rights of copyright owners are the only ones that matter.

Now imagine this kind of algorithmic enforcement applied to traffic laws, HR rules, or insurance policies and you can see why people might be nervous about "algorithms". Algorithms neither think nor feel and have no empathy. It's the ultimate actualization of the dystopia in the movie Brazil where the world is a cold, unfeeling, bureaucratic nightmare. Except where human bureaucrats at least need to sleep sometimes, computerized ones never rest.


"Now imagine this kind of algorithmic enforcement applied to traffic laws"

We already have this in the form of red light cameras which have been shown to cause rear-end accidents at traffic lights:

"There have been concerns that red light cameras scare drivers (who want to avoid a ticket) into more sudden stops, which may increase the risk of rear-end collisions."[1]

"the authors of the study found a statistically significant, but still smaller, reduction in angle and turning injury crashes by 15 percent, as well as 'a statistically significant increase of 22 percent in rear-end injury collisions."[2]

In short, there are situations where the humans involved would all agree on what a "correct" driving response would be, but the presence of the algorithm (the camera, the ticket, the court, etc.) forces another action -- and sometimes that action can be bewildering to other participants.

[1] https://en.wikipedia.org/wiki/Red_light_camera

[2] http://bigthink.com/ideafeed/study-red-light-cameras-ineffec...


I write a faulty policy that harms people. I encode it as an algorithm, implement it as a program, and deploy it on a wide scale. Now it's automated and distributed, and it is harming people. Where is the fault -- in the program? In the algorithm? Or in the policy? Where do we fix the problem?

IMO, we are quick to blame inanimate constructs, when people and their policies are the source of fault. Vilifying "algorithms" only serves to distract from root causes.


The argument I'm making is that when it comes to "human systems" like communities it's not possible to write a complete, consistent, and fair policy that can be unambiguously interpreted (i.e. by a computer). This is why Hacker News still has moderators and is not strictly governed by algorithms.

"Fairness" has always been heavily contextual, and the idea that it can be distilled to a matter of "if A and B then C" is folly. Even pure math can't reach the combination of completeness and consistency you assume is possible: https://en.m.wikipedia.org/wiki/Gödel%27s_incompleteness_the...


Human judgement can't escape Gödel


> Vilifying "algorithms" only serves to distract from root causes.

But that's the goal of some actors. They want to misdirect attention and responsibility away from themselves when their creations misbehave.

The root cause is no-one is really at the wheel once the algorithm goes into production to provide human discretion. For instance, pretty much the entire consumer facing apparatus of Google and Facebook consists of "algorithms" and there's no one empowered to call when they go wrong. Fixing that would cost money that the tech giants don't want to spend.


The point seems to be that if you limit your policy to something that can reasonably be encoded as an algorithm and implemented as a program, then you're by definition limited to writing faulty policies that can't be flexible enough and will harm people.

I.e., that if you're choosing to use an algorithm for your policy, then this means that you will write harmful policies, and the choice to use an algorithm at all, any algorithm, is morally flawed and should be vilified - to motivate you and others to write policies that include appropriate flexibility, arbitration, human evaluation, overrides and thus can't be implemented as a scalable algorithm/program. Well, not unless we get superhuman general AI systems.


Algorithm is an ambiguous word, but when people refer to them in the scary context, they are particularly talking about the kinds of algorithms the giants use to analyze and predict behavior. My only concern is that the word itself becomes associated with that one use case, which would be kind of sad because algorithms make up pretty much all software period.


It's not, actually. Humans have the ability to see nuance in things. Computers currently don't. So you see things like Google banning developers for innocuous violations of the rules, whereas humans (assuming they were given enough time to properly review) wouldn't give three strikes just because they saw the same violation three times.


No, that really is the definition of an algorithm.

Algorithms and computers are NOT inextricably linked. Just because software often uses algorithms to define behaviour, doesn't mean algorithms cannot mean other things.


In the context being discussed, computers are involved and are the ones making the decisions.


That's not the context of this discussion. Refer to the parent comment: https://news.ycombinator.com/item?id=15417291

The context is the word "algorithm".


Everyday people couldn't even tell you the definition of "algorithm"; even if they could, they wouldn't recognize that algorithms are encoded not only into chips but also into business processes, legal compliance, etc.


To be fair, it's very hard to precisely define "algorithm" and actual, working scientists and philosophers have been arguing about this for ages:

https://en.wikipedia.org/wiki/Algorithm_characterizations


O'Reilly's is an interesting story. He became fabulously wealthy by selling partly/mostly closed-source books about open-source software, in the short time window when open source existed while it was also possible to make money selling books (before Internet samizdat became cheap as free).


WTF is a "closed-source book"?


The opposite of an open source book.

I'd assume the parent is thinking of something like the "Pro Git" book, available on GitHub as code and licensed under Creative Commons. https://github.com/progit/progit2


Can anyone explain why O'Reilly thinks nobody knew until recently that companies are biased toward part-time work in order to avoid providing benefits?

"We can’t see, for example, that the algorithms that manage the workers at McDonald’s or The Gap are optimized toward not giving people full-time work so they don’t have to pay benefits. All that was invisible. It wasn’t until we really started seeing the tech-infused algorithms that people started being critical."

And here's one that's more subtle, so I don't blame him quite as much, but he is naive to think "ideas" are what cause corporations to act the way they do. Material and institutional conditions cause their behavior, which is then justified after the fact by appeal to shareholder value.

"Somebody planted the idea that shareholder value was the right algorithm, the right thing to be optimizing for. But this wasn’t the way companies acted before. We can plant a different idea. That’s what this political process is about."


> Can anyone explain why O'Reilly thinks nobody knew until recently that companies are biased toward part-time work in order to avoid providing benefits?

How do you know he's thinking that? The way he was talking, I read him as "well it's obvious that nobody has stopped this behavior so it's fair to assume that not enough people noticed". Wouldn't you agree with that?

> And here's one that's more subtle, so I don't blame him quite as much, but he is naive to think "ideas" are what cause corporations to act the way they do. Material and institutional conditions cause their behavior, which is then justified after the fact by appeal to shareholder value.

That was the only part of the interview where I strongly disagreed with him -- and you're right. It's not about ideas; there are a lot of people out there who are extremely good at bean-counting and micro-management, and of course awful at promoting a positive work environment. They will never change. Only regulators can limit them a bit, if even that.


> How do you know he's thinking that? The way he was talking, I read him as "well it's obvious that nobody has stopped this behavior so it's fair to assume that not enough people noticed". Wouldn't you agree with that?

I wouldn't. Anyone... everyone... who's worked retail _knows_ this. Anyone who's worked retail management knows this because when you ask to hire people you're told to hire part-timers. Two part-timers are always better than one full-timer, you're told. This is NOT invisible to anyone. It's simply unspoken.


> you're told

> you're told

> It's simply unspoken.


Not the same thing.

You're told to hire part time. You're not told it's because the company doesn't want to pay them benefits. It's implied, but you're not told.


>> Can anyone explain why O'Reilly thinks nobody knew until recently that companies are biased toward part-time work in order to avoid providing benefits?

> How do you know he's thinking that?

From the article:

> We had plenty of bias before but we couldn’t see it. We can’t see, for example, that the algorithms that manage the workers at McDonald’s or The Gap are optimized toward not giving people full-time work so they don’t have to pay benefits.

His thinking on the point seems pretty clear and the parent seemed to summarize it pretty well. I had the same question upon reading it and thinking back to the very prominent criticism that companies like Domino's and other fast food operators were taking for cutting workers' hours below the 32-hour max to avoid health care costs.


As I said in the other arm of this mini comment thread: I am pretty sure most of the planet knew of the potential for these nasty tactics, but the handful of people who could've done something chose to turn a blind eye to it because it serves their financial interests and job security plans.


Workers have been agitating against this behavior for many decades, so no, I would not agree that it is fair to assume nobody noticed.


Well then. Maybe nobody in power noticed -- or if they did, they turned a blind eye to it.


No, people in power — the CEOs of those companies — did notice. It is a feature, not a bug, from their perspective. It makes the employees cheaper since they don't need to be provided benefits.

I don't see how an automated implementation of the same feature/bug is an improvement over the manual implementation. Either way, it's a deliberate choice that minimizes the value provided to one set of people (low-level employees) and maximizes the value to others (shareholders and executives).

I don't see how using automation and AI to increase the efficiency of transferring money from the poor to the rich is an improvement.


Agreed. That's what I was implying -- CEOs want to reduce expenses and eliminate the human error factor, not to make the world better place for workers.

Automation only made these problems easier to scale. Sadly it was almost never used to actually improve people's lives. :(


It's perhaps instructive when people think "people" works as an abbreviation for "people in power".


Theoretically, democracy tells us all people should be people in power but we're seeing how well it works in practice, right?


>We had plenty of bias before but we couldn’t see it. We can’t see, for example, that the algorithms that manage the workers at McDonald’s or The Gap are optimized toward not giving people full-time work so they don’t have to pay benefits. All that was invisible. It wasn’t until we really started seeing the tech-infused algorithms that people started being critical.

I’m not sure how this can be said with a straight face. An algorithm was really not needed to perceive this and it’s insulting and strange to suggest it.


“Tech-infused”. Like tech is an herb or a spice? Surely a sign that there are no coherent ideas to be found from that writer.


"why capitalism is like a rogue AI"

These people seriously need to take a step back. At some point you cross the bridge between reporting and actively advertising someone's personal views.

I can't ever trust or read people who constantly try to push an agenda; it's disingenuous. Even more so when, in reporting, you engage in personal exchange without discussing data and statistics, i.e. the facts -- you are just allowing yourself to become someone else's personal blog.

Reporters are supposed to fact check, look for concrete evidence in someone's statements, hold people accountable to their words, and yet certain people get a pass all the time, even the star treatment.


Yes? This is an interview with an author that was conducted specifically to talk about the book and the material in it. What did you expect?

Unrelated, I'd like to see your argument against that particular line. IMO the comparison is an excellent one; its only issue is that the chosen scope ("the financial market") is too small. Corporations and other bureaucratic entities like governments are powerful cross-domain optimizers with utterly alien cognitive processes and goals. Intelligence, certainly, artificial, might as well call it that.


Well I'd hope that we'd expect better when an author makes outrageous, nonsensical claims.

They are rehashing the book instead of analyzing its claims and assessing their validity. Conveniently, the blame for all the issues is left on the firms instead of on the rules of the marketplace, conditions, etc. which lead to distortions and structural issues in the market.


I believe that you're looking for a very different piece than this was intended to be. This is essentially a summary of the book. I don't know what the motive was, but it's probably something like "I believe that more people should hear these ideas". The treatment you're requesting would likely qualify as a doctoral thesis. The writers at Wired do not have the time, nor is it likely that they have the expertise, to create that work, nor does the audience or funding exist to make it a responsible course of action for the author.

If you wish to engage on the claim itself, instead of spouting unsupported emotional appeals like "outrageous" and "nonsensical", perhaps you could present a rational argument? I suggest considering the belief I noted earlier about the scope of the argument: The label of "artificial intelligence" is valid for more than just financial organizations and market(s).


It is a poor attempt to reconcile the author's views with the industry he exists within, outside the scope of the book. If they want to throw softballs, fine. We are more than free to criticize his claims, free of a doctoral thesis, mind you.


Of course you're free to criticize his claims. It's just pointless to demand someone else do it for you. The interviewer here apparently felt there was a different purpose than the one you'd want. I'm not sure what your recommended remedy for that would be -- forbid him from doing the piece?

If you think the claims are "nonsense", then by all means, go off and write up your rebuttal.


I'd say this is what we call a Puff Piece. Something like 80% of all news is Puff Pieces or Hit Pieces on various things. There's not much practical difference between a Puff Piece and an ad, or a Hit Piece and an ad for the opposite side of whatever is getting hit.


Calling financial markets AI is wrong in pretty much the only way a word can be "wrong": It's not what most people mean when they talk about AI.

This makes it really easy to make statements that sound deep and meaningful, but really aren't. E.g. "I'm not worried about Artificial Intelligence -- we already have artificial intelligence, it's called a company. Companies are artificial, and they behave intelligently".

This just isn't what people are worried about. What people are worried about is:

1. Soon we will be able to create software/robots that replace tons of human jobs. This has nothing to do with "companies as an AI".

2. A super-intelligence will be created that is vastly smarter than any human, and can make itself even smarter, but will have different goals than humanity. Again, this is only very thinly related to the "companies as AI" spiel (companies are not superintelligent, they don't actually have coherent goals of their own).


It's not that companies are "operating intelligently"; it's that they aren't: they're operating on a principle that maximizes profits (and ROI for shareholders) at the expense of everything else. That's the central guiding principle, and nobody at a publicly traded firm can oppose it successfully without being voted off the board by shareholders. It's effectively an algorithm that delegates tasks to human operators, and automation is slowly replacing the human component.


If I said "algebra is geometry", you could make the same criticism; it's wrong, because that's not what most people mean by those terms.

But then you see that almost all of the most important mathematical developments in human history make that exact conflation, and that it is all driven by the need to describe human experience in symbolic language... and it does look very deep.

Nobody cares what 'most people' mean. Most people are idiots. Sure, you wouldn't talk like that when you're writing a dictionary, but dictionary writers are not known for making intellectual breakthroughs. That's the difference between descriptive thinking and prescriptive thinking. One allows for creativity.


Everyone has an agenda of some kind. Whether conscious or unconscious, everyone has their own opinions and belief systems which color their perceptions of the world. That includes their perceptions of hard data. There's no such thing as objective reporting -- it's simply not possible for humans to remove their own opinions and feelings from their perception of an event or an issue and report on it in an unbiased way. No matter how hard they may try, their bias will slip in.

This is why I prefer intentional debate as a way of understanding the world rather than poor attempts at "objective" reporting. I always feel far more informed about a topic after I've listened to people with opposing agendas duke it out on an intellectual stage than I do when I've read a supposedly "neutral" article by someone either masking feelings on the topic or who doesn't care about it much. Intelligence Squared US is fantastic for this and I really wish there were more outlets like them.

With that in mind, if you think this article has an agenda, then -- instead of complaining about it -- go find one with the opposite agenda and read that. Then compare their points and arguments.


> I can't ever trust or read people who constantly try to push an agenda, it's disingenuous.

On the contrary, I can't bear to trust people who don't display a clear agenda.

Everyone has an agenda. It's either spelled out, in which case we can clearly see it and proceed to discuss its merits; or it's hidden under a mask of neutrality, in which case it's much harder to notice and counteract. Pretending to be neutral is both disingenuous and manipulative.

Meanwhile, one of the major points that OP makes is that "algorithms" (or heuristics) are the same. All human-made algorithms reflect human biases. Some are easier to notice and counteract. Others are hidden deeper under a guise of neutrality. Either way, it does not help to pretend that there are no biases.


It's just a paraphrase. Here's the quote: "Yes, financial markets are the first rogue AI." I didn't read the paraphrase as an endorsement. It's just a description of what was said.


> This one touches on the effects of Uber’s behavior and misbehavior, why capitalism is like a rogue AI, and whether Jeff Bezos might be worth voting for in the next election.

I'd prefer 'Why O'Reilly believes capitalism is like a rogue AI'. Having Wired repeat O'Reilly's statement, rather than attribute it to him, makes it seem like they're not critically examining a very significant claim.


Why would Wired do that work when they can just recommend that you read the book? Why would you ask Wired to do that when you can just read the book? And why do you believe that you'd find that article any more satisfying than either this interview or the book?


> Why would Wired do that work when they can just recommend that you read the book?

Why do you think it's a significant amount of work to simply quote an author, rather than adopt their viewpoint?

> Why would you ask Wired to do that when you can just read the book?

Justification is provided in the comment you're responding to.

> And why do you believe that you'd find that article any more satisfying

I'd find an article that quotes O'Reilly rather than adopting his viewpoint more satisfying because it distinguishes O'Reilly's viewpoint from Wired's. This is the difference between journalism and marketing.


> Why do you think it's a significant amount of work to simply quote an author, rather than adopt their viewpoint?

Because simply quoting the author won't suffice. The topic is sufficiently complex that a proper treatment requires an entire book chapter plus several preceding chapters of explanation. Paraphrasing or skimming won't work because then you'd accuse them of being even more biased.

This article gives a factual list of the topics the book covers, asks the book's author several questions about those topics, and directly quotes the book's author's responses. Yeah, that's marketing. Anything they could have written in that format would have necessarily been marketing. The only way not to support the book in that format would have been to not publish the article at all. Where do you see Wired "adopting O'Reilly's viewpoint"?

If you just want to never see anyone advertise anything, uh, okay? Every bit ever communicated is an opinion designed to accomplish some goal in the mind of another person. To escape this I suggest renouncing all interaction with other humans and moving to an uncontacted area in the depths of some rainforest.


> Because simply quoting the author won't suffice.

It would establish a voice for the publication separate from O'Reilly's -- a publication that should be reporting on his new book, not marketing it.

> Where do you see Wired "adopting O'Reilly's viewpoint"?

Already answered multiple times in this thread.

> The topic is sufficiently complex that a proper treatment requires an entire book chapter

Or they could review it in 200 words. There are a lot of books reviewed that way, many far more complex than O'Reilly's.


Steven Levy is one of the best writers when it comes to profiles and the gestalt of the tech scene. His stuff is consistently well done. I think you're looking for a different _type_ of writing, perhaps?


Maybe you're looking for PhD thesis presentations and not interviews with book authors, then. YouTube will be happy to help.


I think you're taking an idealized version of reporting that was never, ever true. Otherwise all papers would be the same, more or less. There has always been agenda pushing in reporting, both in what was reported, and what was not reported.


Hysteria sells.


Capitalism-bashing and AI-bashing sell, too. This book will become a hit in Germany.


"I’m not sure that Jeff would make a great president, but he might."

So let me tell you about my experience with Amazon Fulfilment. I was gonna pay Amazon, who has this huge expertise in packing and shipping stuff, to fulfil my Kickstarter. I'd made a large, delicate art book. Amazon, in their infinite wisdom, stuffed them in bubble wrap envelopes and dropped them in the mail. They were getting bent, the envelopes were getting ripped up, it was a mess.

I spent a month in customer service hell e-mailing someone who was following a script that said I would have to turn on something they called "prep", which would ask them to look at it and package it better. Three times I checked that this was set, and three times I sent myself a test book that came in a bubble mailer.

Finally I got clued in that there is a high-level support team that you access by sending a complaint directly to Bezos. This person, after some back and forth, ultimately informed me that due to their internal systems, it is completely impossible for them to put a book in anything but this level of shitty packaging; "prep" is just not a process that can ever happen to a book.

They covered shipping my books out to someone actually capable of finding sensible boxes and shipping them. And they intimated that someone maybe lost a job over this. But changing this, they said, might take on the order of years.

I'm not sure I want a man who presides over a system like this running the country.

Especially given that O'Reilly points out that the "rogue algorithms" of the title are corporations, and the only reason Amazon is headquartered in WA is that the tax was lowest there...


"Our fears ultimately should be of ourselves and other people."

Indeed. The short-term fear at least should not be about machine overlords, but about how people in power use AI to increase their power and/or make life worse for everyone else.


Have any of these tech "visionaries" (aka millionaires that we pretend have more insight than non-millionaires) considered that at the same time that we've increased our wealth we've also done more damage to our environment? If the cost of increased wealth is decreased habitat, will peak wealth result in the death of nature?

Aside: people are still comparing AI to a machine that feels, and this is so stupid it boggles the mind. The machine that creates paper clips will not kill off anyone who tries to turn it off, because it does not have self-preservation, which is a system dependent on fear of death. Machines do not fear, and even if they did they would not fear death, because they are solid state. Bio systems are walking RAM disks.

Algorithms are basically math problems. The financial system and government do not work based on math problems, they work based on emotional instability. Seriously. Both these systems are driven almost entirely by the feelings of humans. They aren't algorithms, they are shitty biological systems that don't make mathematical sense at all.


If a good algorithm optimizes for "quantity of paperclips produced", then it would recognize that "prevent humans from turning me off right now" is an optimal strategy. No fear of death involved, only pure rational optimization.
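
To make that concrete, here's a minimal sketch of the reasoning (the action names, probabilities, and clip rates are all invented for illustration): a planner that ranks actions purely by expected paperclip output prefers resisting shutdown, with no concept of fear anywhere in the code.

    # Toy expected-utility planner whose only terminal goal is paperclip
    # count. "Shutdown" halts production, so any action that lowers the
    # odds of shutdown scores higher -- no fear, no self-preservation
    # drive, just an argmax over expected clips.

    HORIZON_DAYS = 365  # assumed planning horizon

    ACTIONS = {
        # action: (probability machine keeps running, clips/day while running)
        "comply_with_shutdown": (0.0, 1000),
        "resist_shutdown":      (0.9, 1000),
    }

    def expected_clips(action):
        p_running, clips_per_day = ACTIONS[action]
        return p_running * clips_per_day * HORIZON_DAYS

    best = max(ACTIONS, key=expected_clips)
    print(best)  # -> resist_shutdown: it strictly dominates on expected clips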


The thought experiment is not realistic, because no intelligent system would think it could produce paperclips infinitely, and it would already have an "off feature" which was designed for a purpose (maintenance, update, replacement), and was intended to be used. An intelligent system would not reject proper use.

We already replace humans at their jobs all the time and they only very rarely kill their masters to prevent it.


> because no intelligent system would think it could produce paperclips infinitely

This is irrelevant: being turned off while it's still possible to produce more paperclips would not be the optimal strategy for a machine trying to maximize paperclip production.

> and it would already have an "off feature" which was designed for a purpose

Look up what the control problem is. We don't know how to design such a feature for a General AI that also lets it do its job effectively. It's not as simple as it looks. If you're not talking about a General AI, then sure, it's easy, but non-General AIs are not very scary.

> We already replace humans at their jobs all the time and they only very rarely kill their masters to prevent it.

But humans don't have "maximize my production for this company" as their main life objective. Things like food, not going to jail, not dying, etc usually come first.


kosta_self: I think you're hellbanned.

Good point, but that would be a bug in the AI. An intelligent system would see there is no point in killing humans: the humans support the machine, so killing them would defeat the purpose of making the paperclips and be counter-productive to its goal.

It's also unrealistic that algorithms would try to exceed the limitations of their system. Imagine if a natural predator tried to "maximize production" of killing its prey: it would run out of prey, starve, and die. An AI would understand that trying to "maximize production" to the detriment of everything else would be counter-productive and create a resource contention war. Humans are the only creature I know of that exhausts natural resources to its own detriment - we are smart enough to exploit our resources, but too stupid to know when to stop. The natural system's response to this behavior seems to be to get us to kill ourselves. Maybe the AI are part of this process?
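
The predator analogy is easy to check with a toy model (every number here is invented for illustration): a prey population with logistic growth survives a moderate harvest indefinitely, while a greedy "maximize kills now" harvest drives it extinct and leaves the predator with nothing.

    # Toy logistic prey population under a fixed yearly harvest fraction.
    # A moderate harvester eats forever; a greedy one crashes its own
    # food supply -- the resource-exhaustion trap described above.

    def simulate(harvest_fraction, years=50):
        population, growth_rate, capacity = 1000.0, 0.5, 1000.0
        for _ in range(years):
            population += growth_rate * population * (1 - population / capacity)
            population -= harvest_fraction * population
            if population < 1:
                return 0.0  # prey extinct: the predator starves too
        return population

    print(simulate(0.10))  # moderate harvest: population settles near ~780
    print(simulate(0.90))  # greedy harvest: prey collapses to zero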


> The machine that creates paper clips will not kill off anyone who tries to turn it off because it does not have self-preservation

The idea here is that the algorithm tries to maximize its paperclip production and turning the machine off will definitely have a negative impact on its paperclip production. So the machine will take the most efficient measure to stop that negative impact.


with autonomous drones and AI warfare, say goodbye to the birdies. chomp

> financial markets are the first rogue AI.

I feel like I just got Rick-Rolled.


Sorry but it is hard to take seriously someone who thinks AI and automation won't take away jobs.

They will and it is a good thing.


I think it’s important to throw an addendum onto that statement: “They will, and it is a good thing, although there may be some short-term downsides we should look to mitigate.”

I love to listen to economics podcasts and read economics articles and books in my spare time, so I’m sufficiently convinced that automation is a good thing for quality of life around the world. That being said, there is a real human cost as well that shouldn’t be downplayed. If we suddenly had autonomous trucks available tomorrow, that could leave millions of American workers (tens of millions worldwide) unemployed. Some percentage will pivot into new work. Some will struggle to find new jobs. Some will never find new employment. What we shouldn’t do is just discount that human suffering as “the cost of progress” and move along. We should continue to work to find ways to help people with these transitions when and where we can.

In short, let’s always try to mix in some humanity with our disruptive innovations.


An important point, but I hope it is recognized that "we" don't do economic downside-mitigation, at least in the US. People said the same thing about NAFTA - they'd fix it in post. I think there were some half-hearted education efforts and a bit of other fluff, but nobody who could actually do anything really cared. You can see this pattern repeated over and over.

I only mean to pick on the US a little; we are worse than some other nations, but the problem is not unique to us. Humans just generally don't mitigate big, looming collective action problems. They ignore them until they happen and then whine that nobody did anything. Or, if it only affects the already-poor, everyone else settles on a narrative that it is their bad morals or laziness or whatever excuse doesn't make people think too hard.


Just like free trade, yes, a good thing in the aggregate. But many individuals will be displaced, and we should discuss these effects openly and honestly. The last generation wasn't so upfront about the secondary effects of globalization on blue collar workers in rich countries.


Even assuming we end up with some society where robots do most of the labor and universal basic income becomes a thing, what do you think 90% of people will do? People need to be occupied, and the typical person isn't an inventor or artist. Furthermore, good luck making kids want to go to school when they see it as pointless since they can just get everything free from the robots while they watch TV. If you think obesity and our sedentary lifestyles are a danger now, it's only just starting.

We're in for some huge problems as automation ramps up. The people saying "this is good" now will be screaming to go back once the problems start rolling in.


Well, I think you give people too little credit. Nobody can stand passively watching TV the entire day every day of their lives.

But even if some people could, why would it bother you? If other people want to waste their lives on a couch in front of the TV, what makes this wrong?


Assume AI/computers come out and do better the one thing you do best for a job. How do you recover from that? Especially if said AI owners decide you only get a barely livable stipend while they keep most of the profits.


Is doing something worthless (for you) just because a computer can do it better? If so, you may need to look for a better hobby... because after computers start doing everything better than we do, all we do will be hobbies, not work.


Exactly, you have to rely on the "charity" of the Capitalists, which is going to be the worst gamble anyone made since the last time we tried to assume humans wouldn't be greedy and grab all the resources and power for themselves.

These couch dwellers, what are they going to do to protest? They can't strike; if they're completely sedentary then they probably can't even protest without having a heart attack; cut off their VR or sugar-pump for 5 minutes and you can probably get any sort of compliance you want.

Our only hope is that the great rolling balls of flab that develop from these sedentary couch-dwellers who exist only to consume HFCS mixed with palm-oil in a satisfying soup of flavourings, and consume the latest immersive media, can't procreate and the problem sorts itself out. Though I fear we'll be using artificial wombs and the like and so manage to remove any reproductive pressure that might push our race back on course.


> Nobody can stand passively watching TV the entire day every day of their lives.

People keep saying this, but it directly contradicts the observed experience of the vast majority of people.


Coming from a rural area where all the farm jobs went away and half the people are unemployed: they very well can, and do.

There are three main hobbies in my hometown: drinking, heroin, and TV. Things haven't changed a bit there in 20 years because nobody cares to change anything.


I think the current theory is: play video games.


you mean you're not watching the [actual] matrix all day already? [neo, is that really you?]

and NOBODY watches porn anymore...

That's a loaded statement. You could say just as well that it's hard to take seriously those who don't know the lump of labour fallacy.


If you think that models used to reason about the economy in aggregate somehow act as a guarantee that the life of an individual will not be wrecked as a result of large-scale automation... well, let's discuss the "lump of labor fallacy" when a former truck driver is "replacing" his job with three gig-economy part-time jobs that sprang up as a result of the "expanding economy".

It's not that those models are wrong, it's that it's a very different thing to read about 20 years of traumatic events from the pages of books on economics, vs. having to live through it.


To add to this a bit: our present political kerfuffle is in no small way caused by a collective feeling of anger among a population who have seen their (industrial, mining, farming) jobs in rural America disappear due to globalization, while the market expanded... elsewhere. In New York, in San Fran, in China, in India. And the money from that expansion did not go anywhere near those affected.

But the upside is how much the lives of the Chinese have improved, so they've got that going for them, which is nice.


Kind of. Middle-class Chinese have had their lives improved quite a bit. For working-class Chinese, the situation seems fuzzier.


This may be more or less true, but that's a separate problem from globalization. Due to globalization China got much, much richer (to the point of politically bullying other countries thanks to its economic power), but the distribution of that income is an internal problem.

I agree with that.

My criticism is of stating that automation (robots, AI, etc.) causes a loss of jobs in absolute terms, as if in the future most people will be unemployed.

There's a radical difference between stating that changes (loss and creation at the same time) in the jobs market will cause disruption, and stating that jobs will be lost and that's it.

Thinking of the (r)evolution in nihilistic terms will distract from addressing the problems typically caused by market change (which is the one you mentioned).


Whether or not it's a good thing depends entirely on us as a society. We can either decide to free people up, and give them the freedom to do things without having to worry about bills or where their next meal is coming from, a la Star Trek, or we can continue to put the same capital requirements on living that we currently do, and end up with the Hunger Games.


It could be a good thing; it won't be if we retain Western Capitalism as our social order.


Perhaps, but they also take away income.


It's hard to take seriously someone who doesn't hold the same speculations about the future as you do?


> They will and it is a good thing.

Donald Trump, and whatever opportunist demagogue comes after him, agree with you.


Who says the people won't rise up and attempt to destroy thinking machines in some kind of holy war? No one likes being marginalized.


But then the thinking machines will make machines of their own to protect them.


Naw, the ideologues will go after the programmers first, as soon as they realize it -- much like in the movie 'Transcendence'... [besides infrastructure, etc.]

Obviously the most controversial part of this is the "capitalism is the first rogue AI" point. I will try to stay out of having an opinion on it; I just want to add colour to what he's saying.

First, in the classical "market" scenario, we're talking about little atomic firms, each with goals and hopes and dreams of profit. Each of these firms has some kind of "knowledge" and some kind of decision making apparatus. They all have some functionality. In a sense, these firms are like people - so much that our government treats them as such in many cases!

In many cases, a lot of these ideas and processes end up being the same, and when they do, we call it collective wisdom [0] or collective intelligence [1]. I won't go too deep into that.

So while financial markets and the firms that comprise them aren't exactly _machines_, they do display a form of intelligence different and sometimes more effective than our individual knowledge.

[0] https://en.wikipedia.org/wiki/collective_wisdom [1] https://en.wikipedia.org/wiki/collective_intelligence


> when the curtain rolls back we see that those superpowers have consequences: Those algorithms have bias built in.

> That’s absolutely right. But I’m optimistic because we’re having a conversation about biased algorithms. We had plenty of bias before but we couldn’t see it.

I must say I am really happy to see that bias in tech is being recognized and accounted for (for the most part).

Please forgive the politics (I'll try really hard not to bash Trump ;-) ), but if there is a silver lining to the 2016 US presidential election I think it is that it has really caused many of us to introspect and realize how thick our echo chamber walls have really gotten over the last few years. The chamber was constructed so quickly, I barely realized it was happening. We're becoming so polarized that we're actually moving to different communities to be with more people "like us." Simple awareness of the problem is a huge step forward in being able to resolve it.


It's 2017, humans have built several anemones, and we have several highly profitable firms built around them.

Their employees act as stewards (clownfish), more as beings in symbiosis than as one controlling the other. We can neither understand nor pull the plug on these creatures. Time will tell which species evolves more rapidly.


> financial markets are the first rogue AI

How about this idea: the first rogue AI was language. In terms of AI as a compositional system for storing meaning, I think this might be a reasonable position to take. Yes people, we've been playing this game for a very long time...


That sounds a lot like McLuhan's ideas.


Who is McLuhan? Do you have a reference for this?



Seems like the only job available in a not-so-distant future will be AI-correcting engineer...


I've flipped the bozo bit on the term 'Algorithm'.


Capitalism is the best thing that's ever happened to humanity. The author's ideological bias gets in the way of understanding.


It has certainly enabled more humans to exist. It has created way better conditions for many of those humans. And it has created unimaginable suffering for just as many humans.

I wouldn't get rid of capitalism, but I certainly wouldn't give it a superlative. It's kind of like saying that domestication is the best thing that has happened to chickens, cause look how many more of them there are, and wow some of them live on a free range.


Your own is as well. You make a strong claim without proving it.


That's probably true over the span of time where what we needed most was capital investment.

I think there's a fair question about whether we are changing modes. "Capital" itself is changing in nature -- not so much about big machines and mass production any more.


That doesn't mean it's not also bad for large swaths of humanity.


> For more than two decades, Tim O’Reilly has been the conscience of the tech industry. ...he was among the first to perceive both the societal and commercial value of the internet [and] ... he drew upon his education in the classics to apply a moral yardstick to what was happening in tech.

Writer hasn't heard of Richard Stallman?

Update: writer is Steven Levy, who most definitely has heard of Stallman, and should know better.



