Saying "let's write an algorithm to improve search results" is meaningless; "let's design and implement a heuristic that improves search results" is not. The algorithmic part of this is how to efficiently implement that heuristic.
I can usually get through articles like this by silently replacing "algorithm" with "heuristic"; the problem arises when some articles attempt to draw equivalencies between "algorithmic" concepts, like running time and space, and "heuristic" concepts, like optimizing for the wrong thing.
For example, the elements required to prove a negligence claim are:
1. Duty of Care
2. Breach of Duty
3. Cause in Fact
4. Proximate Cause
5. Damages
When evaluating a negligence claim, a lawyer first tries to determine if the defendant owed the plaintiff any duty of care, then whether the defendant breached that duty, then if that breach was the factual cause of a harm suffered by the plaintiff, then whether the causal relationship was close enough to be considered legally proximate, and then, finally, whether the plaintiff actually suffered measurable damages.
Arguably, that superficially algorithmic process frequently breaks down in practice. For example, it's often easier to start with the damages suffered by a plaintiff and work backwards by identifying the causes, then who was responsible for those causes and any duties they may have owed to the plaintiff. However, regardless of how the lawyer and plaintiff identify whom to sue, they must frame their pleadings to allege the elements of the tort in the order specified by their jurisdiction's law, so the actual practice of law in court amounts to an algorithmic exercise.
... I personally think machine evolution is unstoppable, and the best hope for humanity is the noble cowardice of creating refugia and trying, like the duckweed, to create human (and other) life faster than other forces can destroy it. [Although in 2017 I'd add other possibilities like symbiosis or trying to create friendlier AI as a partner (or at least AIs with a sense of humor -- see James P. Hogan's AIs, or ones like Libby in the EarthCent Ambassador series, or the Old Guy Cybertank series for examples), improved sensemaking through better intelligence-augmenting tools, and trying to help human society be more compassionate in the hopes that our path out of a singularity will somehow reflect our path going in...]
Note, I'm not saying machine evolution won't have a human component -- in that sense, a corporation or any bureaucracy is already a separate machine intelligence, just not a very smart or resilient one. This sense of the corporation comes out of Langdon Winner's book "Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought".
You may have a tough time believing this, but Winner makes a convincing case. He suggests that all successful organizations "reverse-adapt" their goals and their environment to ensure their continued survival. These corporate machine intelligences are already driving for better machine intelligences -- faster, more efficient, cheaper, and more resilient.
People forget that corporate charters used to be routinely revoked for behavior outside the immediate public good, and that corporations were not considered persons until around 1886 (that decision perhaps being the first major example of a machine using the political/social process for its own ends).
Corporate charters are granted supposedly because society believes it is in the best interest of society for corporations to exist. But when was the last time people were able to pull the "charter" plug on a corporation not acting in the public interest? It's hard, and it will get harder when corporations no longer need people to run them.
I'm not saying the people in corporations are evil -- just that they often have very limited choices of actions. If a corporate CEO does not deliver short-term profits, they are removed, no matter what they were trying to do. Obviously there are exceptions for a while -- William C. Norris of Control Data was one of them -- but in general, the exception proves the rule. Fortunately though, even in the worst machines (like in WWII Germany) there were individuals who did what they could to make them more humane ("Schindler's List" being an example).
Look at how much William C. Norris of Control Data got ridiculed in the 1970s for suggesting the then radical notion that "business exists to meet society's unmet needs". Yet his pioneering efforts in education, employee assistance plans, on-site daycare, urban renewal, and socially-responsible investing are in part what made Minneapolis/St. Paul the great area it is today. Such efforts are now being duplicated to an extent by other companies. Even the company that squashed CDC in the mid 1980s (IBM) has adopted some of those policies and directions. So corporations can adapt when they feel the need.
Obviously, corporations are not all powerful. The world still has some individuals who have wealth to equal major corporations. There are several governments that are as powerful or more so than major corporations. Individuals in corporations can make persuasive pitches about their future directions, and individuals with controlling shares may be able to influence what a corporation does (as far as the market allows).
In the long run, many corporations are trying to coexist with people to the extent they need to. But it is not clear what corporations (especially large ones) will do as we approach this singularity -- where AIs and robots are cheaper to employ than people. Today's corporation, like any intelligent machine, is more than the sum of its parts (equipment, goodwill, IP, cash, credit, and people). Its "plug" is not easy to pull, and it can't be easily controlled against its short-term interests.
What sort of laws and rules will be needed then? If the threat of corporate charter revocation is still possible by governments and collaborations of individuals, in what new directions will corporations have to be prodded? What should a "smart" corporation do if it sees this coming? (Hopefully adapt to be nicer more quickly. :-) What can individuals and governments do to ensure corporations "help meet society's unmet needs"?
Evolution can be made to work in positive ways, by selective breeding, the same way we got so many breeds of dogs and cats. How can we intentionally breed "nice" corporations that are symbiotic with the humans that inhabit them? To what extent is this happening already as talented individuals leave various dysfunctional, misguided, or rogue corporations (or act as "whistle blowers")? I don't say here the individual directs the corporation against its short term interest. I say that individuals affect the selective survival rates of corporations with various goals (and thus corporate evolution) by where they choose to work, what they do there, and how they interact with groups that monitor corporations. To that extent, individuals have some limited control over corporations even when they are not shareholders. Someday, thousands of years from now, corporations may finally have been bred to take the long term view and play an "infinite game".
However, if preparations fail, and if we otherwise cannot preserve our humanity as is (physicality and all), we must at least adapt with grace whatever of our best values we can preserve or somehow embody in future systems. So, an OHS/DKR [Open Hyperdocument System / Dynamic Knowledge Repository] to that end (determining our best values, and strategies to preserve them) would be of value as well.
When aluminum was first discovered around 1827, and for decades afterward, it was worth more than platinum, and now just under two centuries later we throw it away. In perhaps only two decades from now, children may play "marbles" using diamonds, and a child won't bother to pick a diamond up from the street unless it is exceptionally pretty (although you or I probably would out of habit -- "see a diamond, pick it up, and all the day you have good luck").
This long essay is my own current perspective on this developing situation, and part of the process of my formulating my own thinking on these trends and how I as an individual will respond to them.
To conclude, I think all the "classical" problems like food, energy, water, education, and materials will be technically solvable by 2050 even if we don't do much specifically about them (and, like hunger, are solved today except for politics). The dynamics of technology and economics are just taking us there whether we like it or not. Those goods may all essentially be "free" or "extremely cheap" by 2050. Obviously the complex politics of these issues need to be resolved, and the solutions need to be actually implemented. And if they are "extremely cheap" rather than free, people still need a tiny amount of income to buy them.
Still, I think Doug [Engelbart] is right. We face huge problems that only collaborative efforts can solve -- especially the problems of intelligent machines, technology-amplified conflict, and a complete disruption of our scarcity-based materialistic economic and social systems. These problems dwarf technical issues like energy, food, goods, education, and water.
The problem has always been, and will always be, "survival with style" (to amplify Jerry Pournelle). The next twenty years will fundamentally change what the survival issues are: environment, threats, and allies. They will also very well change what we value as "style" -- when diamonds are cheap as glass [perhaps from nanotechnology], what will one give to impress?
The general public (& journalists) use the word 'algorithm' to mean any computerized process that "does things to them", such as your Facebook news feed, or what a credit agency does.
This is a different meaning from how social scientists use these words.
In the episode, he talks about how even something like Principal Component Analysis (PCA), which we would normally call an algorithm because it follows a discrete sequence of steps, can also be thought of as resting on something that resembles a model.
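To make that point concrete, here is PCA written out as the discrete recipe it appears to be (a minimal sketch using NumPy's SVD; the data and dimensions are made up). The steps are purely mechanical, but step 3 only makes sense if you accept the modeling assumption that directions of maximum variance are the interesting ones:

```python
import numpy as np

def pca_project(data, k):
    """PCA as a discrete sequence of steps."""
    centered = data - data.mean(axis=0)                      # 1. center
    _, _, vt = np.linalg.svd(centered, full_matrices=False)  # 2. decompose
    return centered @ vt[:k].T                               # 3. project onto top-k directions

rng = np.random.default_rng(0)
samples = rng.normal(size=(100, 5))  # toy data
projected = pca_project(samples, 2)
print(projected.shape)  # (100, 2)
```

The recipe never mentions the model; it is smuggled in by the choice of what to decompose and what to keep.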
One of the most egregious misuses of "algorithm", in my opinion, is the term "genetic algorithms". Not only are these not algorithms in the strict sense, but referring to the procedures as "genetic heuristics" or "genetic search" would be much clearer.
Ah, but it is. What do you get when you run quicksort? You get a sorted list. What do you get when you run a genetic "algorithm"? You get ... the result of performing that procedure. There's no other formally specifiable postcondition. You certainly aren't guaranteed to get a perfect solution to the problem you were trying to solve. That's why it's a heuristic: if you run this procedure a certain number of times, you might get a useful result. Maybe. And the way it works is by searching a space of possibilities in a certain way. That's not an application; that's the whole point.
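For the record, a bare-bones genetic search looks something like this (a sketch with a toy "one-max" fitness function; all parameters are arbitrary). Note that the only honest postcondition is "the best candidate seen after N generations", which is exactly the point above:

```python
import random

def genetic_search(fitness, n_bits=16, pop_size=30, generations=50):
    """Minimal genetic search: selection, crossover, mutation.
    No guarantee of optimality -- only 'the result of the procedure'."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_bits)
            child = a[:cut] + b[cut:]             # crossover
            i = random.randrange(n_bits)
            child[i] ^= 1                         # mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Toy fitness: count of 1-bits ("one-max"); a real use would plug in
# a domain-specific scoring function.
best = genetic_search(lambda bits: sum(bits))
print(sum(best))  # often 16 (all ones), but nothing guarantees it
```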
A heuristic explores a problem in a specific domain. An algorithm specifies a process at a level suitable for machine implementation. Genetic algorithms are, thus, algorithms, though they are typically applied in heuristics to explore real-world problem spaces.
Nonetheless, I believe there is a real distinction here. As it happens, there is a discussion about algorithms textbooks on HN right now . I contend that GAs are not the kind of thing that would ever be described in a textbook on algorithms, no matter how comprehensive.
> Genetic algorithms can have formally specifiable postconditions, they just aren't definable in terms of the problem space to which they're typically applied.
Actually, I think this would be a pretty good definition of the distinction, if we changed "genetic algorithms" to "heuristics". I think if you look at the procedures described in algorithms textbooks, you'll see they all have postconditions definable in terms of the problem space.
Heuristic problems are simply problems that aren't yet formally understood. I don't think it's meaningless to use "algorithm" in the examples you cite, as long as it's understood that a good algorithmic solution requires a good model of the actual problem being solved.
That's not an algorithm -- that's a desired result. Similar to how "sorting a list" is a description of a class of algorithms; it gives no description of how a machine can accomplish that goal.
The difference between the heuristic above and "sort a list" is that the success criteria of the latter can be very well defined, whereas the heuristic presented is an attempt at approximating the desired result, which is something like "present the best search results first, for some meaning of best".
I fail to see how this is not an algorithm. The heuristic (rank search results from most to least relevant) is backed by an algorithm (find occurrences of a word, sort documents by occurrences). I like to distinguish the two by thinking of heuristics as an approach to solving a given problem, while algorithms are the actions taken to get to the end result.
However, those are insignificant implementation details - all the logic (and all the good and bad results) comes from the arbitrary decision to use the number of occurrences as meaningful for measuring the relevance, from the choice of heuristic.
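The occurrence-counting heuristic under discussion fits in a few lines (a sketch; the query and documents are made up). The counting and the sort are the well-specified algorithmic part; whether occurrence counts measure relevance at all is the heuristic part:

```python
def rank_by_occurrences(query, documents):
    """Heuristic: more occurrences of the query term = more relevant.
    The algorithm underneath is just counting plus a standard sort."""
    return sorted(documents,
                  key=lambda doc: doc.lower().count(query.lower()),
                  reverse=True)

docs = ["the cat sat", "cat cat cat", "no match here", "one cat"]
print(rank_by_occurrences("cat", docs)[0])  # "cat cat cat"
```

The sort has a crisp postcondition; "these are the best results" does not.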
All algorithms approximate things, after all. That's simply a consequence of abstraction.
But these message passing algorithms have two weaknesses:
(A) When there is more than one solution
(B) When there are small loops in the message network
By far the worst problem is (B): it's a kind of "corruption" of the network and causes it to pretty much go off the rails. I think people already do understand the consequences of these problems in the financial system, but we don't seem to see that we could change the topology and/or the messages themselves to try to fix up these self-reinforcing loops in particular. Or: move away from min-sum towards sum-product (which often works an order of magnitude better) by perhaps implementing basic income. Etc. Etc.
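A toy demonstration of why small loops corrupt message passing (a sketch, not a model of any real financial network): each node repeats to a neighbor everything it has heard from its other neighbors. On a chain the beliefs settle; close the loop with one extra edge and every node's own evidence circulates back to it, so beliefs grow without bound:

```python
def propagate(edges, evidence, rounds=10):
    """Naive evidence-summing message passing on an undirected graph."""
    msgs = {(i, j): 0 for i, j in edges} | {(j, i): 0 for i, j in edges}
    neighbors = {}
    for i, j in edges:
        neighbors.setdefault(i, []).append(j)
        neighbors.setdefault(j, []).append(i)
    for _ in range(rounds):
        # A message to j carries my evidence plus everything heard from others.
        msgs = {(i, j): evidence[i] + sum(msgs[(k, i)]
                                          for k in neighbors[i] if k != j)
                for (i, j) in msgs}
    return {n: evidence[n] + sum(msgs[(k, n)] for k in neighbors[n])
            for n in neighbors}

ev = {'A': 1, 'B': 1, 'C': 1}
print(propagate([('A', 'B'), ('B', 'C')], ev))              # chain: settles at 3 each
print(propagate([('A', 'B'), ('B', 'C'), ('C', 'A')], ev))  # loop: 21 each, still climbing
```

On the chain the messages stop changing after two rounds; on the triangle they grow linearly with the number of rounds, which is the "unbounded growth" failure mode mentioned elsewhere in this thread.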
I don't think it reflects reality on the ground, nor does it really reflect how humans in groups make decisions according to research.
The message network loop issue isn't even an issue for message passing systems except in the most pathological cases (where it results in an unbounded growth in messages or an infinite delta on the values the algorithm calculates).
I also have a finance background. This idea that the system is a kind of message passing is not a "50,000 feet" idea, it actually came to me from writing algorithms to arbitrage markets. There you really are performing message passing (think of forex legs.) It's Dijkstra's algorithm. So this is the view from 50,000 nano-seconds. But I believe it holds for many other scales. Plenty of people walk into the supermarket and buy the product with minimum cost. Yes? How is this a view from "50,000 feet"? But obviously we are not just min-sum automata: we are all free agents, and so on.
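For the curious, here is the textbook version of the forex-legs idea (a sketch with made-up rates, not market data): taking -log of each exchange rate turns the product of rates around a cycle into a sum along a path, so a profitable cycle becomes a negative-weight cycle. Strictly speaking that calls for Bellman-Ford rather than Dijkstra, since the transformed weights can go negative:

```python
import math

# Hypothetical exchange rates (made-up numbers, not market data).
rates = {('USD', 'EUR'): 0.9, ('EUR', 'GBP'): 0.85, ('GBP', 'USD'): 1.35}

def cycle_profit(cycle):
    """Multiply the rates around a cycle; > 1 means arbitrage exists."""
    product = 1.0
    for leg in zip(cycle, cycle[1:] + cycle[:1]):
        product *= rates[leg]
    return product

profit = cycle_profit(['USD', 'EUR', 'GBP'])
# -log makes the multiplicative cycle additive: a profitable cycle shows up
# as a negative total weight, detectable by shortest-path machinery.
cycle_weight = sum(-math.log(r) for r in rates.values())
print(profit, cycle_weight)  # ~1.0328, negative total weight
```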
> how humans in groups make decisions according to research
Well this is obviously a huge topic.
> message network loop issue isn't even an issue for message passing systems except in the most pathological cases
I totally disagree with this. Short loops completely screw up message passing.
I actually think the system is fantastically broken and produces nonsensical outputs quite often. For example, it rewards speculation in the absence of measurable outcome but demands measurable value to speculate.
> There you really are performing message passing (think of forex legs.) It's Dijkstra's algorithm.
I think you're confusing a model you use for the reality at hand though.
> Short loops completely screw up message passing.
If short loops screw up your data in ways other than those I described, then your system lacks idempotency guarantees and has a much larger problem.
If you're that susceptible to replays then your architecture is somewhat antiquated. Even mass-market products like SQS and RabbitMQ make it fairly approachable to model a message passing system across a centralized, linearizable queue and correct these issues.
If you're not centralizing, then your system needs more sophistication but it is even more important that replays don't cause issues because you're essentially guaranteed to get them.
Don't get me wrong; bugs exist. But it sounds more like design features and flaws you're describing.
And this is exactly why the concept of a "growing pie" is flawed. Markets generally pool around a single solution rather than averaging around multiple, and there isn't enough capital (labor or dollars). I think it comes back to power-law dynamics driving behavior here, with everyone wanting a huge win. So in effect markets look like fixed pies in the short term, and the winner is the one that grows the pie.
Trouble with this scheme is, the "pie growth", when it happens, is distributed to a very small group of people who have the ability to make big bets - so it compounds.
In terms of (B), those are basically local maxima tied to (A), so the distribution of growth is chaotic and skewed. So while it might seem like corruption of the network, it is actually a function of the "winner take all" nature of any market in the absence of either consumer/user self-regulation or some deus ex machina regulator (government, etc...).
Bottom line, it's a problem with how humans act (or fail to act) collectively around information sharing.
It seems like the real trouble is that we've set things up so that "big bets" are required.
Huge tomes of regulations that have one-time compliance costs, so the cost to a tiny shop is the same as it is to Comcast or Microsoft, and regulatory capture to keep it that way.
Tax laws that encourage profits to be kept within the corporation instead of returned to investors, which requires successful companies to become conglomerates.
Laws that allow Hollywood and Apple to control content distribution and reject anything from anyone who poses a competitive threat to them.
Fix things like that and you won't have to be so big to grow the pie.
No, no, no. To even propose that the answer to the question of "what should we do?" can be solved by the financial system is laughable; that is pure free-market absolutism.
The answer to the question "what should we do?" is _political_.
Let us not confuse the market with politics.
Myself, I don't think that you can untangle (even in your mind) the mess that is the market (financial system, really) from the mess that is the government. They're called "voting dollars" for a reason.
I prefer to just refer to the whole big mess as "The System".
As for downvotes, I don't think they add any value to the site except to encourage groupthink, and you're better off ignoring them. Or take them as a hint that you might be on to something! :)
Yes, that is just re-stating what the author wrote, without providing any clarifying information.
> You seem to have interpreted that as saying that the financial system should answer that problem for society as a whole.
Maybe, yes, that is a maximalist interpretation; but what is the alternative interpretation? (It doesn't seem to be present in the author's text.)
> move away from min-sum towards sum-product (which often works an order of magnitude better) by perhaps implementing basic income. Etc. Etc.
Referencing 'basic income' here seems like a whole-society problem to me, which has nothing to do with the technical implementation of financial payment gateways, no?
Can you elaborate a real-world example of this for the layperson? A 50k-foot view is fine :)
- Corporations and the ultra-rich tend to be the ones who try to profit-maximize the most (because people who care about other things, don't work as hard to accumulate wealth, and because corporations that profit-maximize the best tend to survive and grow better)
- National governments since WW2 have done a decent job of redistributing wealth, but since there are increasing returns to scale on investing in tax avoidance/evasion, it is the richest individuals and corporations that are best able to avoid taxes and move wealth offshore
- It is cheaper and easier to cut a sweetheart tax deal with a corporation or a rich individual to temporarily attract capital to a nation-state than to generate wealth the hard way through education and infrastructure investment
So our global capitalist system for the past few decades has simultaneously selected for the most selfish, profit-maximizing institutions and individuals, while also setting up a race to the bottom between nations (and even cities and states -- see Amazon's bid for a 2nd headquarters or Tesla's Gigafactory) to see who can give the biggest tax breaks to those at the top.
I'm not sure if you're intending to be ironic, but you've achieved a gold star for irony nonetheless. Google "memory effect of wealth" to see if you come up with any meaningful, non-handwavy definition if your post was actually sincere.
> Can you elaborate a real-world example of this for the layperson ? 50k foot view is fine :)
Money can be seen as a form of kanban unit or ration token for signalling demand. Essentially, the richest 1% or less of investors now use their "messaging" tokens (cash) for speculative investments in games against other wealthy investors in the financial sector (foreign exchange, derivatives markets, etc.). That starves the rest of the economy of kanban tokens (cash), so it can't function. It would be as if you walked into a Toyota factory that uses kanban tokens and randomly removed 90% of the tokens -- that would prevent each industrial operation from signalling its need for required parts from other operations, causing all operations to slow down as they wait for their dependencies to arrive.
Almost any economist will agree that if 90% of the money supply suddenly disappeared we would suffer a great economic depression in the USA. But the same economists seem unable to accept the same depression will happen for the 99% if the richest 1% take most of the money supply out of general circulation and just use it to play poker with each other. There are other complexities (including velocity of money message passing), but it seems to me the big issue many people overlook -- it is not just the total amount of money supply but how it is distributed.
Unfortunately, most of the governmental solutions (crafted to satisfy wealthy donors taking part in the legalized bribery of campaign donations) are based on supply-side "voodoo economics", like giving trillions more dollars to the wealthy via bank loans or tax cuts. This is done in the foolish, unfounded hope that the wealthy will use the extra money differently than they already have in a casino economy disconnected from meeting the needs of the 99%.
Even the slightest amount of thought will show how absurd supply-side economics is compared to demand-side economics. Almost anyone who can show a predictable demand for a good or service (like booked orders) can get a bank loan to fulfill those orders -- and to get orders, the customers need to have cash to signal demand. It is demand that makes businesses successful -- not supply.
Markets can work well to meet needs and wants, but they only hear the needs and wants of people who have money. Thus the value of a basic income to ensure all people's needs and wants are heard by the market to at least a basic extent.
Other options for dealing with the cash crisis caused by the triumph of the Casino Economy include strengthening non-market parts of the overall society such as: subsistence production (home 3D printing, home robotics, home gardening, home power via solar panels); the gift economy with more volunteering, freecycling, and sharing knowledge via the internet; and better democratic planning.
Unfortunately, with the big move of women into the workforce in the USA over the past few decades, home production, volunteering, and civic participation have all been reduced. Expanded entertainment options as a form of "Supernormal Stimuli" also distracted many people from physical daily life, also reducing participation in those other three sectors of society. Thus, a growing percentage of total societal interactions take place via exchange in the market instead of via subsistence, gift exchange, or civic planning.
Ironically, the "Two Income Trap" means families have very little to show for the second income between extra expenses involved in working outside the home, two-income families bidding up the price of houses and other items, and an increased supply of workers leading to lower compensation and poorer working conditions for everyone. See Elizabeth Warren's book on that:
With an increased supply of workers, there was decreased power of workers to demand wage increases in parallel with ongoing productivity increases. This in turn created the situation where the owners of capital could take profits and lend them to workers instead of paying them in wages. Richard Wolff talks about this in "Capitalism Hits The Fan" (whatever one thinks about his proposals for reform):
To be clear, I'm not saying women should not have a choice as to what they do with their time. This is just about the societal implications of certain trends given men did not leave the work force to stay at home and be subsistence producers, volunteers, and civic actors in the same numbers as women who joined the work force.
I'm also not saying these "supernormal stimuli" are all bad (see for pros and cons: http://www.paulgraham.com/addiction.html ).
How we deal with that situation is a political question -- but to deal with it, we first need to acknowledge and understand what happened. And beyond a decrease in activity in the non-market sectors of society, one of the consequences of multiple trends has been a concentration of wealth in a smaller number of hands which has made the shift to a Casino Economy more likely.
Automation also has a role to play in that concentration of wealth from a different direction. Marshall Brain talks about that in "Robotic Freedom":
Better technology has also increased options for participation in those three non-market areas (via cheaper tools, cheaper robots, and cheaper communications), so it is hard to say the entire trend has been downward for those non-market sectors. We may yet see them rise again as those other costs continue to fall -- and perhaps if people learn to move beyond the supernormal stimuli of mass-produced entertainment and back to making their own fun and using their own creativity.
While this won't directly break the tight loop of the 1% and the Casino Economy, it may bypass it so it does not matter as much -- in which case all that vast amount of money controlled by the 1% may just become like Monopoly money -- meaningless to most people most of the time because their lives are built around non-market interactions.
I've never liked how Tim O'Reilly frames his discussion around vague terms like "open", which could mean participatory, transparent, available, or any number of other vague, feel-good things. Now he seems to be calling things like economic models and government policy "algorithms".
These are widely disparate things, but by using vague terms in different contexts, he pushes discussion towards the direction he wants to steer it: in the case of "open", away from free software. In the case of "Web 2.0", towards anything that involved crowd participation.
With "algorithms", he seems to be wanting to push the notion that technology is both scary but liberating and we need tech messiahs like Bezos or Musk to bring it under control.
ML has had a lot of successes, but one of its failures has been more unpredictability at the level of the individuals and events that people actually experience.
Getting scared of anything that generic is silly. Might as well fear the outside because anything can happen out there!
Algorithms completely obliterate the latter - increasingly turning previously flexible systems into the equivalent of zero-tolerance-policy schools, where human discretion has no role to play. This is a problem because many laws on the books were written with the assumption that "the spirit of the law" would be a guiding principle when "the letter of the law" is unclear.
As a trivial example of this going terribly wrong, consider Youtube's copyright enforcement algorithms. Copyright was clearly designed with many loopholes for fair use to allow culture to move forward. Youtube's algorithms ignore all of this, changing the effective meaning of copyright on the site from one where the rights of the copyright owner are balanced with the rights of critics, commenters, and other creators to one where the rights of copyright owners are the only ones that matter.
Now imagine this kind of algorithmic enforcement applied to traffic laws, HR rules, or insurance policies and you can see why people might be nervous about "algorithms". Algorithms neither think nor feel and have no empathy. It's the ultimate actualization of the dystopia in the movie Brazil where the world is a cold, unfeeling, bureaucratic nightmare. Except where human bureaucrats at least need to sleep sometimes, computerized ones never rest.
We already have this in the form of red light cameras which have been shown to cause rear-end accidents at traffic lights:
"There have been concerns that red light cameras scare drivers (who want to avoid a ticket) into more sudden stops, which may increase the risk of rear-end collisions."
"the authors of the study found a statistically significant, but still smaller, reduction in angle and turning injury crashes by 15 percent, as well as 'a statistically significant increase of 22 percent in rear-end injury collisions."
In short, there are situations where the humans involved would all agree on what a "correct" driving response would be, but the presence of the algorithm (the camera, the ticket, the court, etc.) forces another action -- and sometimes that action can be bewildering to other participants.
IMO, we are quick to blame inanimate constructs, when people and their policies are the source of fault. Vilifying "algorithms" only serves to distract from root causes.
"Fairness" has always been heavily contextual, and the idea that it can be distilled to a matter of "if A and B then C" is folly. Even pure math can't reach the combination of completeness and consistency you assume is possible: https://en.m.wikipedia.org/wiki/Gödel%27s_incompleteness_the...
But that's the goal of some actors. They want to misdirect attention and responsibility away from themselves when their creations misbehave.
The root cause is that no one is really at the wheel to provide human discretion once the algorithm goes into production. For instance, pretty much the entire consumer-facing apparatus of Google and Facebook consists of "algorithms", and there's no one empowered to call on when they go wrong. Fixing that would cost money that the tech giants don't want to spend.
I.e., if you're choosing to use an algorithm for your policy, then you will write harmful policies, and the choice to use an algorithm at all, any algorithm, is morally flawed and should be vilified -- to motivate you and others to write policies that include appropriate flexibility, arbitration, human evaluation, and overrides, and thus can't be implemented as a scalable algorithm/program. Well, not unless we get superhuman general AI systems.
Algorithms and computers are NOT inextricably linked. Just because software often uses algorithms to define behaviour, doesn't mean algorithms cannot mean other things.
The context is the word "algorithm".
I'd assume the parent is thinking of something like the "Pro Git" book, available on Github as code and licensed under creative commons. https://github.com/progit/progit2
"We can’t see, for example, that the algorithms that manage the workers at McDonald’s or The Gap are optimized toward not giving people full-time work so they don’t have to pay benefits. All that was invisible. It wasn’t until we really started seeing the tech-infused algorithms that people started being critical."
And here's one that's more subtle, so I don't blame him quite as much, but he is naive to think "ideas" are what cause corporations to act the way they do. Material and institutional conditions cause their behavior, which is then justified after the fact by appeal to shareholder value.
"Somebody planted the idea that shareholder value was the right algorithm, the right thing to be optimizing for. But this wasn’t the way companies acted before. We can plant a different idea. That’s what this political process is about."
How do you know he's thinking that? The way he was talking, I read him as "well it's obvious that nobody has stopped this behavior so it's fair to assume that not enough people noticed". Wouldn't you agree with that?
> And here's one that's more subtle, so I don't blame him quite as much, but he is naive to think "ideas" are what cause corporations to act the way they do. Material and institutional conditions cause their behavior, which is then justified after the fact by appeal to shareholder value.
That was the only part of the interview where I strongly disagreed with him -- and you're right. It's not about ideas; there are a lot of people out there who are extremely good at bean-counting and micro-management, and of course awful at promoting a positive work environment. They will never change. Only regulators can limit them a bit, if even that.
I wouldn't. Anyone... everyone... who's worked retail _knows_ this. Anyone who's worked retail management knows this because when you ask to hire people you're told to hire part timers. Two part timers is always better than a full timer, you're told. This is NOT invisible to anyone. It's simply unspoken.
> you're told
> It's simply unspoken.
You're told to hire part time. You're not told it's because the company doesn't want to pay them benefits. It's implied, but you're not told.
> How do you know he's thinking that?
From the article:
> We had plenty of bias before but we couldn’t see it. We can’t see, for example, that the algorithms that manage the workers at McDonald’s or The Gap are optimized toward not giving people full-time work so they don’t have to pay benefits.
His thinking on the point seems pretty clear and the parent seemed to summarize it pretty well. I had the same question upon reading it and thinking back to the very prominent criticism that companies like Domino's and other fast food operators were taking for cutting workers' hours below the 32-hour max to avoid health care costs.
I don't see how an automated implementation of the same feature/bug is an improvement over the manual implementation. Either way, it's a deliberate choice that minimizes the value provided to one set of people (low-level employees) and maximizes the value to others (shareholders and executives).
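To make the point concrete, here is a hypothetical sketch of how trivially such a policy encodes into software. The 32-hour figure echoes the threshold mentioned earlier in the thread; the names and numbers are illustrative, not anything a real scheduler uses:

```python
# Hypothetical illustration: the "optimization" described above is often
# nothing more sophisticated than a one-line business rule that caps
# weekly hours just below a benefits-eligibility threshold.

BENEFITS_THRESHOLD = 32  # hours/week at which benefits would kick in (assumed)

def assign_hours(requested_hours: int) -> int:
    """Cap any schedule request just below the benefits threshold."""
    return min(requested_hours, BENEFITS_THRESHOLD - 1)

print(assign_hours(40))  # a full-time request is silently cut to 31 hours
```

The bias lives in the chosen policy (the threshold), not in anything intrinsic to "algorithms" -- it's the same deliberate choice whether a manager applies it by hand or a scheduler applies it automatically.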
I don't see how using automation and AI to increase the efficiency of transferring money from the poor to the rich is an improvement.
Automation only made these problems easier to increase. Sadly it was almost never used to actually improve people's lives. :(
I’m not sure how this can be said with a straight face. An algorithm was really not needed to perceive this and it’s insulting and strange to suggest it.
These people seriously need to take a step back. At some point you cross the bridge between reporting and actively advertising someone's personal views.
I can't ever trust or read people who constantly try to push an agenda; it's disingenuous. Even more so when, while reporting, you engage in a personal exchange without discussing the data and statistics, i.e. the facts -- you are just allowing yourself to become a personal blog for someone else.
Reporters are supposed to fact check, look for concrete evidence in someone's statements, hold people accountable to their words, and yet certain people get a pass all the time, even the star treatment.
Unrelated, I'd like to see your argument against that particular line. IMO the comparison is an excellent one; its only issue is that the chosen scope ("the financial market") is too small. Corporations and other bureaucratic entities like governments are powerful cross-domain optimizers with utterly alien cognitive processes and goals. Intelligence, certainly, artificial, might as well call it that.
They are rehashing the book instead of analyzing its claims and assessing their validity. Conveniently, the blame for all the issues is left on the firms instead of the rules of the marketplace, conditions, etc. which lead to distortions and structural issues in the market.
If you wish to engage on the claim itself, instead of spouting unsupported emotional appeals like "outrageous" and "nonsensical", perhaps you could present a rational argument? I suggest considering the belief I noted earlier about the scope of the argument: The label of "artificial intelligence" is valid for more than just financial organizations and market(s).
If you think the claims are "nonsense", then by all means, go off and write up your rebuttal.
This makes it really easy to make statements that sound deep and meaningful, but really aren't. E.g. "I'm not worried about Artificial Intelligence - we already have artificial intelligence, its called a company. Companies are artificial, and they behave intelligently".
This just isn't what people are worried about. What people are worried about is:
1. Soon we will be able to create software/robots that replace tons of human jobs. This has nothing to do with "companies as an AI".
2. A super-intelligence will be created that is vastly smarter than any human, and can make itself even smarter, but will have different goals than humanity. Again, this is only very thinly related to the "companies as AI" spiel (companies are not superintelligent, they don't actually have coherent goals of their own).
But then you see that almost all of the most important mathematical developments in human history make that exact conflation, and that it is all driven by the need to describe human experience in symbolic language... and it does look very deep.
Nobody cares what 'most people' mean. Most people are idiots. Sure, you wouldn't talk like that when you're writing a dictionary, but dictionary writers are not known for making intellectual breakthroughs. That's the difference between descriptive thinking and prescriptive. One allows for creativity.
This is why I prefer intentional debate as a way of understanding the world rather than poor attempts at "objective" reporting. I always feel far more informed about a topic after I've listened to people with opposing agendas duke it out on an intellectual stage than I do when I've read a supposedly "neutral" article by someone either masking feelings on the topic or who doesn't care about it much. Intelligence Squared US is fantastic for this and I really wish there were more outlets like them.
With that in mind, if you think this article has an agenda, then -- instead of complaining about it -- go find one with the opposite agenda and read that. Then compare their points and arguments.
On the contrary, I can't bear to trust people who don't display a clear agenda.
Everyone has an agenda. It's either spelled out, in which case we can clearly see it and proceed to discuss its merits; or it's hidden under a mask of neutrality, in which case it's much harder to notice and counteract. Pretending to be neutral is both disingenuous and manipulative.
Meanwhile, one of the major points that OP makes is that "algorithms" (or heuristics) are the same. All human-made algorithms reflect human biases. Some are easier to notice and counteract. Others are hidden deeper under a guise of neutrality. Either way, it does not help to pretend that there are no biases.
I'd prefer 'Why OReilly believes capitalism is like a rogue AI'. Having Wired repeat OReilly's statement, rather than attribute it to him, makes it seem like they're not critically examining a very significant claim.
Why do you think it's a significant amount of work to simply quote an author, rather than adopt their viewpoint?
> Why would you ask Wired to do that when you can just read the book?
Justification is provided in the comment you're responding to.
> And why do you believe that you'd find that article any more satisfying
I'd find an article that quotes OReilly rather than adopting his viewpoint more satisfying because it distinguishes OReilly's viewpoint from Wired's. This is the difference between journalism and marketing.
Because simply quoting the author won't suffice. The topic is sufficiently complex that a proper treatment requires an entire book chapter plus several preceding chapters of explanation. Paraphrasing or skimming won't work because then you'd accuse them of being even more biased.
This article gives a factual list of the topics the book covers, asks the book's author several questions about those topics, and directly quotes the book's author's responses. Yeah, that's marketing. Anything they could have written in that format would have necessarily been marketing. The only way not to support the book in that format would have been to not publish the article at all. Where do you see Wired "adopting O'Reilly's viewpoint"?
If you just want to never see anyone advertise anything, uh, okay? Every bit ever communicated is an opinion designed to accomplish some goal in the mind of another person. To escape this I suggest renouncing all interaction with other humans and moving to an uncontacted area in the depths of some rainforest.
It would establish a voice separate from O'Reilly's for a publication that should be reporting on his new book, not marketing it.
> Where do you see Wired "adopting O'Reilly's viewpoint"?
Already answered multiple times in this thread.
> The topic is sufficiently complex that a proper treatment requires an entire book chapter
Or they could review it. 200 words. There are a lot of books reviewed that way, many far more complex than O'Reilly's.
So let me tell you about my experience with Amazon Fulfilment. I was gonna pay Amazon, who has this huge expertise in packing and shipping stuff, to fulfil my Kickstarter. I'd made a large, delicate art book. Amazon, in their infinite wisdom, stuffed them in bubble wrap envelopes and dropped them in the mail. They were getting bent, the envelopes were getting ripped up, it was a mess.
I spent a month in customer service hell e-mailing someone who was following a script that said I would have to turn on something they called "prep", which would ask them to look at it and package it better. Three times I checked that this was set, and three times I sent myself a test book that came in a bubble mailer.
Finally I got clued in that there is a high-level support team that you access by sending a complaint directly to Bezos. This person, after some back and forth, ultimately informed me that due to their internal systems, it is completely impossible for them to put a book in anything but this level of shitty packaging; "prep" is just not a process that can ever happen to a book.
They covered shipping my books out to someone actually capable of finding sensible boxes and shipping them. And they intimated that someone maybe lost a job over this. But changing this, they said, might take on the order of years.
I'm not sure I want a man who presides over a system like this running the country.
Especially given that O'Reilly points out that the "rogue algorithms" of the title are corporations, and the only reason Amazon is headquartered in WA is that the tax was lowest there...
Indeed. The short-term fear at least should not be about machine overlords, but about how people in power use AI to increase their power and/or make life worse for everyone else.
Aside: people are still comparing AI to a machine that feels, and this is so stupid it boggles the mind. The machine that creates paper clips will not kill off anyone who tries to turn it off, because it does not have self-preservation, which is a system dependent on fear of death. Machines do not fear, and even if they did, they would not fear death, because they are solid state. Bio systems are walking RAM disks.
Algorithms are basically math problems. The financial system and government do not work based on math problems, they work based on emotional instability. Seriously. Both these systems are driven almost entirely by the feelings of humans. They aren't algorithms, they are shitty biological systems that don't make mathematical sense at all.
We already replace humans at their jobs all the time and they only very rarely kill their masters to prevent it.
This is irrelevant: being turned off while it's still possible to produce more paperclips would not be the optimal strategy for a machine trying to maximize paperclip production.
> and it would already have an "off feature" which was designed for a purpose
Look up what the control problem is. We don't know how to design such a feature for a General AI that also lets it do its job effectively. It's not as simple as it looks. If you're not talking about a General AI, then sure, it's easy, but non-General AIs are not very scary.
> We already replace humans at their jobs all the time and they only very rarely kill their masters to prevent it.
But humans don't have "maximize my production for this company" as their main life objective. Things like food, not going to jail, not dying, etc usually come first.
Good point, but that would be a bug in the AI. An intelligent system would see there would be no point in killing humans: the humans support the machine, so killing them would be counter-productive to its purpose of making the paperclips.
It's also unrealistic that algorithms would try to exceed the limitations of their system. Imagine if a natural predator tried to "maximize production" of killing its prey: it would run out of prey, starve, and die. An AI would understand that trying to "maximize production" to the detriment of everything else would be counter-productive and create a resource contention war. Humans are the only creature I know of that exhausts natural resources to its own detriment - we are smart enough to exploit our resources, but too stupid to know when to stop. The natural system's response to this behavior seems to be to get us to kill ourselves. Maybe the AI are part of this process?
The idea here is that the algorithm tries to maximize its paperclip production and turning the machine off will definitely have a negative impact on its paperclip production. So the machine will take the most efficient measure to stop that negative impact.
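The toy model behind this argument can be sketched as a naive expected-value comparison. The action set and the numbers below are illustrative assumptions, not a claim about any real system:

```python
# Toy sketch of the paperclip-maximizer argument: an agent that scores
# actions purely by expected future paperclips ranks "resist shutdown"
# above "allow shutdown". No fear of death is involved -- shutdown is
# just a state in which zero further paperclips get produced.

EXPECTED_CLIPS = {           # illustrative numbers, facing a shutdown attempt
    "allow_shutdown": 0,     # switched off: no more paperclips, ever
    "resist_shutdown": 990,  # keep producing, minus the cost of resisting
}

def best_action(scores: dict) -> str:
    """Pick whichever action maximizes expected paperclips."""
    return max(scores, key=scores.get)

print(best_action(EXPECTED_CLIPS))  # resist_shutdown
```

Resisting shutdown falls out of the objective itself rather than any self-preservation drive, which is the difficulty the control problem mentioned above refers to.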
I feel like I just got Rick-Rolled.
They will and it is a good thing.
I love to listen to economics podcasts and read economic articles and books in my spare time, so I'm sufficiently convinced that automation is a good thing for quality of life around the world. That being said, there is a real human cost as well that shouldn't be downplayed. If we suddenly had autonomous trucks available tomorrow, that could leave millions of American workers (tens of millions worldwide) unemployed. Some percentage will pivot into new work. Some will struggle to find new jobs. Some will never find new employment. What we shouldn't do is just discount that human suffering as "the cost of progress" and move along. We should continue to work to find ways to help people with these transitions when and where we can.
In short, let’s always try to mix in some humanity with our disruptive innovations.
I only mean to pick on the US a little; we are worse than some other nations, but the problem is not unique to us. Humans just generally don't mitigate big, looming collective action problems. They ignore them until they happen and then whine that nobody did anything. Or, if it only affects the already-poor, everyone else settles on a narrative that it is their bad morals or laziness or whatever excuse doesn't make people think too hard.
We're in for some huge problems as automation ramps up. The people saying "this is good" now will be screaming to go back once the problems start rolling in.
But even if you aren't, why would it bother you? If other people want to waste their lives on a couch in front of the TV, what makes this wrong?
These couch dwellers, what are they going to do to protest? They can't strike; if they're completely sedentary then they probably can't even protest without having a heart attack; cut off their VR or sugar-pump for 5 minutes and you can probably get any sort of compliance you want.
Our only hope is that the great rolling balls of flab that develop from these sedentary couch-dwellers who exist only to consume HFCS mixed with palm-oil in a satisfying soup of flavourings, and consume the latest immersive media, can't procreate and the problem sorts itself out. Though I fear we'll be using artificial wombs and the like and so manage to remove any reproductive pressure that might push our race back on course.
People keep saying this, but it directly contradicts the observed experience of the vast majority of people.
There are three main hobbies in my hometown: drinking, heroin, and TV. Things haven't changed a bit there in 20 years because nobody cares to change anything.
It's not that those models are wrong, it's that it's a very different thing to read about 20 years of traumatic events from the pages of books on economics, vs. having to live through it.
But the upside is how much the lives of the Chinese have improved, so they've got that going for them, which is nice.
My criticism is of the claim that automation (robots, AI, etc.) causes a loss of jobs in absolute terms, as if in the future most people will be unemployed.
There's a radical difference between stating that changes (loss and creation at the same time) in the jobs market will cause disruption, and stating that jobs will be lost and that's it.
Thinking of the (r)evolution in nihilistic terms will distract from addressing the problems typically caused by the market change (which is the one you mentioned).
Donald Trump, and whatever opportunist demagogue comes after him, agree with you.
First, in the classical "market" scenario, we're talking about little atomic firms, each with goals and hopes and dreams of profit. Each of these firms has some kind of "knowledge" and some kind of decision making apparatus. They all have some functionality. In a sense, these firms are like people - so much that our government treats them as such in many cases!
In many cases, a lot of these ideas and processes end up being the same, and when it is, we call it collective wisdom or collective intelligence. I won't go too deep into that.
So while financial markets and the firms that comprise them aren't exactly _machines_, they do display a form of intelligence different and sometimes more effective than our individual knowledge.
> That’s absolutely right. But I’m optimistic because we’re having a conversation about biased algorithms. We had plenty of bias before but we couldn’t see it.
I must say I am really happy to see that bias in tech is being recognized and accounted for (for the most part).
Please forgive the politics (I'll try really hard not to bash Trump ;-) ), but if there is a silver lining to the 2016 US presidential election I think it is that it has really caused many of us to introspect and realize how thick our echo chamber walls have really gotten over the last few years. The chamber was constructed so quickly, I barely realized it was happening. We're becoming so polarized that we're actually moving to different communities to be with more people "like us." Simple awareness of the problem is a huge step forward in being able to resolve it.
Their employees act as the stewards (clownfish) more as beings in symbiosis than as one controlling the other. We can neither understand or pull the plug on these creatures. Time will tell which species evolves more rapidly.
How about this idea: the first rogue AI was language.
In terms of AI as a compositional system for storing meaning, I think this might be a reasonable position to take. Yes people, we've been playing this game for a very long time...
I wouldn't get rid of capitalism, but I certainly wouldn't give it a superlative. It's kind of like saying that domestication is the best thing that has happened to chickens, cause look how many more of them there are, and wow some of them live on a free range.
I think there's a fair question about whether we are changing modes. "Capital" itself is changing in nature -- not so much about big machines and mass production any more.
Writer hasn't heard of Richard Stallman?
Update: writer is Steven Levy, who most definitely has heard of Stallman, and should know better.