I think The Atlantic's recent article on this topic offers a more nuanced insight: human-machine cooperation is probably where the big money will be. Companies that seek to cut people out of the loop will probably run into a lot of problems, as will those that smash the looms, whereas those trying to smooth the interface between AI/ML conclusions and human oversight will probably see the most success.
As it has been for as long as machines have existed, really. This reminds me of Douglas Engelbart and his vision for computers. I'll cite the section of his Wikipedia page that paraphrases a 2002 interview with him.
> [Douglas Engelbart] reasoned that because the complexity of the world's problems was increasing, and that any effort to improve the world would require the coordination of groups of people, the most effective way to solve problems was to augment human intelligence and develop ways of building collective intelligence. He believed that the computer, which was at the time thought of only as a tool for automation, would be an essential tool for future knowledge workers to solve such problems.
He was right of course, and his work led to "The Mother of All Demos".
Machine learning is the next step in using computers as thought enhancement tools. What we still need to figure out is an appropriate interface that is not as "black-boxy" as "we trained a neural net, and now we can put X in and get Y out".
EDIT: Now that I read that quoted section of wikipedia again, it's funny to note that computers were "only seen as tools of automation", and how modern fears of AI are also about automation. Automation of thinking.
This is a computer-oriented analogy, but most fields have their own tables, charts, and maths that are tedious to keep at your fingertips. I don't need to remember the details of every API that I use, for example; I can just remember that there is a 'do X' call available and refer to the documentation when and if I actually need it.
In the same vein, I can quickly get a feel for whether an idea is possible by stringing together a bunch of abstract mental models. "Can I do X?" becomes, "are there good tools available for doing A, B, C, and D?", and that information is only a quick search away. Actually using those tools involves an enormous amount of detail, but it's detail that I can ignore when putting an idea together.
And in most cases, that 'detail' is a library or part that already abstracts a broad range of deeper complexities into something that I don't have to think about.
The question becomes something like: how do we expose people to enough information that they are aware of how much they can learn if they need to, without drowning them in trivia that they will never be interested in?
I suspect that 'extended cognition' as realized with computers, and how people use it day to day to get work done, is in conflict with how most of us are taught: rote memorization followed by application of that information. It should naturally follow that those who are heavily invested in, or exposed to, 'non-extended' forms of cognition have relatively more to lose, and that any currently realistic answer to this:
>The question becomes something like: how do we expose people to enough information that they are aware of how much they can learn if they need to, without drowning them in trivia that they will never be interested in?
will bring cognitive dissonance to those who need the answer most (those with heavy exposure to relatively 'non-extended' cognition).
Are more data sources being made available? Is data being preprocessed? Is an initial task being automated?
Because the truth for any worker (at anything less than a ruthlessly specialized huge company) is that they may be an "extended cognition" worker, but they still perform many "non-extended cognition" activities as part of their job, because there was previously no alternative and the work needs to get done.
Fast forward that, and you're never going to fully automate a goal. But you will automate sections of the process that are amenable to machines.
Advice? Recognize which type of work you spend most of your time in, and don't get caught being the "non-extended cognition" person...
Do you have any sources for this? I find "sentiment at the time" especially hard to find, historically speaking.
And I'd be fascinated to read something about this.
To expand: one way to do something like Siri would be to have a system that routes requests to human operators. The human operator would give the correct answer to the request, and the system would then use that as training data. If the system was reasonably confident it already knew the answer from previous training data, it would answer right away, but if it was below a certain confidence it would route to a human. This seems like the smartest way to leverage machine learning in these kinds of scenarios, and I'd be surprised if someone hasn't already tried it, or something similar, in the past.
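A minimal sketch of that routing loop, with a made-up `model.predict_with_confidence` interface and `ask_human` callback purely for illustration (nothing here is a real Siri API):

```python
CONFIDENCE_THRESHOLD = 0.9  # below this, escalate the request to a human operator

def handle_request(request, model, training_data, ask_human):
    """Answer automatically when the model is confident enough; otherwise
    route to a human and fold their answer back into the training set."""
    answer, confidence = model.predict_with_confidence(request)  # hypothetical interface
    if confidence >= CONFIDENCE_THRESHOLD:
        return answer
    # Not confident enough: a human operator answers, and that answer
    # becomes a labelled example for the next training run.
    human_answer = ask_human(request)
    training_data.append((request, human_answer))
    return human_answer
```

The interesting design question is where to set the threshold: too low and you ship wrong answers, too high and you never save the operators any work.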
It's hard to see how to apply narrow AI to this kind of thing. It seems really good for routine tasks, like high-frequency trading, but maybe not so great for these big one-off deals, which constitute many of the best investments.
Of course, Buffett might still benefit from AI analysis of broad market trends and the like. If I'm wrong, I'd be interested to know.
I understand what you mean from an opportunity identification perspective, but you have to keep in mind that even the "big one-off deals" require routine tasks at a lower level to verify the merit of such deals. If you think about tasks like reviewing financial statements, AI could provide faster evaluation and potentially identify trends that would elude a human analyst. In any case, Buffett is known for avoiding investments in companies he doesn't deeply understand and I would bet the same stance holds for employing new technology in his investment process.
Whether or not they give any weight to the output is another matter entirely, but it would be such a (relatively) trivial cost to get that answer that it seems unlikely they wouldn't do it.
Also check out The Intelligent Investor, which is arguably Ben Graham's most accessible work on the topic.
Note also that he’s intentionally hamstrung himself for the last 46 years or so by investing through Berkshire Hathaway. Had he kept investing through his partnerships he would have made another $80 billion from performance fees. Had he just invested his own money, his returns would have been much higher, given he’d have had far more investment opportunities.
Everyone has played games where the AI can beat you in a straight fight, but where you can also lead the AI into predictable situations and gain a predictable advantage; the same works in reverse against humans.
Example: buy-the-dip and other technical strategies. Big players could drive down the market, and HFT buys the dip based on fundamentals; bad news floods the market, and HFT reacts.
Humans can predict what the AI will do; the AI will then reactively start to predict what humans do when this happens. But humans stay one step ahead with new techniques, and AI will then be built to defend against those.
Regarding defending against a buy-the-dip strategy: the AI can start to learn player-specific behaviour and not react, or react differently (preemption) when those players return; however, this can also eventually be played.
Humans and AI will be playing a cat-and-mouse game for eternity; microcosms of this can be seen in gaming AI. I think of it more like a game that will be fun to play: yes, sometimes you will lose, other times you will predictably win. Bots will be challenging bots, unexpectedly and predictably, but they will almost always originate from human programming.
In that frame, I think it's natural that discomfort is linked to autonomy. Autonomous taxis and cruise control may be points on a continuum technically, but economically, zero human involvement is different. Autonomy separates the PCs from the looms. Cooperation, where a human is involved, is recognizably tool use: the human's labour gets more efficient with tools. More trinkets per human.
Maybe the Luddites thought of looms as autonomous, with humans in a supporting role.
Anyway, I think it's hard to predict where this goes on a 25y scale.
The sceptical counterargument to that, which I go back and forth on, is "that's what they said about chess". There was a transitional period when this was true, then the engines disappeared into the middle distance.
I work on the hunch that the middle-ground of tasks where humans improve on, or with, machines is both small and unpredictable; computers will tend towards being either useless or strongly superhuman for each problem.
It's not like everybody has somehow switched to watching engine games. That is in fact just a niche market of the chess world. We are humans and as such we still enjoy seeing real humans thrive and compete in chess more than we care about machines.
If anything chess is the perfect example that the pessimism is misplaced, chess engines have not killed chess as a human endeavour.
But now it's rather futile to play it, because at some point you'll run up against a machine that can outplay you perfectly. Before, you could aspire to play a grandmaster or something, but the moment you (or someone on your behalf) buy a ten-dollar chess program, you realize that no matter how much you improve, the computer will always defeat you.
I don't really see chess as that big any more in the public eye. The kids might play Minecraft, which rewards human creativity and doesn't really force you to compete against an A.I. optimized to beat 99% of all Minecraft players.
You're confusing the economics of recreation and entertainment with the economics of efficiently making things. They both exist, but they're very different.
On the other hand, if you are optimistic and excited in a world where everyone else is in despair, you have some distinct advantages. :)
As for the return-forecastability deniers out there, particularly the ones who claim to be denying it on some sort of empirical basis: well, if you can't be bothered to actually look at the data or even read the academic literature on the subject, I can't be bothered to educate you.
I've literally missed the sign on a trade before, and it was 7-figure disastrous. (I've missed the direction of movement on individual symbols a number of times, but this one time I literally went the wrong way on everything by accident.)
Markets adjust too quickly to flip your position and profit in any reliable way. On planned or anticipated events, people are all locked and loaded waiting for something to happen.
However, I'd much rather know the sign because at least I can put on some position and guess a little at the magnitude.
> Mr. Amador attributed the underperformance to a normal variability in returns. The fund’s programming beat the market when tested against historical data, he said, and he expects the same in real life as time passes.
Backfitting in all its forms is known to give false confidence and usually fails. It may work for a moment, but then other traders exploit whatever the backfitting had noticed, and the backfit no longer works.
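As a toy illustration of the point, you can "fit" a buy-the-dip threshold on one half of a purely random return series and then score it on the other half; the in-sample number flatters the rule, the out-of-sample one doesn't. Everything below is synthetic data, not a real strategy:

```python
import numpy as np

rng = np.random.default_rng(0)
# Purely random daily returns: any "edge" a rule finds here is backfitting by definition.
returns = rng.normal(0, 0.01, 2000)
train, test = returns[:1000], returns[1000:]

def best_threshold_rule(data):
    """Pick the 'buy after a down day worse than t' threshold that maximised
    next-day return on this data -- i.e. fit the rule to the past."""
    best_t, best_pnl = None, -np.inf
    for t in np.linspace(-0.03, 0, 31):
        signal = data[:-1] < t          # yesterday dropped below threshold t
        pnl = data[1:][signal].sum()    # sum of next-day returns when we "bought"
        if pnl > best_pnl:
            best_t, best_pnl = t, pnl
    return best_t, best_pnl

t, in_sample_pnl = best_threshold_rule(train)
out_of_sample_pnl = test[1:][test[:-1] < t].sum()
print(f"in-sample pnl {in_sample_pnl:.3f} vs out-of-sample pnl {out_of_sample_pnl:.3f}")
```

The in-sample figure is the best of 31 tries on the same data, so it always looks good; out of sample it hovers around zero, which is the whole problem.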
Side note: Why is it that we need something so physical to attach these concepts to?
The photo of the monolithic POWER7 rig that houses Watson, with its translucent logo, is akin to all of the Bitcoin articles featuring shiny gold coins with an icon on them. I understand the need to have some kind of image, but it's just so detached from the reality of what's going on in practice.
Getting back on topic, I do wonder how much data they're feeding in - it's one thing to pass masses of historical trades into the algorithm, quite another to have it watch for relevant news events that affect the asset prices.
Posts with images get more clicks.
Sometimes I wonder if one couldn't actually decipher these background radiation patterns, given enough resolution in sensors and enough calculation power to crunch possible models fitting the patterns.
Imho not even the roll of a die is random, and a seed is nothing but a pre-defined set of variables.
But at that point, we could probably just simulate our own realities.
The other reality is that, over the long term, it's highly unlikely to beat the market. Realistically, (almost) nothing beats the market over a long enough period. At the same time, in my testbed, with real data and real 'money where your mouth is', it worked. It's no crazier than any other idea.
Ultimately, whether humans or AI drive investment is immaterial if you believe in an indexed portfolio. Should those investment approaches succeed, they'll join the indexes in some way. Similarly, should they fail, they won't.
The database is MySQL, communicated with via SQLAlchemy (through errbot, of course), with a series of commands and crons (errcron) set up to both notify me and execute various data-gathering activities. The rest of the processing code is likewise in Python. I don't rely on scipy, numpy, or anything else, since I don't see the need.
The reality is that there are a series of activities that are profitable at the micro level in the geography in which I trade, which is why my robot currently integrates with Questrade - specifically so that I can execute from Slack while I work at my 'regular' job. All passwords and reusable tokens are stored in an ansible-vault, so that I can commit and push my repository around.
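For anyone curious what that kind of glue looks like, here's a rough sketch of the shape (not my actual code): an errbot plugin exposing a chat command that reads from a MySQL table via SQLAlchemy. The connection string and the `open_positions` table are placeholders.

```python
from errbot import BotPlugin, botcmd
from sqlalchemy import create_engine, text

# Placeholder connection string -- in practice the credentials come from
# something like ansible-vault rather than being hard-coded.
ENGINE = create_engine("mysql+pymysql://user:password@localhost/trading")

class Positions(BotPlugin):
    """Minimal errbot plugin: ask the bot for open positions from Slack."""

    @botcmd
    def positions(self, msg, args):
        """Reply with whatever is in a hypothetical open_positions table."""
        with ENGINE.connect() as conn:
            rows = conn.execute(
                text("SELECT symbol, quantity, avg_price FROM open_positions")
            ).fetchall()
        if not rows:
            return "No open positions."
        return "\n".join(f"{sym}: {qty} @ {price}" for sym, qty, price in rows)
```

From there, errcron-style scheduled jobs can call the same engine to pull data on a timer and ping you in chat when something needs a decision.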
I'm running two different experiments actively: one that does an arbitrage based on data I'm looking into, the other specifically tries to eke out a $0.10 gain per share, closed daily. Going into Jan 1 2018, I'd made ~57% from August 31 (first day of trading). This year, I'm down ~8% overall so far. Passively, the return has been great.
Now, I'm changing my focus - enough people I know are generally interested and willing to light the same amount of money on fire that I am. So I'll keep experimenting, but I'm taking 1% of the overall return for the 'bank' (i.e. my corp).
This will all clearly catch fire.
Every time I read an article that mentions Watson, it's sprouted a new thing the name is applied to. Previously it was a question-answering system, which famously won Jeopardy. Then it became a general NLP platform. Then it became a brand name for basically all IBM machine learning offerings. Now it's also a supercomputer?
If what this really means is that they built a bot that plugs a bunch of data into IBM's cloud ML platform and trades on that basis, I'm not really surprised it's not beating the market. Building an auto-trading bot using off the shelf ML techniques is actually a pretty popular university project that's worth trying if you're curious, though (at least with simulated money, or money you can afford to lose). They can probably do better than a typical university project, because I assume they have more extensive financial data feeds. But everyone else serious about automated trading (which lots of people are) also has those data feeds plus the same off-the-shelf ML, so unless they have something else...
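For the curious, the university-project version really is just an off-the-shelf classifier on lagged returns. A bare-bones sketch with synthetic prices and scikit-learn (simulated money only; on random data like this you should expect no edge):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 1500)))  # synthetic price series
returns = np.diff(np.log(prices))

# Features: the previous 5 daily returns. Label: did the next day go up?
window = 5
X = np.array([returns[i - window:i] for i in range(window, len(returns))])
y = (returns[window:] > 0).astype(int)

split = int(len(X) * 0.8)
model = LogisticRegression().fit(X[:split], y[:split])

# "Trade": go long when the model predicts an up day, stay flat otherwise.
signal = model.predict(X[split:])
strategy_return = (signal * returns[window:][split:]).sum()
buy_and_hold = returns[window:][split:].sum()
print(f"strategy {strategy_return:.3f} vs buy-and-hold {buy_and_hold:.3f}")
```

Swap the synthetic series for a real data feed and you have the typical student project; the hard part, as the comment says, is having data or features that everyone else serious about this doesn't already have.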
Think of it as similar to "Amazon Cloud", which really consists of over 100 different types of services/products, some of them very different and built by different teams; "Amazon Cloud" is more of an umbrella.
MD Anderson Cancer Center wasted $62 million on it: https://www.healthnewsreview.org/2017/02/md-anderson-cancer-...
In a world run and dominated by humans, there will always be an inherent advantage to being part of the race that creates the game. If algorithms perfect a system in such a way that there is no gain to be made by those at the top, people will simply create a new game to play.
This is true, but does not require machine learning.
That was being done in HFT long before ML came about. In fact, it's supposed to be the primary source of profit.
What a waste to have all these computational resources engaging in a continual 'series of countermeasure/measure battles' instead of calculating something useful.
> Those programs may be useful, but they are not A.I. because they are static; they do the same thing over and over until someone changes them.
Oh, I see. It's better because it's AI. My mistake, then.
It was likely some form of technical analysis.
I wonder sometimes if it ever worked out for him.
It's Mr Spock's problem. He always produced inferior decisions because he failed to take into account the emotions of others.
For example, in humans, an innate lack of empathy (the ability to feel the emotions of others) and being unemotional yourself are correlated with being a better, more effective detector and manipulator of emotions. Taking the emotions of others into account can be done better if it's done analytically (though it then requires attention; it's not an "always active" skill), and a lack of emotionality lets you express whichever emotion is most beneficial for your goals in the current situation, instead of whatever you actually feel.
If anything, a realistic advanced AI / Spock should be expected to have the communication skills of a good hostage negotiator combined with a charismatic politician, a wise psychotherapist, and a sleazy car salesman. Having and feeling emotions is not required to understand them in others and to display them yourself. For normal humans (excepting e.g. some cases of sociopathy) it's hard to fake emotions, because we evolved to have emotional expressions as a somewhat trustworthy, hard-to-fake signal; it's a limitation built into Homo sapiens, not an inherent one.
Oh, I know that well. I just find it amusing. Spock is actually the most illogical character in the show, and the most emotional.
I'm not convinced this is intentional on the part of the scriptwriters. For example, how does a scriptwriter write a character who is more intelligent than the writer is? Most "advanced intellects" in scifi seem remarkably average in their intelligence, reflecting the intelligence of the writer.
Also on Spock in particular, there's a good talk by Julia Galef, The Straw Vulcan, about how irrational Spock really is and what a rational Vulcan should look like. https://www.youtube.com/watch?v=Fv1nMc-k0N4
Anyhow, the book "Brainwave" by Poul Anderson has the best description of what more intelligent characters would be like - they spoke with fewer words, as the rest of the information was more obvious from context.
That's exactly why I never believed Spock-like characters in fiction. Human psychology isn't that complicated. If you're logical and smart, how on Earth wouldn't you be capable of understanding human emotions?
I just pulled out of my "intelligent" portfolio from a 401k rollover into the S&P. Using that portfolio tool was unintelligent for me :(
When were baby boomers set to retire again?
They've been retiring for years. People born in 1945 are 73 now, well into retirement age. Boomers will be retiring over the span of 2007 to 2034, depending on when they were born and the age at which they choose to retire. They'll then be drawing down their retirement funds for decades.
Are you trying to suggest that this ongoing multidecadal process will constitute a large "withdrawal event"?
Fear of an insolvent retirement can trigger this behaviour which then can compound on itself as other retirement plans are jeopardized. An entire new generation of wealth giving up on prior security and stock distributions in favor of new markets can also trigger this, such as what almost happened in South Korea with crypto currencies.
Hope none of this happens of course, but please be aware of the risks you are implicitly taking.
South Korea has a much different demographic than the United States. Samsung makes up a large part of the whole country's GDP.
Insolvent retirement is actually a fear for anyone, primarily because you don't know when you will die. So how long do you accept the inherent risk before starting to make your assets more liquid?
I really think that Baby Boomers retiring isn't as big an issue as the consumer credit market and student loan debt. It seems right now that some of the S&P's upside rests on the backs of people putting their new toys on credit cards and finance plans. I don't think that can last forever, especially since the things they put on credit keep lasting longer and longer.
But for my retirement I'm pretty long on the S&P (I'm only 30 years old). I'm not going to pull out at the moment, and timing the market for things like that is hard for me to fathom. Taking defensive positions is more for actual Baby Boomers and people who are day trading. If, as you say, this correction will be triggered by Baby Boomers retiring, the only thing that actually counters that is medical science: I have a few coworkers in their 70s who look and act like 50-year-olds.
There is a massive generational theft that's been happening over many centuries. Property prices inflating along with the rising cost of education and loans are further rigging the system towards the older, wealthy and established.
Instead of this trend slowing, it's accelerating at the expense of class mobility for the young, poor and intelligent. This disillusions these individuals en masse.
Where have disillusioned intelligent people recently been life-changingly rewarded for their efforts? Cryptocurrencies have done so, loudly. In fact, there are developer celebrities in many of these communities.
The choice to the young and intelligent:
Seemingly immediate power, prestige, and potential class mobility versus a stressful period of self-improvement that incurs extreme debt (college).
The game needs to be better for the young and intelligent or they are going to play a different one. Many already are.
I'm extremely long on cryptocurrency for this (and other) reasons. For a sense of time scale, I have an IOTA retirement plan that begins distribution in 10 years and lasts 35.
Why do you think that cryptocurrencies are fundamentally different from the internet boom, which made 20-somethings like Larry Page, Sergey Brin, and Mark Zuckerberg some of the richest people on earth in only 10 to 20 years?
I’m sure plenty will get rich on crypto. I’m just unconvinced that it's different this time for some fundamental reason.
The ease of access to capital for good ideas, without any of the bullshit involved in startup fundraising, is what has convinced me of this. It really doesn't matter what Ivy League school the CEO went to; it's outweighed by the idea, the ability to execute, and the ability to convince others to contribute resources.
Crypto is like the internet boom if the boom was more distributed, as anyone could take part in investment from the seed round.
An example of one of the best algorithmic strategies I have seen is the following: during secular bull market eras, simply buy and hold, for a period of 24 months, every IPO that comes down the pike, regardless of sector. Backtesting this strategy yields annualized rates of return of 50%, which beats $FB's performance over the last four years :) No doubt ML could further optimize selectivity, weightings, hold duration, etc. The central thesis is that growth in market cap is strongest during the growth phase of a company.
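A back-of-the-envelope version of that backtest, with an entirely made-up table of IPO prices just to show the mechanics (the 50% figure above is the commenter's claim, not something this toy reproduces):

```python
import pandas as pd

# Hypothetical input: one row per IPO, with its first-day price and the price
# 24 months later. A real backtest would pull these from a market-data source.
ipos = pd.DataFrame({
    "ticker": ["AAA", "BBB", "CCC"],
    "ipo_price": [20.0, 35.0, 12.0],
    "price_24m_later": [31.0, 28.0, 20.0],
})

# Equal-weight every IPO regardless of sector, hold 24 months, then measure return.
ipos["return_24m"] = ipos["price_24m_later"] / ipos["ipo_price"] - 1
portfolio_return = ipos["return_24m"].mean()
annualized = (1 + portfolio_return) ** (12 / 24) - 1
print(f"24-month return {portfolio_return:.1%}, annualized {annualized:.1%}")
```

Survivorship bias is the obvious trap here: the IPOs that delisted within 24 months have to stay in the table, or the backtest flatters the strategy.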
Of course, today is the day another great algorithmic trading idea, fading volatility spikes, unwinds in the most violent and consequential fashion. Be cautious out there!
Two Big Volatility Players May Be on the Loose as VIX Tops 15
Humans, at present, also seem better equipped to adapt to irrational markets; especially when they are the source of irrational behavior.
A three month track record? "Dominating"? Come on. This article is either an advertisement or a nothing-burger to get that clickbait headline although I can't decide which.
The New York Times are now hiring people who don't know the difference between "to" and "too"? Well, that explains the sophomoric understanding of AI showcased throughout the rest of the article!
I wonder what the profits have been so far. People have invested in faster internet trunks for trading ages ago, just for a few ms quicker trades.
No connection to these guys BTW
Cats selecting stocks with their whiskers and monkeys throwing darts at a newspaper on average beat most human professional investors. Most amateur investors are better off with low-priced index funds tracking a stock index than buying more expensive managed products, as those have higher fees.
Book: A Random Walk Down Wall Street.
One problem is that it pours money into all stocks in an index, which is great in a long bull market - but it does cause problems with being over-concentrated in certain stocks, which will cause more losses when the downturn happens.
check out https://seekingalpha.com/article/4081504-invest-high-flying-...
which basically says:
"The top five holdings of the S&P 500 and the NASDAQ 100 indices have generated the bulk of the market's returns in 2017. Increasing fund flows from actively managed to passive index funds has contributed to the market outperformance of these issues and has resulted in concentration risk for index investors as well as investors in the individual stocks. The next bear market or market correction could result in disproportionately large outflows from these stocks."
Buying the index forces you to buy all stocks in the index, good and bad, over- and undervalued.
Some of my active funds in the UK got out of banks before the crash; a FTSE 100 tracker could not - so someone investing the same amount in a tracker is almost never going to catch up to that active fund.
Total market index funds do not overweight or overconcentrate stocks. As I correctly stated, the weight of the stock in the index and in the passive fund are set by the consensus of all active traders.
Everything that loses to that is a con (98-99% of actively managed funds). Matters little if you were ripped off with a human picking the losers, an AI or both or neither.
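To make the weighting point concrete: a cap-weighted index fund's weights simply fall out of the market caps that active traders have already set. Hypothetical numbers:

```python
# Hypothetical market caps (in billions); a cap-weighted index fund's weights
# are each cap divided by the total -- the fund itself never over- or
# under-weights anything relative to what the market as a whole has priced.
market_caps = {"MegaCorp": 900, "MidCorp": 80, "SmallCorp": 20}
total = sum(market_caps.values())
weights = {name: cap / total for name, cap in market_caps.items()}
print(weights)  # {'MegaCorp': 0.9, 'MidCorp': 0.08, 'SmallCorp': 0.02}
```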