A venture fund's experiment in human-free investing (bloomberg.com)
75 points by rafaelc 4 months ago | 19 comments



“if the algorithms liked what they saw, the venture fund would back them...Similar tactics have brought promising results in other competitive fields. The most famous example comes from the 1970s, when five major orchestras began requiring musicians to stand behind a screen while auditioning.”

These things are in no way similar.

The article seems to presuppose that an algorithm cannot be biased. The truth is, if the algorithm is trained on past deals, it can easily encode bias. Worse, it can give plausible deniability to biased or prejudiced behavior, because "the algorithm did it".


You have missed most of the context of the article there: "Carroll laughs as she recalls the thread, without seeming particularly amused. “I was just like, ‘Says no one about a male entrepreneur, ever.’ ”


And this actually almost makes me dismiss her outright. Everybody raising money has heard those.

I have personally heard: "Not enthusiastic enough" / "Too enthusiastic" (irony: two different people sitting at the same presentation), "Too little experience" / "Too much experience", "Not enough generalists" / "Too many generalists". It goes on and on...

You learn to ignore the excuse and move on--the excuse is irrelevant.

"No" is "no". Move on.

"Maybe" is "no". Move on.

"Yes" is no until you cash the check and it clears.


> You learn to ignore the excuse and move on (...)

That's actually a great attitude toward almost everything in life: recognize that some outcomes are best modeled as random variables, and that if you fail there might not be a causal explanation; you just have to keep trying.

The downside is that you might be given valuable information (feedback) and you might dismiss it as noise.

So telling signal from noise is a real skill.


That’s not the point. Or maybe it’s exactly the point.

The point is that doing the equivalent of blind auditions might solve the bias problem for everyone. It is similar enough to the blind-auditions example, anyway, to make the "in no way similar" argument clearly wrong.


If you want to make it like a blind audition, then do that. Either just review anonymized decks, or do the pitches over anonymized email.

This is not a blind audition... it’s something quite different.


The point isn't whether the algorithm is biased. The point is that humans are also training their experience on past deals, so the question is: is the algorithm more biased, on average, than humans, or less? I have absolutely no doubt that the average VC fund's decision-making process can be replaced by an algorithm. The best VC funds? That's a tougher call. But in general, algorithmic investing is superior to seat-of-the-pants investing.


If a human is biased, there are individuals whose attitude can be corrected, and if necessary pressure can be put on individuals to correct the situation (politically or otherwise).

With an algorithm, you show the inputs, show the outputs and say “no bias, it’s all automated”.

Look, part of the problem might be that at the seed stage the optimal short-term strategy might be to be biased. If series A and B investors are sexist or racist, that's going to make it harder for those companies to progress. An algorithm might pick up on that, particularly if it can find features correlated with sex, race, or some other factor investors may be biased against.

As to your point regarding the average VC: it's possible that selecting companies at random would do better than the average VC. I suspect that if we did, societal outcomes might be better.


You forget one last source of bias: it could sit at a lower level. There could be some bias, hidden from the algorithm, that makes certain decisions succeed and others fail in a biased way. If we're talking about companies, for instance, state-backed companies succeed while foreign companies suddenly fail.

There could be a real world bias.


> The CaaS form already asks applicants for some personal information, such as LinkedIn profiles and educational background. Carroll is resistant to the idea of using such data to glean insights about businesses. Because well-educated white men have the easiest time raising money today, any model using demographics to predict success would favor them—the opposite of her intention. Still, Social Capital is experimenting with building personalized models anyway, though it hasn’t implemented any yet.

They specifically talk about this risk: the current models use only business data, and they discuss the dangers of using personal data. It seems to me that data like customer loyalty and cash on hand are perfectly fine to use and don't carry any direct gender or ethnicity bias.


My point is that a number of features can be well correlated with gender or other social factors, and can thus train bias into the algorithm.

The following article discusses this in the context of policing for example:

https://boingboing.net/2015/12/02/racist-algorithms-how-big-...
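As a toy sketch of that mechanism (all numbers and feature names invented, not from the article): historical funding decisions are biased against one group, and a "neutral" business feature happens to correlate with group membership. A model trained only on that feature, with group never an input, still reproduces the bias.

```python
import random

random.seed(0)

# Invented simulation: past funding was biased against group B, and
# "proxy" is a neutral-looking feature (say, years at a big-name firm)
# that happens to correlate with group membership.
history = []
for _ in range(10_000):
    group = random.choice("AB")
    proxy = random.gauss(5.0 if group == "A" else 3.0, 1.0)   # correlated feature
    funded = random.random() < (0.4 if group == "A" else 0.1)  # biased labels
    history.append((group, proxy, funded))

# "Train" the simplest possible model on the proxy alone: a threshold
# halfway between the mean proxy of funded and unfunded companies.
funded_mean = sum(p for _, p, f in history if f) / sum(1 for *_, f in history if f)
unfunded_mean = sum(p for _, p, f in history if not f) / sum(1 for *_, f in history if not f)
threshold = (funded_mean + unfunded_mean) / 2

# Funding rates the learned rule would produce, by group. Group was
# never an input, yet the model inherits the historical disparity.
rate = {
    g: sum(p > threshold for gg, p, _ in history if gg == g)
       / sum(1 for gg, *_ in history if gg == g)
    for g in "AB"
}
print(rate)
```

The point isn't the specific model, which is deliberately trivial; any learner fit to biased labels through a correlated feature will show the same effect.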


My point is that the article doesn't ignore it; they are specifically targeting data points less likely to carry gender/race bias. You presented your comment as if it were a point of view not covered in the article.


It's interesting how they've already spotted a number of possible problems with using an algo to do VC, but there isn't really a compelling solution yet.

There are a number of parallels with the time when I was trading fixed income at a hedge fund. We had a senior guy looking at the output of various opportunity scanners, and deciding what to do.

There are several problems with this approach.

- The human is always out to prove himself. If you don't override the system now and again, what's the point of you? This means the humans are always on the lookout for some special one-off condition they can claim.

- The algo dev stops short of where he could go with it. You ought to be fully automating it, but you don't, because you need to leave something on the table. There are a number of data problems you just don't get around to solving because they're tedious and you aren't going to use the results.

- The VC guys have a much worse data problem, by the looks of it. Not every startup will fill out the form: if they don't need your money, no form; if they crash early, no form. And after they fill out the form, how do you track what happened to them? Seems like a big problem. Also, if you're going to use ML you need a fairly large number of rows: not just filled-out forms, but labels for how things turned out. And the more features you collect, the more labelled rows you'll want.
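The missing-label problem above has a measurable shape. A toy sketch (all rates invented): if failures often crash before ever filling out the form, the dataset you can actually train on over-represents survivors, and the naive success-rate estimate is inflated.

```python
import random

random.seed(1)

# Invented rates: 10% of startups ultimately succeed, but failed
# startups are far less likely to have ever filled out the form.
N = 100_000
true_success = 0.10

observed = []
for _ in range(N):
    succeeded = random.random() < true_success
    filled_form = random.random() < (0.9 if succeeded else 0.3)  # selection effect
    if filled_form:
        observed.append(succeeded)  # only these rows ever get a label

observed_rate = sum(observed) / len(observed)
print(f"true success rate: {true_success:.2f}, observed: {observed_rate:.2f}")
```

With these made-up numbers the observed rate is roughly 2.5x the true one, purely from who shows up in the data.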

So there's a real risk of falling into the pseudo-systematic hole here. You take the data you have and draw conclusions that are very close to your initial priors. Basically you end up with stylized "facts" that aren't necessarily true, just believed.

Seems like they've thought about these things, though; it will be interesting to see what happens.


I'll defend pseudo-systematic, for the sake of it if nothing else :)

The investment types with many rows of data, enabling truly systematic decision making, are securities markets. Whether you are using ML or a human analyst with a theory, the economic conclusion is the same: a securities market has lots of data, and this lets traders price systematically. Pricing, investing & trading are the same thing in a securities market.

Startup investing is not like that, generally. A person using their subjective faculties is heavily involved, biases and all. This persists because humans' subjective cognitive abilities are not just delusion; they are a real cognitive ability, even if flawed^.

Systematic & non-systematic systems have their strengths and weaknesses. Human biases are a big weakness on the non-systematic side; "searching where the light (data) is" is the big systematic one. A pseudo-systematic system is basically a compromise. If the weaknesses of a non-systematic system are a major problem (the whole premise here is that they are), then it is not necessarily a stupid idea.

It doesn't even have to be all that sophisticated. Manual overrides are not necessarily a bad system: they make it clear where subjective judgement was used, and how often. At least you are aware that it has taken place.

I think the blind orchestra audition is a good metaphor. It identifies the kind of human bias you are trying to minimize. If one candidate looks like a Viennese orchestra veteran and the other looks like a hillbilly high schooler, you want to hide this from the human judges. You want the judges to focus their subjective brains elsewhere. You don't want to create an objective test of "good music" and "bad music," because the definition you would be forced to use would suck; at best it would be different, like a sport with rules. So: compromise.

I don't think it's impossible to think of pseudo-systematic investment strategies based on a similar compromise.

^ For best results, use skin in the game.


In the end this is still an investment, and the stock purchase agreement will probably include terms that would require companies to upload their P/L each quarter. They could easily track progress based on that.


That doesn't tell you how the firms you didn't invest in are doing. Presumably you want to know how well your selection algo is performing, and you can't without some idea of how your non-investments are doing.


What I find most interesting here is that for a relatively small amount of money (compared with their fund size), Social Capital is creating an amazing dataset about founders and startups. It will be really useful for them a few years from now to be able to go back and see what was most predictive of successful startups.


It's really easy to overfit and create biased algorithms, especially with how small their 'startup' training set is likely to be. That is, of course, if they're doing ML and not just encoding their own investor 'intuition' into software.
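The small-training-set worry is easy to demonstrate. A toy sketch (sizes invented): with only a dozen training rows and thousands of candidate features, some purely random feature will separate the labels almost perfectly by chance, which is exactly how overfitting sneaks in.

```python
import random

random.seed(2)

# Invented sizes: 12 training rows, 5,000 candidate features that are
# pure noise, and random labels with no relationship to anything.
n_rows, n_features = 12, 5_000
labels = [random.random() < 0.5 for _ in range(n_rows)]
features = [[random.random() for _ in range(n_rows)] for _ in range(n_features)]

def best_threshold_accuracy(feature):
    # Best single-threshold rule on this one feature (either direction).
    best = 0.0
    for t in feature:
        acc = sum((v > t) == y for v, y in zip(feature, labels)) / n_rows
        best = max(best, acc, 1.0 - acc)
    return best

# Pick the single best feature by training accuracy, as naive feature
# selection would. It scores highly despite being pure noise.
best_acc = max(best_threshold_accuracy(f) for f in features)
print(f"best training accuracy on pure noise: {best_acc:.2f}")
```

The same search over real startup data would "discover" predictors of success that are just as spurious, unless held-out validation is in place.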

Still, I agree the weight of personal relationships and human-powered-decision-making guiding the 'tech' industry is a bit ironic.


Why would it be a problem that white males get funded? Besides, I'm willing to bet that they'll find out that the algorithm also picks white males, for objective reasons.



