
Heat Death: Venture Capital in the 1980s - pw
http://reactionwheel.net/2015/01/80s-vc.html?newpost
======
dcre
It's worth noting that Marc Andreessen called this "the single best essay on
modern venture capital" he's ever read.

[https://twitter.com/pmarca/status/554829268459343874](https://twitter.com/pmarca/status/554829268459343874)

------
7Figures2Commas
> You can have lots of plausible theories about what venture capitalists as a
> class can do to get good returns, until you take the 1980s into account.
> Then you can only have one: the only thing VCs can control that will improve
> their outcomes is having enough guts to bet on markets that don’t yet exist.
> Everything else is noise.

I'm curious as to why the author believes VCs need to improve their outcomes.

Venture capital is a compensation scheme disguised as an asset class. As long
as there is enough dumb money willing to permit VCs to lock in a decade's
worth of fee-based compensation with each fund they raise, VCs have no
incentive to go rogue and "bet on markets that don't yet exist." In fact, they
have every incentive not to.

------
jazzdev
Fascinating. "The ’90s were a bubble. ... But the ’80s were not a bubble. Not
every cycle is a bubble. ... In the ’80s they tried to avoid risk and got
nothing."

------
_delirium
This is the most in-depth writing I've seen on this subject outside of a book.
Thanks for linking.

------
hga
Take this with some serious grains of salt; I started skimming when it gave
100% of the credit for the economic recovery to Carter's Fed appointee, which
is particularly ridiculous given the changes to capital gains tax rates, which
are critical for VC investing, and then I noticed it labeling Control Data as
a hard disk startup. Uh, yeah; founded in 1957, the C in BUNCH, second home of
an obscure computer designer named Seymour Cray, etc.:
[https://en.wikipedia.org/wiki/Control_Data_Corporation](https://en.wikipedia.org/wiki/Control_Data_Corporation)
Its magnetic disk unit was a bright star in the '70s; e.g., the prototype Lisp
Machine's hard disk was a CDC SMD 80 MiB unit.

~~~
graycat
Seymour Cray and Control Data were really something. In the 1960s, Control
Data was very much a hot stock.

Yes, later the Control Data SMDs (storage module drives) were good units and
quite popular in the industry.

While I was working my wife and myself through our Ph.D. degrees, I was doing
applied math on some military systems analysis work but also served as system
administrator for a Prime super-minicomputer (basically a version of Multics).
We started with one of the 80 MB SMDs but later got a 300 MB version. Later,
as a B-school prof, I proposed a Prime, and we got one with two of the SMD
drives.

There at the B-school, my proposal led at one point to a shootout in front of
my dean between me and the campus CIO: the CIO didn't want us to get our own
computer and claimed that the SMD drives needed _computer room_ air
conditioning.

I'd contacted Control Data to see if we might buy the SMDs directly from them,
and they'd sent me their official engineering specs, which I happened to bring
to the shootout. There I quoted the specs to show that simple _comfort_ A/C
would be fine for both temperature and humidity control. The CIO claimed
Control Data, Prime, and I were wrong. I explained that in grad school we'd
used no A/C at all until summer, when the room got a little warm for people;
then we hung an A/C evaporator unit from the ceiling with the compressor and
condenser unit on the roof, and the SMD drives were happy all along.

The Prime lasted for 15 years, well into when PCs were better.

The 300 MB SMD drives seemed amazing. The unit weighed some hundreds of
pounds. Now you can get 3 TB or so in a 3.5" package -- one of the most
amazing rates of improvement in all of history.

The paper has some okay content. It also makes some flat statements about how
VC works now: no technical risk; maybe some market risk, but not much.

My approach doesn't fly with VCs:

(1) Pick a big problem, one where the first good or much better solution will,
clearly, obviously, no doubt, be a _must-have_ for enough people and earnings
per person to eliminate _market risk_, achieve _product-market fit_ right
away, and be financially successful.

The extreme case, but in biomedical, would be a safe, effective, cheap
one-pill cure, taken once, for any cancer.

(2) Then, for that problem, use technology to get the needed first good or
much better solution. There, for _information technology_, get an applied
math solution where the theorems and proofs can be checked. This _solution_ is
evaluated just on paper, that is, before any software is written or, usually,
before spending any big bucks.

Then the rest of the project should be low risk and high payoff from just
routine work.

(1) and (2) have been, for 70+ years, very effective in a huge range of
astounding projects for the US DoD, and they are much like how some of the
best applied math, applied science, and engineering have been done for 100+
years.

But for this approach, it is necessary to be able to evaluate (1) and (2)
essentially just on paper, and somehow VCs don't want to do this.

So be it! The flip side of that situation can be an opportunity!

~~~
hga
Yeah, I'm intimately familiar with the environmental requirements of that era
of CDC drives, because in 1980 I started a student-run computer center with
the Logo Lab's surplus PDP-11/45 and managed to snag the above-mentioned CDC
drive and controller. We put it in an MIT Building 20 room with just comfort
AC (which had a tendency to freeze up, since humans produce water vapor that
computers don't, so we built a system to detect that and give the unit a time
out), and aside from occasionally blowing a power transistor for the big
solenoid that moved the heads, it ran like a champ. The best hardware student
I attracted showed me how he diagnosed this: simply push the solenoid's outer
coil in both directions and see which direction failed to resist. Since we had
almost no money at the time, he was generally able to pop the top off the
transistor and reattach the failed lead (I guess vibration was the cause of
those failures).

As for VC investing: if the essay didn't mention the formula of how many
buyers a company's product might serve and the value to each of them, which of
course produces a very high possible value for a lot of biomedical firms, as
you note ... bleah. If current VCs don't want to play this game, which I was
told was part of it during, e.g., the '80s, it's indeed an opportunity.

Although I'd say developing software is seldom truly low risk, most people
just don't understand how difficult this can be (it's a people problem).

~~~
graycat
> Although I'd say developing software is seldom truly low risk, most people
> just don't understand how difficult this can be (it's a people problem).

Yes, there's a long history of long-delayed or failed software projects.

One way and another, I've mostly managed to avoid such projects. My experience
has been that software projects are fast, fun, easy, routine, and low risk.

But currently I have encountered an exception: my current project involves
bringing up a Web site, and, for whatever the balance of reasons was, I
decided to build on Windows. I'm writing in Visual Basic .NET -- apparently
essentially equivalent to C# but with a different flavor of _syntactic sugar_,
which I like better than the deliberately _idiosyncratic syntax_ C# borrowed
from C.

Visual Basic .NET has been fine -- really nice. And how it works for
developing Web pages is even nicer. And I just keep typing into my favorite
text editor, where I have 100+ macros and can easily write more, and don't try
to make use of Visual Studio.

But the problems have been: (1) documentation of the many classes I needed to
use in the .NET Framework, ASP.NET (for the Web pages), and ADO.NET (for using
SQL Server); (2) software installation; (3) system administration; and (4)
system security.

The part of the software development uniquely mine has remained fast, fun,
easy, routine, and low risk.

E.g., once it was a week of mud wrestling just to get a connection string that
worked between one of my Web pages in Visual Basic and SQL Server. The
documentation was clear as mud, and I just had to throw guesses out until one
_stuck_.
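
Just to show the shape of the thing (this is an illustration, not my real
string; the server and database names are placeholders):

    Imports System.Data.SqlClient

    Module ConnectionStringSketch
        Sub Main()
            ' Placeholder values; the real server name, database name, and
            ' authentication choice depend on the installation.
            Dim connStr As String = "Server=localhost\SQLEXPRESS;Database=MySiteDb;Integrated Security=True;"
            Using conn As New SqlConnection(connStr)
                conn.Open()
                Console.WriteLine("Connected, state = " & conn.State.ToString())
            End Using
        End Sub
    End Module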

A virus wiped out essentially all of my installation, and I had to reinstall
the OS and all the software starting again with an empty boot partition.

Then the reinstall of SQL Server messed up, and I had to start again, with an
empty boot partition.

Then I tried to get SQL Server to use my old database, and SQL Server got
sick; I tried to reinstall SQL Server, and Windows got sick, and I had to
reinstall everything to an empty boot partition again.

I decided to back up the boot partition using NTBACKUP, and when I needed the
backup copy, the restored copy didn't boot. So, right, I had to reinstall
everything to an empty boot partition, again. This was getting old.

Why did the restore not boot? Secret #119,385,232,345: when you back up a boot
partition with NTBACKUP and want the restored result to be bootable, you have
to request "do save system state". What the heck was _system state_, my
options for Outlook? Of course, the documentation never said, never gave any
discussion, etc. All there was was a check box. Thanks a lot, guys. Nope,
"system state" is much more important than my options for Outlook, as I
eventually discovered largely by accident from looking at the log files from a
save of my boot partition with WinZip.

So, to be able to use NTBACKUP, which actually has some amazing functionality
if you can get it to work, I ran experiments: installed Windows, backed up
with NTBACKUP, tried a restore, which failed, tried again, and kept up this
work for days before I got it all working well. Then I documented it, the
whole thing, click by click, keystroke by keystroke. Now if my boot drive gets
sick for any reason, I can boot Windows from another bootable partition and
restore my main partition. And I can back up my bootable partition with
NTBACKUP easily, routinely, in less than an hour, any day at lunch.

I found, downloaded, read, studied, saved, indexed, and abstracted 5000+ Web
pages of documentation, nearly all from MSDN. That was a lot of work. Plus
more work for documentation of SQL Server.

For SQL Server, even for some basic, simple things, I couldn't make sense out
of the Microsoft documentation. And for their SQL Server Management Studio, I
eventually saw how to use it usefully for just _browsing_ but never saw how to
use it for actual SQL Server _management_ -- it's awash in check boxes,
panels, and windows where I have no idea what the heck they do, if anything
useful. So for management of SQL Server, I used just standard, vanilla SQL
commands as documented by sources other than Microsoft. Eventually I was able
to get past the issues of users, logins, passwords, the SQL Server versions of
_capabilities_ and _access control lists_, databases, etc.
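
To give the flavor of "vanilla SQL instead of Management Studio," a sketch of
issuing the login/user/grant statements through ADO.NET; the login name,
database, and grants here are placeholders, not my real setup:

    Imports System.Data.SqlClient

    Module SqlServerAdminSketch
        Sub Main()
            ' Placeholder T-SQL: an invented login "web_app" for database "MySiteDb".
            Dim statements() As String = {
                "CREATE LOGIN web_app WITH PASSWORD = 'Placeholder-0nly!';",
                "CREATE USER web_app FOR LOGIN web_app;",
                "GRANT SELECT, INSERT, UPDATE ON SCHEMA::dbo TO web_app;"
            }
            Using conn As New SqlConnection("Server=localhost;Database=MySiteDb;Integrated Security=True;")
                conn.Open()
                For Each sql As String In statements
                    Using cmd As New SqlCommand(sql, conn)
                        cmd.ExecuteNonQuery()
                    End Using
                Next
            End Using
        End Sub
    End Module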

For using TCP/IP on Windows, I could make no sense out of most of the
Microsoft documentation, so I just returned to some old documentation of
TCP/IP and some old sample code of mine, saw again what the basic API
functions were, guessed what Microsoft had as the equivalents, and got some
TCP/IP working between some servers.
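
For the record, the Microsoft equivalents I ended up guessing at are along the
lines of TcpClient and NetworkStream; a minimal client-side sketch with a
made-up host, port, and one-line request:

    Imports System.Net.Sockets
    Imports System.Text

    Module TcpClientSketch
        Sub Main()
            ' Host, port, and the one-line request are placeholders.
            Using client As New TcpClient("127.0.0.1", 9000)
                Dim stream As NetworkStream = client.GetStream()
                Dim request() As Byte = Encoding.ASCII.GetBytes("GET session-42" & vbLf)
                stream.Write(request, 0, request.Length)

                Dim buffer(1023) As Byte
                Dim count As Integer = stream.Read(buffer, 0, buffer.Length)
                Console.WriteLine(Encoding.ASCII.GetString(buffer, 0, count))
            End Using
        End Sub
    End Module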

It went on this way: documentation, installation, management, and security --
mud wrestling. It cost me years, a big chunk of my life, and a WMD to my
finances, while all along the part of the software uniquely mine was fast,
fun, easy, routine, and low risk.

But apparently now I'm past much of that mud wrestling with documentation.

Security? I just spent the past six weeks (outrageous) getting rid of a virus
I picked up from somewhere, maybe Flash (they keep saying that their software
is a security risk). The Microsoft virus-scanning tools, which I ran for days,
didn't fix the virus. Eventually a System Restore back to two months earlier
did.

What other people do about these issues I don't know.

My part? Fast, fun, easy, routine, low risk.

My Web site architecture has several _back end_ software _servers_
communicating via TCP/IP. The architecture is all nicely _scalable_ just via
simple _sharding_. E.g., I wrote my own Web site session state server, just as
a key-value store, using just TCP/IP sockets, class instance de/serialization,
and two collection classes. Fast, fun, easy, and the code is blindingly fast
-- from my timings a server costing less than $1000 in parts should be able to
do the session state _put_ and _get_ work for sending 5000+ Web pages a
second, and simple sharding could increase that by a factor of 100. Easy. Fun.
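
As a rough idea of the shape of that session state server (not my actual code:
this is single-threaded, stores plain strings instead of serialized class
instances, skips the sharding, and the port and PUT/GET line protocol are made
up; it pairs with the little client sketch above):

    Imports System.Collections.Generic
    Imports System.IO
    Imports System.Net
    Imports System.Net.Sockets
    Imports System.Text

    Module SessionStateServerSketch
        ' One collection class as the key-value store; placeholder design only.
        Private ReadOnly Store As New Dictionary(Of String, String)()

        Sub Main()
            Dim listener As New TcpListener(IPAddress.Any, 9000)
            listener.Start()
            Do
                Using client As TcpClient = listener.AcceptTcpClient()
                    Using stream As NetworkStream = client.GetStream()
                        Dim reader As New StreamReader(stream, Encoding.ASCII)
                        Dim writer As New StreamWriter(stream, Encoding.ASCII) With {.AutoFlush = True}
                        Dim line As String = reader.ReadLine()
                        If line Is Nothing Then Continue Do
                        Dim parts() As String = line.Split(New Char() {" "c}, 3)
                        Select Case parts(0)
                            Case "PUT"   ' "PUT key value" stores the value
                                Store(parts(1)) = parts(2)
                                writer.WriteLine("OK")
                            Case "GET"   ' "GET key" returns the value or MISSING
                                Dim value As String = Nothing
                                If Store.TryGetValue(parts(1), value) Then
                                    writer.WriteLine(value)
                                Else
                                    writer.WriteLine("MISSING")
                                End If
                        End Select
                    End Using
                End Using
            Loop
        End Sub
    End Module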

Some of my servers are doing, say, an exploitation of the Hahn-Banach theorem
in a Hilbert space. Not trivial.

And, yes, I have to expect that at some point I will need some numerical
linear algebra. So, I downloaded the Fortran version of LINPACK, used the
utility f2c (translate Fortran to C), compiled the C code, tested it, and it
worked fine, then used Microsoft's _platform invoke_ to call the C code. Did
lots of testing and timing -- it works fine.
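
The _platform invoke_ part is basically just a declaration. Below is a minimal
sketch: "linpack.dll" is a placeholder name for whatever the f2c output was
compiled into, and LINPACK's DGEFA (LU factorization) stands in for whichever
routine is actually wanted:

    Imports System.Runtime.InteropServices

    Module LinpackInteropSketch
        ' dgefa_ is the f2c name for LINPACK's DGEFA (LU factorization).
        ' "linpack.dll" is a placeholder for whatever the C code was compiled into.
        <DllImport("linpack.dll", CallingConvention:=CallingConvention.Cdecl)>
        Private Function dgefa_(a() As Double, ByRef lda As Integer,
                                ByRef n As Integer, ipvt() As Integer,
                                ByRef info As Integer) As Integer
        End Function

        Sub Main()
            ' Factor a 2 x 2 matrix stored column-major, as Fortran expects.
            Dim a() As Double = {4.0, 2.0, 1.0, 3.0}
            Dim ipvt(1) As Integer
            Dim lda As Integer = 2, n As Integer = 2, info As Integer = 0
            dgefa_(a, lda, n, ipvt, info)
            Console.WriteLine("info = " & info)  ' 0 means the factorization succeeded
        End Sub
    End Module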

I needed a couple of fast, cute algorithms, invented them, programmed, tested,
timed, and documented them -- they work fine.

For my SQL Server _schema_ (that is, the tables, columns, and other
properties; that is, not what Microsoft's SQL Server Management Studio calls a
_schema_), SQL was fast, fun, and easy. Heck, long ago, in a few days eating
dinner at the Mount Kisco Diner while reading Ullman's book on databases and
SQL, I _got it_ on the language right away. So, for my project, in about two
hours, fun, easy, I wrote out, with a soft pencil, my schema for what my
project needs now and for much more in the future, typed in the schema, and
it's been fine ever since. I have enough keys, etc., to make everything really
fast without any _joins_. Simple. Fun. Right, so far I should be able to get
by with just a key-value store, but an RDBMS still has some advantages if I
can get through the documentation and actually make the advantages work. Here
I anticipate a lot of mud wrestling and/or some really good technical support.
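
Just to give the flavor of "enough keys, no joins": something like the
invented table below, where one primary-key lookup returns everything a page
needs (the table and columns are illustrations only, not my real schema):

    Module SchemaSketch
        ' Invented table for illustration: one wide row per user, keyed so that
        ' a page can be built from a single primary-key lookup, with no joins.
        Public ReadOnly CreateUserTable As String =
            "CREATE TABLE dbo.SiteUser (" & vbCrLf &
            "    UserId       BIGINT        NOT NULL PRIMARY KEY," & vbCrLf &
            "    UserName     NVARCHAR(128) NOT NULL UNIQUE," & vbCrLf &
            "    Email        NVARCHAR(256) NOT NULL," & vbCrLf &
            "    ProfileData  NVARCHAR(MAX) NULL," & vbCrLf &
            "    CreatedUtc   DATETIME2     NOT NULL" & vbCrLf &
            ");"

        Sub Main()
            Console.WriteLine(CreateUserTable)
        End Sub
    End Module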

Looks like I will have to set up a computer on the side and use it just to
practice installs, backups, restores, etc., over and over and over, and
finally document what actually works. Then use system-management automation to
automate that work.

There's an old statement about some of IBM's software: "If you just follow the
documentation for how to do the installs, then I will guarantee you it won't
work."

SQL Server installation, documentation, and administration -- barbed wire
enema.

My part's easy.

For working through Microsoft's part, maybe I'm past that.

~~~
hga
" _What other people do about these issues I don 't know._"

Work in a shop that's acquired a lot of Microsoft knowledge. With a very
distrusting attitude.

The last major MS-ecosystem project I worked on was with a big MS fan; during
his college summers he'd worked on Internet Explorer, back when it was getting
to be _much_ better than Netscape. He turned down a full-time offer from them
for more of the same in order to work at his brother's company.

But circa 1997 he was not willing to trust the company's fate to SQL Server as
it was then, when it was time to move off the superb Access Jet database
engine, so I suggested and implemented IBM's DB2 as a half-price alternative
to Oracle (the biggest issue was no MVCC, but that was OK for a tight-knit
group of programmers; the sysadmin and DBA job was in turn easier than
Oracle's long-evolved mess).

Fortunately I never had to go back after that, and prior to that I learned
Windows 3.0 through NT 3.51 the hard way, back when that wasn't a nightmare
(NT 3.51 SP2 or so was when it started going to hell). I made Windows GUIs
work by starting with a sample program that worked and carefully mutating it,
checking for viability at each step. Used SoftICE to debug the hard stuff.
Built a stunningly good system using Microsoft's implementation of DCE RPC
(the alternative to Sun's haphazard stuff). But, as I note, the stuff outside
of my code kept getting better and better. Oh, did I mention that at that last
job MS's Visual C++ STL library wasn't thread safe? On a naively
multi-threaded system that I used with 2 CPUs until 2002?

Bleah. You've learned a lot of stuff, but the price sounds like it's too high
unless you're going to continue in this ecosystem, and I'm advising all my
friends to leave it. Especially with the new CEO, who very possibly has
seriously damaged its existing QA system without replacing it with something
better (rather like one of the causes of the second part of the Vista
nightmare).

And, yeah, SQL is _GREAT_. I just helped a friend with a project using
PostgreSQL ... and its documentation is great, like IBM's for Windows and
UNIX(TM) back then, and Oracle's in the 1994-2001 time frame. Get some books
by the greats, like C.J. Date, and you will have _oodles_ of fun.

