What made Xerox PARC special? Who else today is like them? (quora.com)
514 points by mpweiher 219 days ago | 218 comments



"You know what makes the rockets fly? Funding." - The Right Stuff.

What really made PARC work is that they were funded to develop the future of computing by building machines which were not cost-effective. It was too early in the mid-1970s to develop a commercially viable personal workstation. But it was possible to do it if you didn't have to make it cost-effective.

That's what I was told when I got a tour of PARC in 1975. They had the Alto sort of working (the custom CRT manufacturing wasn't going well; the phosphor coating wasn't uniform yet), the first Ethernet up, a disk server, and I think the first Dover laser printer. All that gear cost maybe 10x what the market would pay for it. But that was OK with Xerox HQ. By the time the hardware cost came down, they'd know what to build.

Previous attempts at GUI development had been successful, but tied up entire mainframe computers. Sutherland's Sketchpad (1963)[1] and Engelbart's famous demo (1968)[2] showed what was possible with a million dollars of hardware per user. The cost had to come down by three orders of magnitude, which took a while.

Another big advantage which no one mentions is that Xerox PARC was also an R&D center for Xerox copiers. That's why they were able to make the first decent xerographic laser printer, which was a mod to a Xerox 2400 copier. They had access to facilities and engineers able to make good electronic and electromechanical equipment, and thoroughly familiar with xerographic machines.

Ah, the glory days of Big Science.

[1] https://www.youtube.com/watch?v=6orsmFndx_o [2] https://www.youtube.com/watch?v=yJDv-zdhzMY


> Ah, the glory days of Big Science.

This still happens! Silicon Valley gets a lot of long-term-oriented funding from taxpayers via DARPA and other government agencies. Siri (named after SRI International, where Engelbart did his work) and autonomous driving (DARPA Grand Challenge) are just a couple of recent examples.

In fact PARC still does too. Although owned by a private company, PARC does government-funded research.[1]

I would say people in SV are generally less consciously aware of the major role government funding plays there to this day. It's probably because of the mythos of private entrepreneurship, and the uncomfortable narrative of all this so-called "free market capitalism" being supported by billions in govt funds, with only a fraction of the profits returned to the public coffers via taxes.

[1] https://www.parc.com/news-release/97/parc-awarded-up-to-2-mi...


I've been funded by DARPA in one way or another and DARPA is not big science. DARPA is an applied R&D agency, meaning they want to see a path from what you are talking about to some broad application, preferably in the military space. If you can't describe that path to them, the odds of getting funded by them are low (this is drawn from experience of winning more than 10mil in funding from DARPA on many different programs).

The DARPA model is also not big science because the researchers on the outside do not pick the problems, the PMs at DARPA do. The model works when a visionary becomes a PM and is given a "big" (20-80 mil) bucket of money to disperse to researchers to enact their big ideas. If the PMs are truly visionary, then this works. If they are not, you wind up with a sort of soup of crap.

Additionally, since it's applied, DARPA will do frequent (once a quarter and once a year) check-ins, measurements, and progress reports. If you don't measure up during those, your funding gets cut. So as a researcher, it is a challenge if what you want to do is a little bit off the path of what the PM wants to do.

I have much more limited exposure to IARPA but it seems to be the same way there.

I have more experience with NSF, which is the total opposite: researchers propose their own projects and there are infrequent touchpoints, and no cut points really. However, NSF will give you one to two orders of magnitude less money than DARPA.


I work on several DARPA programs and agree with most of this. But some programs are by design far more basic or applied than others. Also worth noting - DARPA generally doesn't pay contractors anything like what the private sector pays employees.

The programs (and their vision, and their success) depend quite a bit on the PM, and on their ideas and engagement. To a lesser extent success depends upon the SETAs as well.


> DARPA generally doesn't pay contractors anything like what the private sector pays employees.

Eh. You can get like a $200/hr rate. If you're a small company with low overhead, you can get paid quite competitively with the private sector in base comp. Difficult to match big G stock grants, though; no one is becoming a millionaire off of this work.

Unless of course what you do on the grant can be turned into a billion dollar company, because (generally) contractors walk away from research programs with liberal rights to the IP they create, with the government retaining some rights to use, but very rarely (IME) retaining direct ownership. Of course, it's probably not that likely that what you do can turn into a billion dollar company but hey, one can dream...


Thankfully one doesn't (usually) need to create a billion dollar company to become a millionaire.


Gosh, this is an interesting thread...

Request: can you give an AMA on how one actually proposes/gets funding from darpa or at least an ELI5 on the topic...

What sort of idea should one have where you say to yourself "gee, I should hit up darpa with this!"


Honestly there is a truth and a super-truth, to use theater terms.

The truth is that you should look at the Broad Agency Announcements (BAAs) that DARPA, specifically the Information Innovation Office (I2O), makes. There will be big PDFs that have lots of boilerplate and look really boring, but these are basically documents written by a program manager (PM) that describe what their envisioned research program is, how they see it being divided up, and what kind of researchers they see doing work in each area of their research program. A research program is split into several (between 3 and 8) technical areas (TAs), and there will be language in the BAA describing how many TAs one performer might propose to.

Overall a TA can be described at a high level as one of a few roles: a blue team, a red team, and a white team. There will usually be more than one blue team, at most one red team, and precisely one white team. A blue team does the R&D on the program. Each blue team can have its own unique approach to addressing the research challenges laid out by the PM. Usually the blue teams have competing or complementary approaches; it is seen as a waste of money to pay two blue teams to do exactly the same thing. A red team evaluates the work done by the blue team. Exactly what this means depends on what the blue team builds. The white team "holds the room together" by fitting the PM's vision together with what the blue team does. The white team usually has a much closer relationship with the PM, and usually the white team is pre-determined by the time the BAA is made public.

Blue teams are usually teams of 2-4 companies / universities. These teams usually form before a BAA comes out due to people "in the know" getting a rumor about an upcoming program, or just after it comes out, based on prior relationships. It's kind of rare to have a blue team that is only one entity, but depending on the entity and the size of the program, it can happen.

The path to a successful proposal is demonstrating that your team has ideas that are relevant to the PM and that you can successfully execute them. You show this by having a stack of prior work, good ideas, and a well-formed, coherent proposal. Think of the proposal as an audition: if you as a team can't get it together for a month and write a 30-page document describing what you want to do, you probably can't keep it together for 4 years working on something. Your team reads the BAA, does some analysis, figures out how your ideas and past work can apply to the research program proposed by the BAA, and then you write the story up. If you do a good job, and your story is more compelling than other people's, you get the money.

The super-truth is that it's an old boys club with a lot of luck and nepotism.

The way people get "in" is when their peers go to DARPA to be a PM. Then you, the researcher, can think "Well Dr. Smith knows about X and likes X+Y, I do some Y, I have an idea that could use some funding, what would Dr. Smith like to read about." Then you write it up and e-mail it to them.

Luck is a big factor here because PMs get e-mails like this all the time. It's almost essential to already be in the PM's rolodex/friend group to get some of their cycles. They like to say that they want people outside their circle to approach them with new ideas, but that's kind of a white lie; there are lots of people in the world and a finite amount of the PM's time.


Seems like most of the best opportunities in general are exactly like this. It's not about what you know, but who you know...


If you see humanity as a brain and each human as a neuron, the network produces outcomes unimaginable from looking at the individual. This is easy to see if you look at an individual stone-age human, and even they were more advanced, thanks to culture and groups (tribes), than a truly isolated human would be (more akin to a "Mowgli", grown up without the learning and support of a group).

So it's no surprise who you know is important - the brain is mostly about connectivity, not about the individual neuron.

Just a model for thought, obviously not a complete description ("every model is wrong"). I think the value of this model, if you can warm up to it, is that you stop worrying that "it's all about the connections" - because it really is and it's useful that way because that's a major point of how a network works. So do work on your connectivity! And also just like in the brain, a few high-quality connections are worth much more than a thousand low-quality ones.


Oh it's worse than that, it's what you know and who you know.


It's not who you know, it's who knows you.


Fantastic insight. Thank you.


I think the biggest difference is that the funding went from unrestricted to being very specific. A lot of questions are asked before money is given. But the answers for an interesting project may involve concepts and vocabulary that do not yet exist, ideas that are not currently trendy. While I can understand the desire to avoid misallocation, this model assumes the funder has a better idea of what needs resources than the funded. From the second point in the answer:

> 2. Fund people not projects — the scientists find the problems not the funders. So, for many reasons, you have to have the best researchers.

So yes, the total amount may be there but is now diverted.


> While I can understand the desire to avoid misallocation, this model assumes the funder has a better idea of what needs resources than the funded.

Reminds me of a (probably the) reason we're doing standardized testing in schools. A decent teacher is able to teach and test kids much better than standardized tests can, but society needs consistency more than quality, and we don't trust that every teacher will try to be good at their job. That (arguably justified) lack of trust leads us as a society to choose a worse but more consistent and people-independent process.

I'm starting to see this trend everywhere, and I'm not sure I like it. Consistency is an important thing (it lets us abstract things away more easily, helping turn systems into black boxes which can be composed better), but we're losing a lot of efficiency that comes from just trusting the other guy.

I'd argue we don't have many PARCs and MITs around anymore because the R&D process matured. The funding structures are now established, and they prefer consistency (and safety) over effectiveness. But while throwing money at random people will indeed lead to lots of waste, I'd argue that in some areas we need to take that risk and start throwing extra money at some smart people, while also isolating them from the demands of policy and markets. It's probably easier for companies to do that (especially before processes mature), because they're smaller - that's why we had PARC, Skunk Works, experimental projects at Google, and an occasional billionaire trying to save the world.

TL;DR: we're being slowed down by processes, exchanging peak efficiency for consistency of output.


> A decent teacher is able to teach and test kids much better than standardized tests can, but society needs consistency more than quality, and we don't trust that every teacher will try to be good at their job.

It's more about it being way too hard to fire bad teachers; standardized tests, in theory at least, are an attempt at providing objective proof of bad teaching that won't be subject to union pushback and/or lawsuits.

I think a lot of the desire for standardized testing would dissolve if principals could just make personnel decisions like normal managers. And, of course, it's up to their superiors to hold them accountable for being good at that job.

> ...we're losing a lot of efficiency that comes from just trusting the other guy.

I think people have been trusting public schools a lot. That generally worked out great for people in affluent suburban districts. And it worked out horribly for people in poor, remote, and urban districts.


Well, that's my point expressed in different examples. Standardized testing makes the system easier to manage from the top, and makes the result consistent. Firing "bad teachers" is a complex problem - some teachers can get fired because they suck at educating, but others just because the principal doesn't like them, etc. With standardized rules and procedures, you try to sidestep the whole issue, at the cost of the rules becoming what you optimize for, instead of actual education (which is hard to measure).

> I think a lot of the desire for standardized testing would dissolve if principals could just make personnel decisions like normal managers.

That could maybe affect the desire coming from the bottom, but not the one from the top - up there, the school's output is an input to further processes. Funding decisions are made easier thanks to standardized tests. University recruitment is made easier thanks to standardized tests. Etc.

> I think people have been trusting public schools a lot. That generally worked out great for people in affluent suburban districts. And it worked out horribly for people in poor, remote, and urban districts.

That's the thing I'm talking about. I believe it's like this (with Q standing for Quality):

  Q(affluent_free) + Q(poor_free) > Q(affluent_standardized) + Q(poor_standardized)
  Q(affluent_standardized) < Q(affluent_free)
  Q(poor_standardized) > Q(poor_free)
E.g. education without standardized tests may be better for society in total in terms of quality, but then it totally sucks to be poor. Standardized education gives mediocre results for everyone.


The usual "business school" approach to preventing metrics from causing unintended effects is to assemble a balanced scorecard of a dozen or metrics. If you do it right the negative factors of each cancel each other out. So for example to evaluate public school teachers instead of just looking at student improvements in standardized test scores relative to their peers you could also factor in subjective (but quantified) ratings from principals / students / peers, number of continuing education credit hours, student participation levels in extracurricular programs, counts of "suggestion box" ideas, etc.

http://www.balancedscorecard.org/Resources/About-the-Balance...
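To make the idea concrete, here is a toy sketch of how such a scorecard could be rolled up into one number (the metric names and weights below are made up for illustration, not taken from any real evaluation scheme):

  # Toy balanced-scorecard sketch: combine several metrics, each normalized
  # to [0, 1], with weights chosen so no single metric dominates the result.
  # Metric names and weights are hypothetical.
  from typing import Dict

  WEIGHTS: Dict[str, float] = {
      "test_score_gain":      0.30,  # improvement vs. peers on standardized tests
      "principal_rating":     0.15,  # subjective but quantified
      "student_rating":       0.15,
      "peer_rating":          0.10,
      "ce_credit_hours":      0.10,  # continuing-education hours, normalized
      "extracurricular_rate": 0.10,  # student participation in programs
      "suggestion_ideas":     0.10,  # "suggestion box" ideas, normalized
  }

  def scorecard(metrics: Dict[str, float]) -> float:
      """Weighted sum of normalized metrics; missing metrics count as 0."""
      return sum(w * metrics.get(name, 0.0) for name, w in WEIGHTS.items())

  print(scorecard({"test_score_gain": 0.6, "principal_rating": 0.8,
                   "student_rating": 0.7, "peer_rating": 0.9,
                   "ce_credit_hours": 0.5, "extracurricular_rate": 0.4,
                   "suggestion_ideas": 0.3}))  # -> a single 0-1 score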


Public education is a bad comparison because it drags in a lot of contentious political debates about how we deal with poverty, racism, difficult home environments, taxation, etc. Based on a fair amount of reading on the subject I think the primary thing driving the standardized testing push is the dream that with enough data we'll find some miracle teaching technique which will avoid the need to tackle the actual hard problems.


Agreed. Optimization for process over outcomes seems to be a recurring problem in large organizations. Unfortunately it's not hard to understand why decision makers prefer this approach, given the pressures of shareholders/voters. No one gets fired for following the process.

A similar issue was just discussed yesterday in a thread on Jeff Bezos' letter to shareholders:

"A common example is process as proxy. Good process serves you so you can serve customers. But if you’re not watchful, the process can become the thing. This can happen very easily in large organizations. The process becomes the proxy for the result you want. You stop looking at outcomes and just make sure you’re doing the process right. Gulp."

https://news.ycombinator.com/item?id=14107766


Standardized testing has also restricted teachers to standardized teaching. They must complete X units from the binder every Y days. This leaves little time for the teacher to come up with their own lesson plans. I think there needs to be a balance between a teacher's autonomy in the classroom and standardization. I heard many teachers complain while I was in school that they felt they didn't have the time to focus on the things they wanted to do in class. In my elementary school, the class would go to a different teacher for either STEM or liberal arts. I think this is a good start: because teachers can focus solely on related material, they can tie in lesson plans and make sure the standardized subjects are taught efficiently, leaving time for the other topics they want to focus on.

I remember in middle school, I had a couple teachers who would say "ok, this is the standardized stuff I have to say" and then go into a very boring lecture of what we just had been learning except the way we first learned it was interesting and engaging. They were simply dotting their i's to make sure that they did their job in conveying the material the state wanted us to know.


But is it really a trade-off, for every teacher or scientist?

Sure, some teachers could use the freedom to teach better (and create new teaching methods), but statistically, won't most teachers perform better when driven by above-average methods?

And as for research, sure we need those big breakthroughs, but we also need many people to work on the many small evolutionary steps of each technology. Surely we cannot skip that?


The issue here is, IMO, that the trustless processes deliver consistency and reliability at such a huge cost to quality of outcome that the outcome becomes below average. In other words, an average teacher could do a better job free than directed by standardized testing, but the results would be so varied in quality and direction[0] as to be useless within the current social framework. And it's not that society couldn't handle it; it's that when building systems, we want to make things more consistent, so they can be more easily built upon.

Basically, think of why you build layers of abstraction in code, even though it almost always costs performance.

--

[0] - e.g. one teacher focusing on quality math skills, another on quality outdoors skills, etc.


Layers of abstraction in technology don't always cost performance, when you look historically and system-wide. For example, the fact that digital technologies were modularized (physics+lithography / transistors / gates / ... / CPUs / software, developed without tight coupling or cross-layer optimization) allowed faster improvements across the whole stack, according to some researchers, who also think that, in general, modular technologies advance faster. And modularity always requires some standardization.

And as for education, I don't think standardization always leads to performance loss. For example, the area of reading has seen a lot of research, and one of the results is "direct instruction", one of the best methods for teaching reading and a highly standardized one.

But maybe what you're saying is true for standardized tests in their current implementation.


You always trade something off when you're abstracting. Some things become easier, but at the expense of other things becoming more difficult.

To use your example - more modularized technologies are more flexible and can be developed faster, but they no longer use resources efficiently. A modular CPU design makes the design process easier, but a particular modular CPU will not outperform a hypothetical "mudball" CPU designed so that each transistor has close-to-optimal utilization (with "optimal" defined by requirements).

Or compare standard coding practices vs. the code demoscene people write for constrained machines, in which a single variable can have 20 meanings, each code line does 10 things in parallel, and sometimes compiled code is itself its own data source.

--

The way I see it, building abstractions on top of something is shifting around difficulty distributions in the space of things you could do with that thing.


Sure, in the small, with a few developers working, fully optimizing stuff usually works better.

But in bigger design spaces, when you let hundreds of thousands of people collaborate, create bigger markets faster, and grab more revenue, you enable a much more detailed exploration of the design space.

And often, you discover hidden gold. But also, if you discover you've made a terrible mistake in your abstractions, you can often fix that, maybe in the next generation.


This is one of the most important points.


> It's probably because of the mythos of private entrepreneurship and the uncomfortable narrative of all this so-called "free market capitalism" being supported by billions in govt funds...

I think this is a really bad straw man. There's broad bipartisan support for funding research into good ideas, assuming it's done the right way. The problem, though, is that the billions in government research spending we already allocate are subject to incredible amounts of lobbying and earmarking.

We'll have a NASA allocation, but it has to take place in the state of a particular Senator. Or the research will be funneled through the Department of Defense and received by defense contractors.

And, besides, for every DARPAnet, we have many "Where does it hurt most to be stung by a bee?" (1).

It's hard to evaluate whether we would still end up in a net positive situation. I don't have hard feelings against people who are both enthusiastic and skeptical about government funding grants.

(1) https://www.flake.senate.gov/public/_cache/files/ef6fcd58-c5...


I really dislike research hit jobs like the link you posted. At best they are disingenuous, at worst they are outright lies. The funding attached to the projects is for the total grant amount provided, which may fund multiple projects, staff, or overhead.

From the link:

METHODOLOGY Specific dollar amounts expended to support each study were not available for the projects profiled in this report. Most were conducted as parts of more extensive research funded with government grants or financial support. The costs provided, therefore, represent the total amount of the grant or grants from which the study was supported and not the precise amount spent on the individual studies. This is not intended to imply or suggest other research supported by these grants was wasteful, unnecessary or without merit.


Agreed. I poked around for more primary sources but the office of a senator seemed less controversial than partisan news organizations. Actual links to studies would be less useful since the "interesting bit" would be a note about funding coming from the NSF or something.

It's an incomplete and distorted picture of how bureaucracy funds research. I brought it up to add counterpoints and context to the implication that government is actually good at this sort of thing if we'd just get over free market capitalism (or something).


You think the office of an avowedly partisan senator would be less contentious than a putatively 'neutral' news organization?


Than the sources I found? Yes.


> And, besides, for every DARPAnet, we have many "Where does it hurt most to be stung by a bee?" (1).

Did they invest the same amount in these two?


They invest more than they did in DARPAnet in programs that get earmarked. Entire jets and battleships are researched, developed, and manufactured for political reasons.

I was saying that just pointing to programs like DARPAnet might not make up for all the other nonsense.

And we can flip that around as well: maybe PARC was spun off from some successful grants, but we're not taking these sorts of failures into account when attaching a price tag to those initial grants.


Siri was an inferior product compared to others not funded by government (even after Apple took over). So that is not a glorious example of government funding. Government is extremely ineffective, and most funds are wasted. But sure, some stuff will appear from all that loot.


Inefficient != ineffective. If you're accessing a resource via the Internet, you're a testament that public funding of R&D can work.


That was not what the parent was saying. The US government spends a large amount of money every year. It is bound to make some things work. The private sector makes some things work also. So the question is: which should be preferred? If you agree that government spending is less efficient at making things work, you should prefer private spending, unless you believe there are things private spending cannot do, which is a reasonable case for demanding public funding for research. For example, some argue that the quarter-to-quarter nature of how private companies operate precludes them from making investments in long-running research. But companies like Google do run multi-year research projects.


DARPA doesn't spend that much money, a few billion a year, and has a pretty good track record, probably better than most corporations.


Yes, I mean, it's not like a company ever locked the telephone away because they feared damage to their gramophone disc sales. That would be shortsighted madness that companies are incapable of, if you read the history books released by their PR departments.


Almost correct. The first laser printer was not the Dover but SLOT, by Gary Starkweather at PARC, which was made on top of a 3600 (a much faster engine) in the very early '70s, and its connection to the Ethernet, EARS, came a little later (by Ron Rider, using an Alto as the server computer).

PARC really wasn't an R&D center for Xerox copiers (this stayed in Rochester), and Gary was a great engineer who could do the work of making a laser and its optics and scanner work with a standard copying engine pretty much by himself.


Money is a necessary ingredient, but I've been in and near enough very costly endeavors that produced very little brilliance to think it's probably not even in the top 3 most important ones.


Money is necessary, but not sufficient.


Kay said it was the organic collaboration between talents that made things possible. If one guy had an idea that required some piece he didn't know how to get, he could ask some guy who would build it. Surely it took root in the copier R&D resources, but a symbiotic pool of talent isn't a given either.


"Sutherland's Sketchpad"

Fantastic read, "Sketchpad: A man-machine graphical communication system" ~ https://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-574.pdf


There are still a lot of companies that will let you work on your own internal projects as long as you can get your manager to sign off on it. The idea is that if you can get a working prototype or some proof of concept, you can try to convince a customer to sign on to fund actual production. Some companies have leadership that isn't focused on short-term profits and thus lets the smart people they hire use their brains to come up with new things to build and make money on. Not everything is going to be a winner, and maybe someone else will come along and make a better version before you can get yours sold, but as long as the resources from one R&D project can be put towards another, the money and effort were not in vain.


Great post, +1.

So in that case, imagine there is a PARC-bis somewhere out there... what kind of out-of-regular-folks'-wallet things might they be working on right now?

I want to be surprised today like Jobs was when he visited them the first time.


I think DeepMind is another PARC. They never cease to amaze me with their brilliant papers. Something just as great as the personal computer and GUI will be built there.


Any standouts you are able to link? I'd love to dive in, but I am not sure where to start.


WaveNet impressed me a lot: https://deepmind.com/blog/wavenet-generative-model-raw-audio...

Also Decoupled Neural Interfaces: https://deepmind.com/blog/decoupled-neural-networks-using-sy...

And AlphaGo of course.


Thanks for the links!


Funding is a necessary but not sufficient condition. They were the right people, in the right place, at the right time. Of all of those, the time period, i.e., the beginning of the computer revolution, is the most important factor in my view.

There are certain moments in time where such innovation is possible and this was one of them. In this respect they are like The Beatles--the right talent just at the right time.

There isn't anything like Xerox PARC at the moment, and won't be until the conditions are once again ripe for another Cambrian Explosion (as another commenter aptly described it) of innovation.


To be like Xerox PARC today you'd have to move back to the time when Xerox did what they did there. It's not just the company or the people but the total environment in which that took place.

Once the canvas is no longer blank it becomes a lot harder to be that innovative. All kinds of brakes on the system engage almost automatically: conventions, languages, processors, window managers have all become a lot less open to really new concepts.

The biggest changes of the last couple of years are deep learning coupled with the advent of GPUs that offer, for very little money, computing power that not long ago only universities could afford. Possibly that will engender a totally new computing environment in which our 'old' stuff no longer matters as much and we'll be more free to pursue things that are less anchored in the practical requirement to make money.

I suspect the whole 'deep learning' revolution will be as big as the original invention of the transistor or the web. It won't give us full AI but maybe by the time we've mined that for all it's worth we won't feel the need for it anymore.


The paleontologist Stephen Jay Gould wrote a book called 'Wonderful Life' about the Cambrian Explosion, when multicellular life really got cranking. Basically at that time the ecological slate was clean, and an enormous diversity of organisms developed very quickly. All the expansion of ecological diversity was lateral--essentially green-field evolutionary innovation.

Over the next hundred million years or so, the evolutionary innovation was mostly in smaller and smaller tweaks on some evolutionary branches, and the extinction of other branches as they were outcompeted. Never again was there the same wide range of innovation and experimentation--except following expansion into new environments (e.g. when organisms moved onto land) and following mass extinction events.

It is really interesting to see how research centers like PARC develop but you're absolutely correct in that it's easier to be creative with a clean slate.

Maybe we shouldn't be trying to emulate or reproduce PARC unless we have found a new clean slate somewhere. Instead we should look to where innovation develops particularly rapidly in spite of an establishment.


> Instead we should look to where innovation develops particularly rapidly in spite of an establishment.

I haven't had my coffee yet, but this part sounds like a contradiction to me? Can you think of any examples (maybe from other areas)?


Such an employer would have to tolerate high risk, frequent failure, high cost, low "productivity", and slow unpredictable progress toward financially unrewarding intermediate results... and do so for years without administrative interruption.

Unless the innovators are somehow overlooked by the omnipresent bureaucrats and bookkeepers -- e.g. working in a small lab hidden deep in a very large organization disinclined to optimize costs/profit -- I don't see any established organizational model that could sustain innovators like Nikola Tesla or Kurt Gödel.

Maybe a bored billionaire?


When we talk about innovation today we mostly think about computers. But Xerox PARC operated in a new domain.

I think we will see new canvases in the future, but maybe not in computing. Maybe we will see the rise of new materials, new ways to heal, ways to combine humans with animals, living computers, living tools of war, space travel, and so on.

If you want to be like Xerox PARC you are already limiting yourself when you only think about the computer domain.


> Once the canvas is no longer blank it becomes a lot harder to be that innovative.

That depends where you think the edge of the canvas is. If you're looking at ergonomic commodity computing with a GUI, the days of blankness are long gone.

But there are always completely blank areas elsewhere.

AI may not be one of them. I'd suggest it's more of a marginal area. It has plenty of potential, but it's also struggling against its history and existing beliefs.

What's missing now is the next wave of physical substrate technology. It won't be based on the post-war wave of technology at all.

PARC was part of the post-war wave, which actually started with Church and Turing and became implementable at scale with transistors.

Who is today's Turing? Does he/she exist? Discover that, and you'll see where the next big wave is.


Do you really think that about deep learning? It has zero application or effect in my job (I don't see that changing for at least the next decade), and in my personal life it only serves to produce "better ads".


I would disagree. If you've ever requested a web page translation, done an image search, or spoken with an Alexa or Google device, I believe it's deep learning going on in the backend. I don't work at any of the large tech companies, but I imagine they're applying deep learning as pervasively as possible where it can make a difference.

That's not to disparage your comment, only to say that while it might feel like a negligible effect on your life, I would say that it is non-zero.

My completely uninformed and lay person's opinion is that we might not yet know what we can really do with deep learning. For example, there were many years between the discovery of electricity and the invention of the microprocessor. The latter certainly required the former, but did not follow logically from it. I use this example because Andrew Ng called AI the new electricity.


> I would disagree. If you've ever requested a web page translation, done an image search, or spoken with an Alexa or Google device, I believe it's deep learning going on in the backend.

Web page translation (at least at Google, and in earlier systems) didn't use "deep learning" for ages.

Plus most of those things, web page translation, image search, speech assistants, are pretty much either niches or gimmicks. Even if they worked perfectly, there would not be much to write home about.

It's not like people think "my life was changed by the Alexa". GUIs, on the other hand, made lots of things possible that weren't, including whole new professions (heck, the web IS a GUI thing).


Google Translate didn't use deep learning until less than a year ago: https://research.googleblog.com/2016/09/a-neural-network-for...


That's why it got that much better. Before, it was barely usable; now it is positively stunningly good (with the occasional booboo, but if they keep improving at the present rate, that will become quite rare in another 3 to 5 years).


It's also stunningly poor in lots of cases. Largely because the system has no idea of the reality it's talking about.


Yeah, the differences are kind of interesting -- check out the last section of this article http://legendsoflocalization.com/funky-fantasy-iv/ for some comparisons of Google-translated Japanese text from Final Fantasy IV before and after the switch.


It's fascinating how you can see the rules-based formalism of version 1 of Translate go wrong in very different ways from the ML pattern matching of version 2. ML creates a grammatical but nonsensical sentence from which little or no original meaning can be recovered. As someone in Japanese translation, it makes me feel better about my job.


> Before it was barely usable, now it is positively stunningly good

That varies a lot by language. I keep finding its attempts at Polish painfully bad, and I only know a little Polish myself.


> do you really think that about deep learning?

Yes, absolutely. To put it bluntly: how many jobs do you think are affected by being deaf and blind?

Computers are now able to see and hear, and on top of that they have become better at most classification tasks than humans. Those are game changers, this goes much further than advertising.


What you state is factual. In the context of "as revolutionary as transistor", then I'd say yes as well. But when it comes to jobs, deep learning's impact is overstated.

When I hear claims on how many jobs will be replaced because of deep learning (in contrast to the base rate of the long, steady slope of automation over the last 175 years), it's often about an abstract and simplified notion of the complexities of jobs outside the claimant's area of expertise. In such cases, it's useful to ask the claimant whether their profession is soon to be made obsolete. Will AI replace the pundit anytime soon? The economist? The enterprise programmer? The AI researcher?


> Will AI replace the pundit anytime soon?

Why not. You can go to any major news outlet, and expect a certain bias/comfort in their news, and a definite bias/comfort in their overall editorials or those of individual editorialists. Why, it's almost like they're programmed to reliably write under those biases.

We already have the beginnings of auto-written news articles. There was a system featured on HN a few years ago about weather news articles at, I think, the LA Times.

One reason some people stay logged out of Google is so their search results aren't colored by what Google thinks it knows about them. It's not that large a leap to systems that read all the opinion letters their owners receive, and maybe all the printed letters that other outlets receive, then turn a knob or two and collect max-Laffer-curve revenue based on where they sit on the bias curve.


The welder, the courier, the truck driver, the cab driver, the pilot, the GP, the gardener... I could probably make a really long list of jobs that will be affected. Pundits are safe for now (but composers and painters are not); economists are probably safe too, if only because they always seem to manage to weather the storm without damage even if they caused the storm and still failed to predict it. Enterprise programmers are safe, for now, but keep in mind that humans already know how to teach each other stuff; as soon as programming becomes hard to distinguish from teaching by example, that enterprise programmer had better watch out.

Let's review this in a decade or so and see what happened.


Four of those are bets on autonomous navigation and driving/flying. One (welding) has nothing to do with deep learning (that I'm aware of) and everything to do with the base rate of automation.

In terms of level 5 autonomous driving, the most common retort I've seen recently is "well, we're going to need special roads with sensors built in, and then it is a tractable problem." There are more than 4 million miles of road in the US; at any given time maybe 2% are under construction. The US Interstate Highway System was built during a boom and the cold war (where the system drew on Eisenhower's WWII experiences and lessons from the Autobahn's military utility). [1] If we're lucky, maybe the interstates will get sensors. But the complexity of re-doing an exit or interchange, which involves dozens of on-the-fly lane changes and cutovers, seems impractical merely to support self-driving cars.

I would love autonomous vehicles for the sake of my children, but I suspect that at best, maybe my grandchildren will benefit.

[1] https://www.fhwa.dot.gov/interstate/brainiacs/eisenhowerinte...


>The welder, the courier, the truck driver, the cab driver, the pilot

Nobody involved closely with self driving cars thinks we're anywhere close to a level 5 system. Why do you think we are?


Because the rate of change has itself changed. Just as the internet existed for many years before it took off, and car phones were around in 1946, there is a point in time where the rate of change in either the application or the development of something new shifts.

That point in time has definitely come for self-driving cars. We're not 'close', but we are much closer than we were even 5 years ago; there is legislation on the books permitting self-driving cars to participate in regular traffic instead of on private property only, there is special hardware geared towards such systems, and so on.

It's an idea whose time has come; now we have a bunch of engineering problems to solve, and they're far from simple, but I would not at all be surprised if self-driving cars are equal partners in traffic 10 years from now.

Lots of hard work between now and then, but I think we'll see self-driving cars long before we see an operational fusion reactor.


Self driving cars are worth 40+ billion to the economy every year. That quickly pays for quite a bit of work.


Nothing like a large carrot and the feeling that something should be possible to bring out the hounds. If it can be done with present-day tech, it will be done; if it can't, then we will have a pretty good idea of what the stumbling blocks are and what kind of tech will be required to solve them. That's the minimum that will come out of this, and possibly a lot of spin-off tech as well for more limited situations.


Well, what do you mean?

I think it's obvious that machines can, for example, weld better than humans can. But machines can't decide what to weld. For now in that arena, things are mostly still human controlled.

I think your argument gets significantly weaker when you push truck, car, and plane driving onto the conversation stack.

Even the best of self-driving AI automation isn't nearly as good as bad human driving. Yet.

But the absolute best human weld joints are no better than the absolute best robotic weld joints.

I think you're talking about two entirely different domains here when you conflate welders and couriers. One of those needs a human decision to place the components to be welded in front of a robot that basically does one thing. The other is an actual adaptive AI that's trying to make decisions on its own.

Yes, it's obvious that the lowest-wage and lowest-skilled workers are going to be displaced by robots the soonest. That's like arguing that the sun is going to rise. And as a society, we need to account for that.

But arguing that the lowest hanging fruit of automation is going to displace taxi drivers and gardeners in the next 10 years is pretty nuts, even for HN.


It is definitely not obvious that machines can weld better than humans can. I think a lot of computer tech people would be surprised by how much welding is still done by hand. Most of the welds on a Boeing 787 are done by humans, for example.

I feel like this fits the description in the comment above:

> When I hear claims on how many jobs will be replaced because of deep learning (in contrast to the base rate of the long, steady slope of automation over the last 175 years), it's often about an abstract and simplified notion of the complexities of jobs outside the claimant's area of expertise.


Machines have nowhere near the flexibility of humans, so obviously they're worse at tasks in environments designed for humans.

But were you to optimize the Boeing manufacturing process for machines, the way car factories are optimized, you could utilize the strengths of machines while sidestepping their limitations.

(Why this does not happen often is, I believe, a combination of robots having high initial costs, coupled with products starting as human-assembled prototypes, and companies just optimizing that process incrementally instead of redesigning it around machines.)


And unions.


And relatively low volumes. If you're going to pump out 3 million cars a year vs 2000 of that plane ever, it changes the equation somewhat.


> Well, what do you mean?

> I think it's obvious that machines can, for example, weld better than humans can.

Ok. That wasn't all that obvious not all that long ago.

> But machines can't decide what to weld. For now in that arena, things are mostly still human controlled.

Yes.

> I think your argument gets significantly weaker when you push truck, car, and plane driving onto the conversation stack.

What's so incredibly special about truck, car or plane driving/piloting that you feel they are immune to automation?

> Even the best of self-driving AI automation isn't nearly as good as bad human driving. Yet.

Precisely. Let's give it 10 years and look again. I wouldn't bet against it.

> But the absolute best human weld joints are no better than the absolute best robotic weld joints.

It's not about 'best'. It is all about repeat accuracy and consistency. That's what makes the robot better. Maybe the best human welds are better than the best robot welds. But to get a human to consistently outperform the robot or to consistently deliver a specific quality level is not going to happen.

> I think you're talking about two entirely different domains here when you conflate welders and couriers.

We consider welding to be a highly skilled job and yet welding robots are a thing. Courier on the other hand 'merely' requires a driving license, something we hand out to 16 year olds after a perfunctory inspection.

> One of those needs a human decision to place the components to be welded next it in front of a robot that basically does one thing.

Which takes a couple of years training for a human.

> The other is an actual adaptive AI that's trying to make decisions on it's own.

Yes, it's a harder problem for a computer. But it was mostly harder because in the past computers could not see the way we can. But deeply layered convolutional neural networks and their descendants have changed that, dramatically. What was SF 5 years ago is now commonplace. And computers now have access to senses that we don't have, such as radar and lidar.

That takes care of one of the harder parts of the problem.

There is still plenty left, but one of the most formidable obstacles to self driving vehicles is now behind us.

> Yes, it's obvious that the lowest-wage and lowest-skilled workers are going to be displaced by robots the soonest. That's like arguing that the sun is going to rise. And as a society, we need to account for that.

Yes. But I wonder if we're going to be ready when the software is ready. In fact I doubt we will be ready. It will be the industrial revolution all over again, only this time it will play out in a decade instead of in 100 years.

> But arguing that the lowest hanging fruit of automation is going to displace taxi drivers and gardeners in the next 10 years is pretty nuts, even for HN.

I don't think that was called for, that's pretty rude, even for HN.


"and on top of that they have become better at most classification tasks than humans."

That hasn't been my experience. But perhaps we should define better?

Computers are faster at doing the classification than a human, but not as accurate. So it depends what variable you're optimizing for when you say "better."

I'd be careful with that.


Have a look at the latest ImageNet results. It's stunning, to put it mildly.


It's also stunning how poorly it performs in a real world (i.e., non-human-prefiltered photo) scenario.

Like autonomous cars or security cameras or robotics.


But it's stunningly better than it was 3 years ago. It's like going from crawling to walking. That takes a long time. But going from walking to running is much easier.

We're not yet at a 'general vision is a solved problem' level, and this may take a long time still - if it is ever solved. But many problems can be reduced to the point that they become tractable even absent 'general vision'. Affordable LIDAR is a huge step, so is radar. Those two reduce the problem of vision with two crappy stereoscopic cameras set 5" apart to something much more tractable.

As far as I can see the race is on.


We are currently overfitting the public datasets. The only valid test is a new, never-before-used image set.


Try it for yourself - my team did. Initially I sent the guy doing the tests away for screwing up the sums, because I didn't believe it could be as good as he said. I was wrong - it was. This was a relatively early CNN as well. AGNN and RNN are revolutions in themselves, and everyone is now scrambling to understand and harness them.

Our GPU kit has got 6 times better in 2 years.

Exciting times.


Hand someone, ideally a 3-5 year old, an average digital camera and have them walk around taking random pictures where half of them are a blurry mess, and the results are much worse. Or drive around with a camera on your bumper, etc. Public datasets have a style of photo that carries its own bias.


Our data set was from surveys collected for human use, using a variety of digital cameras (not DSLRs, though). The surveyors did take specific images; blurry images would have meant not getting paid, so none of those. We used transfer learning from the ImageNet network against a few hundred well-labelled images, and we trained it on a 6-GPU machine (Titan Xs). This is no place to claim results or do a detailed write-up, so all I would say is: do try it for yourself if you haven't already. I was shocked by the quality of the classifier. Maybe you will be too!

I started from the dataset-bias position myself - billions of family snaps and selfies can't provide the reference for arbitrary images of stuff from arbitrary angles. My reading of the results that we got is that the claims about the lower-level features extracted by the CNN are correct, and that a network trained on massive public data can be the basis for specialised tools trained on more constrained proprietary datasets.
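For anyone who wants to try the same kind of experiment, here is a minimal transfer-learning sketch, assuming PyTorch/torchvision and an ImageNet-pretrained ResNet as the base (the exact network, framework, and data layout used above aren't specified, so treat this as illustrative only):

  # Fine-tune an ImageNet-pretrained CNN on a small labelled image set.
  # Assumes images arranged as data/train/<class_name>/*.jpg (hypothetical path).
  import torch
  import torch.nn as nn
  from torchvision import datasets, models, transforms

  preprocess = transforms.Compose([
      transforms.Resize(256),
      transforms.CenterCrop(224),
      transforms.ToTensor(),
      transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
  ])
  train_set = datasets.ImageFolder("data/train", transform=preprocess)
  loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

  model = models.resnet18(pretrained=True)   # features learned on ImageNet
  for param in model.parameters():           # freeze the pretrained layers
      param.requires_grad = False
  model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))  # new head

  optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
  criterion = nn.CrossEntropyLoss()
  for epoch in range(5):                     # a few epochs is often enough
      for images, labels in loader:
          optimizer.zero_grad()
          loss = criterion(model(images), labels)
          loss.backward()
          optimizer.step()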

Your mileage may vary though...


> Hand someone, ideally a 3-5 year old, an average digital camera and have them walk around taking random pictures where half of them are a blurry mess, and the results are much worse.

Garbage in, garbage out. When was it any different?

The question is: can a human do much better on that garbage?

> Public Datasets have style of photo that represent it's own bias.

That I readily agree with.


The problem is that self-driving cars need to work with garbage: when their sensor is covered by slush from a puddle, scratches on the glass, etc. The problem is we are not training them on garbage, which is its own problem.

Scoring is also critical: you want classification systems to degrade gracefully; not getting the correct dog breed is insignificant vs. calling a dog a sofa. Further, people base real-world classification on stereoscopic video footage. Training robots to classify photos is a handicap for building robots to operate in the real world.


"Better than three years ago" is a different measure of success than "better than humans"

Different tasks have different error tolerances. When an MI tool can get within the acceptable margin of error for certain tasks, then there is a possibility for the MI to take jobs.

As long as accuracy is critical to a classifier, in many cases it's better to pay for human eyeballs to look at the criteria.

We're talking about a huge spectrum of applications here. Most of us probably don't care that much about the difference between human and MI error rates about classifying images used in broad-scale advertising on the internet.

We do (or perhaps should) care very deeply about human vs. MI error rate in terms of granting or not-granting things like parole.


Change to non-canonical lighting and it fails on classification tasks.

Add non-standard background noise, and it also fails on hearing tasks.


And what effect did a better UX manager have when PCs were used by a minuscule percentage of the population?


> do you really think that about deep learning? it has zero application or effect in my job (i don't see that changing for at least the next decade)

That was exactly true of Xerox PARC in the early '70s.


It's a deep subject, not only in CS: new empty space to release creative forces in synergistic ways.



> It won't give us full AI but maybe by the time we've mined that for all it's worth we won't feel the need for it anymore

It's going to start to challenge questions around what the definitions of full AI are.


I'm less of a PARC fan and more of a Lampson fan. In fact, I'm a huge Butler Lampson fan. I've read his PARC valedictory Hints on Computer System Design many times and he's updated it recently. He went from PARC to DEC SRC (maybe 5 miles). I saw him give a talk at SRC and for about 15 minutes afterwards, I understood.

Hints is the favorite paper of a lot of my heroes like John Regehr.

https://www.microsoft.com/en-us/research/wp-content/uploads/...

http://bwlampson.site/Slides/Hints%20and%20principles%20(HLF...

http://blog.regehr.org/archives/1143

From SRC, he went to Microsoft and he also taught at MIT.

https://ocw.mit.edu/courses/electrical-engineering-and-compu...


Let me encourage you to be both. Butler is one of the most amazing people I've met and had the great pleasure of working in the same environment with him for many years. I also think that all of us who were there would not want to try to separate "the great people" from "the great environment" nor would it be possible without great distortion (this is the point that most people miss about ARPA/PARC).


Thanks Alan. If my current project succeeds, and it should, I'm going to try to reach out to Butler and present it to him because frankly, it has Lampson written all over it with maybe some Thacker graffiti here and there.


May I ask what your current project is?


Microsoft Research Silicon Valley (MSR-SVC) used to be like PARC of old. Too bad it was shut down a few years ago. I don't know if any place comes close. To be fair, there are pockets of awesomeness at places like IBM T.J. Watson and MSR Redmond .. you need to be lucky enough to work with the right group of people.

Edit: I should point out that while it is awesome to work in such a storied place, there is some stress associated with it. Read about the weekly "Dealer" meetings at PARC.


Interesting read on "Dealer" meetings.

From The Myths of Creativity By David Burkus

>> In the 1970s at Xerox PARC, regularly scheduled arguments were routine. The company that gave birth to the personal computer staged formal discussions designed to train their people on how to fight properly over ideas and not egos. PARC held weekly meetings they called "Dealer" (from a popular book of the time titled Beat the Dealer). Before each meeting, one person, known as "the dealer," was selected as the speaker. The speaker would present his idea and then try to defend it against a room of engineers and scientists determined to prove him wrong. Such debates helped improve products under development and sometimes resulted in wholly new ideas for future pursuit. The facilitators of the Dealer meetings were careful to make sure that only intellectual criticism of the merit of an idea received attention and consideration. Those in the audience or at the podium were never allowed to personally criticize their colleagues or bring their colleagues' character or personality into play. Bob Taylor, a former manager at PARC, said of their meetings, "If someone tried to push their personality rather than their argument, they'd find that it wouldn't work." Inside these debates, Taylor taught his people the difference between what he called Class 1 disagreements, in which neither party understood the other party's true position, and Class 2 disagreements, in which each side could articulate the other's stance. Class 1 disagreements were always discouraged, but Class 2 disagreements were allowed, as they often resulted in a higher quality of ideas. Taylor's model removed the personal friction from debates and taught individuals to use conflict as a means to find common, often higher, ground.


This is overdrawn and misses the process and the intent. They weren't staged, they were not "designed to train their people", etc. Learning how to argue to illuminate rather than merely to win was part of the larger ARPA community. PARC came out of the ARPA community, and Bob Taylor had been the third director of IPTO.

The main purposes of Dealer -- as invented and implemented by Bob Taylor -- were to deal with how to make things work and make progress without having a formal manager structure. The presentations and argumentation were a small part of a Dealer session (though they did quite bother visiting Xeroids). It was quite rare for anything like a personal attack to happen, because people for the most part came into PARC having been blessed by everyone there -- another Taylor rule -- and already knowing how "to argue reasonably".


As a relatively young scientist, I feel I lack the "argue reasonably" skill. How does one get better at this?


Maybe Dealer was stressful in the past, but I interned at CSL a few years ago and it was pretty relaxed. People seemed to ask questions for clarification, or because they genuinely wanted to improve your research.

Being a 20-something giving a talk at PARC is inherently stressful, but having spoken at multiple conferences, I found them a very respectful crowd.

PS: Yes, PARC still exists, though it's "PARC - A Xerox Company" now, as they repeatedly remind the interns: https://en.wikipedia.org/wiki/PARC_(company)#PARC_today


Not the same crew or dynamic ...


I interned at Microsoft Research last summer and will be interning at IBM Research this coming summer, and they both seem great.

I can't compare to PARC, but there are a lot of cool things happening at these places.


In that case, I should see you around the building in a few months.


This link led me on to a little bit of wikipedia reading where I learned something that really surprised me: according to wiki, Xerox today still employs more people than either Microsoft or Apple, 130k to their 115k each. Am I the only person who finds that shocking? I had no idea they were still so huge. And this despite their market cap being a hundredth of Apple or Microsoft's.


"Employment" like everything is subjective. Contractors don't count as employees. Foxconn employees doing Apple's manufacturing don't count as employees. It artificially decreases the people count depending on the company's FTE ratio.

Also consider the different industries. Xerox may need on-the-ground support staff to go onsite in numerous countries, in a way Apple does not need to.


HP tops all of them at 300K (or rather, did until the formation of HP Inc.).


IBM is currently at 380K, according to Wikipedia: https://en.wikipedia.org/wiki/IBM


Wow. What do those people do? For that matter, what does Xerox do these days?


Xerox these days still makes photocopiers, but they also work as really nice printers and network-connected scanners.

I imagine a lot of their employees are in the service division. When you buy a high-end printer it also comes with a service contract, where someone will proactively come and do things like clean the printer and make sure toner is stocked.


I work in a building next to a Xerox rented building. Apparently Xerox does a lot of government contracting. They bid on government IT contracts and hire people to work on them.


Even after the Conduent spin off?


"Who else today is like them?"

I don't know, but for sure, they're not in computing. PARC was special, among many other reasons, because it existed (and had the vision and the funding) in the early, gold-rush era of computing. Google Research isn't it, not because they're not generously funding important and worthy research, but because computing as a field is too far along for them to have a chance of introducing truly fundamental research in the area. Self-driving cars come close, no doubt hugely important, but even they have a quite incremental feel to them, especially next to inventing something like the graphical user interface.


I'd say quantum computing is still in a very early stage with many unsolved problems, especially on the implementation side.


There are some especially qualified answers on Quora. But to have your question answered by Alan Kay on the topic of Xerox PARC is truly special.


Agreed. I think he is a Hacker News user as well.



Wasn't he going to do some kind of work with ycombinator attempting to recreate some of the spirit of Xerox PARC? Maybe I just dreamt it.



That was it. Official site: https://harc.ycr.org/ Nothing super interesting there yet, at least to me :/


Bret Victor (worrydream.com) is a PI at HARC; having seen his previous demos, I'm very optimistic about HARC's future.


I've been reading the book he mentions, "The Dream Machine", by Mitchell Waldrop. It has been a great read so far (I'm a little past the point of the creation of ARPA), particularly the section about John McCarthy developing LISP and conceiving of the architecture for time-sharing.

Reading it has felt like I'm paying my respects to the pioneers of our field, because I am humbled by what they achieved.


It's interesting (to me at least) how many of the ideas they had back then are still waiting to be expounded on; they (and people like them) did the foundational work we still build new things on now. I read a book (I wish I could remember the title) about the architecture of the 2nd and 3rd generations of IBM's mainframes as they battled with the problem of "how do we keep the same software running on different hardware" and came up with virtualisation, etc.

I wonder how many dusty copies of papers sitting in university libraries contain ideas that, brushed up and given a new coat of paint, would be considered revolutionary.

The other side of it is that I still use stuff those guys wrote every day. While typing this I switched to a bash terminal to run an install command using a program that is pretty much identical to (if a superset of) a program written in 1977, 3 years before I was born.


Same here with a lot of the older works. I felt it as I read about Barton's B5000, LISP machines, PARC's stuff, high-assurance security, some high-availability systems (e.g. NonStop), SGI's graphics hardware, and so on. Although not built on new fundamentals, even Microsoft's VerveOS was a pile of nice engineering, with ExpressOS following its lead for Android.


A few favorite quotes from "The Dream Machine"

The idea began to dominate my thinking and for the next two evenings I went out after dinner and walked the streets in the dark thinking about it

-Jay Forrester, on the idea of magnetic memory

Trying to avoid every conceivable error is a recipe for paralysis, having the freedom to make mistakes is necessary for achieving any progress at all

-Waldrop (author) on Herbert Simon & the idea of satisficing in the context of behavioral economics (he was awarded the Nobel in 1978)

. . . a way of life in an integrated domain where hunches, cut-and-try, and the human 'feel for a situation' usefully coexist with powerful concepts, streamlined technology and notation, sophisticated methods, and high powered electronic aids

-Engelbart, prophesying

You couldn't develop and improve the system without seeing how a large and diverse community used it in practice.

-Robert Fano, on the development of time sharing

The spirit at that meeting was as good as it gets - the most civilized, ecumenical, incredibly supportive atmosphere you could want. People would cheer you on even if they didn't agree with you, just because they loved the fact that you were good

-Alan Kay on the early ARPA community

We felt strongly that only a very small group could do the project on that timescale . . . I don't remember such a thing as a weekly progress meeting. We were more in tune with progress than that.

-Dave Walden on the development of the IMP, the first packet switching node

To Lick, the obvious solution to the software crisis was to apply better and more interactive computing . . . when the systems are truly complex, programming has to be a process of exploration and discovery

[H]ire the smartest people you can find and give them their head - but let them know who's paying the bills . . . it wasn't enough to hire a bunch of super smart individuals, you had to build a community, a culture, and an environment of innovation. You had to give your people the kind of challenge that would light a fire in their eyes, that would generate an atmosphere of non-stop intellectual excitement, that would let them feel in their gut that this is where the action is. You had to provide them with lavish resources - everything they needed to do the job. And through it all, you had to keep your bottom line guys at bay so your guys could have the freedom to make mistakes

-Jack Goldman, founder of PARC


For those interested, more answers from Alan Kay are available here: https://www.quora.com/profile/Alan-Kay-11


I'd strongly recommend the book Dealers of Lightning. Now, I should note that Alan Kay was at PARC, and it's where he did amazing work that holds up to this day, so pay the most attention to his answer, of course. But I learned a ton about PARC's history from Dealers of Lightning.

I'd also point out that Bob Taylor apparently had a hard limit of 50 researchers because he felt that was all he could manage. This meant that he, by this criterion, had to get the absolute best, smartest scientists/researchers he could find. If you look at the sheer number of absolutely brilliant people assembled at PARC during the same time period, it is astonishing: Alan Kay (Smalltalk), Butler Lampson (Alto, *), Bob Metcalfe (Ethernet), the founders of Adobe (PostScript), Charles Simonyi (Bravo, which later became Microsoft Word), etc. Close to 100% of modern computing directly came from PARC.


This might be relevant: in the AMA that he did less than a year ago [1], Alan Kay said [2]:

> "Dealers of Lightning" is not the best book to read (try Mitchell Waldrop's "The Dream Machine").

That said, I have read Dealers of Lightning myself, and really liked it. I have not yet got around to The Dream Machine.

[1] https://news.ycombinator.com/item?id=11939851

[2] https://news.ycombinator.com/item?id=11940756


The article itself is by Alan Kay, and lists Butler Lampson, as well as Bob Taylor and Chuck Thacker by name.


I second this; if any youngsters out there want to learn some computer history, this is a great place to start.


1) Smart people who have great ideas

2) A business model that has no need of those ideas

3) Management who have absolutely no idea how to capitalise on those ideas outside of a core business?

...

That's Google. Google are Xerox.


Google does very little unrelated to making users use their services more. The self driving car is one of the very few examples and a negligible portion of the employees get to work on that.


Google hasn't done much that's game-changing that I've seen. Their incentives and goals are different from PARC's most of the time. They've built a lot of practical solutions to their own problems that are similar to other existing solutions. The one ultra-clever thing that jumped out at me was Spanner's TrueTime. Utterly brilliant.

Google could become PARC if they chose. They even have some groups, like Spanner's, doing unconventional stuff. I think it would take changes in management's priorities and evaluation of employees to get it there. It's not there yet, but there's potential.
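For anyone who hasn't read the Spanner paper: the clever part of TrueTime is that it exposes the current time as an explicit uncertainty interval rather than a single value, and a writer simply waits out that uncertainty before making its commit timestamp visible. A rough conceptual sketch in Python; the names (TTInterval, tt_now, commit_timestamp) and the fixed error bound are mine, purely for illustration, not Google's API:

    import time

    CLOCK_UNCERTAINTY_S = 0.007  # assumed worst-case clock error; the real service derives this bound dynamically

    class TTInterval:
        """Current time reported as a range [earliest, latest] rather than a single point."""
        def __init__(self, now, eps):
            self.earliest = now - eps
            self.latest = now + eps

    def tt_now():
        # Hypothetical stand-in for TrueTime's TT.now(); real TrueTime gets its
        # bound from GPS and atomic-clock hardware, not a constant.
        return TTInterval(time.time(), CLOCK_UNCERTAINTY_S)

    def commit_timestamp():
        ts = tt_now().latest             # choose a timestamp no correct clock has passed yet
        while tt_now().earliest <= ts:   # "commit wait": sleep until the uncertainty window has passed
            time.sleep(0.001)
        return ts                        # ts is now guaranteed to be in the past on every node

The whole trick is that short wait: after it, every machine agrees the timestamp is in the past, so timestamps give a consistent global ordering without any further coordination.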


Google is an ad company which makes products to put ads on


Like self-driving cars? I think they are desperately groping for something to diversify into, because ad models come and go, and the revenue could someday dry up.


Well, what are you going to watch while you're being driven around?


Technically Google doesn't own Waymo, Alphabet does. Maybe that's part of the reason why.


Google is a data company, offering the opportunity to manipulate some of that data in exchange for money.


.. manipulate some of that data to improve their ads.


A desire to recreate great things from the past always comes with some hindsight bias. What we'd like is a dataset about hundreds of labs like PARC and what happened to them. Of course, we could never get it, so we have to work with our observed history. But sometimes, in the case of these big labs, I wonder how much we can conclude.

I recently read The Idea Factory, about Bell Labs, and it has great insights, to be sure, but enough information about the causality to recreate Bell Labs? I don't know.

Maybe it really does come down to one thing, like funding, as the top comment (currently) on this thread suggests. But I doubt it.

When I was younger, my siblings and I played this game that we sort of made up as we went (too detailed to explain), and it was awesome. Years later, in a bout of nostalgia, we tried to recreate it and it was just awful. Enough small details had changed that it didn't work. One of the important details that changed was a total lack of spontaneity. All of us knew what the outcome should be like, and it made us behave differently. I don't think big orgs are at all immune from this effect of expectations.

Don't get me wrong, I'm obsessed with the famous labs like anyone, a big fan of Alan Kay, etc. I just think somebody needs to call attention to a giant hurdle in learning from them.


There's another lab that was successful in a somewhat similar manner, at a much larger scale, but still with great results.

I recently bought a book on the Philips Natlab, the same idea as Xerox PARC. They invented the optical drive (CDs), the wafer steppers that bootstrapped ASML (the company behind the machines that make chips), and some other things I can't find right now.

I have more details in the Natlab book, which I have at home, if you're (or anyone else is) interested.


> Who else today is like them?

> PARC still exists, but Google advanced technology projects is probably the closest. Neither of these is that close to the old Xerox PARC since they tend to focus on near-term commercializable projects.

https://www.quora.com/Whats-the-most-similar-thing-to-Xerox-...

...

While I can see how Google is an engine of scientific progress, Silicon Valley is bigger than Google. Stanford deserves a lot of credit.

Without diving into "what makes Silicon Valley Silicon Valley," I think I should point out that Stanford has consistently produced disruptions since at least Xerox PARC's founding.

(Obligatory: I have only ever visited Stanford, long after graduating from a different university.)


When you look at PARC's contributions (laser printer, bitmap graphics, GUI/windows/icons, WYSIWYG editor, Ethernet, OOP, MVC) or those of Bell Labs (transistor, laser, information theory, Unix, C & C++, radio astronomy), we haven't seen anything comparable from Google's Advanced Technology and Projects program. We can give them another 20-30 years and reassess, but so far, it's not there. Most of the projects have been announced with great fanfare and then quietly scrapped, or are solutions that haven't found problems.

PARC and Bell Labs were at the right place and right time to make fundamental contributions in the nascent areas of digital computers, software, and digital communications. They caught that wave perfectly. Now we're searching for a similar revolutionary technology that will open the floodgates of innovation, but it's not apparent yet. Machine learning and AI? If that pans out, Google Brain/DeepMind would be well situated.


> we haven't seen anything comparable from Google's Advanced Technology and Projects program

And we won't because all those things have already been done.

The laser didn't come from Bell but from Hughes btw.

> PARC and Bell Labs were at the right place and right time to make fundamental contributions in the nascent areas of digital computers, software, and digital communications. They caught that wave perfectly.

Precisely.

> Now we're searching a similar revolutionary technology that will open the floodgates of innovation, but it's not apparent yet. Machine learning and AI? If that pans out, Google Brain/DeepMind would be well situated.

Machine learning is disruptive to a degree that the web never was. The web is augmentative; machine learning is pure disruption. Jobs that require lots of people will soon open up to automation, and this is going to change the world in very fundamental ways if it keeps going at the rate it's going right now.

The last three years have seen one humans-only benchmark after another give way and the party is just getting started.


"> we haven't seen anything comparable from Google's Advanced Technology and Projects program

And we won't because all those things have already been done."

We haven't because they're not doing them, as far as I've seen. I hope they prove me wrong, as they're in a good position to be the next PARC with some changes. So far, they've wowed me only once (TrueTime); the other stuff has been variations on prior work, usually with a short-term focus or pragmatic goals rather than vision. The wide-eyed visions I've read about are rarely executed to completion. I don't see them as another PARC.


I 100% disagree. The web is far more significant than machine learning. Machine learning hasn't changed anything yet and I doubt it will ever significantly affect society.

The web has made society unrecognisable only 20 years later.


Unrecognisable means that if you take someone from 1997 and transport them into the present they wouldn't recognise today's society.

In reality societal changes in the past 20 years are superficial, so they would have no trouble recognising it.


I think that is a little unfair; unrecognisable would only really apply after a revolution or a war, and even 1955 Germany would be recognisable to 1935 Germans (as one example).

To say that the changes have been merely superficial understates the case.

Online dating, online gaming, the disruption of existing media companies, Facebook's influence over the election, a president of the United States who tweets from the can, Khan Academy, YouTube, the fact that you can learn almost anything you wish by pulling a device out of your pocket that is attached to an appreciable part of all human knowledge.

Computation on demand, I can pull out a credit card and have access to computing power that would have been unimaginable in 1997.

Social movements that have leveraged the web to achieve greater reach.

Those are things that mostly existed prior to 1997 but not in the sheer scope and reach that they do now.

The social changes in current teens who live an always-connected life (I'm not sure that's a good thing, but I'm 36; I'm too old to judge without it sounding like "back in my day").

I think when it comes to the web, things are just getting started. The utility of the network is so great that, barring the fall of civilisation, I can't see it ever going away; the impact will just keep growing, and we'll keep connecting more and more stuff to it until it becomes a planetary zeitgeist.


The changes you mention are almost invisible in day to day life though, so there's nothing to recognise.

Perhaps those innovations which the tech community considers revolutionary didn't really change society in a recognisable way?


Unrecognisable is a bit dramatic.

I've been around somewhat longer than that, and I don't really think much has changed. People still want the same sorts of things and do the same sorts of things.


> I don't really think much has changed.

I think that is the trap of having been there through the changes: you are acclimatised to them. For what it's worth, I was born in 1980, was also around during the birth of the web, and I do think much has changed.


For some reason, I love reading stories about Bell Labs. Even though I wasn't alive back then, I look at it with nostalgia. Richard Hamming's "You and Your Research" talk is awesome and gives a good perspective on how the environment used to be back then.


There was an interesting post on HN years ago about why Google isn't a modern day Bell Labs. [1]

A common sentiment on the thread was that Microsoft Research was more akin to Bell Labs.

> I've always had the impression that MSR (Microsoft Research) was generally doing much more fundamental research than Google.

[1] https://news.ycombinator.com/item?id=6919184


"> I've always had the impression that MSR (Microsoft Research) was generally doing much more fundamental research than Google."

They definitely are. Just look at all the MS Research labs and projects. I did once, and found them all over the place in terms of both categories of tech and fundamental vs. general vs. narrow work. The Duffy posts on Midori, with its design and implementation decisions, show how unconventional they can be even when just trying to build a robust desktop OS whose UI isn't actually radical. The only bad thing is that most of the best stuff gets turned into patents to stomp on competition. :(


I've met people who worked on Midori, and they describe it as the best project they've ever worked on. I used to work in a building with a bunch of old Midori folks and would ask them about it anytime I heard the word "Midori". I don't think it was MSR though.

Personally, I've always been most impressed by the research into formal verification: Z3, Dafny, TLA+, F*, Lean, and many more.
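For anyone who hasn't played with these: Z3 ships with Python bindings (the z3-solver package on PyPI), and even a toy query shows the "prove it by failing to find a counterexample" style the rest of those tools build on. A minimal sketch, assuming z3-solver is installed (the claim checked here is mine, purely for illustration):

    from z3 import Int, Solver, unsat

    x = Int('x')
    s = Solver()

    # Claim: x*x >= 0 for every integer x.
    # Ask Z3 for a counterexample by asserting the negation of the claim.
    s.add(x * x < 0)

    result = s.check()
    if result == unsat:
        print("No counterexample exists; the claim holds.")
    else:
        print("Counterexample:", s.model())

Dafny, F*, and friends layer language design and proof automation on top of exactly this kind of solver query.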


It was MSR:

http://joeduffyblog.com/2015/11/03/blogging-about-midori/

Yeah, their verification tools are incredible. They applied VCC to Hyper-V. The driver tools mostly wiped out blue screens. There's a newer one, the P language, used in the USB stack. The methods you mentioned, like Dafny, were applied in ExpressOS by another team:

https://github.com/ExpressOS/expressos

Microsoft did it first in VerveOS, although its Nucleus was lifted from a mainframe OS of the 1970s. VerveOS was impressive in that it was one of the first systems verified safe down to the assembly, with much less effort than other projects. They also did CoqASM for verified assembly:

https://www.microsoft.com/en-us/research/publication/safe-to...

https://www.microsoft.com/en-us/research/publication/coq-wor...


> PARC still exists, but Google advanced technology projects is probably the closest. Neither of these is that close to the old Xerox PARC since they tend to focus on near-term commercializable projects.

Yep, researchers at Google just don't seem to publish much at all, unlike MSR and IBM, which have a great culture of publishing and releasing things publicly at high rates (yes, they do also commercialize stuff).


>There was no software religion. Everyone made the languages and OSs and apps, etc that they felt would advance their research.

I feel like there is a lot of software religion in this industry today; however, it is mostly perpetuated by mediocre developers.


Naturally that group does not include you or most of HN.


Me? I try not to. I don't like Windows, but I don't preach about it. C++, Java, C#, etc.? I like 'em. I prefer Python. But I don't preach about it. Python 2 vs. 3? Who freaking cares? Just get stuff done. But I don't preach about it.

I just go about getting my work done.

I think my only true religion is that I want open protocols. Chat, home automation, auto, etc. should all use open and documented protocols.


I can vouch for the "fund people, not projects" aspect of ARPA funding in the early 70's.

I was a lowly undergrad, but I was working with Tom Cheatham (who ran the graduate Harvard Center for Research in Computing Technology) and others, and helping in minor ways with their annual ARPA proposal. ARPA was pretty much sending money our way, and we just had to cast the annual work in terms of what was "hot" at the time (mostly "program understanding" at Harvard) to get the funds.


> The commonsense idea that “computer people should not try to make their own tools (because of the infinite Turing Tarpit that results)”. The ARPA idea was a second order notion: “if you can make your own tools, HW and SW, then you must!”

All the best breakthroughs of my career have come from a build-don't-buy bias.


Got some examples to roughly outline? Am curious for encouraging anecdotes as a chronic (semi-willing) NIH syndrome "sufferer" in all areas I can hypothetically attack with just a laptop and a compiler..


To be like PARC you need the following things:

1. A monopoly that prints money

2. Company desire to investigate cool stuff

3. No pressure to productize the research

4. Smart people hired and given lots of leeway.

I think parts of Google, Microsoft, AT&T, and IBM are or have been like this.


As mentioned by Alan right at the beginning of his answer, the true visionary behind our current technological age was Licklider. He was an ARPA researcher who, it is argued [1], initially dreamt up the connected network we now know as the internet. Interestingly, when an early iteration of the 'net was put to UCLA academics, they refused to get involved!

[1] Sharon Weinberger, The Imagineers of War (2017).


YC HARC (https://harc.ycr.org) has a lot of talented people working on PARC-like problems. Alan Kay has close contact with a lot of the team as well.


Having worked there, I can say HARC is explicitly modeled after PARC (it's even there in the name): give a few visionaries a relatively small amount of funding and let them go. Even if nothing interesting happens, your investment wasn't very large; but if anything interesting does come out of it, then it might be as big as the personal computer and networking (so the theory goes).


Related to this special time in computing that Kay discusses is the book "What the Dormouse Said". It's a good, relatively quick read about the rise of personal computing in the 1960s.

https://en.m.wikipedia.org/wiki/What_the_Dormouse_Said


I wonder why he (presumably) did not think "Dealers of Lightning" was a good book. I haven't read "The Dream Machine" yet, but I thought "Dealers of Lightning" was a great look at the history of Xerox PARC.


Superb book, and recommended reading for those managing engineering teams: you will learn a great deal about the culture at Xerox PARC and what made it so successful (peer review was a huge part of it, in the form of the "Dealer" meetings, named after Beat the Dealer): http://amzn.to/2nLvMFO


Recommended reading for engineers too; I found it very inspiring in a "makes you feel like building things" sense.


It was special because it happened in a different era, and its technological and cultural ideas are no longer legible. Time was, computers were for mathematics; now they are Potemkin villages and vehicles for almost stunning cultural paranoia and new sectarian hatreds.


> vehicles for almost stunning cultural paranoia and new sectarian hatreds

Expound =)


I was working as my department's internal R&D director a couple years ago and I was interested in the first question as well. Note that that position probably sounds way more important than it actually was. Coincidentally, it was at one of the places Alan Kay mentions in an answer to the linked Quora question.

I pretty much focused on 3 different entities: DARPA, Xerox PARC, and Bell Labs. These are the books I read to try to answer that question:

[1] Dealers of Lightning. https://www.amazon.com/Dealers-Lightning-Xerox-PARC-Computer...

[2] The Department of Mad Scientists. https://www.amazon.com/Department-Mad-Scientists-Remaking-Ar...

[3] The Idea Factory. https://www.amazon.com/Idea-Factory-Great-American-Innovatio...

I personally thought that having access to a diverse set of disciplines & skills and a reasonable budget were two of the more important things.


Organizations succeed when people understand and believe in WHY they do what they do, and how.

PARC's mission was to "create the architecture of information" in a way that enabled strategic business growth.

In hindsight, it's obvious how important this was, but back then it was just a belief, and one that turned out to be adopted en masse.

If you want to be like PARC, understand WHY you do what you do, and how, in a way that makes sense strategically for the organization, its members, and those it impacts.


Kay's own Viewpoints Research Institute is worth mentioning in this context. http://www.vpri.org/


Xerox was an immensely profitable business once, being first to market with photocopiers and keeping a large market share for a long time.

In the Palo Alto Research Center (PARC), they had the right team, the right ideas, and research that was absolutely going in the right direction.

For instance, they had former SRI International researchers who participated in Douglas Engelbart's "oN-Line System" (NLS), presented in 1968 in what is now known as "the mother of all demos" (https://www.youtube.com/watch?v=yJDv-zdhzMY, https://en.wikipedia.org/wiki/The_Mother_of_All_Demos).

Their achievements include the creation of the excellent Xerox Alto computer system, featuring a GUI and a mouse as an input device, which inspired the Apple Macintosh and MS Windows (a story dramatized on multiple occasions, notably in the classic "Pirates of Silicon Valley").

Xerox leadership failed to visualize how innovations like the Alto could be converted into profitable products, even though it looks self-evident today. That was a once-in-a-lifetime opportunity that they let go, and as a result other companies heavily profited from PARC's findings and continue to do so today.

In addition, photocopiers are no longer at the center of business activities, and paper usage is decreasing. This makes Xerox a company of the past, like Kodak or Blockbuster (not trying to be offensive, but it is fair to say so).


>What made Xerox PARC special?

All expenses paid, pure but pragmatic research without having to rush to market and add buzzwords and marketing-inspired crap. Oh, and no compartmentalization between teams and narrow-focused projects either.

>Who else today is like them?

Nobody. Google's research labs, for example, are more like a "throw something out there as a marketing gimmick to show we do 'innovation', and see if it sticks" affair.


It's probably not anywhere in America.

Shenzhen is pretty close to being the global center of hardware innovation, and it is getting into software, in a climate where state funding and commercial enterprise are merged in a way California hasn't seen since the rise of modern libertarian economics in the '80s.

The birthplace of the web at CERN is also still in play as a center where lots of things happen.

And that's before we head into the fringes, where the oil exploration industry is leading in VR research almost as an afterthought of having to process and visualize the kind of big data most big-data startups only dream of being able to handle.

Remember that Xerox wasn't an IT company but a printing/photocopier company, so it's just as reasonable to expect the next big leap to come from someone not currently seen as an IT giant as it is to go looking within the Californian IT industry.


The Shenzhen model is really nothing like Xerox PARC. If any place gets close, I'd say it's Microsoft Research.

But even that has changed a lot in the last few years.

Disclaimer: I work there.


I'll say it: Microsoft. They have a lot of research stuff that never makes it into (mass-market) products. It's strange that Apple does not publish its internal research projects; maybe they are more focused on actual direct applicability, which is decidedly un-PARC-like.


I feel like Apple's culture is one of extreme secrecy, even where it's counterproductive or unnecessary. I once worked at an ad agency that did a lot of work with Apple, and it was ridiculous. People working on banner ads for artists to be shown in iTunes had to go and sit in a windowless office behind a card lock. This wasn't highly secret stuff; it was stuff like "You can buy the new Coldplay album on iTunes".


They did great groundwork that Apple monetized, while Xerox was left behind empty-handed.


There are a few; not quite the same, but bits and pieces of places like this exist. Monopoly-driven companies like Google or Microsoft have nice R&D arms, and car companies can get involved in weird things. There are the big research universities, which are these days just as commercial as PARC ever was. There are also the DOE National Labs, which, because of the downturn in the nuke business, get involved in all kinds of cool R&D and are, surprisingly to most people, only semi-governmental. And finally there are pure government R&D centers, mostly in the military.


Ctrl-F for "national labs", and this is the only comment that came up. That makes me sad.

The DOE and NNSA national labs are enormous R&D institutions and have grown far beyond their original purpose in the atomic weapons complex.[1]

I personally work for an office that pumps over $100 million annually into the system, and frankly that's small change. Probably about 40-60% of that money goes into overhead and facilities, which enables research outside of my project.

There are a lot of fair criticisms of the system, but most of that is because real R&D works like Y Combinator: lots of investment, with the hope that eventually something pays off in a huge way. Like the system or not, most people who are unhappy with it are really just unhappy about government-sponsored R&D. Almost by definition, it's not going to be "efficient" in the short term.

If people are interested in big-money, complex-problem science, I encourage you to take a look at the labs. They cover everything from supercomputers to marine science to renewable energy.

[1] https://energy.gov/about-national-labs


A similar organization today is DARPA.

You can submit proposals to them and they may fund you.


It isn't similar, either to PARC or to ARPA-IPTO (before the "D"). It was that "D" that caused PARC to happen, because of the qualitative change in what "Advanced Research Projects" were supposed to mean.


I've always wondered what enabled Xerox PARC to "succeed" and Interval Research to languish. PARC developed more or less organically, whereas IR was an intentional construction. To me, that contrast is a proxy for other regions attempting to re-create Silicon Valley.

https://en.wikipedia.org/wiki/Interval_Research_Corporation


> Fund people not projects — the scientists find the problems not the funders. So, for many reasons, you have to have the best researchers.

> Problem Finding — not just Problem Solving

Lessons not yet learned


I try to run the R&D side of my sports science business like Xerox PARC. I take a ton of inspiration from them.


Different space, but after reading Alan Kay's answer, the only current entity that comes to mind is CERN.


I'm not sure what places are like Xerox PARC now, but I want to create something like it again in the near future. Only for space-related ventures.

My uncle was on the original PARC team, so I'll see if I can get him to answer the Quora question.


I remember thinking a lot about PARC when visiting biohacker labs: the feeling of random ideas implemented organically by people of various skills, without clear goals. Lower costs, new designs, new implementations.


We are (playing the Wayne Gretzky game of invention).


Bob Taylor made it that way.


A blank check from the Feds, in the hope that giving money to smart people would make them develop something we could use to blow up the Russkies. That's what made Xerox PARC.


You mean DARPA. But DARPA was far from being a blank check. It was highly controlled, and based on projects, not talent.


Nope


Yeah people go on and on about how much 'amazing' stuff Google does or Microsoft does as if they just have a knack for picking the right people.

No, they have some good people and a huge amount of money gained from monopolies, which are by definition not legal.


We detached this subthread from https://news.ycombinator.com/item?id=14112384 and marked it off-topic.


> monopolies, which are by definition not legal

By exactly what definition?

Monopolies are not illegal. Abusing a monopoly is illegal in some cases.


If it's not illegal, it's not a monopoly.

Microsoft still has an illegal monopoly with Windows. There's no political will to deal with it, but it absolutely exists and has for years.


> If it's not illegal, it's not a monopoly.

Where are you getting this information from?

"The courts have interpreted this to mean that monopoly is not unlawful per se, but only if acquired through prohibited conduct."

https://en.wikipedia.org/wiki/United_States_antitrust_law#Mo..., which cites a 1945 court case United States v. Aluminum Corp. of America.


Why would American law be relevant?


You've mentioned two companies - Microsoft and Google. Both of these are American companies so they're subject to US competition law in at least their home markets. No other country would have the power to do anything fundamental about their monopoly such as breaking them up. They could only fine them, require them to support competitors, or lesser things like that.

If you want to talk about some other countries, a monopoly is also not illegal in the UK. Again, only abusing the monopoly may be considered illegal.

https://www.gov.uk/cartels-price-fixing/overview

If you were thinking of a different country's law (which would be odd since you mentioned two US companies) can you name one where a monopoly is by definition illegal?


>You've mentioned two companies - Microsoft and Google. Both of these are American companies so they're subject to US competition law in at least their home markets. No other country would have the power to do anything fundamental about their monopoly such as breaking them up.

That's quite untrue. Any sovereign country can impose whatever restrictions they like on any company they like. That company can withdraw from that country, but in doing so they are giving up on all the income they could get from that country.

They might not be able to force them to break up, but they can certainly force them to either break up or fuck off.

>If you want to talk about some other countries, a monopoly is also not illegal in the UK. Again, only abusing the monopoly may be considered illegal.

You don't seem to understand. By definition, if it isn't abusive, it isn't a monopoly.


> You don't seem to understand. By definition, if it isn't abusive, it isn't a monopoly.

That's not what you said originally. You said 'if it's not illegal, it's not a monopoly' I've shown that it at least is not true in the US and even referred you to a precedent which says that you can have a monopoly that is legal - 'monopoly is not unlawful per se'.

So if you weren't referring to the US, which country do you think your claim is true in?


I'm going to end this conversation before you get too confused


We've banned this account for repeatedly violating the site guidelines and ignoring our requests to stop.


Could you please undo that? I don't know this person, but all his arguments are worth discussing and his points are valid, and I don't see any other site violation beyond the monopoly discussion, which was of course only a side discussion of Google/Microsoft. I also don't see any request to stop. His last post on this was 19 hrs ago, you detached it 15 hrs ago, and then banned him 15 hrs ago.

Bob Taylor himself explained how important it is to let discussions happen; at PARC they trained for it. RIP


Even if that were true, it's like arguing a good hockey player shouldn't get penalties or ever be suspended.

Taylor did not foster a 'no-holds-barred' culture. Alan Kay has explained this many times, e.g. https://news.ycombinator.com/item?id=14120241.

There were many violations and several warnings.



