What really made PARC work is that they were funded to develop the future of computing by building machines which were not cost-effective. It was too early in the mid-1970s to develop a commercially viable personal workstation. But it was possible to do it if you didn't have to make it cost-effective.
That's what I was told when I got a tour of PARC in 1975. They had the Alto sort of working (the custom CRT making wasn't going well; the phosphor coating wasn't uniform yet), the first Ethernet up, a disk server, and I think the first Dover laser printer. All that gear cost maybe 10x what the market would pay for it. But that was OK with Xerox HQ. By the time the hardware cost came down, they'd know what to build.
Previous attempts at GUI development had been successful, but tied up entire mainframe computers. Sutherland's Sketchpad (1963) and Engelbart's famous demo (1968) showed what was possible with a million dollars of hardware per user. The cost had to come down by three orders of magnitude, which took a while.
Another big advantage which no one mentions is that Xerox PARC was also an R&D center for Xerox copiers. That's why they were able to make the first decent xerographic laser printer, which was a mod to a Xerox 2400 copier. They had access to facilities and engineers able to make good electronic and electromechanical equipment, and thoroughly familiar with xerographic machines.
Ah, the glory days of Big Science.
This still happens! Silicon Valley gets a lot of long-term-oriented funding from taxpayers via DARPA and other government agencies. Siri (named after SRI International, where Engelbart did his work) and autonomous driving (the DARPA Grand Challenge) are just a couple of recent examples.
In fact PARC still does too. Although owned by a private company, PARC does government-funded research.
I would say people in SV are generally less consciously aware of the major role government funding plays in it to this day. It's probably because of the mythos of private entrepreneurship, and the uncomfortable narrative that all this so-called "free market capitalism" is supported by billions in government funds, with only a fraction of the profits returned to the public coffers via taxes.
The DARPA model is also not big science, because the researchers on the outside do not pick the problems; the PMs at DARPA do. The model works when a visionary becomes a PM and is given a "big" (20-80 mil) bucket of money to disburse to researchers to enact their big ideas. If the PMs are truly visionary, then this works. If they are not, you wind up with a sort of soup of crap.
Additionally, since it's applied, DARPA will do frequent (once a quarter and once a year) check-ins, measurements, and progress reports. If you don't measure up during those, your funding gets cut. So as a researcher, it is a challenge if what you want to do is a little bit off the path of what the PM wants to do.
I have much more limited exposure to IARPA but it seems to be the same way there.
I have more experience with NSF, which is the total opposite: researchers propose their own projects and there are infrequent touchpoints, and no cut points really. However, NSF will give you one to two orders of magnitude less money than DARPA.
The programs (and their vision, and their success) depend quite a bit on the PM, and on their ideas and engagement. To a lesser extent success depends upon the SETAs as well.
Eh. You can get like a $200/hr rate. If you're a small company with low overhead, you can get paid quite competitively with the private sector in base comp. Difficult to match big G stock grants, though, no one is becoming a millionaire off of this work.
Unless of course what you do on the grant can be turned into a billion dollar company, because (generally) contractors walk away from research programs with liberal rights to the IP they create, with the government retaining some rights to use, but very rarely (IME) retaining direct ownership. Of course, it's probably not that likely that what you do can turn into a billion dollar company but hey, one can dream...
Request: can you give an AMA on how one actually proposes/gets funding from darpa or at least an ELI5 on the topic...
What sort of idea should one have where you say to yourself "gee, I should hit up darpa with this!"
The truth is that you should look at the Broad Agency Announcements (BAAs) that DARPA, specifically the Information Innovation Office (I2O), makes. There will be big PDFs that have lots of boilerplate and look really boring, but these are basically documents written by a program manager (PM) describing what their envisioned research program is, how they see it being divided up, and what kind of researchers they see doing work in each area of the program. A research program is split into several (between 3 and 8) technical areas (TAs), and there will be language in the BAA describing how many TAs one performer may propose to.
Overall, a TA can be described at a high level as one of a few roles: a blue team, a red team, and a white team. There will usually be more than one blue team, at most one red team, and precisely one white team. A blue team does the R&D on the program. Each blue team can have its own unique approach to addressing the research challenges laid out by the PM. Usually the blue teams have competing or complementary approaches; it is seen as a waste of money to pay two blue teams to do exactly the same thing. A red team evaluates the work done by the blue teams. Exactly what this means depends on what the blue team builds. The white team "holds the room together" by fitting the PM's vision together with what the blue teams do. The white team usually has a much closer relationship with the PM, and usually the white team is pre-determined by the time the BAA is made public.
Blue teams are usually teams of 2-4 companies / universities. These teams usually form before a BAA comes out due to people "in the know" getting a rumor about an upcoming program, or just after it comes out, based on prior relationships. It's kind of rare to have a blue team that is only one entity, but depending on the entity and the size of the program, it can happen.
The path to a successful proposal is demonstrating that your team has ideas relevant to the PM and that you can successfully execute them. You show this by having a stack of prior work, good ideas, and a well-formed, coherent proposal. Think of the proposal as an audition: if you as a team can't get it together for a month and write a 30-page document describing what you want to do, you probably can't keep it together for 4 years working on something. Your team reads the BAA, does some analysis, figures out how your ideas and past work can apply to the research program proposed by the BAA, and then you write the story up. If you do a good job, and your story is more compelling than other people's, you get the money.
The super-truth is that it's an old boys club with a lot of luck and nepotism.
The way people get "in" is when their peers go to DARPA to be a PM. Then you, the researcher, can think "Well Dr. Smith knows about X and likes X+Y, I do some Y, I have an idea that could use some funding, what would Dr. Smith like to read about." Then you write it up and e-mail it to them.
Luck is a big factor here because PMs get e-mails like this all the time. It's almost essential to already be in the PM's rolodex/friend group to get some of their cycles. They like to say that they want people outside their circle to approach them with new ideas, but that's kind of a white lie; there are lots of people in the world and a finite amount of the PM's time.
So it's no surprise that who you know is important - the brain is mostly about connectivity, not about the individual neuron.
Just a model for thought, obviously not a complete description ("every model is wrong"). I think the value of this model, if you can warm up to it, is that you stop worrying that "it's all about the connections" - because it really is and it's useful that way because that's a major point of how a network works. So do work on your connectivity! And also just like in the brain, a few high-quality connections are worth much more than a thousand low-quality ones.
> 2. Fund people not projects — the scientists find the problems not the funders. So, for many reasons, you have to have the best researchers.
So yes, the total amount may be there but is now diverted.
Reminds me of a (probably the) reason we're doing standardized testing in schools. A decent teacher is able to teach and test kids much better than standardized tests, but the society needs consistency more than quality, and we don't trust that every teacher will try to be good at their job. That (arguably, justified) lack of trust leads us as a society to choose worse but more consistent and people-independent process.
I'm starting to see this trend everywhere, and I'm not sure I like it. Consistency is an important thing (it lets us abstract things away more easily, helping turn systems into black boxes which can be composed better), but we're losing a lot of efficiency that comes from just trusting the other guy.
I'd argue we don't have many PARC and MIT around anymore because the R&D process matured. The funding structures are now established, and they prefer consistency (and safety) over effectiveness. But while throwing money at random people will indeed lead to lots of waste, I'd argue that in some areas, we need to take that risk and start throwing extra money at some smart people, also isolating them from demands of policy and markets. It's probably easier for companies to do that (especially before processes mature), because they're smaller - that's why we had PARC, Skunk Works, experimental projects at Google, and an occasional billionaire trying to save the world.
TL;DR: we're being slowed down by processes, exchanging peak efficiency for consistency of output.
It's more about it being way too hard to fire bad teachers; standardized tests, in theory at least, are an attempt at providing objective proof of bad teaching that won't be subject to union pushback and/or lawsuits.
I think a lot of the desire for standardized testing would dissolve if principals could just make personnel decisions like normal managers. And, of course, it's up to their superiors to hold them accountable for being good at that job.
> ...we're losing a lot of efficiency that comes from just trusting the other guy.
I think people have been trusting public schools a lot. That generally worked out great for people in affluent suburban districts. And it worked out horribly for people in poor, remote, and urban districts.
> I think a lot of the desire for standardized testing would dissolve if principals could just make personnel decisions like normal managers.
That could maybe affect the desire coming from the bottom, but not the one from the top - up there, the school's output is an input to further processes. Funding decisions are made easier thanks to standardized tests. University recruitment is made easier thanks to standardized tests. Etc.
> I think people have been trusting public schools a lot. That generally worked out great for people in affluent suburban districts. And it worked out horribly for people in poor, remote, and urban districts.
That's the thing I'm talking about. I believe it's like this (with Q standing for Quality):
Q(affluent_free) + Q(poor_free) > Q(affluent_standardized) + Q(poor_standardized)
Q(affluent_standardized) < Q(affluent_free)
Q(poor_standardized) > Q(poor_free)
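To make the model concrete with some purely made-up numbers (for illustration only): say Q(affluent_free) = 9, Q(poor_free) = 3, Q(affluent_standardized) = 6, Q(poor_standardized) = 5. Then 9 + 3 = 12 > 11 = 6 + 5, while 6 < 9 and 5 > 3 - standardization lifts the floor for the poor districts but lowers both the ceiling and the total.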
A similar issue was just discussed yesterday in a thread on Jeff Bezos' letter to shareholders:
"A common example is process as proxy. Good process serves you so you can serve customers. But if you’re not watchful, the process can become the thing. This can happen very easily in large organizations. The process becomes the proxy for the result you want. You stop looking at outcomes and just make sure you’re doing the process right. Gulp."
I remember in middle school, I had a couple teachers who would say "ok, this is the standardized stuff I have to say" and then go into a very boring lecture of what we just had been learning except the way we first learned it was interesting and engaging. They were simply dotting their i's to make sure that they did their job in conveying the material the state wanted us to know.
Sure, some teachers could use the freedom to teach better (and create new teaching methods), but statistically, won't most teachers perform better when driven by above-average methods?
And as for research, sure, we need those big breakthroughs, but we also need many people to work on the many small evolutionary steps of each technology. Surely we cannot skip that?
Basically, think of why you build layers of abstraction in code, even though it almost always costs performance.
 - e.g. one teacher focusing on quality math skills, another on quality outdoors skills, etc.
And as for education, I don't think standardization always leads to performance loss. For example, the area of reading has seen a lot of research, and one of the results is "direct instruction", one of the best methods for teaching reading, and a highly standardized one.
But maybe what you're saying is true for standardized tests in their current implementation.
To use your example - more modularized technologies are more flexible and can be developed faster, but they no longer use resources efficiently. A modular CPU design makes design process easier, but a particular modular CPU will not outperform a hypothetical "mudball" CPU designed so that each transistor has close-to-optimal utilization (with "optimal" defined by requirements).
Or compare standard coding practices vs. the code demoscene people write for constrained machines, in which a single variable can have 20 meanings, each code line does 10 things in parallel, and sometimes compiled code is itself its own data source.
The way I see it, building abstractions on top of something is shifting around difficulty distributions in the space of things you could do with that thing.
But on bigger design spaces, when you let hundreds of thousands of people collaborate, create bigger markets faster, and grab more revenue - you enable a much more detailed exploration of the design space.
And often, you discover hidden gold. But also - if you've discovered you've made a terrible mistake in your abstractions - you can often fix that, maybe in the next generation.
I think this is a really bad straw man. There's broad bipartisan support for funding research into good ideas, assuming it's done the right way. The problem, though, is that the billions in government research spending we already allocate are subject to incredible amounts of lobbying and earmarking.
We'll have a NASA allocation, but it has to take place in the state of a particular Senator. Or the research will be funneled through the department of defense and received by defense contractors.
And, besides, for every DARPAnet, we have many "Where does it hurt most to be stung by a bee?" (1).
It's hard to evaluate whether we would still end up in a net positive situation. I don't have hard feelings against people who are both enthusiastic and skeptical about government funding grants.
From the link:
Specific dollar amounts expended to support each study were not available for the projects profiled in this report. Most were conducted as parts of more extensive research funded with government grants or financial support. The costs provided, therefore, represent the total amount of the grant or grants from which the study was supported and not the precise amount spent on the individual studies. This is not intended to imply or suggest other research supported by these grants was wasteful, unnecessary or without merit.
It's an incomplete and distorted picture of how bureaucracy funds research. I brought it up to add counterpoints and context to the implication that government is actually good at this sort of thing if we'd just get over free market capitalism (or something).
Did they invest the same amount in these two?
I was saying that just pointing out that programs like DARPAnet might not make up for all the other nonsense.
And we can flip that around as well, maybe PARC was spun off from some successful grants, but we're not taking into account these sorts of failures when attaching the price tag to those initial grants.
PARC really wasn't an R&D center for Xerox copiers (this stayed in Rochester), and Gary was a great engineer who could do the work of making a laser and its optics and scanner work with a standard copying engine pretty much by himself.
Fantastic read, "Sketchpad: A man-machine graphical communication system" ~ https://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-574.pdf
So in that case, imagine there is a PARC-like lab somewhere out there... what kind of things, still far beyond regular folks' wallets, might they be working on right now?
I want to be surprised today like Jobs was when he visited them first time.
Also Decoupled Neural Interfaces: https://deepmind.com/blog/decoupled-neural-networks-using-sy...
And AlphaGo of course.
There are certain moments in time where such innovation is possible and this was one of them. In this respect they are like The Beatles--the right talent just at the right time.
There isn't anything like Xerox PARC at the moment, and won't be until the conditions are once again ripe for another Cambrian Explosion (as another commenter aptly described it) of innovation.
Once the canvas is no longer blank it becomes a lot harder to be that innovative. All kinds of brakes on the system engage almost automatically: conventions, languages, processors, window managers have all become a lot less open to really new concepts.
The biggest changes of the last couple of years are deep learning coupled with the advent of GPUs that offer, for very little money, computing power that not long ago only universities could afford. Possibly that will engender a totally new computing environment in which our 'old' stuff no longer matters as much and we'll be more free to pursue things that are less anchored in the practical requirement to make money.
I suspect the whole 'deep learning' revolution will be as big as the original invention of the transistor or the web. It won't give us full AI but maybe by the time we've mined that for all it's worth we won't feel the need for it anymore.
Over the next hundred million years or so, the evolutionary innovation was mostly in smaller and smaller tweaks on some evolutionary branches, and the extinction of other branches as they were outcompeted. Never again was there the same wide range of innovation and experimentation--except following expansion into new environments (e.g. when organisms moved onto land) and following mass extinction events.
It is really interesting to see how research centers like PARC develop but you're absolutely correct in that it's easier to be creative with a clean slate.
Maybe we shouldn't be trying to emulate or reproduce PARC unless we have found a new clean slate somewhere. Instead we should look to where innovation develops particularly rapidly in spite of an establishment.
I haven't had my coffee yet, but this part sounds like a contradiction to me? Can you think of any examples (maybe from other areas)?
Unless the innovators are somehow overlooked by the omnipresent bureaucrats and bookkeepers -- e.g. working in a small lab hidden deep in a very large organization disinclined to optimize costs/profit -- I don't see any established organizational model that could sustain innovators like Nikola Tesla or Kurt Gödel.
Maybe a bored billionaire?
I think we will see new canvases in the future, but maybe not in computing. Maybe we will see the rise of new materials, new ways to heal, ways to combine humans with animals, living computers, living tools of war, space travel, and so on.
If you want to be like Xerox PARC you are already limiting yourself when you only think about the computer domain.
That depends where you think the edge of the canvas is. If you're looking at ergonomic commodity computing with a GUI, the days of blankness are long gone.
But there are always completely blank areas elsewhere.
AI may not be one of them. I'd suggest it's more of a marginal area. It has plenty of potential, but it's also struggling against its history and existing beliefs.
What's missing now is the next wave of physical substrate technology. It won't be based on the post-war wave of technology at all.
PARC was part of the post-war wave, which actually started with Church and Turing and became implementable at scale with transistors.
Who is today's Turing? Does he/she exist? Discover that, and you'll see where the next big wave is.
That's not to disparage your comment, only to say that while it might feel like a negligible effect on your life, I would say that it is non-zero.
My completely uninformed layperson's opinion is that we might not yet know what we can really do with deep learning. For example, there were many years between the discovery of electricity and the invention of the microprocessor. The latter certainly required the former, but the former did not logically lead to the latter. I use this example because Andrew Ng called AI the new electricity.
Web page translations (at least in Google and earlier) didn't do "deep learning" for ages.
Plus most of those things, web page translation, image search, speech assistants, are pretty much either niches or gimmicks. Even if they worked perfectly, there would not be much to write home about.
It's not like people think "my life was changed by the Alexa". GUIs, on the other hand, made lots of things possible that weren't, including whole new professions (heck, the web IS a GUI thing).
That varies a lot by language. I keep finding its attempts at Polish painfully bad, and I only know a little Polish myself.
Yes, absolutely. To put it bluntly: how many jobs do you think are affected by being deaf and blind?
Computers are now able to see and hear, and on top of that they have become better at most classification tasks than humans. Those are game changers, this goes much further than advertising.
When I hear claims on how many jobs will be replaced because of deep learning (in contrast to the base rate of the long, steady slope of automation over the last 175 years), it's often about an abstract and simplified notion of the complexities of jobs outside the claimant's area of expertise. In such cases, it's useful to ask the claimant whether their profession is soon to be made obsolete. Will AI replace the pundit anytime soon? The economist? The enterprise programmer? The AI researcher?
Why not. You can go to any major news outlet, and expect a certain bias/comfort in their news, and a definite bias/comfort in their overall editorials or those of individual editorialists. Why, it's almost like they're programmed to reliably write under those biases.
We already have the beginnings of auto-written news articles. There was a system featured on HN a few years ago about weather news articles at, I think, the LA Times.
One reason some people stay logged out of Google is so their search results aren't colored by what Google thinks it knows about them. It's not that large a leap to systems that read all the opinion letters their owners receive, and maybe all the printed letters that other outlets receive, then turn a knob or two and collect maximum (Laffer-curve peak) revenue based on where they sit on the bias curve.
Let's review this in a decade or so and see what happened.
In terms of level 5 autonomous driving, the most common retort I've seen recently is "well, we're going to need special roads with sensors built in, and then it is a tractable problem." There are more than 4 million miles of road in the US; at any given time maybe 2% are under construction. The US Interstate Highway System was built during a boom and the cold war (where the system drew on Eisenhower's WWII experiences and on the Autobahn's military utility). If we're lucky, maybe the interstates will get sensors. But the complexity of re-doing an exit or interchange, which involves dozens of on-the-fly lane changes and cutovers, seems impractical merely to support self-driving cars.
I would love autonomous vehicles for the sake of my children, but I suspect that at best, maybe my grandchildren will benefit.
Nobody involved closely with self driving cars thinks we're anywhere close to a level 5 system. Why do you think we are?
That point in time has definitely come for self-driving cars. We're not 'close', but we are much closer than we were even 5 years ago: there is legislation on the books permitting self-driving cars to participate in regular traffic instead of on private property only, there is special hardware geared towards such systems, and so on.
It's an idea whose time has come; now we have a bunch of engineering problems to solve, and they're far from simple, but I would not at all be surprised if self-driving cars are equal partners in traffic 10 years from now.
Lots of hard work between now and then, but I think we'll see self-driving cars long before we see an operational fusion reactor.
I think it's obvious that machines can, for example, weld better than humans can. But machines can't decide what to weld. For now in that arena, things are mostly still human controlled.
I think your argument gets significantly weaker when you push truck, car, and plane driving onto the conversation stack.
Even the best of self-driving AI automation isn't nearly as good as bad human driving. Yet.
But the absolute best human weld joints are no better than the absolute best robotic weld joints.
I think you're talking about two entirely different domains here when you conflate welders and couriers. One of those needs a human decision to place the components to be welded in front of a robot that basically does one thing. The other is an actual adaptive AI that's trying to make decisions on its own.
Yes, it's obvious that the lowest-wage and lowest-skilled workers are going to be displaced by robots the soonest. That's like arguing that the sun is going to rise. And as a society, we need to account for that.
But arguing that the lowest hanging fruit of automation is going to displace taxi drivers and gardeners in the next 10 years is pretty nuts, even for HN.
I feel like this fits the description in the comment above:
> When I hear claims on how many jobs will be replaced because of deep learning (in contrast to the base rate of the long, steady slope of automation over the last 175 years), it's often about an abstract and simplified notion of the complexities of jobs outside the claimant's area of expertise.
But were you to optimize the Boeing manufacturing process for machines, the way car factories are optimized, you could utilize the strengths of machines while sidestepping their limitations.
(Why this does not happen often is I believe a combination of robots having high initial costs, coupled with products starting as human-assembled prototypes, and companies just optimizing that process incrementally, instead of redesigning it around machines.)
> I think it's obvious that machines can, for example, weld better than humans can.
Ok. That wasn't all that obvious not all that long ago.
> But machines can't decide what to weld. For now in that arena, things are mostly still human controlled.
> I think your argument gets significantly weaker when you push truck, car, and plane driving onto the conversation stack.
What's so incredibly special about truck, car or plane driving/piloting that you feel they are immune to automation?
> Even the best of self-driving AI automation isn't nearly as good as bad human driving. Yet.
Precisely. Let's give it 10 years and look again. I wouldn't bet against it.
> But the absolute best human weld joints are no better than the absolute best robotic weld joints.
It's not about 'best'. It is all about repeat accuracy and consistency. That's what makes the robot better. Maybe the best human welds are better than the best robot welds. But to get a human to consistently outperform the robot or to consistently deliver a specific quality level is not going to happen.
> I think you're talking about two entirely different domains here when you conflate welders and couriers.
We consider welding to be a highly skilled job and yet welding robots are a thing. Courier on the other hand 'merely' requires a driving license, something we hand out to 16 year olds after a perfunctory inspection.
> One of those needs a human decision to place the components to be welded next it in front of a robot that basically does one thing.
Which takes a couple of years training for a human.
> The other is an actual adaptive AI that's trying to make decisions on it's own.
Yes, it's a harder problem for a computer. But it was mostly harder because in the past computers could not see the way we can. But deeply layered convolutional neural networks and their descendants have changed that, dramatically. What was SF 5 years ago is now commonplace. And computers now have access to senses that we don't have, such as radar and lidar.
That takes care of one of the harder parts of the problem.
There is still plenty left, but one of the most formidable obstacles to self driving vehicles is now behind us.
> Yes, it's obvious that the lowest-wage and lowest-skilled workers are going to be displaced by robots the soonest. That's like arguing that the sun is going to rise. And as a society, we need to account for that.
Yes. But I wonder if we're going to be ready when the software is ready. In fact I doubt we will be ready. It will be the industrial revolution all over again, only this time it will play out in a decade instead of in 100 years.
> But arguing that the lowest hanging fruit of automation is going to displace taxi drivers and gardeners in the next 10 years is pretty nuts, even for HN.
I don't think that was called for, that's pretty rude, even for HN.
That hasn't been my experience. But perhaps we should define better?
Computers are faster at doing the classification than a human, but not as accurate. So it depends what variable you're optimizing for when you say "better."
I'd be careful with that.
Like autonomous cars or security cameras or robotics.
We're not yet at a 'general vision is a solved problem' level, and that may take a long time still - if it is ever solved. But many problems can be reduced to the point that they become tractable even absent 'general vision'. Affordable LIDAR is a huge step, so is radar. Those two reduce the problem of vision with two crappy stereoscopic cameras set 5" apart to something much more tractable.
As far as I can see the race is on.
Our GPU kit has got 6 times better in 2 years.
I started from the dataset-bias position myself - billions of family snaps and selfies can't provide the reference for arbitrary images of stuff from arbitrary angles. My reading of the results we got is that the claims about lower-level features extracted by the CNN are correct, and that a network trained on massive public data can be the basis for specialised tools trained on more constrained proprietary datasets.
Your mileage may vary, though...
Garbage in, garbage out. When was it any different?
The question is: can a human do much better on that garbage?
> Public Datasets have style of photo that represent it's own bias.
That I readily agree with.
Scoring is also critical: you want classification systems to degrade gracefully. Not getting the correct dog breed is insignificant vs. calling a dog a sofa. Further, people base real-world classification on stereoscopic video footage. Training robots to classify photos is a handicap for building robots to operate in the real world.
Different tasks have different error tolerances. When an MI tool can get within the acceptable margin of error for certain tasks, then there is a possibility for the MI to take jobs.
As long as accuracy is critical to a classifier, in many cases it's better to pay for human eyeballs to look at the criteria.
We're talking about a huge spectrum of applications here. Most of us probably don't care that much about the difference between human and MI error rates about classifying images used in broad-scale advertising on the internet.
We do (or perhaps should) care very deeply about human vs. MI error rate in terms of granting or not-granting things like parole.
Add non-standard background noise, and it also fails on hearing tasks.
That was exactly true of Xerox PARC in the early '70s.
It's going to start to challenge questions around what the definitions of full AI are.
"Hints" is the favorite paper of a lot of my heroes, like John Regehr.
From SRC, he went to Microsoft and he also taught at MIT.
Edit: I should point out that while it is awesome to work in such a storied place, there is some stress associated with it. Read about the weekly "Dealer" meetings at PARC.
From The Myths of Creativity By David Burkus
>> In the 1970s at Xerox PARC, regularly scheduled arguments were routine. The company that gave birth to the personal computer staged formal discussions designed to train their people on how to fight properly over ideas and not egos. PARC held weekly meetings they called "Dealer" (from a popular book of the time titled Beat the Dealer). Before each meeting, one person, known as "the dealer," was selected as the speaker. The speaker would present his idea and then try to defend it against a room of engineers and scientists determined to prove him wrong. Such debates helped improve products under development and sometimes resulted in wholly new ideas for future pursuit. The facilitators of the Dealer meetings were careful to make sure that only intellectual criticism of the merit of an idea received attention and consideration. Those in the audience or at the podium were never allowed to personally criticize their colleagues or bring their colleagues' character or personality into play. Bob Taylor, a former manager at PARC, said of their meetings, "If someone tried to push their personality rather than their argument, they'd find that it wouldn't work." Inside these debates, Taylor taught his people the difference between what he called Class 1 disagreements, in which neither party understood the other party's true position, and Class 2 disagreements, in which each side could articulate the other's stance. Class 1 disagreements were always discouraged, but Class 2 disagreements were allowed, as they often resulted in a higher quality of ideas. Taylor's model removed the personal friction from debates and taught individuals to use conflict as a means to find common, often higher, ground.
The main purposes of Dealer -- as invented and implemented by Bob Taylor -- were to deal with how to make things work and make progress without having a formal manager structure. The presentations and argumentation were a small part of a Dealer session (they did quite bother visiting Xeroids). It was quite rare for anything like a personal attack to happen, because people for the most part came into PARC having been blessed by everyone there -- another Taylor rule -- and already knowing how "to argue reasonably".
Being an early-20-something giving a talk at PARC is inherently stressful, but having spoken at multiple conferences, they were a very respectful crowd.
PS: Yes, PARC still exists, though it's "PARC - A Xerox Company" now, as they repeatedly remind the interns: https://en.wikipedia.org/wiki/PARC_(company)#PARC_today
I can't compare to PARC, but there are a lot of cool things happening at these places.
Also consider the different industries. Xerox may need on-the-ground support staff to go onsite in numerous countries, in a way Apple does not need to.
I imagine a lot of their employees are in the service division. When you buy a high end printer it also comes with a service contract where someone will proactively come and do things like clean the printer and make sure toner is stocked.
I don't know, but for sure, they're not in computing. PARC was special, among many other reasons, because it existed (and had the vision and the funding) in the early, gold-rush era of computing. Google Research isn't it, not because they're not generously funding important and worthy research, but because computing as a field is too far along for them to have a chance of introducing truly fundamental research in the area. Self-driving cars come close, no doubt hugely important, but even they have a quite incremental feel to them, especially next to inventing something like the graphical user interface.
Reading it has felt like I'm paying my respects to the pioneers of our field, because I am humbled by what they achieved.
I wonder how many dusty copies of papers sitting in university libraries contain ideas that, brushed up and given a new coat of paint, would be considered revolutionary.
The other side of it is I still use stuff those guys wrote every day; while typing this I switched to a bash terminal to run an install command using a program that is pretty much identical to (if a superset of) a program written in 1977, 3 years before I was born.
The idea began to dominate my thinking and for the next two evenings I went out after dinner and walked the streets in the dark thinking about it
-Jay Forrester, on the idea of magnetic memory
Trying to avoid every conceivable error is a recipe for paralysis, having the freedom to make mistakes is necessary for achieving any progress at all
-Waldrop (author) on Herbert Simon & the idea of satisficing in the context of behavioral economics (he was awarded the Nobel in 1978)
. . . a way of life in an integrated domain where hunches, cut-and-try, and the human 'feel for a situation' usefully coexist with powerful concepts, streamlined technology and notation, sophisticated methods, and high powered electronic aids
You couldn't develop and improve the system without seeing how a large and diverse community used it in practice.
-Robert Fano, on the development of time sharing
The spirit at that meeting was as good as it gets - the most civilized, ecumenical, incredibly supportive atmosphere you could want. People would cheer you on even if they didn't agree with you, just because they loved the fact that you were good
-Alan Kay on the early ARPA community
We felt strongly that only a very small group could do the project on that timescale . . . I don't remember such a thing as a weekly progress meeting. We were more in tune with progress than that.
-Dave Walden on the development of the IMP, the first packet switching node
To Lick, the obvious solution to the software crisis was to apply better and more interactive computing . . . when the systems are truly complex, programming has to be a process of exploration and discovery
[H]ire the smartest people you can find and give them their head - but let them know who's paying the bills . . . it wasn't enough to hire a bunch of super smart individuals, you had to build a community, a culture, and an environment of innovation. You had to give your people the kind of challenge that would light a fire in their eyes, that would generate an atmosphere of non-stop intellectual excitement, that would let them feel in their gut that this is where the action is. You had to provide them with lavish resources - everything they needed to do the job. And through it all, you had to keep your bottom line guys at bay so your guys could have the freedom to make mistakes
-Jack Goldman, founder of PARC
I'd also point out that Bob Taylor apparently had a hard limit of 50 researchers because he felt that was all he could manage. This meant that he, by this criterion, had to get the absolute best, smartest scientists / researchers he could find. If you look at the sheer number of absolutely brilliant people assembled at PARC during the same time period, it is astonishing: Alan Kay (Smalltalk), Butler Lampson (Alto, *), Bob Metcalfe (Ethernet), the founders of Adobe (PostScript), Charles Simonyi (Bravo, which later became Microsoft Word), etc. Close to 100% of modern computing directly came from PARC.
> "Dealers of Lightning" is not the best book to read (try Mitchell Waldrop's "The Dream Machine").
That said, I have read Dealers of Lightning myself, and really liked it. I have not yet got around to The Dream Machine.
2) A business model that has no need of those ideas
3) Management who have absolutely no idea how to capitalise on those ideas outside of a core business?
That's Google. Google are Xerox.
Google could become PARC if they chose. They even have some groups, like the Spanner team, doing unconventional stuff. I think it would take changes in management's priorities and evaluation of employees to get it there. It's not there yet, but there's potential.
I recently read The Idea Factory, about Bell Labs, and it has great insights, to be sure, but enough information about the causality to recreate Bell Labs? I don't know.
Maybe it really does come down to one thing, like funding, as the top comment (currently) on this thread suggests. But I doubt it.
When I was younger, my siblings and I played this game that we sort of made up as we went (too detailed to explain), and it was awesome. Years later, in a bout of nostalgia, we tried to recreate it and it was just awful. Enough small details had changed that it didn't work. One of the important details that changed was a total lack of spontaneity. All of us knew what the outcome should be like, and it made us behave differently. I don't think big orgs are at all immune from this effect of expectations.
Don't get me wrong, I'm obsessed with the famous labs like anyone, a big fan of Alan Kay, etc. I just think somebody needs to call attention to a giant hurdle in learning from them.
I recently bought a book on the Philips Natlab, same idea as Xerox PARC. They invented the optical drive (CDs), the wafer steppers that bootstrapped ASML (the company behind the machines that create chips), and some other inventions that I can't find right now.
I have more details in the Natlab book which I have at home, if you're (or anyone else is..) interested.
> PARC still exists, but Google advanced technology projects is probably the closest. Neither of these is that close to the old Xerox PARC since they tend to focus on near-term commercializable projects.
While I can see how Google is an engine of scientific progress, Silicon Valley is bigger than Google. Stanford deserves a lot of credit.
Without diving into "what makes Silicon Valley Silicon Valley," I think I should point out that Stanford has consistently produced disruptions since at least Xerox PARC's founding.
(Obligatory: I have only ever visited Stanford, long after graduating from a different university.)
PARC and Bell Labs were at the right place and right time to make fundamental contributions in the nascent areas of digital computers, software, and digital communications. They caught that wave perfectly. Now we're searching a similar revolutionary technology that will open the floodgates of innovation, but it's not apparent yet. Machine learning and AI? If that pans out, Google Brain/DeepMind would be well situated.
And we won't because all those things have already been done.
The laser didn't come from Bell but from Hughes btw.
> PARC and Bell Labs were at the right place and right time to make fundamental contributions in the nascent areas of digital computers, software, and digital communications. They caught that wave perfectly.
> Now we're searching a similar revolutionary technology that will open the floodgates of innovation, but it's not apparent yet. Machine learning and AI? If that pans out, Google Brain/DeepMind would be well situated.
Machine learning is disruptive to a degree that the web never was. The web is augmentative; machine learning is pure disruption. Jobs that require lots of people will soon open up to automation, and this is going to change the world in very fundamental ways if it keeps going at the rate it is right now.
The last three years have seen one humans-only benchmark after another give way and the party is just getting started.
> And we won't because all those things have already been done.
We haven't because they're not doing them, that I've seen. I hope they prove me wrong, as they're in a good position to be the next PARC with some changes. So far, they've wowed me only once (TrueTime), with other stuff being variations on prior work, usually with a short-term focus or pragmatic goals rather than vision. Their wide-eyed visions I've read about are rarely executed to completion. I don't see them as another PARC.
The web has made society unrecognisable only 20 years later.
In reality societal changes in the past 20 years are superficial, so they would have no trouble recognising it.
To say that the changes have been merely superficial understates the case.
Online dating, online gaming, the disruption of existing media companies, Facebook's influence over the election, a president of the United States who tweets from the can, Khan Academy, YouTube, the fact you can learn almost anything you wish to learn by pulling a device out of your pocket that is attached to an appreciable part of all human knowledge.
Computation on demand, I can pull out a credit card and have access to computing power that would have been unimaginable in 1997.
Social movements that have leveraged the web to achieve greater reach.
Those are things that mostly existed prior to 1997 but not in the sheer scope and reach that they do now.
The social changes in current teens who live an always connected life (I'm not sure that's a good thing but I'm 36, I'm too old to judge without it sounding like 'back in my day').
I think when it comes to the web things are just getting started, the utility of the network is so great that barring the fall of civilisation I can't see it ever going away and the impact will just keep growing and we'll just keep connecting more and more stuff to it until it becomes a planetary zeitgeist.
Perhaps those innovations which the tech community considers revolutionary didn't really change society in a recognisable way?
I've been around somewhat longer than that, I don't really think much has changed. People still want the same sorts of things and do the same sorts of thing.
I think that is the trap of having been there through the changes: you are acclimatised to them. For what it's worth, I was born in 1980 and was also around during the birth of the web, and I do think much has changed.
A common sentiment on the thread was that Microsoft Research was more akin to Bell Labs.
> I've always had the impression that MSR (Microsoft Research) was generally doing much more fundamental research than Google.
They definitely are. Just look at all the MS Research labs and projects. I did once, and found them all over the place, in terms of both categories of tech and fundamental vs. general vs. narrow work. The Duffy posts on Midori, with its design/implementation decisions, show how unconventional they can be even when just trying to build a robust desktop OS whose UI isn't actually radical. The only bad thing is that most of the best stuff gets turned into patents to stomp on competition. :(
Personally, I've always been most impressed by the research into formal verification: Z3, Dafny, TLA+, F*, Lean, and many more.
Yeah, their verification tools are incredible. They applied VCC to Hyper-V. The driver tools mostly wiped out blue screens. They've got a new one, the P language, used in the USB stack. The methods you mentioned, like Dafny, got applied in ExpressOS by another team:
Microsoft did it first in VerveOS, although Nucleus was ripped off from a mainframe OS from the 1970's. VerveOS was impressive in that it was one of the first verified safe down to the assembly, with much less effort than other projects. They also did CoqASM for verified assembly.
Yep, researchers at Google just don't seem to publish much at all. Unlike MSR and IBM which have a great culture of publishing and releasing things publicly at high rates (yes, they do also commercialize stuff).
I feel like there is a lot of software religion in this industry today, however it is mostly perpetuated by mediocre developers.
I just go about getting my work done.
I think my only true religion is that I want open protocols. Chat, home automation, auto, etc. should all use open and documented protocols.
I was a lowly undergrad but working with Tom Cheatham (who ran the grad Harvard Center for Research in Computing Technology) and others, and helping in minor ways with their annual ARPA proposal. ARPA pretty much was sending money our way, and we just had to cast the annual work in terms of what was "hot" at the time (mostly "program understanding" at Harvard) to get the funds.
All the best breakthroughs of my career have come from a build don't buy bias.
1. A monopoly that prints money
2. Company desire to investigate cool stuff
3. No pressure to productize the research
4. Smart people hired and given lots of leeway.
I think parts of Google, Microsoft, AT&T, and IBM are/have been like this.
 Sharon Weinberger, The Imagineers of War (2017).
I pretty much focused on 3 different entities: DARPA, Xerox PARC, and Bell Labs. These are the books I read to try to answer that question:
 Dealers of Lightning. https://www.amazon.com/Dealers-Lightning-Xerox-PARC-Computer...
 The Department of Mad Scientists. https://www.amazon.com/Department-Mad-Scientists-Remaking-Ar...
 The Idea Factory. https://www.amazon.com/Idea-Factory-Great-American-Innovatio...
I personally thought that having access to a diverse set of disciplines & skills and a reasonable budget were two of the more important things.
PARC's mission was to "create the architecture of information" in a way that enabled strategic business growth.
In hindsight, it's obvious how important this was, but back then, it was just a belief, and one that turned out to be adopted en masse.
If you want to be like PARC, understand WHY you do what you do, and how, in a way that makes sense strategically for the organization, its members, and those it impacts.
In the Palo Alto Research Center (PARC), they had the right team, the right ideas and research was absolutely going in the right direction.
For instance, they had former SRI International researchers who participated in Douglas Engelbart's "oN-Line System", presented in 1968 in what is now known as "the mother of all demos" (https://www.youtube.com/watch?v=yJDv-zdhzMY, https://en.wikipedia.org/wiki/The_Mother_of_All_Demos).
Their achievements include the creation of the excellent Xerox Alto computer system, featuring a GUI and a mouse as an input device, which inspired the Apple Macintosh and MS Windows (a story dramatized on multiple occasions, notably in the classic "Pirates of Silicon Valley").
Xerox leadership failed to visualize how innovations like the Alto could be converted into profitable products... even if it looks self-evident today. That's a once in a lifetime opportunity that they let go and as a result other companies heavily profited from PARC's findings and continue to do so today.
In addition, photocopiers are no longer at the center of business activities, and paper usage is decreasing. This makes Xerox a company of the past, like Kodak or Blockbuster (not trying to be offensive, but it is fair to say so).
All expenses paid, pure but pragmatic research without having to rush to market and add buzzwords and marketing-inspired crap. Oh, and no compartmentalization between teams and narrow-focused projects either.
>Who else today is like them?
Nobody. Google research labs for example is more like a "throw something out there as a marketing gimmick to show we do 'innovation', and see if it sticks" affair.
Shenzhen is pretty close to being the global center of hardware innovation, and it's getting into software, in a climate where state funding and commercial enterprise are merged in a way California hasn't seen since the rise of modern libertarian economics in the '80s.
The birthplace of the web at CERN is also still in play as a center where lots of things happens.
And that's before we head into the fringes, where the oil exploration industry is leading in VR research almost as an afterthought of having to process and visualize the kind of big data most big-data startups only dream about being able to handle.
Remember that Xerox wasn't an IT company but a printing/photocopier company, so it's just as reasonable to expect that the next big leap will come from someone that is not currently seen as an IT giant as to go looking within the Californian IT industry.
But even that has changed a lot in the last few years.
Disclaimer: I work there.
The DOE and NNSA national labs are enormous R&D institutions and have grown far beyond their original purpose in the atomic weapons complex.
I personally work for an office that pumps over 100 million annually into the system, and frankly that's small change. Probably about 40%-60% of that money goes into overhead and facilities which enables research outside of my project.
There are a lot of fair criticisms of the system, but most of that is because real R&D works like ycombinator. Lots of investment, with the hope that eventually something pays off in a huge way. Like the system or not, most people who are unhappy with it are really just unhappy about government sponsored R&D. Almost by definition, it's not going to be "efficient" in the short term.
If people are interested in big-money, complex-problem science, I encourage you to take a look at the labs. They cover everything from supercomputers, to marine science, to renewable energy.
You can submit proposals to them and they may fund you.
> Problem Finding — not just Problem Solving
Lessons not yet learned
My uncle was on the original PARC team so I'll see if I can get an answer from him to answer the quora question.
No, they have some good people and a huge amount of money gained from monopolies, which are by definition not legal.
By exactly what definition?
Monopolies are not illegal. Abusing a monopoly is illegal in some cases.
Microsoft still has an illegal monopoly with Windows. There's no political will to deal with it, but it absolutely exists and has for years.
Where are you getting this information from?
"The courts have interpreted this to mean that monopoly is not unlawful per se, but only if acquired through prohibited conduct."
https://en.wikipedia.org/wiki/United_States_antitrust_law#Mo..., which cites a 1945 court case United States v. Aluminum Corp. of America.
If you want to talk about some other countries, a monopoly is also not illegal in the UK. Again, only abusing the monopoly may be considered illegal.
If you were thinking of a different country's law (which would be odd since you mentioned two US companies) can you name one where a monopoly is by definition illegal?
That's quite untrue. Any sovereign country can impose whatever restrictions they like on any company they like. That company can withdraw from that country, but in doing so they are giving up on all the income they could get from that country.
They might not be able to force them to break up, but they can certainly force them to either break up or fuck off.
>If you want to talk about some other countries, a monopoly is also not illegal in the UK. Again, only abusing the monopoly may be considered illegal.
You don't seem to understand. By definition, if it isn't abusive, it isn't a monopoly.
That's not what you said originally. You said 'if it's not illegal, it's not a monopoly.' I've shown that it at least is not true in the US, and even referred you to a precedent which says that you can have a monopoly that is legal - 'monopoly is not unlawful per se'.
So if you weren't referring to the US, which country do you think your claim is true in?
Bob Taylor himself explained how important it was to let discussions happen at PARC. They trained for it. RIP
Taylor did not foster a 'no-holds-barred' culture. Alan Kay has explained this many times, e.g. https://news.ycombinator.com/item?id=14120241.
There were many violations and several warnings.