Why Did Symbolics Fail? (2009) (danweinreb.org)
78 points by pjmlp on Feb 26, 2014 | 61 comments



Good article I hadn't run across before; thanks for linking it.

One interesting side story is the odd Symbolics foray into animation and video through the Symbolics Graphics Division, which afaict was reasonably successful for a few years. It seemed like the sort of application-oriented business that might let the Symbolics machines break out of single-purpose "AI machines" into being seen as general workstations, or at least dual-purpose AI/graphics workstations. But that business seems to have sort of evaporated along with the rest of their business, despite not being AI and there being no "graphics winter". Perhaps steamrollered by Silicon Graphics, and the same trends towards Unix and commodity CPUs? Or not marketed well enough, by a company whose public materials focused too exclusively on Lisp+AI?

Some bits from that era: the S-Package 3d modeling/animation system (http://www.youtube.com/watch?v=gV5obrYaogU), the PaintAmation package (http://www.youtube.com/watch?v=Cwer_xKrmI4), and the first HDTV processing on a workstation (http://www.youtube.com/watch?v=KppVP8PiZag).


Sorry for the small plug: I gathered some videos and links about the S-Graphics offspring, mostly Mirai, done at Nichimen, at http://www.reddit.com/r/nichimen

Some of these videos are already in, but if anyone has other articles or papers about anything S-Graphics/Mirai, like news about the izware guys, feel free to post.

ps: in case the subreddit is locked, I'll take links here.


I knew several of the Symbolics founding group, especially the engineers, but several of the management team as well. The tl;dr in the article comes here:

Meanwhile, back at Symbolics, there were huge internal management conflicts, leading to the resignation of much of top management, who were replaced by the board of directors with new CEO’s who did not do a good job, and did not have the vision to see what was happening.

Symbolics was supposed to be the consensus Next Big Thing. A rival to Sun. A gravy train.

There were titanic egos and divas among the engineers, too. A physical expression of which here: http://en.wikipedia.org/wiki/Space-cadet_keyboard But several of them are also remembered for big contributions to the industry.


Regarding the graphics: what I think happened is that the whole Lisp thing just did not take off. When Symbolics began there was a lot of excitement, from them, MIT, LMI, Xerox, even the VAX early on. But it never really caught on, and some of that was because the fighting over the code between Symbolics, MIT, and LMI held it back. So by the time Symbolics had good stuff, it was pretty much just them left, because they had won the fights and the Unix vendors were coming along.

So for their price you could get a few Sun boxes, say, and have several users on each of them. Genera was terrible about that: only one user could be logged in at a time, and that user could change anything in the OS or the programs however he felt like, and then the next person had no idea why everything was different.

Basically the graphics stuff was for effects and animation, and that did well for a while, but it was so alien to what everyone else was doing at that point that no other vendor could easily make software portable between Genera and other hardware/OSes. So they did not, and the S- software stayed a really expensive route; in the end it made the most sense to run it in virtual machines under OSF/1, of all things!

The other angle they could have exploited was CAD, but again Genera was the thorn. There was no way a Pro/E or anything of that sort would make it there. The C on Symbolics was some strange half-dynamic thing with a GC implemented in Lisp; again, just so alien. Essentially Dassault made a CAD system (ICAD?) for it and that was it. It was sort of ahead of its time (there are now add-ons to all the CAD packages that take a similar approach, but back then all the other software was doing little more than drafting on a computer) and it was viewed as odd. It was also very expensive, with all the single-user drawbacks, so AutoCAD on a PC or Pro/E on a Sun were so much more appealing. And because the whole Lisp thing was abandoned, even Dassault could never really port it to modern hardware; it was a big, expensive golden goose, so they tried, but eventually killed it and created/bought other products.


I worked in an expert systems company in the 80s (one mentioned in the Phillips thesis referenced in the danweinreb article). Take all this with a dose of IIRC.

As part of our work, we evaluated and benchmarked Xerox Interlisp machines, Symbolics systems, VAXen, later Gold Hill etc. to find a cost-effective delivery platform. We even eventually funded the development of a delivery-focused subset of Common Lisp.

One aspect that Symbolics didn't seem to understand back then was cost of entry and deployment: the Xerox D-machines were (IIRC) around 1/3 the cost of the Symbolics. Perhaps not as speedy, but adequate for our day-to-day development work as well as for the end customer's needs.

Symbolics had great development systems, but the delivery answers were late in coming; too late to help us.

There's lots more to be said about the late 80s collapse of AI (ES) applications and expectations, but the margins here are too small to contain it....


There's lots more to be said about the late 80s collapse of AI (ES) applications and expectations

On that subject, Richard Gabriel [1] writes about his experiences as a founder of Lucid, which produced Common Lisp for regular Unix workstations, in the "Into the Ground: Lisp" chapter of his book, "Patterns of Software", which is available as a free PDF from his web site [2].

(Lucid's pivot to developing a C++ environment is covered in the "Into the Ground: C++" chapter).

There's some interesting history there (and the rest of the book is probably worth reading as well for a variety of reasons).

[1] https://en.wikipedia.org/wiki/Richard_P._Gabriel

[2] http://dreamsongs.net/Files/PatternsOfSoftware.pdf


Lucid CL was a very nice implementation. Maintenance is still available for it. It's now called Liquid CL and maintenance comes from the LispWorks guys.

Lucid took the money they earned and invested it into some ill-fated and ill-designed C++ environment. Lisp competitors from that time, Franz Inc. and LispWorks are still in business.


Your memory of the costs of the systems is fine.

I couldn't afford a Lisp Machine, I just used Franz Lisp on an Atari ST.


You did benchmarking so you may be able to confirm/deny this: I've heard that Lisp machines went out of favor because Lisp just ran faster on a VAX. Was this the case?


Besides horrible management (echoing Zigurd, a friend and contemporary: I was at one point reliably told that manufacturing, R&D and marketing were paying no attention to each other, to the point where manufacturing had built a factory that couldn't make the latest hardware R&D had developed, which in turn was designed completely independently of what marketing thought was needed), they were killed by non-recurring engineering (NRE) costs.

Basically they couldn't amortize the NRE for their custom hardware, and later most especially their chips (from memory: first a chipset that spread the CPU across several chips, rather like the one Western Digital did that among other things enabled the LSI-11, then of course an all-in-one chip), across anything like the huge number of units that Intel and Motorola sold. They also canceled their RISC rethink of their basic low-level architecture on the day it was supposed to tape out; I don't know if it had a low enough gate count to be done like the first SPARC processor, which was implemented on two 20,000-gate gate arrays, one for the CPU and one for the floating point unit (gate arrays are all alike until a few layers of metal are put on top).

So soon enough you could run a full Common Lisp, almost certainly without as much run-time error checking, faster on cheap commodity hardware than on a Lisp Machine.

Something like that seems to have happened to Azul Systems, which apparently isn't developing any new hardware, but is selling their version of the HotSpot JVM to run on x86_64 hardware. A prior generation of their pauseless GC (vs. roughly 1 second of pause per GiB, a big deal if your heap is hundreds of GiB) required a software write or read barrier that cost ~20% of performance (all this from memory). It's likely that soon enough, even if it ran slower than on their custom hardware, it was a lot cheaper to run it on commodity Intel/AMD hardware.


I can't find a reference for this, but I seem to recall that Azul uses virtualization features of modern CPUs to decrease the read-barrier overhead; if that is correct, then that's a case of the general-purpose hardware fortuitously getting features to out-compete special-purpose hardware.


No, the older version of their GC used bulk VM operations, but not virtualization features, and there was still a penalty reported ... errr, I can't find it now. Probably in a Clifford Click blog posting. I just skimmed the new edition of Jones' GC book (http://www.amazon.com/gp/product/1420082795/), published before it could consider the newer one; it talks about the changes needed on stock hardware but I didn't see any estimation of costs while glancing through it. (I'm not searching any more right now because it's obsolete.)

The base papers are:

Pauseless GC, uses a read barrier instruction in their custom 64 bit RISC chips: https://www.usenix.org/legacy/events/vee05/full_papers/p46-c...

And the newer one, the Continuously Concurrent Compacting Collector (C4), which they're using both on that old hardware and in the software-only Zing JVM on commodity hardware. I have not studied it (the paper was published two weeks after the Joplin tornado trashed my apartment and rather disrupted my life): http://www.azulsystems.com/sites/default/files/images/c4_pap...

It's possible they figured out how to minimize or eliminate the penalties of the original software read barrier they applied to the Pauseless system in C4 (or perhaps in relation to their custom vs. newer commodity hardware); I just did a quick skim of the relevant part of the C4 paper and a few keywords and couldn't tell.

This is all great stuff that I hope to get back to soon....


I don't have any of the old papers / results. But my recollection is that the Lisp workstations greatly outperformed LISP on a contemporary VAX for a given cost, in part because of the workstation's microcoded instruction set tailored to Lisp. Over time, though, commodity hardware increased significantly in relative performance simply due to the economies of scale in producing it.

You might've been able to buy a Xerox workstation for, say, $15K-$20K, while a VAX box was over $100K. BUT for a production system, the VAX could run multiple Lisp processes at the same time. (I'm guessing at costs here, it's been too long.)

Also, Richard Gabriel did a bunch of Common Lisp benchmarks, might help to look for them. (Great fellow.)

The real threat to Symbolics et al, circa 1987, was things like Gold Hill Common Lisp running on, say, an IBM PC/AT with a 286 chip and maybe a meg of memory. At, perhaps, $3,000. It ran pretty fast, and the Gold Hill tech folks were very good. But GHCL had an unsophisticated development environment compared to Symbolics / Xerox.

As excellent as the workstation environments were for development, a market demands that you eventually deliver a cost effective product into customers' hands. (Or is that too much old-think?)


The dates on Dan Weinreb's blog are all wrong for some reason. This post likely dates back to 2011 or before. (EDIT: Ah, corrected by the moderators, thanks.)

I had the pleasure of meeting Dan at ECLM 2008, four years before his death. He was hacking away on his XO-1 and showing around fancy things. A very memorable man. He's missed.


His blog seems to have JUST been rehosted by somebody. I got pingbacks on DBMS2 from it, with the new post dates, yesterday. It also looks quite different than it did before.

I miss Dan. One of the nicest guys in the industry.


I met Dan only once, despite working in some similar areas and living in the same town. I wish I'd had the chance to spend more time with him. He was very bright, very energetic, and very friendly. Mutual friends have said he was like that most of the time. He just wanted to understand stuff, and thought I could help (I'm the author of a blog post he says inspired this one), so he simply reached out and made a new friend. What a great hacker.


Thanks, I searched for him but wasn't sure if it was the same Dan Weinreb, given the dates on the web site.


Corrected by moderators? How corrected? The dates are still wrong


The original title didn't mention the date at all.


Exactly why Java succeeded: the world is dominated by mediocrity, while idiots mostly occupy the management positions. This is why all we have nowadays is Windows PCs with Java. It is what we deserve; it "suits us well".

Designers were too smart (consider David Moon) while management was "as usual".


No - the world is dominated by stuff that works and solves real problems. It doesn't care for how much smarter than all those mediocre idiots you think you are. It offers lots of opportunities for very smart people who mistake their mental masturbation for unstoppable genius to be very, very stupid.


Symbolics stuff didn't just work, it worked remarkably well on very modest hardware and solved real-world problems.

For an example of quality unimaginable by "modern standards", go and take a look at the Common Lisp documentation produced by Symbolics (the art of writing, a "motorcycle maintenance" kind of clarity and conscience), which is buried somewhere inside bitsavers, and take a look at the sources of Open Genera 2.0, the whole OS, which was open to dynamic modification and extension by the programmer and can be found in a torrent search. Then go to the rosettacode website and look at every single page to see what fucking crap the language you are using to get shit done in exchange for a paycheck really is. It is worse in every possible way: verbosity, clumsiness, clutter, cryptic syntax, you name it.

This is actually not an interesting observation; anyone with a bit of brains can visit rosettacode. The interesting question is why, then, the world is dominated by Java.

Pages and pages have been written about what's wrong with Java and why it is so popular. A one-sentence summary could go like this: "a sandbox for crowds of packers to get shit done, aggressively marketed to them via 'you don't have to understand' memes by corporations to make big money". It spread like a virus due to bandwagon and peer effects and catchy memes like "write once" or "millions of man-hours".

OK, use it, enjoy yourself, but please do not touch the rare and really wonderful things which you are unable to comprehend.


This comment is to programming language analysis what Pitchfork reviews are to music analysis: lots of tribalism, lots of emotion, lots of apparently non-self-aware hyperbole, and entertaining as long as what the reader is looking for is entertainment and not actual analysis of anything. If your goal as a programmer is simply to have fun, then carry on; but if your goal is to have a handle on whatever scraps of objective truth are out there (or even just to solve concrete problems), then you may want to devote at least some tiny part of your time to trying to see things from other people's perspectives. Those other people may not necessarily be correct about anything, but when it comes to engineering, even the faintest wisp of objective truth is worth a lot more than strongly expressed emotion, no matter how poetic it may seem to you.


Let's say that Lisp is very different because it was based on "good ideas", well researched in "good places" like the MIT AI and CS labs (no punks could get in there). It has something to do with standing on the shoulders of Titans. The set of selected "good ideas" is intentionally kept small and well-balanced. Let's take a walk.

The "layers of DSLs" metaphor mimics as close as possible the way our mind uses a language, which we call "verbal", or "logical" mind, as opposed to "non-verbal, ancient, emotional" mind, which is based on evolved reflexes, captured by genes (Sorry!).

We use symbols to refer to the inner representations (concepts) we have in our minds. So "everything is a symbol" (just a reference to a "storage") is a "good idea".

When we're talking about some specific context (a situation, or "domain") we tend to use a reduced, appropriate set of words (symbols) related to it, along with the usual "glue". This is what we call a Domain Specific Language, or a slang, if you wish. This is also a "good idea"; groups have slangs.

A layered structure is, in some sense, what Nature is: atoms, molecules, proteins, tissues, organs, systems, brain, body, you see. So layers of data is a good idea, but layers of DSLs is an even better one. It mimics not only how the world is, but also how we should structure our programs.

Neither the language nor its layers are set in stone; they can be transformed, extended and adapted using the very same mechanism which underlies them.

Iterative looping constructs, for example, were built out of macros, and the LOOP DSL is the most striking example of how a language can be extended with itself as a meta-language.
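
For instance, here is a minimal sketch in plain standard Common Lisp (nothing Symbolics-specific assumed): the whole LOOP sub-language is just a macro that expands into ordinary Lisp.

    ;; LOOP is a macro: this entire iteration DSL expands into
    ;; ordinary Lisp (typically BLOCK/TAGBODY/GO) before compilation.
    (loop for i from 1 to 10
          when (evenp i)
            collect (* i i))
    ;; => (4 16 36 64 100)

    ;; MACROEXPAND-1 shows there is no magic underneath.
    (macroexpand-1 '(loop for i from 1 to 10 collect i))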

Some "good people", like R. Gabriel, have argued that we need more complex control constructs (special forms) as long as we are trying to write complex programs (language should be adequate to the domain), so, the ability do define new special forms as we go, without breaking everything, is a "good idea".

This is, btw, the idea behind the recursive, bottom-up process of software development popularized by SICP and championed by pg and rtm: the language should evolve together with our understanding of the problem domain.

Structures are also a DSL. This gives us a way to structure our data (everything is an expression in Lisp and everything can be evaluated, which is another very "good idea") to mimic, or represent more conveniently, the objects of the real world. Structures can be nested, getters and setters are created automatically but can be redefined, and so on.
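
For example (plain standard DEFSTRUCT, nothing exotic assumed), the structure "DSL" writes the constructor and accessors for you, and they remain ordinary, redefinable functions:

    (defstruct point
      (x 0)
      (y 0))
    ;; DEFSTRUCT just generated MAKE-POINT, POINT-P, COPY-POINT,
    ;; POINT-X and POINT-Y for us.

    (let ((p (make-point :x 3 :y 4)))
      (setf (point-y p) 10)            ; accessors are also SETF places
      (list (point-x p) (point-y p)))
    ;; => (3 10)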

So, by extending the language with new, appropriate constructs (by defining new special forms) we can improve our way of modeling reality and create better "inner representations" for the concepts we have made (captured)? Seems like a "good idea".

But wait: because everything is an expression (a Lisp form) which can be evaluated, why not just put code blocks (expressions) into those same structures? Then we have "data structures" which capture not just the characteristics (state) but also the behavior of real-world "objects".

To capture behavior we need the notion of "protocols", which is just a named set of generic functions. So we have defprotocol, which essentially creates a structure and binds names (symbols) to procedures (expressions) made of Lisp forms. Thus we get a MOP implemented, which is the basis of CLOS.
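
(A note of caution: DEFPROTOCOL by that name is from Clojure; the closest standard Common Lisp pieces are DEFCLASS, DEFGENERIC and DEFMETHOD. A minimal sketch of the same idea, with names I made up for illustration:)

    ;; A class holding state.
    (defclass account ()
      ((balance :initarg :balance :accessor balance)))

    ;; A "protocol" in CL terms: one or more named generic functions.
    (defgeneric withdraw (acct amount))

    ;; Behavior attached to the class via a method.
    (defmethod withdraw ((acct account) amount)
      (decf (balance acct) amount))

    (let ((a (make-instance 'account :balance 100)))
      (withdraw a 30)
      (balance a))   ; => 70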

I forgot to mention that, since everything is a symbol (a reference), we can combine "objects" into lists and other aggregates, map them, reduce them; the whole set of language layers "below" remains available.

This doesn't mean that this is the only way to program; it is just one of the possible programming paradigms. What really matters is that, using the very same means of combination and abstraction, we can "import" any other paradigm we wish.

And everything is so uniform and concise that it can easily be traced back to conses.

btw, this is not a "strict", "rigid", "set in stone" language (it is nonsense to try to "fix" a language; that contradicts its nature). We have reasonable, "evolved" defaults, such as lexical scoping for variables, but we can have dynamically scoped ones if we wish. Likewise, it is possible to extend the language with "non-local exit" control structures, which are your fucking Exceptions.
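
Both facilities are right there in standard Common Lisp; a minimal sketch (DEFVAR makes a dynamically scoped variable, CATCH/THROW gives a non-local exit; the names below are just examples):

    ;; *VERBOSE* is special (dynamically scoped): LET rebinds it for
    ;; everything called within its dynamic extent, however deep.
    (defvar *verbose* nil)

    (defun log-step (msg)
      (when *verbose* (format t "~a~%" msg)))

    (let ((*verbose* t))
      (log-step "printed only inside this dynamic extent"))

    ;; CATCH/THROW: a non-local exit, the ancestor of exceptions.
    (defun find-first-even (list)
      (catch 'found
        (dolist (x list)
          (when (evenp x) (throw 'found x)))))

    (find-first-even '(1 3 8 5))   ; => 8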

Immutability also has reasonable defaults. List and mapping functions produce a new copy of the list, leaving the original unaltered, while their "destructive" equivalents are segregated behind an explicit naming convention (Scheme is famous for this).
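
A small sketch of that convention in Common Lisp (REVERSE copies, NREVERSE is the clearly marked destructive variant; Scheme does the same with a trailing "!"):

    (defparameter *xs* (list 1 2 3))

    (reverse *xs*)              ; => (3 2 1), a fresh list
    *xs*                        ; => (1 2 3), untouched

    ;; The destructive variant may reuse the original cells, so you
    ;; use its return value and give up the old list.
    (setf *xs* (nreverse *xs*))
    *xs*                        ; => (3 2 1)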

Evaluation strategies can also be explicitly selected, so lazy lists or streams can be defined using the very same conses, macros and list notation.
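
A minimal sketch of that idea (SICP-style; DELAY, FORCE and CONS-STREAM are names I am defining here for illustration, not part of standard Common Lisp):

    ;; DELAY wraps an expression in a memoizing thunk; FORCE runs it.
    ;; (Simplified: a production version would gensym the locals.)
    (defmacro delay (expr)
      `(let ((value nil) (done nil))
         (lambda ()
           (unless done (setf value ,expr done t))
           value)))

    (defun force (promise) (funcall promise))

    ;; A lazy stream is just a cons whose tail is delayed.
    (defmacro cons-stream (head tail)
      `(cons ,head (delay ,tail)))

    (defun stream-cdr (s) (force (cdr s)))

    ;; An infinite stream of integers, built out of ordinary conses.
    (defun integers-from (n)
      (cons-stream n (integers-from (1+ n))))

    (let ((s (integers-from 1)))
      (list (car s)
            (car (stream-cdr s))
            (car (stream-cdr (stream-cdr s)))))
    ;; => (1 2 3)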

Being a small language (after all the transformations, macro-expansions, rewriting rules and inlining have been done) it can be efficiently compiled, using a compiler written in itself, directly into machine code, which runs on more platforms than the fucking JVM.
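
You can watch that happen from the listener in most modern implementations (SBCL is the one I am assuming here):

    (defun dot (xs ys)
      "Dot product of two lists of numbers."
      (reduce #'+ (mapcar #'* xs ys)))

    (compile 'dot)        ; compile to native code, in-image
    (disassemble #'dot)   ; print the machine code the compiler produced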

This is what some people consider a work of fine art and call a "programmable programming language".

But this is only half of the story. There were also machines (a hardware FSM, if you wish) which were able to run Lisp code efficiently. But that is another story.


Thank you.


Everything in the Java language is designed to prevent clueless people from messing up too much. It is a useful goal, but it impairs creativity and productivity. Complex concepts are either not implemented or forbidden outright. This is in contrast even to its closest competitor, C#: the latter has evolved a lot since its creation, while in the Java world lambdas are the hot new thing.

But hey, you can hire cheap labour and they will be prevented from messing up too much, so corporations like it.


For corporations, consistent work is often more desirable than exceptional work. Amassing thousands of lines of code that only exceptional programmers can use, understand and modify is a liability, not an asset, no matter how efficient it is.


Which is a quality of a fundamentally broken organizational system that measures power in (and therefore optimizes for) number of people controlled, as well as the government-directed economy that underwrites it.


tl;dr: there's a reason most startups prefer javascript to smalltalk.

Which is a quality of a fundamentally broken organizational system

No, it's a quality of a robust organizational system. Let's take a program that parses html as an example. What would you think of an html parser that only works well with high quality, strictly structured html and breaks under poorly formatted html? You would call it a fragile, broken system. But a parser that deals equally well with low quality html would be a well thought out, robust system. And here's the key to the point I was making: the robust system could be relied upon to do its job consistently even under less than ideal conditions.

A system that requires exceptional engineers to maintain it is a huge source of risk for any company. If your current exceptional engineer leaves for whatever reason, you have to find another one. If the reason your engineer left is your company is suddenly facing insolvency thanks to a nasty patent dispute, all you may be able to afford to hire is a college grad to maintain what you've already got.


You're speaking from the perspective of someone who has to be responsible for something without understanding it, which is another pathology endemic to that system. By only using black box reasoning, of course you come to the conclusion that you need many such boxen in case one malfunctions in an unforeseeable way.

The thing is, the level of intelligence to maintain the software has to exist somewhere. If it's not vested in a small number of people able to actually understand the system, then the intelligence ends up being an emergent property of the human automatons, Chinese-room style, with the now-important "manager" pretending to control it when in fact nobody does.


You're the only one in this thread talking about human automatons, outsourcing and broken corporate structures. I'm explaining why a healthy corporation would want to use a simpler and easier to understand system over a more complex, harder to understand one. Nothing more, nothing less.

The thing is, the level of intelligence to maintain the software has to exist somewhere.

Yes it does. Nothing I've said suggests otherwise. But, let's say you can lower the amount of intelligence required to actually understand the system. Not much, just from "exceptional" (by which I mean 10x or the top 1% of programmers working in the field) to "average" (by which I mean the median level of intelligence for programmers working in the field). If you can do that, you lower your risk over the long term. If someone leaves, you can easily hire a replacement and get them up to speed.


> let's say you can lower the amount of intelligence required to actually understand the system

Every system has accidental complexity, so a better language can obviously help achieve this. However, Java does not do this. What Java does is lower the intelligence required to make changes without understanding the system, by having few abstractions and effectively making the programmer work in an intermediate language with added redundancy.

And while this approach allows cheaper, unskilled programmers to be used, they take up more time figuring out what to attempt to change and then crossing their fingers hoping the code will compile. A similar amount could have been spent on an intelligent consultant working in a high-level language and serving several such clients, but management would rather feel secure by having someone they control.

> I'm explaining why a healthy corporation

Once it's in the territory of hiring more people instead of better people, you have something that prioritizes control over obtaining results, which is not healthy. Unfortunately, it can indeed still pass for a healthy corporation and live many decades, a source of many modern ills.


I guess you don't agree with our host's "Beating the Averages" essay: http://paulgraham.com/avg.html, specifically:

The average big company grows at about ten percent a year. So if you're running a big company and you do everything the way the average big company does it, you can expect to do as well as the average big company-- that is, to grow about ten percent a year.

The same thing will happen if you're running a startup, of course. If you do everything the way the average startup does it, you should expect average performance. The problem here is, average performance means that you'll go out of business. The survival rate for startups is way less than fifty percent. So if you're running a startup, you had better be doing something odd. If not, you're in trouble.


I couldn't agree with the article more. I should have come up with a different tl;dr for the previous post, considering everything I wrote in this thread has specifically been about corporations and not startups. Sorry about the confusion.


Yes, this is what sold it to managers.)


That's an opinion commonly held by people who think they are hot shit.

(and for the record, I don't really like java either)


Running Open Genera in a VM is quite an eye-opening experience. Such a very different vision of computing and what is possible.


Yes, this is why I said in another thread here on HN that UNIX made the industry take a left turn.

Lisp Machines I only know through videos and old papers. But I do have experience with Smalltalk during my CS degree, before Java was a thing.

Smalltalk is by no means Lisp, but shares a few ideas.

If those systems had become mainstream, computing would probably be like Bret Victor discusses on his "Future of Programming" talk.

Instead we have developers still using systems as if UNIX had been released last year.


At the European Common Lisp Meeting later this year in Berlin/Germany I'm probably demoing a real hardware Symbolics Lisp Machine and its OS...


Sounds tempting, as I live in Germany. Thanks for the hint.


If you build a better programming language, and the world does not beat a path to your door (even after 50 years), that's because the world is too stupid to appreciate the genius of your beautiful language? Right. Grow up, kid.

Lisp has its virtues, but the Lisp evangelists/cheerleaders/fanboys would be more effective at converting the rest of us if they quit assuming that anyone who disagreed with them was too stupid to see the obvious virtues of their wonderful language.

And don't say that you don't care whether others like your language. You care whether others like it enough for businesses like Symbolics to survive, so that you don't have to visit bitsavers to see this stuff that you love so much.


the whole OS, which was open to dynamic modification and extension by the programmer,

Sounds like a nightmare, writing code for an OS where every chucklehead with a keyboard has modified it just because they can.

Rather sounds like the curse of the gifted[1] writ large. Sure it was cooler and had neat features but couldn't reliably deliver over the long term.

[1] https://news.ycombinator.com/item?id=7219872


Symbolics stuff didn't just work, it worked remarkably well on very modest hardware and solved real-world problems.

Sadly, sadly, that's not enough. Oftentimes, that doesn't even matter.


Yeah, I guess you'll keep telling yourself that because it makes you happy and remain unable to comprehend just how wrong you are...


Why, yes, reading Symbolics CL docs or David Moon's "language for oldtimers" makes me happy, like any other decent reading.


Neither you nor OP are wrong. It's possible, and common, to have practical tools that suck hard.

It's also true that the more people use a tool (and for a longer time), the more ways it will be found to suck (and Lisp has been around for quite a while (and used by quite a few people too (probably fewer people than Java) and (indeed) it managed to spawn quite a few heated reactions (to its salient characteristics)))


I think that's a bit harsh - I actually moved from mostly doing Common Lisp development (on DEC Alpha workstations) to Java development in early '95 and I really liked Java back then - I was as passionate about it as people are today about Go/CoffeeScript/Haskell. Java, at least at the start, really was something fresh and good.

Sure it became a horrible bloated mess - but isn't that the doom that faces all successful software eventually?


Same here. I remember how fresh it felt, bringing GC to the masses and providing a more consistent experience across platforms than C or C++ were capable of, with their compiler-specific behaviours and extensions and their ongoing catch-up with the standardization efforts.


Sorry but it was Visual Basic that brought GC to the masses.


That's why "Haskell avoids success at all costs".


And it seems to be very good at this!


Not good enough. I use it at a bank.


"We were not out to win over the Lisp programmers; we were after the C++ programmers. We managed to drag a lot of them about halfway to Lisp."

- Guy Steele, Java spec co-author


What is also funny is that there was also the JavaStation which was once going to use special Java CPUs.


Do google for "Jazelle".)


I'd very much like to see a similar retrospective on why Jazelle failed.


Are you asking about the proprietary ARM instruction set extension for speeding up interpreting Java bytecodes? It failed because JITs were faster, so as soon as devices had enough RAM/Flash/CPU to JIT they stopped using Jazelle.


Dan was the guy at Symbolics most commonly tasked with making me believe Symbolics would succeed. He succeeded at the task. I forgave him.


For those that had no idea what the hell Symbolics is (as I didn't): I think it refers to the company; http://smbx.org/ has info


It's sad what's become of http://symbolics.com/. I thought it would go to some Lisp-related project eventually.


I highly recommend the book Patterns of Software, for which the link is given here. It includes the story of Lucid's troubles, which were similar to Symbolics', though Lucid was not in the hardware business. Quite apart from that, it is well worth reading.



