Ask HN: Immortal software: how would you write it?
39 points by oftenwrong 22 days ago | 67 comments
Given the task of designing a software system to be used for at least 100 years, what design choices would you make to help ensure its survival and usefulness? For example, to allow it to be ported to different hardware/software platforms, or to allow it to be integrated with other systems.

Inspired by https://news.ycombinator.com/item?id=19272428




Given that you explicitly allow maintenance? Then your #1 problem is making an organization which persists for 100 years; the software is a detail.

If you didn't allow that, then you're asking for a complicated series of bets on the future substrates your software will run on, many of which decompose into "Ship an organization that will care enough about it to keep the underlying substrates running and available."


That in itself is pretty challenging: take a look at the Dow components in 1920:

https://en.wikipedia.org/wiki/Historical_components_of_the_D...

Of those, zero are in the index today. The only ones I've even heard of are AT&T, GE, Studebaker, US Steel, Western Union, and Westinghouse, and half of those don't exist today.

(As a side note, it makes me wonder if the common wisdom of buying a stock market index and holding it is really just survivorship bias. If you bought the Dow in 1920 and held the individual companies, you'd be nearly bankrupt today; it's only because the index continually switches out declining companies for new large-caps that it's gone up continuously over long periods of time. That in itself might be a lesson that supports your point: the way to build a system that lives forever is to build a Ship of Theseus that can swap out and discard components that no longer serve its purposes without losing its overall identity.)


I'm not sure that your side note is a good example.

If you owned shares in the American Can Company, you would have then owned shares in Primerica and would now own shares in Citigroup which was only bumped out of the DJIA in 2009.

Several of the mining companies were bought out.

The Texas Company is Texaco.

United States Rubber Company became Uniroyal and was bought out by Michelin and Continental.

Sure, some like Studebaker and some of the locomotive companies collapsed and would have been a total loss, but it's entirely likely that holding all of those original companies would have made you a whole lot of money.


You also have to worry about bankruptcies, where even if the company is not liquidated, the shareholders are usually wiped out and the company recapitalized among debtholders. Texaco went bankrupt in 1987, for example - while it's part of Chevron now, the people who actually held the stock when it was purchased were largely the bankruptcy creditors. Western Union went bankrupt in 1987 and 1991, Citigroup nearly went bankrupt in 2009 and was recapitalized with the U.S. government becoming a major shareholder (and diluting the existing shareholders).


For fun, let us say you will be writing it for an organisation that is long-lived and likely to continue for at least 100 years. Something like the Catholic Church - perhaps they need a system for archiving, browsing and searching digitised documents.


Open source is probably more likely to survive that long than commercial software. The Free Software Foundation does a lot of maintenance work but other people could take over if the FSF disappeared.


> Then your #1 problem is making an organization which persists for 100 years; the software is a detail.

Problem solved if you work for Uncle Sam. His organization is very, very likely to survive the next 100 years.


this.


As someone who just retired a fairly interesting piece of software that had been running for 20 years (not immortal, but not too bad), I have a few thoughts:

1) Have standards that don't change appreciably. I first wrote the system in the early days of the Web and put it into full production around 1997. The fact that HTTP, CGI and SQL didn't change much and continued to be supported helped immeasurably. (A minimal CGI sketch follows after this list.)

2) Use a platform that you can manage effectively. We started with SGI IRIX, then moved it to Linux, where it spent most of its life. That let us move it from machine to machine and finally a datacenter. This would have been more difficult on an OS with a commercial pressure to upgrade.

3) Be lucky with your language choice. I used Perl 5, which had just come out at the time. I had started on Perl 4, but the OOP features of 5 were very attractive (I was heavily influenced by articles I read about Smalltalk at the time). Because Perl 6 never really happened (for me), I avoided the Python 2.x/3.x problem.

4) Make it documentable. Every code object in the system could be easily documented (the editor included a comment field for everything, which could be automatically extracted to provide developer help). Not everyone who worked on the system used it, but I did, which made it easier to figure out what was going on years after I had been doing other things.

5) Make it straightforward. Not sure that's the best way to put it, but after I left the company, the system suffered a good chunk of updates and rewrites by people who knew more or less what they were doing, but because they had to remain compatible with the original, fairly simple architecture, I was always able to pick up the thread when I did contract maintenance on it.

6) Security is a process. At the time I wrote it, it was... let's say "fairly" secure. Some of the updates/rewrites badly degraded the capability-based security it had (the maintainers didn't understand it and thought it was hurting performance, which, to be fair, it was), so my original security thinking was rolled back under the pressure of customer requests and deadlines.

Hope that helps!
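
To make point 1 concrete: a minimal sketch of the CGI side in C, assuming a web server that sets QUERY_STRING and relays the program's stdout (the interface dates from the early Web and is now codified in RFC 3875). A 1997 server and a 2019 server can both run this unchanged.

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* The web server passes the request in environment variables... */
        const char *query = getenv("QUERY_STRING");

        /* ...and relays whatever we write to stdout: a header block,
           a blank line, then the body. */
        printf("Content-Type: text/plain\r\n\r\n");
        printf("query was: %s\n", query ? query : "(none)");
        return 0;
    }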


You avoided the Python 2.x/3.x problem because Perl6 isn't intended to replace Perl5 in the same way Python3 is intended to replace Python2.

Perl6 would replace Perl5 in the same way Go is a replacement for Python.

This was perhaps a little more muddy at the beginning of the Perl6 project. Larry has said that part of the reason for breaking compatibility was that the Perl5 codebase was stable enough, meaning existing code could continue to run on it unaltered.


I wouldn't write software as such, but a standard; that is, a description of input, expected output, and the processes in between.

Maybe even more important than describing expected behaviour, though, is reducing the scope to the bare minimum. I like using e.g. Git's file format as an example, because you can wrap your head around it in less than an hour. If you have a Git repository and a description of the Git file format, I'm confident you could write an application that could read its contents within a day.

I mean, yeah, you could probably create a codebase based on e.g. the C standard, package it together with all the necessary tools and documentation to make it compile, but you can also wonder whether that would even be relevant in 100 years. What if in 100 years it has to process a million times more data? What if in 100 years computer architecture has changed so vastly that you need a completely different programming paradigm to make it work? (think science fiction). What if, instead of trying to make 100-year-old code work, you could feed the description to an AI that can do the work?

Long story short, I think that if you want to future proof software, you should understand and describe the problem first and foremost. Write a specification of e.g. the Git file format and spec, instead of the Git application.
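
To make the Git example concrete: a hedged sketch in C of reading one loose object from .git/objects/, assuming zlib is available and the object fits in a fixed buffer. The point is how little the reader needs to know: the file is one zlib stream whose payload is "<type> <size>", a NUL byte, then the raw content.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <zlib.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <loose-object-file>\n", argv[0]);
            return 1;
        }

        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror("fopen"); return 1; }

        static unsigned char in[1 << 20], out[1 << 20];   /* fixed buffers */
        size_t n = fread(in, 1, sizeof in, f);
        fclose(f);

        uLongf outlen = sizeof out;
        if (uncompress(out, &outlen, in, n) != Z_OK) {    /* whole file is one zlib stream */
            fprintf(stderr, "not a zlib stream\n");
            return 1;
        }

        /* Payload is "<type> <size>", a NUL byte, then the raw content. */
        char *nul = memchr(out, '\0', outlen);
        if (!nul) return 1;
        printf("header: %s\n", (char *)out);
        fwrite(nul + 1, 1, outlen - (size_t)(nul + 1 - (char *)out), stdout);
        return 0;
    }

Compile with: cc reader.c -lz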


Agreed 100%


C (C99) and Javascript (ES5) are very likely to still be supported.

Most protocols will be phased out for security reasons, so I would go with direct TCP/UDP based on Linux kernel syscalls (still the weakest point) in C, or with fetch in JS.

Most operating systems will be trivially vulnerable to attacks, so I would go with the minimal possible attack surface (hardened kernel, firewalled, etc.)

It should survive and TCP/UDP communication makes it easy to integrate with any other software.

For the hardware I would go with a Raspberry Pi if possible (and in C), otherwise just hosted on AWS.

Its usefulness would be the hardest point. If it is useful but not updated for 5-10 years, it will be replaced, however complex it is; 100 years later the problem might be generalized away or no longer relevant.

Looking at what's there:

- MOCAS: 61 years old, written in COBOL, used by the US Department of Defense, running on a mainframe

- Linux, Python, glibc, Perl, Emacs, ...: most old software that is still relevant is core foundation software (programming languages, operating systems, text editors)

- NASA's software: C, C++, Ada, Python. Linux for applications, Windows for end users.

So I would probably make a minimal programming language, maybe some kind of minimal Crystal.


> otherwise just hosted on AWS.

Talking about decreasing the risk of failure by minimizing attack surface, but at the same time greatly increasing it by maximizing vendor lock-in (essentially betting Amazon will still be around in 100 years), does not feel like a very sound strategy.


C, yes. As terrible as C is, I have no doubt that people will be using software written in C every day in the year 2300.

> So I would probably make a minimal programming language, maybe some kind of minimal Crystal.

A new programming language made now (such as Crystal) has the least likelihood of lasting 100+ years, no matter how good it is.


> based on Linux kernel syscall (still the weakest point)

The Berkeley sockets API is pretty well supported on all major operating systems via C libraries with only slight differences, and even if it dies out it will likely have shims available for a long time to come. Why use a direct syscall interface over this?
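
For reference, the portable version is short (a minimal sketch, error handling mostly elided; POSIX headers assumed):

    #include <stdio.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        int srv = socket(AF_INET, SOCK_STREAM, 0);        /* plain TCP */
        if (srv < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);

        if (bind(srv, (struct sockaddr *)&addr, sizeof addr) < 0) {
            perror("bind");
            return 1;
        }
        listen(srv, 16);

        for (;;) {                                        /* one greeting per connection */
            int c = accept(srv, NULL, NULL);
            if (c < 0) continue;
            const char msg[] = "still here\n";
            write(c, msg, sizeof msg - 1);
            close(c);
        }
    }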


I would write it using the oldest popular technologies currently available in production. https://en.wikipedia.org/wiki/Lindy_effect


So you want to make sure your software can always be run on whatever platforms are in common use in the future, no matter what those platforms are, right?

Write your software for the NES. As in, the Nintendo Entertainment System. Yes I’m serious.

The Nintendo Entertainment System is the most stable platform I can imagine. No aspect of the platform can ever be modified, much less deprecated or removed. Furthermore, NES emulators are relatively simple to write and have been ported to basically every major hardware platform, and I don’t see this changing any time within even the next century.

Make sure to test your software against real hardware, so you know you aren’t relying on emulator bugs that may not be replicated on a future emulator.

I’ve actually been thinking about this for a while. Old game consoles are, as far as I can think of, the most stable and cross-platform computing platform we have.

Of course, if your software requirements exceed the capabilities of an NES, this won’t work.


The Lindy effect comes to mind: non-perishable things that have been around for a long time are likely to stay around for a long time.

So, using text files for storage could be really good, if possible. A desktop application could be written as a single process binding to a port and displaying an HTML-based GUI, which could keep working, probably in all kinds of unforeseen ways.

As for the language, I would write it in Perl. There's gonna be an interpreter on every UNIX-based box for many years to come, and the community thinks backwards compatibility is a feature. I don't know if they'll keep it going for a hundred years, but by then we might also have bigger problems than keeping applications going.


Well, software is a reflection of how things work in the present. Write any software and it will only remain as it is if your business remains the same for 100 years - which means you don't grow, don't scale, keep using 100-year-old tech, and still survive in the market. If not, then who will use it? You can still run Pac-Man on a modern PC, and Pac-Man is what, 40+ years old? But how many people play it now? Software cannot and shouldn't be immortal. It always evolves and adjusts to its environment. It doesn't remain constant.


First you define a very simple virtual machine so that the part that needs to be ported to different hardware is minimal. Then you write software for that machine using tools that are themselves written for that machine. Make sure that the parts that are needed for software development inside your VM are very precisely defined.
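
A hedged sketch of how small that portable core can be, assuming a toy stack machine with four opcodes (a real design would specify the instruction set and the in-VM development tools exhaustively):

    #include <stdio.h>

    enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

    void run(const int *code)
    {
        int stack[256], sp = 0;

        for (int pc = 0;; pc++) {
            switch (code[pc]) {
            case OP_PUSH:  stack[sp++] = code[++pc]; break;
            case OP_ADD:   sp--; stack[sp - 1] += stack[sp]; break;
            case OP_PRINT: printf("%d\n", stack[sp - 1]); break;
            case OP_HALT:  return;
            }
        }
    }

    int main(void)
    {
        /* bytecode: push 2, push 40, add, print, halt -> prints 42 */
        const int program[] = { OP_PUSH, 2, OP_PUSH, 40, OP_ADD, OP_PRINT, OP_HALT };
        run(program);
        return 0;
    }

Porting the system to new hardware then means re-implementing run(), not the software itself.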


So far, no software has managed to last 100 years, and no computer language has either (though if C isn't still going, something has happened, IMHO).

You have the right idea. I want to note that, in a sense, the program would partly be written in the specification language with which the VM is defined. One could use English - probably OK, though English has shifted a bit in the last 100 years (is it shifting more slowly with the internet, or faster?). And you can use mathematical notation, which has also shifted, but more slowly.

That helps with porting, so it can run, but the question of maintaining the software remains (changing it to meet changing circumstances and needs).

- Is it better to totally nail it perfectly, so only known aspects need to change? (A bit like your VM approach, in some ways).

- Or make it minimal, simple and clear, so the core can be preserved, modified, even rewritten, without losing the basic idea of it?


So the JVM?


> Given the task of designing a software system to be used for at least 100 years...

I wouldn't build it in software if I had the choice. I'd build a mechanical system. Software, and electronic computers in general, haven't proven themselves as good long term bets yet. There are mechanical systems around that have been going for many hundreds of years (eg https://en.wikipedia.org/wiki/Salisbury_cathedral_clock).


I'd go down different routes, depending on what the software system was for.

Every software system is about translating the work of some humans into the rules that some body of humans cares about. As an example, let's take something that we humans like to do a lot: buying things.

So let's design an in-person payment system. Select item to buy, pay for item. Totally simple, right?

So the first iteration uses a touchscreen kiosk? In twenty-to-ninety years, that will look soooo dated as everyone is paying with their hand gestures or their minds. ("Ma, I'm blinkin but it ain't buyin!")

The part of the system that lives for 100 years needs to have no concept of a user interface.

So you produce abstractions, you define a boundary to your system that is described as simply "user submit payee details". Now that bit can be swapped out - as long as it receives a valid user payment bundle, we're good.

What about currency? Not too long ago, some countries moved from a coinage-based pounds-shillings-ha'pennies system to one based on hundredths of a pound. Soon after, many European countries banded together and moved to a single currency, still based on hundredths though. Next step might be to pay via bitcoin, at which point each mundane transaction will probably cost trillionths of a satoshi, and we'll come up with some system for calculating what order of magnitude your payment is, just to save us typing out all those zeros. So we might be representing money as 1.8265e-972. Maybe we'll write it as 18\927. Maybe it'll all be emoji, and in discussion we just call it "18 breads". Maybe society will have collapsed and we're paying with bottle tops. Two Buds and a Pepsi.

Whatever we do, how we represent the cost will either need to be so vague that we can never do any calculation on it, or again, we farm the representation and manipulation to another system that doesn't need to last for a century.

In the end, our program looks like:

    ask_user_what_they_want_to_buy => find_price_of_users_basket => take_payment => inform_user_of_success
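
A hedged C sketch of that shape (all names hypothetical): the century-scale core only sequences opaque stages, and each stage (UI, pricing, payment) is a function pointer that can be swapped without touching the core:

    #include <stdio.h>

    typedef struct { const char *item; long price; int paid; } order_t;

    typedef int (*stage_fn)(order_t *);

    static int ask_user(order_t *o) { o->item = "bread"; return 0; }  /* today: kiosk; tomorrow: blinking */
    static int price(order_t *o)    { o->price = 42;     return 0; }  /* representation lives outside the core */
    static int pay(order_t *o)      { o->paid = 1;       return 0; }
    static int inform(order_t *o)   { printf("bought %s\n", o->item); return 0; }

    int main(void)
    {
        stage_fn pipeline[] = { ask_user, price, pay, inform };
        order_t order = {0};

        for (size_t i = 0; i < sizeof pipeline / sizeof *pipeline; i++)
            if (pipeline[i](&order) != 0)
                return 1;                 /* any stage may be replaced wholesale */
        return 0;
    }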

And that's all assuming that users in the future leave their homes (could be radiation), exchange money for goods (could be post-scarcity) or that there is such a thing as a company, much less that the company that commissioned the system even exists.

So my answer is probably give up and do something easier. Aim for 20 years. You'll still fail, but at least the failure will cost less.


Use Forth.

Pro: Forth is standardized. It runs everywhere. It's most likely the first language that will be ported to any new platform. If it isn't available on a new platform yet, you can write your own interpreter.

Contra: Today's software requirements are most likely too complex to be handled in Forth. You might need 100 years to debug your program.


technically:

- Separate the UI from the application and business logic and make the interface to the latter clean and well documented.

- Either use a currently very widely deployed ecosystem (aka Java), which will have a fair chance of evolving or being emulated etc over the course of 100 years, or use a self-bootstrapping environment with a simple core like Forth or Scheme/Lisp etc, where everything you need is self-contained.

But really what makes a system still useful in 100 years or further is that it solves a problem so well, so reliably, etc, or just well enough if it's a huge and complex problem, that there's no incentive, or the investment would be prohibitive, to replace it with newer technology. Think CICS.


Not quite the same, but https://www.osti.gov/biblio/10117359 discusses a long term (10000 years!) system for marking areas of high radiation.


The idea of how incredibly long nuclear waste needs to be at least marked (better: properly dealt with) has always baffled me. You use that stuff now and it will be dangerous waste for the next 10 millennia. Yet you will hardly find any private nuclear corporation that even has a plan for the next century.

I suspect most nuclear powers depend on the private sector for maintaining their nuclear arsenal, and having them really factor these future risks, storage costs and insurance in would make nuclear energy uncompetitive.

Conveniently, in most nations the public readily jumps in and pays the bill (and will have to do so for centuries). Imagine the outrage if any renewable energy source demanded such things from the public.

I understand the fascination with nuclear energy; look at Voyager, which, powered by a nuclear battery, has flown for four decades through the void. But the points I raised above only concern nuclear energy as it is used now; I can imagine future cleaner reactors with less waste. But right now nobody has any incentive to really put money into it.


Compare that to carbon dioxide, which also lasts in the atmosphere a long time (roughly 1-2 orders of magnitude less than nuclear waste, which on the other hand has very localised effects) and is released by thousands of coal power plants unchecked and with minimal regulation all over the world. Not to mention the deadly air pollution, which causes chronic illnesses and death in the thousands.


CO2, like nuclear waste, is effectively a cost that you let future generations pay. Nuclear energy as we have it now is not a solution.

The same money that would go into covering costs for private nuclear companies could easily flow into new technologies and energy cycles, some of which the US is actively working against on political and economic levels.

Climate change will have an incredible impact on migration, so the best way to reduce CO2 is to do it in a way that gives Africa and its people a future. You could, e.g., force CO2 neutrality on coal plants by having them plant trees in Africa, which would not only reduce their CO2 impact and create jobs where they are needed to stop future drama, but also increase soil stability, help with the increasing temperatures, and potentially even provide food.

Btw, coal plants are heavily regulated over here in Europe: the filters they have to install are the size of a little house.

The question is: is the future more important than being/staying the geopolitical and economic leader? It is only about priorities, not about the unavailability of solutions. Are you willing to give up long-term competitiveness for short-term gain?

Petrol could be replaced by closed ethanol cycles produced by solar panels in the Sahara (which would again create jobs there), and we wouldn't have to use electric cars whose battery production costs more CO2 than the car would save. The infrastructure to use ethanol is basically there (if I am not wrong, the Chinese already add over 10% ethanol to their petrol).

I think the solution is closed cycles. If water didn't have a cycle of its own, one that exists without us creating it, we would all have died of thirst.


Maybe look at SQLite? Extremely portable between operating systems, and it comes in a single C file for maximum ease of integration into your application.

I have no doubt SQLite deployments will be up and running in 2100.
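
For scale, a minimal sketch of what embedding it looks like through the public C API (sqlite3_open / sqlite3_exec / sqlite3_close):

    #include <stdio.h>
    #include <sqlite3.h>

    int main(void)
    {
        sqlite3 *db;
        if (sqlite3_open("archive.db", &db) != SQLITE_OK) {
            fprintf(stderr, "open: %s\n", sqlite3_errmsg(db));
            return 1;
        }

        char *err = NULL;
        sqlite3_exec(db,
            "CREATE TABLE IF NOT EXISTS notes(body TEXT);"
            "INSERT INTO notes VALUES('still readable in 2100?');",
            NULL, NULL, &err);
        if (err) {
            fprintf(stderr, "exec: %s\n", err);
            sqlite3_free(err);
        }

        sqlite3_close(db);
        return 0;
    }

Build by compiling sqlite3.c alongside this file, or by linking -lsqlite3.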


It seems like too many people are planning for the worst possible case, i.e., computers get worse.

What if they get much, much better? Can your design be adapted to use technologies that don't exist yet? Is your design parallelizable?

If people or machines get used to sub-picosecond runtimes, someone will demand that your program be rewritten. I think that's a more likely death scenario for your program than a reversion to punch cards and abacuses.


I'll write it in PHP. That thing refuses to die.


Being easily readable and based on text files is a good way to last a long time. Yes, you can write bad code in PHP, but you can do that in any language. It's also easy to write simple, useful code in PHP, which is why it stays around.


You write a temporary shell script that "should suffice for now, but we'll replace it in a few weeks with a proper solution". Done.


Don't reinvent the wheel. If you are thinking in terms of centuries, look at the work done for software preservation (mostly through emulation). It is already the standard practice for digital libraries, the preservation of games, and much more. For some research: https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=soft... The Bodleian Libraries have an excellent presentation on the subject: https://libguides.bodleian.ox.ac.uk/digitalpreservation So, you basically design your software, and make sure there is an emulator that is able to impersonate the hardware you run it on.


I don't have any software that's been in use for 100 years but I have a couple of items in the 15-20 year range.

About the only thing that they have in common is that they started with the thought: "I'll just knock this up in 5 mins then come back and do it properly when I have the time".


Depends on what kind of software it is:

If it's speed-oriented: C99 (supported by most platforms).

If it's security-oriented and needs to be maintained (e.g. a large project): Ada.

If it just needs to survive as long as possible and it's web-based: JavaScript. The amount of JavaScript multiplies with the size of the web, and all of it needs to keep working.

For everything non-performance-critical: shell scripts and makefiles.

What i wouldn't pick:

1. Any language without a clear standard and backward compatibility.

2. Any language with one compiler/interpreter/library/framework dominating the market ("single point of failure").

3. Any closed-source language compilers/frameworks/engines/libraries.

4. Any niche academic language, especially one with a poor FFI and a lack of libraries/frameworks.

5. Anything that has a steep learning curve. You'll probably want it to be easy to modify: no heavy abstraction and no complex conventions.


I like Techdragon's idea "I’d write it in math" and Onion2k's "I'd build a mechanical system."

A lot of it depends on the complexity of the software and the complexity of the interface to the user(s). If the software merely needs to do something simple like accounting, I think a mechanical system could be made, especially one that processes punch cards (see Vaylian's answer). Mechanical systems usually require some maintenance, but if you made many copies of the mechanical system and many copies of the documentation, then you could probably still run it in 100 years on one of the mechanical computers.

If the software is unchanging and complex, but has a simple interface (like terminal standard input/output ASCII), then consider: 1) writing the software in an old language like C or Lisp, 2) compiling the software into a Turing machine, and 3) writing a program in Fortran, C, or Lisp that can run the Turing machine quickly. (In particular, the implementation should make memory access a constant-time operation.)
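
A hedged sketch of step 3, assuming the "compiled software" is just a rule table: a toy interpreter in C with constant-time tape access (a flat array), here running the classic 2-state busy beaver in place of a real program.

    #include <stdio.h>

    enum { TAPE = 64, HALT = -1 };

    struct rule { int write, move, next; };          /* move: -1 left, +1 right */

    int main(void)
    {
        /* rules[state][symbol]: the 2-state busy beaver */
        struct rule rules[2][2] = {
            { {1, +1, 1}, {1, -1, 1} },              /* state A */
            { {1, -1, 0}, {1, +1, HALT} },           /* state B */
        };

        int tape[TAPE] = {0}, head = TAPE / 2, state = 0, steps = 0;

        while (state != HALT) {                      /* one constant-time step */
            struct rule r = rules[state][tape[head]];
            tape[head] = r.write;
            head += r.move;
            state = r.next;
            steps++;
        }

        int ones = 0;
        for (int i = 0; i < TAPE; i++) ones += tape[i];
        printf("halted after %d steps with %d ones\n", steps, ones);
        return 0;
    }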

If the interface is complex, like the internet, or graphical, then you are pretty much hosed. Somebody is going to have to maintain the interface, but you could still use the Turing Machine idea to implement the logic that is not tied to the interface. As Patio11 said, your #1 problem is "making an organization which persists for 100 years." I suggest finding an organization that has lasted 100 years and making one person responsible for the software with the idea that he/she will maintain the software/hardware for at least 40 years and then pass along the responsibility to someone else. You would need some good contracts to make this happen, but it is not too unusual for a teacher to teach in the same high school for 40 years, so you could probably design the contracts accordingly.

You should probably write the documentation in English and put it on acid-free paper.

The bottom lines are: 1) Compile into a very very simple language, 2) Implement on very simple hardware (Onion2k), and 3) Set up a human organization (Patio11) as simple as possible with good contracts that will last 100 years.


To start, I would choose systems and languages that have, so far, stood the test of time as there's no reason to think those won't be around for 100 more years. I would write the code to solve the exact problem and not try and create some application framework that would allow for any feature request because you don't know what is going to happen in 2 years much less 98. I would do the absolute simplest thing necessary to solve the problem. The goal is to minimize the surface for bugs to manifest. I would use plain text for storage. I would strictly avoid anything "flavor of the week."


Use XML or ASCII text flat files as your data storage solution. Make it as self-documenting as possible.

I would build on the Java VM or the JavaScript VM. Both will maintain a lot of backwards compatibility and have a large population of developers for the foreseeable future.

I would buy my own domain and not depend on a third-party solution for the software's web presence (GitHub Pages, etc.) because those businesses come and go over decades. In 100 years we'll still be doing DNS or something similar.


I would not write it at all. Instead, I would write only the test cases, thereby documenting the requirements. I would then evolve the software using modern machine learning techniques which are capable of program synthesis: https://arxiv.org/abs/1802.02353 The underlying implementation can keep changing but the requirements and tests don't change so much.


Have your major components communicate only via networked requests/responses using only language-independent protocols such as SOAP or AMQP, and using only message payloads designed up front; validate payloads.

Use standardized infrastructure components with multiple, widely used implementations, such as SQL RDBMSes.

Use standardized languages with multiple, widely used implementations, such as C and JavaScript (ES5 or even ES3); if you can help it, avoid JavaScript-heavy frontends altogether.


Software to do what? Run one instance or a billion instances? Networked or inaccessible? Updates or no updates?

The challenges are very different for banks versus space probes. Voyager is approaching 40 years: https://www.popularmechanics.com/space/a17991/voyager-1-voya...


Make sure it works with punch cards. No, I am not kidding.

Yes, paper can also be subject to destructive forces, but fixing damages is probably easier than with other media.


If you are designing software that can run for 100 years you should try to make it runnable in year 1.

There are approximately zero punch card readers for new software today.


> There are approximately zero punch card readers for new software today.

Correct. But there are also approximately zero software projects with these long-term requirements today. Punch cards might be primitive, but they are relatively easy to create hardware for.

Just assume you need to have software running in space. Cosmic radiation is a major problem. Punch cards are very reliable in such a situation.


Punch cards in space??


Not unlikely, if we are talking about creating a system that should still be useful/useable in at least 100 years.


I’d write it in math. The entire thing, as close to pure mathematical notation as possible. As truly annoying as completing the process could be, depending on the project size and software complexity, it’s the only thing I’d trust to survive ANYTHING the next 100 years could throw at the work.

The availability of software designed to “compute math” is something I am prepared to bet on lasting the next 100 years.
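
As a flavour of what that could look like (a hypothetical fragment, not from any real specification), the behaviour is pinned down as a definition rather than as code:

    % A lookup over an archive A of (key, document) pairs:
    \[
      \mathit{find}(A, k) =
        \begin{cases}
          d    & \text{if } (k, d) \in A \\
          \bot & \text{otherwise}
        \end{cases}
    \]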


If the aim is survival, then evolutionary software that adapts to the challenges of the time would probably be appropriate.


If we subvert the old adage that past performance is no guarantee of future performance, then I'd target the Win32 API. GUI Windows applications written 20 years ago still work today without any modification or even recompiling and on more platforms than ever (because of Wine). I don't know of any other platform with that track record.
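
A sketch of the kind of program that illustrates this: essentially the same C source one would have written in the late 90s, and it still compiles against the Win32 API today (and runs under Wine).

    #include <windows.h>

    int WINAPI WinMain(HINSTANCE hInst, HINSTANCE hPrev, LPSTR cmd, int show)
    {
        /* The same call has worked, unmodified, for over two decades. */
        MessageBoxA(NULL, "Still running after all these years.",
                    "Immortal software", MB_OK | MB_ICONINFORMATION);
        return 0;
    }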


IBM 360 applications written 55 years ago still work on System Z mainframes sold today...


I feel like a niche mainframe platform is hardly comparable. You could as easily say that pencils sold today still support writing ones and zeros on paper. For me, a certain threshold of graphics and networking modernity that began to take shape around 1998 is needed to satisfy this exercise.

I mean, don't get me wrong, 55 years is no slouch, but 360 mainframe software used today feels archaic and Win32 software used today feels exactly the same as it did when it was new. The paradigm of the mainframe ended when the paradigm of the personal computer began.

How many System Z mainframes are sold today?


That's likely a function of when you came of age. (FWIW, I also came of age in the late 90s, and a Win98-style GUI feels perfectly natural to me.) Folks who were working with computers in the 70s are perfectly comfortable with green-screen terminals. Kids who grow up today think anything that doesn't look like either a mobile app or a Bootstrap webpage is archaic.

System Z revenue was up 70% in 2017, to nearly $3B/quarter [1]. That's pretty appreciable for any product. The PC market is definitely bigger, but it's on the same order of magnitude as brand-name PC manufacturers. Acer, for example, makes about $2B/quarter [2], while Apple makes about $8B/quarter from Macs [3][4]

[1] https://www.itjungle.com/2018/01/22/ibms-systems-group-finan...

[2] https://finance.yahoo.com/quote/2353.TW/financials?p=2353.TW (30 TWD = 1 USD)

[3] https://www.statista.com/statistics/382260/segments-share-re...

[4] https://9to5mac.com/2018/11/01/apple-earnings-fy18-q4/


Commit node_modules.


LISP

- has survived a very long time already

- can trivially be implemented on top of any instruction set or VM that will be used in the future

- basically no syntax, so its (nonexistent) syntax can't go out of style

- supports every possible programming paradigm, even ones that don't exist yet


The number one thing you would need to do is make it easy to modify safely.

Software will always need to change in that timeframe. If you make it hard or risky to modify, then at some point it will get replaced.


You would design a yearly service contract into that software; that's the only thing that would keep it up in some condition. You know the "wise old wizard" in sci-fi/fantasy stories? That's the guy who's gonna be maintaining your software until he passes on the mantle to an apprentice when death is imminent.

I like the idea of software as replacing a societal service; as long as that society survives in some form, your software will survive as well (e.g. our current banking systems are such software).



Discrete apps aligned to the single-responsibility principle, such that they compose a System (Systems Thinking), solving the domain instead of the use case.


I'd make it to accept plain text as input and produce plain text as output.

(Picked that up from Rob Pike, I think)
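
A minimal sketch of that rule in C: a classic Unix filter that reads plain text on stdin and writes plain text on stdout (here just upper-casing it), so it composes with pipes, files, and whatever replaces them.

    #include <stdio.h>
    #include <ctype.h>

    int main(void)
    {
        int c;
        while ((c = getchar()) != EOF)   /* text in... */
            putchar(toupper(c));         /* ...text out */
        return 0;
    }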


If history is anything to go by:

Write it in COBOL.


Forget about it. Ideas are more important than code.


Which is why code that makes presenting ideas simple tends to stick around.


As a language, not as an application.

COBOL, for example.



