It Can Be Done (2003) (multicians.org)
87 points by dasmoth on Nov 10, 2018 | 18 comments



My experience with writing assembler is the same. The run-try-correct sequence encourages trying things until it seems to work.

The best code I wrote was in assembly language. I'd divide the code into blocks with comments. Each comment would document the expected state at that point, what the block would do, and the state after that block.
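
Not the original code, but a minimal C sketch of that commenting style, with a made-up function for illustration: each block's comment states the state on entry, what the block does, and the state on exit.

  #include <ctype.h>

  /* Parse a non-negative decimal number from s; return -1 if s has none. */
  long parse_number(const char *s)
  {
      /* Before: s points at the start of the input, possibly at spaces.
         This block skips leading whitespace.
         After: s points at the first non-space character. */
      while (isspace((unsigned char)*s))
          s++;

      /* Before: s points at the first non-space character.
         This block rejects input that does not start with a digit.
         After: either we have returned -1, or *s is a digit. */
      if (!isdigit((unsigned char)*s))
          return -1;

      /* Before: *s is a digit.
         This block accumulates the value (overflow ignored in this sketch).
         After: value holds the number and s points past its last digit. */
      long value = 0;
      while (isdigit((unsigned char)*s)) {
          value = value * 10 + (*s - '0');
          s++;
      }
      return value;
  }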

I wrote pages and pages in my notebook, and while not everything was bug-free, most of it was. I have never experienced such a nice development process. Nowadays I have to consult five coworkers to ask how to call their undocumented methods and initialize the classes.

I recently encountered the book 'Toward Zero-Defect Programming', which recommends exactly this approach. Its key points are (off the top of my head):

  1. You should know the programming language you're using inside-out.
  2. Don't *try* your code, but *reason* about it, and comment/document this reasoning (see the sketch after this list).
  3. Do group reviews of the code and try to find holes in the reasoning, and problems with the code.
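
As a hypothetical illustration of point 2 (my own example, not taken from the book): the correctness argument lives next to the code and is checked by reading, not by running.

  /* Greatest common divisor of a and b, not both zero.
     Reasoning: the loop keeps gcd(a, b) equal to the gcd of the original
     inputs, because gcd(a, b) == gcd(b, a % b) when b > 0.
     Termination: b strictly decreases on each iteration and stays >= 0. */
  unsigned gcd(unsigned a, unsigned b)
  {
      while (b != 0) {
          unsigned r = a % b;   /* 0 <= r < b, so the next b is smaller */
          a = b;
          b = r;
      }
      return a;   /* b == 0, so gcd(a, 0) == a == gcd of the originals */
  }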


>How did André do this, with no tool but a pencil?

And paper; paper is the most important thing here. It is an extension of human memory. The short-term memory of human beings is extremely short.

I use paper with color marker pens all the time for programming. Nothing beats paper right now. To do the same thing on a computer you need expensive gear like an iPad Pro or a Surface, and the useful drawing surface is minimal.

I can use several A4s (the normal paper size in Europe), A3s (double that), A2s (four times that), A1s and A0s. I increase the resolution using fineliners. It costs nothing compared with expensive gear.

When you are done, you take a high-res picture for archival. You can replot it if necessary or view it on a TV.


Former Multician here. Andre was super-smart, but it is perhaps also relevant that even the most junior developers (as I was then) had quiet private space to work in, with large desks. All design was done offline. Multics terminals were like typewriters (although video terminals did show up near the end), hence no stream of popups. The environment both allowed for and demanded focus and concentration. This is no longer so.


Thanks for commenting!

And yes, more than the pencil-and-paper aspect of this, what I find striking is that it's one person working quietly and alone, with a manager who was prepared to take "Still designing" as an answer. There seems to be very little room for that in a lot of modern workplaces, and I find that rather sad.


I also prefer to spend a bit more time thinking before writing code. Things usually work fine but it's in direct contrast to TDD and other agile methodologies. Not saying they're bad, great things have been achieved with them, just that it's not a silver bullet (or the most enjoyable way to program for some people).


These types of designs work when the problem is well defined. For something like this, where the easy part is stating how it should behave and you're 100% sure what perfect behaviour looks like, by all means diagram and math it out until the system is complete, or at least as complete as it can be before you start performance-testing the subsegments that can't be determined from first principles.

But most people are not programming things like this. Most of the time, either the data or customer usage informs the next design decisions.

Even so, I spend about 40% of my time these days in "plan mode", because nailing a plan always halves the time it takes to implement a feature and can even cut it to a tenth.


I wish I were working in an environment where it's possible to know everything well enough to design something on paper. Nowadays I plug a lot of systems together, and I first spend a lot of time establishing that they really behave the way I expect. It's really hard to design something well when it keeps getting tripped up by some external system behaving in unexpected ways.

Makes me want to go back to pure C/C++ development where you know the environment in and out.


That kind of summarizes what I like about Haskell. The types of every value and function are so informative, and the definitions, even in third-party libraries, are often so short and to the point, that surprises are minimal even when working with a library that I've never used before. The avoidance of code with side effects also helps a lot, since it means that for most functions I only care about what goes in and what comes out of the function.

This also carries over to frameworks. Web frameworks like Ember and Rails rely heavily on side effects, especially Ember. I don't even know how a rendered template is updated when a controller property changes. It just works automagically! Defining an app in frameworks like this is largely a matter of defining hooks that get called by the framework. Trouble happens when the framework calls a hook when you didn't expect it, or doesn't call it when you did.

Haskell's Yesod, on the other hand, has been a pleasure to work with. If something's calling my code, the caller is going to be in my repo. I get control all the way to the final executable's main function. Yesod's code is there only as support for me to call.


Since this isn't immediately obvious: the code in question is linked from the article: https://multicians.org/vtoc_man.html


I used to do this with a method kind of like Cleanroom with inspiration from formal specs and refinement, too. You have to be able to figure out the specifics ahead of time without explorative execution. From there, I used decomposition noting the inputs, outputs, and so on of each function. I used a subset of a programming language with predictable behavior that maps to that. I'd usually know what's going to happen before it happened by just walking through the program in my head. Main problem was third-party libraries, which I had to profile ahead of time. Or just use a friggin' computer since even that wasn't reliable for those in C.
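
A rough sketch, in C, of the kind of decomposition described above (my own illustration, not Cleanroom's actual notation): each routine carries an "intended function" comment naming its inputs and outputs, and the caller is checked against those comments by inspection rather than by execution.

  #include <stddef.h>

  /* Intended function:
     in:  buf[0..len-1] with len > 0
     out: index of the largest element (the first one, on ties). */
  static size_t index_of_max(const int *buf, size_t len)
  {
      size_t best = 0;
      for (size_t i = 1; i < len; i++)
          if (buf[i] > buf[best])
              best = i;
      return best;
  }

  /* Intended function:
     in:  buf[0..len-1] with len > 0
     out: a value that occurs in buf and is >= every element of buf.
     Checked by reading: index_of_max meets its stated contract,
     so indexing buf with its result satisfies both conditions. */
  int max_value(const int *buf, size_t len)
  {
      return buf[index_of_max(buf, len)];
  }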

Here's a write-up on Cleanroom:

http://infohost.nmt.edu/~al/cseet-paper.html

Here's a test of it by NASA around 1990 where the E.3 and E.4 sections highlight its benefits and requirements:

https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/199100...


I am not sure one should take a lesson away from this. Sure, a programmer was able to write software on paper with incredible clarity, but in my opinion that approach is more counter-productive than useful in most contexts. Yes, it's unavoidable in certain cases (space rovers, critical health-related equipment), but in most cases it's not.

Getting something on the screen is useful. It's motivating. It helps to expose the limits of the approach taken. The scope of the project becomes much clearer once you have written a chunk of code. I have rarely felt that I should have spent more time dabbling with pen and paper than firing up my text editor. After all, mistakes can always be corrected.

With the rising complexity of software, and of the whole ecosystem itself, does it make sense to try to know all the edge cases you're going to stumble upon beforehand? Not to mention that requirements themselves can change. And what about processes like code review? You're not the only person working on the software. I am hard-pressed to say that seeking an extreme level of clarity up front is a great use of time.


> Yes, it's unavoidable in certain cases (space rovers, critical health-related equipment), but in most cases, it's not.

I'd still like it in my filesystem, though.


Dijkstra talks about this. Post-war European computing science departments couldn't afford computers and they only got to borrow something like an hour a week from the Americans. This forced them to devise methods for writing reliable code without a computer.


This article is older... OK, I see there's a 1994 there, but not in the HN title. I remember it clearly because I did a similar thing (for 1/4 the lines though) just before I read it in 1999. I felt like a million dollars :)


Yes, it should be (1994) in the HN title and not 2003, as it was published in "IEEE Computer, April 1994."

I'd guess the mentioned "update" on the page is not relevant, that is, that there aren't any important changes compared to the 1994 original of the article.


Am I the only one who's fascinated by the UX of this site? It's just so clear and so usable. It resembles https://motherfuckingwebsite.com/, but it's not a gag; it's an actual site.


Yes, I like it.

I'm not sure https://motherfuckingwebsite.com/ is the analogy I'd make, though. It uses a non-trivial amount of CSS (about 5k) -- but not so much that it'll slow most people's downloads, and not a big "framework". It's got somewhat distinctive branding -- you'll know instantly if you navigate away from the site -- but the elements don't take up too much of the screen. It actually has some responsive elements (try making your browser window narrow).

There are a few lines of JS to make the drop-down menus work. But again, nothing huge.

It doesn't seem to be consciously avoiding anything. It looks to me like the work of someone who's started from a clean sheet and is trying to build a site that works for them rather than ticking off a list of best practices. And yes, I like the results.


What's ridiculous is that the design paradigm of "removing the unnecessary" shows up everywhere in hardware today (goodbye, headphone jack), even as the exact opposite has happened with the majority of software.

And by ridiculous I mean "worthy of ridicule" because as developers, we're largely the people responsible for cramming in more tracking, frameworks, features, and styling into the work we create.



