
Oh, Zed is eminently mockable, especially due to his absurd supermacho asshole persona, but his stuff is a pretty good antidote to three different classes of toxicity around programming:

1. "Programming is easy; what's hard is working together with other people." No, they're both sometimes really easy and sometimes really hard. For a lot of objectives you might have, you have to learn to do the hard parts of both of them. The hard parts of programming include design, analysis, testing, debugging, and verification. You aren't going to learn how to write a compiler in 21 days. (Most programmers don't learn how to write a compiler in 2100 days, although that's probably more because they aren't trying because they're intimidated.)

2. "Software is in a crisis because we don't know how to program, and the solution is to manage software projects as if they were construction sites, using misguided metaphors about bridges invented by people who have no idea what happens when you actually try to build a bridge." No, in fact you have to understand both management and programming (motherfucker!) to manage a programming project effectively. When programming is being done effectively, you automate all the repetitive parts, which dramatically reduces costs and timeframe, but dramatically inflates the unpredictability.

Instead of managing software projects like construction sites, we're going to be managing construction sites like software projects, using proven libraries, mathematical models, repeated simulations, iterative exploration of unknowns to reduce risks, and pervasive automation. This is all in Engelbart's papers from the 1960s. Doing things that way (applying software techniques to hardware) is how we became able to build hardware with billions of transistors in the first place. It might be 2020 or 2035 when most of the effort in building a house is done by people using computers to manipulate models of the house, and only the last step involves (fully automated) machines rolling onto the construction site to materialize your carefully debugged plans, but it's going to happen. (I don't know if being able to deploy a new iteration of your house every week in order to fix bugs and roll out new features is going to happen before or after that.)

Now, it's true that we don't know how to program (although developments like seL4, coqasm, Excel, NaCl, and PHP seem to be making some progress on that front). But the "software crisis" isn't about that. Rather, it's about how Moore's Law has reduced the cost of computers so much that programming is suddenly the limiting reagent in nearly everything in the economy. You can get a 48MHz ARM processor with 64kB of program Flash for US$1.76 and burn ten thousand lines of C into it, then use it to run a string of Christmas lights. For US$6.46 you can get a 48MHz ARM processor with 1MB of program Flash and burn a hundred thousand lines of C into it. Or you can put a Lua interpreter into 20% of that memory and fill the other 80% with eighty thousand lines of Lua. The "crisis" is that it costs literally a million times more to write the code than it does to make the processor. (Prices from Digi-Key, unit price for quantity 1000. A back-of-envelope version of that arithmetic is sketched below, after point 3.) The "crisis" started in 1968 when the price of hardware at last fell below the cost of writing the software to take advantage of it, and it's been "worsening" ever since. And that's the origin of "software engineering".

3. "Programming is super hard and you have to be a wizard to do it at all." No, anybody can learn how to program. We have Excel, JS, and PHP. If you have a computer and already know how to read, you can learn how to program a little bit in a few days, and usefully in a month. There are still going to be a lot of things that are beyond your capacity, but you'll be able to do some things. It might take you years of work to learn how to write like Dean R. Koontz, but you don't have to be a Dean R. Koontz to write a shopping list, a YouTube comment, or a love letter. Programming's the same way.




Excellent breakdown of the issues. I can see the references on the site countering many of those problems. Your bridge analogy reminded me of this article that came up here around the same time:

http://www.dev9.com/article/2015/1/the-myth-of-developer-pro...

I'm considering making it my go-to reference for explaining this stuff to managers when I only have a short time and a quick link. I also agree the software crisis largely came from the cheaper unit prices of hardware, complexity exploding in software to take advantage of it, and many management issues. It also inspired Wirth's Law, where Wirth made the same observation while trying to avoid it himself.

The only inaccuracies I see are these:

"When programming is being done effectively, you automate all the repetitive parts, which dramatically reduces costs and timeframe, but dramatically inflates the unpredictability."

It doesn't just do that. The good 4GLs did that, with great productivity benefits in the common case and the ability to call custom code for the rest. Many programming aids do today, too, as do languages offering good macros. However, the methods I advocate here do more than that: they support the designer's understanding of what the system does; find high-level issues in requirements or specs; find interface errors (critical); prevent whole classes of problems by construction; detect others; enable easier portability with native efficiency; and ensure the executing code matches the source. Those are just a few recent examples, each requiring human effort. No amount of programmer skill in C with ad hoc requirements/design techniques replicates these without considerably more effort or the most experienced, elite programmers.

So, the tactics and tools from Comp Sci and the high-assurance/integrity field certainly have value beyond automating tedium. These also tend to increase predictability once people are used to them, because they reduce risk and the debugging that follows. The trick is to choose just the right ones that boost the developer's abilities without bogging them down. I don't have a single recommendation there as (a) one must use the best tools for the problem and (b) they experiment too damn much in Comp Sci for me to say what's better for most methods or tools, only what's worked and for what. So, the specifics I bring up change depending on topic and audience. Eventually, we'll see some convergence with more uptake. Like how Amazon's using TLA+ specifications now, with results I predicted when encouraging that sort of thing for years.

" The "crisis" is that it costs literally a million times more to write the code than it does to make the processor. "

Here I believe your overall point is about hardware unit cost vs developer cost, along with what that leads to. I agree with that. I just want to make sure you know that line isn't true in general, as many programmers think it is. Hardware development is expensive, especially CPUs. I've been pushing on Schneier's blog and elsewhere a number of CPU modifications that eliminate entire classes of errors (e.g. code injection, leaks). Most of them have prototypes to draw on or are specific enough for the opencores crowd to handle. Yet you see almost no attempts to put them into an ASIC even when people know about them. That goes for the opencores stuff, too, for the most part. That's because hardware development is ridiculously expensive even for adding a block or two to a RISC CPU that already has prototypes.

The reason is mainly mask costs, tooling, and labor. The masks, which are necessary for printing chips, run $100k+ per design test even at 350nm-180nm (the minimum useful nodes). Wafers and packaging might be $5-10k. Every screw-up requiring a change means new masks and maybe 4 months of waiting. Your minimum staff of 3-5 pros (rookies make too many mistakes) will set you back about the same amount unless they're in Asia, with tools that start around $50k a seat on the low end. For hardware that's smaller or faster, all those variables go up dramatically, with tools for deep submicron (esp. synthesis) running $1+ million a seat, and you need four or five different tools, sometimes more than 10 on the most advanced nodes. Your CPU was probably made on a node where one prototyping run (think unit test) of your HDL code cost $1+ million.
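To make that adder concrete, here's how I'd tally a first spin from those figures. Reading the staff cost as roughly comparable to the mask set, and giving each of three engineers one low-end seat, are my assumptions, not hard numbers:

    # Rough adder for a first ASIC prototype at 350-180 nm, using the figures
    # above. Staff cost and seat count are assumptions for illustration only.
    mask_set      = 100_000      # one mask set per design test
    wafer_and_pkg = 7_500        # midpoint of the $5-10k range
    staff         = 100_000      # 3-5 pros, read as comparable to the mask set
    eda_seats     = 3 * 50_000   # low-end tools, one seat per engineer (assumed)

    first_spin = mask_set + wafer_and_pkg + staff + eda_seats
    print(f"First spin: ~${first_spin:,}")   # ~$357,500 before any respins
    # Every screw-up that touches the masks adds ~$100k and ~4 months of waiting.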

There are tricks to keep costs down, mostly software-inspired as you guessed. Adapteva is probably the best at it: pre-proven IP, simplicity in the custom stuff, re-use of it in future iterations, HW's version of structured programming, the cheapest tools, utter pros for the team, and customers willing to pay significant unit prices. It still cost them $2 million. Most SoCs cost $15-50 million. I managed to come up with flows, using open and proprietary EDA, for anti-subversion of open, secure hardware. Shortcuts like S-ASICs (e.g. eASIC, Triad) got it down to around $400,000, but with volume requirements. Still high. The RISC-V Rocket processor, etc. are happening because academics get paid in grants, get EDA tools for low five digits, and get discounts on ASIC prototypes. They're our best hope for OSS, secure chips. But I hope I've illustrated that HW is in no way cheaper than software, except in the unit price of something whose demand spreads development costs out enough to make it low.

Note: the microcontrollers you cite are a special case, not like the rest. Developed by pros, very limited functions, cheap tools, much reuse, on fully-depreciated fabs (often owned by the vendors) that reduce iteration costs, and in such ridiculous volumes that costs drop further. Even the cheapest fabs with your tiniest design will charge you $3,000+ for the minimum order. When has a compile-and-test cost you that much? ;)

With that foundation laid, I can then say your CPU is going to cost way more than the software. Especially if it's Intel, AMD, or IBM: full-custom for top performance, power efficiency, and density. That requires more of the best engineers, more time, and more masks ($1+ million per set) on the best nodes. The inevitable logic screw-ups and their respin cost require extra tooling, both commercial EDA and custom, to counter the problems: Intel uses formal verification [1], IBM mostly falsification [2]. I'm not sure what your CPU cost vs the software on it, as numbers are hard to come by. I do know that Solaris 10, a whole UNIX rewrite, cost around $200-300 million, while the Cell processor cost IBM $2 billion to develop. Intel spends $10.6 billion a year on their product lines, with most of it on the hardware end. So, standard-cell hardware is more expensive than software, and full-custom plus high-end specs is insanely expensive.

So, it's probably best for that old meme to die since the world has changed and shit is much more expensive now. Your points about the effects of unit price vs developer cost still stand and all. I just wanted to make sure you knew just how hard hardware is today and at what prices. Plus, there's always the tiny chance that a smart guy like you reading these posts will say "Challenge accepted!", then build a secure, surprisingly-fast processor for us on an older node. :)

Note: +1 for the reference to Engelbart. His videos prophetically talking about (and illustrating) remote collaboration, etc., were some of the most visionary things I've ever watched from the field's history. Mind-altering to look back at them trying to look forward and getting a lot right for the time period. Very specific stuff, too.

" No, anybody can learn how to program. "

VERY TRUE! I try to remind everyone that doubts it. There are more tools and more ease of use today than ever, to get started at whatever level they're comfortable with. Worst case, they think it's too hard; they admit that, yes, they played with Legos; I hand them Scratch [3]; and they're on their way to programming, haha. The basics are really as easy as breaking things down into steps and building them like Legos. A kid can do it... :)
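For anyone who doubts how small those steps can be, here's roughly the size of thing I mean (the items and prices are made up for the example):

    # "Break it into steps" in about ten lines: turn a shopping list
    # into a priced receipt. Everything here is invented for the example.
    prices = {"milk": 2.50, "bread": 1.80, "coffee": 6.00}
    shopping_list = ["milk", "coffee", "coffee"]

    total = 0
    for item in shopping_list:       # step 1: walk through the list
        cost = prices[item]          # step 2: look up each price
        total += cost                # step 3: keep a running total
        print(f"{item}: ${cost:.2f}")

    print(f"Total: ${total:.2f}")    # step 4: report the result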

[1] https://www.cl.cam.ac.uk/~jrh13/slides/nasa-14apr10/slides.p...

[2] http://fm.csl.sri.com/SSFT11/FV_Summer_School2011__HW_Verif....

[3] https://scratch.mit.edu/



