
How We Teach Programming, and Where We're Going Wrong - gjstein
http://www.cachestocaches.com/2016/2/how-we-teach-programming/
======
mch82
One of the big gaps in programming education is the relative lack of material
on how to find, comprehend, and use code written by others. Most "Learn
language X" references assume the programmer is starting from scratch. Yet as
I explore programming further I'm increasingly understanding the value of
learning about applications, libraries, and frameworks written by others. To
use human language education as an analogy, syntax, reserved words, and grammar
are the equivalent of grade school topics in programming. We need more high
school style tutorials that survey, critique, and teach the style of code
written by top programmers.

~~~
s986s
I would argue we need more analysis tools for testing. Top programmers
arguably just work the hardest, test the most, and often favor speed over
readability. Being able to test something is much more valuable, IMO.

- grade school: list manipulation, loops, strings, and if statements

- middle school: servers and clients, OOP, types and validation

- high school: parsing, streams, bytecode, and efficiency

------
nickpsecurity
Games from the people behind World of Goo are exciting. I wasn't sure how I'd
even imagine a game that could teach skills with good mapping to programming.
They're doing a good job. Prior to that, the best results we were getting were
from projects like the brilliant MIT Scratch, which turned programming into
collaborative, creative, Lego building (below). We should still invest in
stuff like that, too, which is more direct but fun.

[https://scratch.mit.edu/](https://scratch.mit.edu/)

On the other hand, the claim that programming is a fundamental skill is an
assertion that's being repeated by many without evidence. It's a useful skill
that will become more useful as we go along. Yet something like 80-90% of
America, including the top 1%, knows nothing about programming despite their
job success or quality of life. There's no direct tie-in. Meanwhile, they use
reading/writing and mathematics on a regular basis. So, I think people should
stop making this claim, as most people will mentally tune out upon seeing
something like that.

This brings me to a bigger point: software isn't engineered despite prior
work showing how to do it. That started with Margaret Hamilton and Dijkstra in
the 1960's, where both of them used modularity, layering, and interface checks
to build flawless systems. By the 1980's, Hansen had safe concurrency
(Concurrent Pascal), Mills' Cleanroom let teams hit almost no defects on the
first try, LISP machines let you live-edit and incrementally develop running
systems with no compile phase, Schell had an NSA-resistant kernel (GEMSOS)
that ran other things as processes/containers on virtual CPU's (sound
familiar?), and Wirth's simplicity + safe-language approach let ETH run on
OS's written by its own students. Each of these approaches led, with small
teams, to systems that reached high levels of security, reliability,
extensibility, and (for Lisp machines) productivity.

Yet, the techniques that built them are largely unused by the mainstream. The
negative results are predictable. So, we have two problems: (a) getting
nonprogrammers into programming so they can solve problems with _some_
approach and toolset; (b) getting programmers into _engineering_ so they can
build robust, maintainable systems using lessons learned from the past. Like
other engineering fields do. So far, we're doing better at (a). Some good news
at least.

~~~
contingencies
At the risk of stating the obvious, (b) doesn't progress because it's more
expensive and often unnecessary. Even in the cases where it would be useful,
when TCO incorporating whole-lifecycle costs such as maintenance and/or
rebuilds is considered, (b) is still often a failure because it requires
longer lead times and more up-front investment. That's not to say there's no
space for very carefully built systems, just that they're a niche and likely
always will be, particularly as hardware is now basically free and improved
tooling (e.g. the popularization of CI) helps to reduce the difference between
one-off hacks and "robust, maintainable" systems built with knowledge of prior
work and theory.

~~~
nickpsecurity
Regarding expense, empirical tests found the medium assurance methods cost
the same or less due to huge reductions in debugging and maintenance issues.
This was the case for Fagan Inspections, Cleanroom especially, TSP, and
iterative methods with safer languages.

As far as high assurance, the level of assurance is usually applied to a
high-level design or some small, critical component. The cost reported for
LOCK was about 30% extra on top of a solid lifecycle. Same percentage for the
security costs of one Special Access Program. A small team did an EAL7 IPsec
VPN in a relatively short time. I'd love to know the cost overhead of
Praxis's Correct by Construction. seL4 cost a few million at the highest
extreme.

So, applied correctly, even high assurance methods result in a reasonable
premium for certain industries. Niche, as you said. Time to market is another
story, with high assurance killing that. Yet, medium assurance is nearly as
fast and around the same price as a solid lifecycle while paying dividends
later.

So, the cost and lead-time objections are a misconception if we're aiming for
the medium level, which is merely high quality. It benefits the long term
better than the alternatives. The only exceptions are bottom-dollar deals and
one-offs where barely working is acceptable.

~~~
contingencies
The study likely assumed that people already familiar with working in these
methodologies were equally available, equally happy to do so, and equally
priced as regular developers, which is unlikely to be the case. A certain
minimum team size is probably required to make many of these processes viable;
many real-world systems are written by solo coders. Also, in reality it is
relatively trivial to scavenge work like seL4, Linux kernel security toolkits,
software HA clustering and/or proxy implementations, hardware watchdogs, ssss,
and/or distribute infrastructure across disparate locations to provide what
probably amounts to a magnitude higher redundancy and resilience against real-
world attacks or component failures without paying those costs. Indeed, the
implementation of such is becoming near free in many cases due to convergence
on SOA/microservices.

To put this more plainly, "I can get another if I break it, so a clay cup
trumps a grail." - Mirza Asadullah Khan Ghalib, classical Urdu and Persian
poet from the Mughal Empire ... quote #1 in my fortune db @
[https://github.com/globalcitizen/taoup](https://github.com/globalcitizen/taoup)

~~~
nickpsecurity
"The study likely assumed that people already familiar with working in these
methodologies were equally available, equally happy to do so, and equally
priced as regular developers, which is unlikely to be the case."

For things like Cleanroom and PSP/TSP, they were first-time users of the
process, not long-time users. Just decent developers shown a better way of
getting results. In high assurance, it varied from first-time to experienced
users. First-time users usually gained the benefits of the method, but with
difficulty. There'd be at least one specialist aiding the team. The best
results came from people who had done it a while. So, for high assurance
that's partially true, and for the stuff I push on the commercial sector it's
the opposite.

"A certain minimum team size is probably required to make many of these
processes viable; many real world systems are written by solo coders. "

That's a legit critique. They can still benefit, as I and some others have
experienced. It requires one to divide the work into periods where you wear
various hats. It takes discipline to externalize one's own work and attack it
objectively. The good news is that inspection, testing, static analysis,
model-checking, design-by-contract... these all catch problems that are more
objective than subjective. You still get plenty of results. There's been one
or two threads on here where even solo people do QA on their own stuff with
good results.
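
To give a flavor of the design-by-contract end of that list, here's a minimal
Python sketch (the function is hypothetical, not from any particular system);
the pre/postconditions are exactly the kind of objective checks I mean:

    
    
        def withdraw(balance, amount):
            # Preconditions: objective, checkable claims at the interface
            assert amount > 0, "amount must be positive"
            assert balance >= amount, "insufficient funds"
            new_balance = balance - amount
            # Postcondition: the invariant holds no matter who wrote the body
            assert new_balance >= 0
            return new_balance
    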

It's still an open-ended problem worth some experiments to see how much can be
done, at what effectiveness, by a solo coder juggling work. So, good point.

"in reality it is relatively trivial to scavenge "

It's so trivial that today we had yet another HN thread about distributed
locks showing how people were copying one implementation and screwing it up.
Then there was disagreement in the comments. There are threads like that on
various critical things at least once a day or so on HN. Shows it's trivial to
do and hard to do right for many things. Better to make reusable, strong
solutions to the various problems we keep encountering where possible. You can
always mix high, medium, and average assurance components as a compromise
where we have X goals but only Y resources. Pieces can improve over time.

"Indeed, the implementation of such is becoming near free many cases due to
convergence on SoA/microservices."

I've yet to see one go 5 years under heavy load without downtime, much less
the 17 years VMS clusters did or the 30+ years of some IBM mainframe
installations. For security, things are more abysmal. I'll agree many
solutions are popping up that can do amazing things. Robust, scalable, fairly
secure, and near-zero-downtime systems don't seem to be among them in this
nearly-free, mainstream market you're talking about. They're a rarity in the
market in general.

So, there's still room for both medium and high assurance software. To be
clear, I'm only pushing medium because even first-timers did a better job than
most do today by eliminating the problem areas & doing great detection.
Similar cost, too. Only _hoping_ for high-assurance to be selectively applied
to high-impact stuff that doesn't change often. A recent example was a formal
verification of SCOOP: Eiffel's race-free, easy-to-analyze, concurrency model.
It found issues in the model that were fixed. Three other analyses, ranging
from specific to general, eliminated deadlocks, livelocks, and performance hits,
respectively. That's the kind of situation where a significant investment of
resources can keep providing benefits with almost no effort. Otherwise, medium
assurance aka commercial + sane extras.

""I can get another if I break it, so a clay cup trumps a grail.""

Nice quote, and a good principle in much of the current economy. Not so much
for my personal data or business I.P. when a Byzantine failure takes out both
it and my hot standby. I think the comparison mainly applies to non-critical
services and hardware. ;)

~~~
contingencies
Interesting discussion, I'd be keen to try such a formalistic approach if you
ever have a need for a remote critical eye.

Regarding the quote, I don't think you quite took the spirit of it: the idea
is that, given a replaceable component with failure expected, we can manage
reasonably to achieve the same outcomes for less overall cost. That is, do not
build the one true system to solve all availability and security challenges
(which would be very expensive; the old "pick two" adage), but rather build a
system which makes use of the properties of components intelligently to reduce
overall hassle (as measured in thought, time, cost, third-party dependencies,
untested code or processes) to achieve the same objectives, just as
reliably/securely.

I think the main reason there are fewer modern systems running for 15+ year
timeframes is that the centralized models of yore only remain valid within
traditional large organizations that are slow moving: finance, government
infrastructure, military, etc. Even there, they are being dismantled, as COTS
hardware + FOSS topple traditional mainframe vendors through cheap and highly
scalable clustering. The paradigm has shifted: today we have components like
the Linux kernel and sqlite which, despite not generally running a single
instance for >15 years (though they are definitely capable, and with Linux it
has been done; sqlite is not old enough yet!), do run in harsh environments
with limited resources in hundreds of millions - nay - billions of instances,
extremely reliably.

~~~
nickpsecurity
"Interesting discussion, I'd be keen to try such a formalistic approach if you
ever have a need for a remote critical eye."

I'll try to keep it in mind. There's a few people doing stuff in public if you
want to play with it, plus lists of stuff to try. Here's a few.

One person did an overview of Cleanroom and examples in Python, etc. here:

[http://infohost.nmt.edu/~shipman/soft/clean/](http://infohost.nmt.edu/~shipman/soft/clean/)

I just pulled a list of techniques that were empirically proven to increase
robustness into a Pastebin:

[http://pastebin.com/xZ6m4T8Z](http://pastebin.com/xZ6m4T8Z)

Note: One can pick and choose among these to find the right work/benefit
ratio. I'd like to see lots of experiments on mainstream software to get data
on what the 80/20 rule is. I'm betting on sane subsets of safe languages, code
reviews, usage-driven testing, interface checks (even asserts), and maybe fuzz
testing. The first four are widely available and take little work.
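
As a flavor of the fuzzing end, here's a minimal Python sketch (the parser is
hypothetical): throw random junk at an interface and treat anything but the
documented failure mode as a bug.

    
    
        import random
        import string

        def parse_record(line):
            # hypothetical target: parses a "key=value" record
            key, value = line.split("=", 1)
            return key.strip(), value.strip()

        # crude fuzz loop against the interface
        for _ in range(10000):
            junk = "".join(random.choice(string.printable)
                           for _ in range(random.randint(0, 40)))
            try:
                parse_record(junk)
            except ValueError:
                pass  # documented failure mode; any other exception is a bug
    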

Case study of Altran's Correct-by-Construction approach showing great metrics:

[https://www.commoncriteriaportal.org/iccc/7iccc/t1/t1201100.pdf](https://www.commoncriteriaportal.org/iccc/7iccc/t1/t1201100.pdf)

Note: Altran does plenty of medium and high assurance work. Their method can
do either. I'd rate Tokeneer as medium-high if just focusing on their part of
it. Note that they got it done with 3 part-timers in less than a year, with
code-level proofs. Quite a leap from the millions spent on teams of geniuses
doing proofs in Gypsy, etc. two years at a time back in the day. :)

"Regarding the quote, I don't think you quite took the spirit of it: the idea
is that, given a replaceable component with failure expected, we can manage
reasonably to achieve the same outcomes for less overall cost "

I'm not questioning the spirit so much as the implementation.

" do not build the one true system to solve all availability and security
challenges (which would be very expensive (the old "pick two" adage)), but
rather build a system which makes use of the properties of components
intelligently to reduce overall hassle"

That's what I advocated. With medium assurance, it costs almost nothing extra,
so apply it to the components. With high assurance, if used at all, it can be
applied to just the most important parts, and is by some niche players with
good cost/benefit results. It's just that...

"to achieve the same objectives, just as reliably/securely."

...this has rarely proven to work. There's an old rule that you can't make
high-robustness systems by composing low-robustness components. They
experience both individual and Byzantine failures. That's why we see the
companies building on them experiencing downtime that shouldn't happen, on top
of the unavoidable downtime I don't judge them for. So, certainly break the
problem into smaller ones and compose simple (or cheap) solutions where
possible. You just have a baseline of quality you need for the components.
Commonly expressed as "weakest link" or "Garbage In, Garbage Out."

"I think the main reason there are less modern systems running for 15+ year
timeframes is that the centralized models of yore only remain valid within
traditional large organizations that are slow moving: finance, government
infrastructure, military, etc."

Someone else speculated the same thing. There's some truth to it. Yet, there's
overlap between the kinds of services running on them and in the modern,
downtime-prone infrastructures. The stuff works on the robust systems of old
but fails on new ones. What's the key difference? One was designed to be
robust and one is a composition of pieces of known-to-unknown quality. Garbage
in, garbage out.

"as COTS hardware + FOSS topple traditional mainframe vendors through cheap
and highly scalable clustering."

They're definitely cheaper and faster. That's been the case since the Beowulf
clusters I built back in the day. Yet, their uptimes still haven't reached
what OpenVMS hit in the 80's with bullet-proof clustering. Are you truly
saying hardware and admin labor are so free right now that we can just throw
dozens to hundreds of servers in 3-5 locations at every business's problems
while ignoring the architectural, interface, or code quality of any of it?
I'll take QNX, 2 boxes here and 2 there, over a bunch of Linux boxes any day,
so long as the budget will allow the license. I'll have predictability,
performance, self-healing, damage containment, and live updates, all in a few
megs of well-written code with POSIX apps.

See the difference between that and mainstream systems? Good architecture
still matters. System/38 (AS/400), OpenVMS, NonStop, QNX, BeOS... all great
architectures that squeezed way more out of any given piece of hardware,
software, or admin effort. OpenVMS was also an example of the medium assurance
that was the point of our discussion. Their Red/Blue iterations did 1 week of
development & test writing, then 1 straight week of review and fixes, with
regression tests over weekends. The results were boxes so reliable that admins
sometimes forgot how to reboot them. That's still not common on Windows and
Linux boxes despite the great strides in reliability that nearly a _billion
dollars_ worth of labor bought those bad architectures. I like to poke at
Linux folks that MINIX 3 eliminated most crashes in a few years with a small
team while UNIX architecture took _decades_ to _mostly_ do the same thing.
Architecture & assurance activities matter.

"though they are definitely capable, and with Linux it has been done"

I'd like to see evidence of that. The mainframes, AS/400's, VMS, NonStop's...
these run all sorts of workloads without software repairs for years. Heavy
workloads. I've investigated numerous claims of individual Linux boxes doing
the same. It was almost always someone's barely-utilized, hobbyist web server,
DHCP server, etc. I want to see the equivalent of a mainframe or VMS server
running everything in a business 24/7 without a crash in years. Optionally,
that system getting lost due to never needing maintenance, with people
scouring the building looking for it. That was a recurring problem with pizza-
box servers and embedded cards running the aforementioned OS's. ;)

"they do run in harsh environments with limited resources in hundreds of
millions - nay - billions of instances, extremely reliably."

Again, I'd like a source for this. They certainly run the regular workloads
they're optimized for quite reliably. The cloud vendors optimizing on specific
HW configurations with all kinds of disaster tolerance appear to run them
reliably by hiding the failures. Yet, surveys of industry on their servers'
downtime contradict the overall claim, with all kinds of downtime still in
mainstream stuff. Plus, they show recent Windows servers ahead of Linux in
reliability, and AIX usually stayed on top. They don't include the stuff I
mentioned, due maybe to market share, or perhaps the contrast would make the
others look bad. ;)

Nonetheless, adopting medium assurance methods to build our components, or
using components built that way, is a solution. It was proven... and is
used... in many commercial systems that have higher quality at acceptable
price and performance. Outside one-offs, the only reason it's not happening is
lack of knowledge or will to do it. Mainstream devs were told about this stuff
for years, so they just don't give a shit. Surprisingly, Microsoft is ahead in
such efforts, as they implemented SDL, formal methods on drivers, integrity
controls, app protections, safe language use, and so on. I would _love_ to see
that level of commitment on the BSD and Linux side, given I use both. At least
they sometimes accept fixes from academics who apply various tools in their
spare time or with grant funding. ;)

~~~
contingencies
The pastebin looked fine, but an issue may be that some of the data/samples in
supporting research are so old as to be difficult to apply to modern
infrastructure, and the HR question still stands.

I will take a look at
[http://infohost.nmt.edu/~shipman/soft/clean/](http://infohost.nmt.edu/~shipman/soft/clean/)
tomorrow (evening here).

I agree with your assertions around reasonable use of prior work. I personally
came to similar methods designing security-oriented systems (e.g. at Kraken):
strictly defining a protocol in a formalistic manner with a language suited
for that purpose, then generating interfaces for client systems ... and even
application-level enforcement proxies that can be inserted either during
testing or (preferably) even automatically during live deployment. This
amounts to automated application-level middleware, for SOA-type systems,
direct from protocol spec.

Enforcement proxies ensure that broken implementation A and broken
implementation B may only talk correct subset C of any given protocol, block
and alarm on noncompliant communications, and make otherwise restricted
communication between nodes extremely difficult for attackers to use to
horizontally expand any foothold established through security breaches.
Assertions may be enforced in real time. Logging may occur there also. Very
useful stuff. Not actually that hard to do, in practice. By contrast, the
current norm of arbitrary HTTP POST with barely enforced JSON is a leaky
bucket of slop.
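
A toy Python sketch of the idea (the message format is hypothetical; in the
real setup the allowed table is generated from the protocol spec rather than
hand-written):

    
    
        # Correct subset C: the only ops and argument types allowed through.
        ALLOWED = {
            "get_balance": {"account": str},
            "transfer": {"src": str, "dst": str, "amount": int},
        }

        def enforce(msg):
            # Block (and alarm/log) anything outside the spec; forward the rest.
            schema = ALLOWED.get(msg.get("op"))
            args = msg.get("args", {})
            if schema is None or set(args) != set(schema) or \
               any(not isinstance(args[k], t) for k, t in schema.items()):
                raise ValueError("blocked noncompliant message: %r" % (msg,))
            return msg
    

A real proxy sits between A and B on the wire and alarms rather than just
raising, but the principle is the same.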

Altran link 404s for me.

The internet's basis in decentralization roughly equates to FOSS clustering,
which equates to torrents, which equates to the wisdom and utility of failable
components. I do not think you can claim in good faith that embracing failure
has "rarely proven to work".

Therefore, I'd like to know the source of your "old rule" that you can't make
high robustness systems by composing low-robustness components, because in the
face of just the above evidence it looks like bullshit. Get enough redundancy
in a system and it will operate with high resilience, as long as the
orchestration algorithms are well tested. Redundancy is virtually free today
in hardware, software, network connectivity, legal jurisdiction and a host of
other ways. Not to take advantage of this reality is to have one's head firmly
in the sand. To take advantage brings benefits beyond merely security and
availability, such as system mobility, peer service provider independence,
legal jurisdiction independence, cheaper sourcing, easier scaling, etc.

Some of the rest of your post seems to me to be comparing apples and oranges,
bygone eras to the present, monolithic transaction-oriented mainframes to
general purpose complex COTS disposables, so I don't feel it warranted to
discuss further.

With regards to Linux uptimes, you never specified workload, and that's hard
to assess in hindsight, so you've created an impossible requirement to
disprove your assertion. You do recognize, however, that many standard-load
Linux systems have run for 5+ or 10+ years with no problem, as I know well,
having had some. Further, with changes in hardware (more work in RAM than
ever, less unpredictable disk failure, higher quality cooling, more
recognition of component lifetimes, ECC RAM, etc.) I expect that net COTS
system longevity is increasing rather than decreasing, and Linux will not
crash before the hardware in all but very unlucky cases... particularly with
easily scripted, reasonable load testing.

My quote about sqlite and Linux running in harsh environments with limited
resources in hundreds of millions - nay - billions of instances, extremely
reliably, was primarily referring to mobile devices. Power challenges, many
power cycles, physical bashing, temperature variations, heavily intermittent
connectivity, dynamic clock speed migrations, moisture, etc. Furthermore, they
are far more complex in terms of software stack than 1980s/early 1990s
transaction processing oriented mainframes.

~~~
nickpsecurity
" may be that some of the data/samples in supporting research are so old as to
be difficult to apply to modern infrastructure"

They still apply, based on similar stuff published since then. Several are
best practices for robust systems even today. The HR question should've been
addressed by the fact that medium assurance techniques take little or no
training: mostly just effort and accountability.

"This amounts to automated application-level middleware, for SOA-type systems,
direct from protocol spec."

Great work. That's an example of the higher end of what I'm talking about. I
did something similar and kept the checks on in production. This lets me laugh
about obvious vulnerabilities like protocol downgrades while others scurry
around trying to patch them. It's a common problem that tools can help us
handle better, so we use tools to do it better. I'd like to see more people
think that way.

I cleared history and tried the Altran link again. It works on my end. Try
this one again:

[https://www.commoncriteriaportal.org/iccc/7iccc/t1/t1201100.pdf](https://www.commoncriteriaportal.org/iccc/7iccc/t1/t1201100.pdf)

Note: I'd really rather give you one that shows the use of the method, like on
the CA they built. Yet, the Altran transition and the effects of time mean
links are slowly disappearing off the net. That happened to old language
studies, too, when they weren't put into IEEE or ACM.

"The internet's basis in decentralization roughly equates to FOSS clustering "

Not quite. When a site goes down, it goes down. I have to manually look for
mirrors, archive.org, etc. One can use load balancers, CDN's, etc. to cluster
such things. That's not the default, though.

" equates to torrents"

The default at the time was FTP or HTTP. This led to problems. Following my
recommendation, a person sat down to think about how they'd meet their goals
differently. A combo of good architecture and coding led to BitTorrent.

"I do not think you can claim in good faith that embracing failure has "rarely
proven to work"."

My overall claim is that you benefit by taking extra time to apply lessons
learned to architecture, design, and coding to increase effectiveness (esp.
quality). The two cases you cited support _my point_ that this creates
benefits, rather than your original case of cheap labor throwing together
whatever they can download. Most people who are stitching together components
with good long-term results are using components or cookbooks made with my
approach. So one may even precede the other.

As far as embracing failure, high assurance has been on that for a while.
Hell, the concept even presupposes failures across the lifecycle. Most such
systems are designed as interacting FSM's that incorporate failure states with
a fail-fast and fail-safe approach. Later on, it was taken further with
"recovery-oriented computing" funded by DARPA, etc. Google that phrase with
the words "survey" and/or "security" for some interesting architectures.

"I'd like to know the source of your "old rule" that you can't make high
robustness systems by composing low-robustness component"

It came from Bell of the Bell-LaPadula model. Probably another person, too,
though I can't recall the name. The DOD funded high assurance safety &
security research leading up to Orange Book A1 systems over a period of three
decades. Some systems were built to the right principles from the start. Some
tried to retrofit things like UNIX for both availability and security. All the
security retrofits, plus some availability retrofits, failed, with pentesters
finding more and more obscure failures or vulnerabilities. These often
resulted from interactions of overly complex, stitched-together components.
That's because combining two complex programs essentially makes a third,
complex program. Being unable to prove the reliability or security of the
input components makes it difficult to say anything about the output.

Note that this is true for security more than availability. HA solutions show
that, depending on failure type, one can get at least three 9's out of crap
systems. The HA layer had to be well-designed and coded. Fault tolerance took
solutions like NonStop or Stratus, designed for it from the start. So, both
took my model to varying degrees, either at the middleware or whole-system
level, for availability. Many of these still failed at the upper layers of the
stack, _occasionally_ the lower ones, due to Byzantine failures from how data
moves throughout the cluster & corrupts stuff. That led to Byzantine-tolerant
schemes, recovery-oriented architectures, and the MILS architecture for secure
composition of stuff like Linux. These efforts had mixed success, with the
problem hard enough that DARPA & NSF are _still_ funding smart people trying
to find & prove anything about a solution up-front.

So, I think my claim is well-grounded in evidence. Availability often applies
in common cases that really smart people... carefully designing and coding as
I advocate... optimized the hell out of. Same at various layers of the stack.
These things work well enough that average users will get results unless they
stress them unexpectedly. Others are solving root problems where they can with
better systems and software. Think RDBMS clusters vs Hadoop stuff vs Google's
F1 RDBMS. The best method is composing a combo of what exists... in common,
battle-hardened configuration/usage... with higher-quality stuff while
incrementally replacing what you can as better stuff appears. This was a high-
assurance strategy called "incremental MLS", designed to address the high
costs and slow time-to-market of full, custom MLS. It worked wonders applied
to web servers, databases, etc.

" You do recognize however that many standard load Linux systems have run for
5+ or 10+ years with no problem, as I too know well as I've had some. "

Most I've seen didn't. Yours did. I'm just collecting data to support or
refute the claim. It's certainly possible. The question is how probable. It's
close to 100% for some solutions under real-world loads. Those are pricey for
sure, but they work by a design that can be emulated. I'd like to know where
Linux's design & code puts it.

"comparing apples and oranges, bygone eras to the present, monolithic
transaction-oriented mainframes to general purpose complex COTS disposables"

The _2016_ mainframes, NonStop clusters, IBM i's, etc. run similar workloads
with high availability, and high throughput on mainframes due to Channel I/O.
The modern COTS "disposables" in the cloud sector are even set up to emulate
mainframes with many CPU's, virtualization, metered CPU/memory use, some I/O
offloading, management software, etc. It's getting more apples-to-apples all
the time. Their reliability just still isn't there, with whole datacenters
going offline here and there. The management tech, on the other hand, has
caught up in the lights-out datacenters and maybe even exceeded prior work.

You want to talk disposables, though. So, let's compare good architecture on
a budget vs common architecture. In the embedded sector, there's uptake of
Java subsets to run safety-critical applications. Common deployments are
microcontrollers or PowerPC boards with weak CPU's running a watered-down
JVM. The JOP people and Sandia's SSP team each just built a JVM on silicon
without any abstraction issues that would hurt them. Everything down to OS
code is Java bytecode. The results were low cost, low watts, acceptable
performance, and high safety. The Azul Systems Vega processors did something
similar for the premium, enterprise market with hardware-assisted pauseless
GC and tons of cores doing native Java. A fraction of that could be
implemented inexpensively with a standard-cell model and sold as plug-in
cards to the disposable market.

[https://www.azul.com/products/vega/](https://www.azul.com/products/vega/)

There's doing what's typical but keeps causing problems vs doing something
different that addresses needs & root problems from the ground up. It's not
always an option, but it is one more often than people would admit.

"I expect that net COTS system longevity is increasing rather than decreasing"

That's debatable. You cited good evidence to back it up. The counter-evidence
is that the ridiculous levels of complexity in modern hardware and the
problems of deep-submicron nodes mean things fail in little ways more often.
Overall, your claim holds, as hardware is very reliable given the complexity
level, and they've forced the failures into a few predictable spots. We
occasionally get surprises like the RAM fault paper. Yet, clustering covers
the basic stuff pretty well.

"My quote about sqlite and Linux running in harsh environments with limited
resources in hundreds of millions - nay - billions of instances, extremely
reliably, was primarily referring to mobile devices."

You should've said that. Yes, embedded stuff is easier to do that with. Lots
of embedded devices run for years, especially with clock gating etc., where
most of the chip stays off most of the time, only coming online to do its
work. Quite effective in mitigating transient errors. Let's see. My Galaxy
freezes up on the camera here and there. It just failed to recognize the SIM
card yesterday, requiring a battery reset. Otherwise, very reliable, as you
say, given the complexity of the stack.

Yet again, as it's almost a trend in this discussion: were SQLite and Linux
done in the cheap/free, stitch-together-whatever-exists method you espoused,
or as clean-slate solutions that carefully designed or reviewed changes, as I
push? I'm cheating, as I know the Linux kernel process is pickier & lower-
defect than average. I also know SQLite has a strong review process designed
to keep complexity to a minimum and quality up. Two more examples leveraging
my recommendations. :P

------
32bitkid
I used to be more involved in the educational space (I used to teach college-
level classes and also participate in, organize, and mentor coding spaces for
young adults) and I don't think there is anything inherently wrong with the
way that we teach programming.

I have come to believe, after years of teaching, that you cannot _teach_
curiosity, but you can help create environments that foster a student's desire
to learn and explore. As such, the point of education is to expose and empower
children (people, really) to have the tools they need to explore and discover
the things that they _want_ to understand and master. You really can't _make_
someone enjoy "x", where "x" is the passion of the teacher.

Programming, to me, is like writing, painting, or mathematics, insomuch as it
is a multi-faceted discipline. Coding is to writing as typing is to
penmanship. There are _many_ reasons to write, from jotting down a grocery
list to writing an epic novel, just as there are many reasons to code, from
bulk automation/text transformation to diving into deep artificial
intelligence. Just as there are many reasons to paint, and many reasons to do
math. The "how" is dependent on the "why".

Most people don't learn to mix oil paint simply for the joys of mixing oil
paints, and most people don't learn penmanship simply because they long to see
a well-formed letterform. Few people learn how to take a functional derivative
simply because. It's a means to an end; they learn to do it because they need
to express something _else_.

I don't expect every child in my English classes to go on to write a great
non-fiction masterpiece or an epic novel. In fact, most of them won't, but I
_do_ think it's important that they are exposed to the fundamental skills
involved (penmanship, vocabulary, basic composition) in the event they _do_
want to write, or code, or compute, or paint, or whatever...

I think "school" is exposure to the mechanics, and individuals who desire more
will seek out the rest.

~~~
hellcow
As a counter-point: my favorite classes were always the ones in which the
teacher instilled their love of the subject into their teaching.

Show me what can be accomplished--show me the best of what can be done--and
let my imagination run with that.

The mechanics are generally simple and can be learned online or through
routine practice. It's the passion, the love, of the subject that teachers can
uniquely give their students, whether that's history or painting or
programming. Show us the joy behind it, and we'll seek out the rest.

I know that mechanics and passion aren't mutually exclusive, but the question
ultimately comes down to what teachers should prioritize. What's their goal?
Is it to teach how to add, multiply, and divide, or is it really to inspire a
lifelong love of mathematics in their students' lives?

~~~
32bitkid
And I agree with most of what you are saying; I think the worst teachers
range from, at best, not being passionate about what they are teaching to, at
worst, having no understanding of what they are teaching and just "teaching"
from a book. Neither one of those is really conducive to what I would consider
a prerequisite for real learning: an environment of exploration and curiosity.

But I do disagree about teachers "giving passion"; I don't think that's
possible. I think it's possible for a good teacher, at the right time, to
ignite a student's curiosity in a subject. But that spark _has_ to come from
the student, not the teacher. I don't care how passionate you are as a
teacher; if the student doesn't give a shit, it's not happening. It's the
teacher's job to fan it into flames of mastery, not create the initial sparks
of interest. Although, it seems that most of the time these days, it's the
teacher's job to quell the sparks of interest from students and get them "back
on topic".

But even still, as a teacher, I think you have to accept that making that
impact is not going to happen for 100% of the students in the class, probably
not even 25%. And it's questionable whether the teacher really had all that
much to do with it; would the student have sought out information on their own
outside the context of the classroom? Hopefully. Would they have found their
own set of teachers, mentors, and resources to express the thing burning a
hole in their head? Again, hopefully. Would it have taken them longer to do
those things? Sure.

I think, as a teacher, the hardest thing to do is not to take too much credit
for the accomplishments of your students. You didn't do much, other than help
orient them in the right direction; they did the hard part.

------
LeighJohnson
I agree with the article's thesis: general programming education needs to
step away from abstracted problem sets. Instead, I think programming should
first be presented as a tool rather than a complicated science.

The comp-sci courses I had access to in high school and college were all
depth-first, so the barrier to entry was overwhelming. The dense theory, to
me, was so much less attractive than solving a problem with a BB plugin or
broken Javascript in the real world. I believed I was going to be a script
kiddie forever, because I enjoyed tinkering and tweaking more than traditional
coursework.

It didn't click for me that I could be a "real" programmer until after I
dropped out of college (I was studying Latin/Classics, with no science or math
background). I started building my own games and game-helper apps, in Python,
while working an IT helpdesk. Observing people interact and engage with things
I built gave me the confidence to pursue a career in web development, using
MOOCs (the MITx series was excellent) and project-based tutorials to self-
educate. I stepped up to tackle more real-world problems outside the scope of
my helpdesk position, mostly day-to-day scripting and org automation.

So, educational strategies like Human Resource Machine really tick the right
boxes for me. If I had played that game years ago, I think I would've
gravitated towards comp-sci much earlier. I needed a strong reward system
(coworkers' gratitude) to propel me through advanced topics, and gamification
of education is all about treadmilling through a reward track.

I hold the title of software engineer today, without ever having completed a
formal comp-sci course outside of MOOCs. I don't think this narrative is
uncommon, especially among women or first-generation programmers. I clearly
demonstrated the skills necessary to be a successful developer: linguistics
and programming share many common themes. But I was funneled towards a silly
Latin program, because I and my academic advisers thought my non-existent
math/science background precluded me from computer science.

Programming has seeped into the vernacular of day-to-day problem solving, and
I think this is the angle that intro courses should seize.

~~~
mch82
The [Head First Labs](http://www.headfirstlabs.com/) series of books is a
terrific example of teaching programming with relatable, real-world problem
sets. I've found these books are good for a general overview of a topic that
can later be supplemented by more advanced references.

~~~
kozukumi
The Head First series gets treated a bit like the For Dummies series of old;
however, I think that's a bit unfair. I read Head First C a year ago and it is
a fantastic introduction to C. I was really impressed to see an intro-to-C
book (not intro to programming; you do need to already understand another
language before reading the book) cover subjects such as threading and network
programming. Most books on C don't go anywhere near that stuff, which is a
shame, as it makes programming a lot more interesting IMHO.

------
sjbase
Like most here, I always liked math and science. Nonetheless, growing up, I
never felt like I was given a meaningful answer to "why do I need to learn
math?" I think we can do better with programming.

The anecdote the OP gives, "learn coding so your really boring job [no offense
to OP] will be less painful" is useful. Even better for high-school kids:
"learn coding so you can Instagram all day at your job while your program
works for you." I'm not condoning this, and obviously it's better to actually
inspire true curiosity (extremely hard), but I think this kind of message
would work for a lot of kids.

------
conceit
So, don't just learn to code, learn to solve problems? Well, duh. There's at
least two sides to this. CS comes down to code, and it's a prerequisite to
computational problem solving most of the time. Problem solving, OTOH, is like
language: it's expected to be learned by doing. Code is the simpler, more
formalized of the two topics.

I'd rather not have every step of development guided by institutions, just in
case the methods are maybe just a little bit flawed. Maybe, just in case.
Arguably, learning problem solving is itself a problem to be solved, so it
seems almost impossible to teach, anyway.

~~~
anta40
>> While I enjoy tackling these exercises (and have devoted my academic career
to doing so), most people don't, further raising the barrier for learning to
code.

My formal background is computer science, so somehow I can resonate with the
author. My 1st programming language was C, though, not Scheme/Lisp. Damn, I
hate messing with pointers :D

So, what is the best (most feasible) way to teach non-CS folks to program,
then?

~~~
douche
Games, and game modding, seem to be the most accessible and most successful
gateway drug to programming. Games that have decent mod support can offer a
natural skill progression without overwhelming you all at once. Start by
tweaking some data; move on to basic scripting. Eventually, you start to chafe
against the limitations of the game's modding abilities and want to start
building your own thing.

This whole process is grounded in something real, and something that is more
fun than crunching numbers or building a fake CRUD app.

I'm curious about the spate of games that directly incorporate programming.
Maybe they have been more effective than I think, but for the most part, they
just aren't really that fun, IMHO. They lean too hard towards the edu, and not
enough toward the tainment... Particularly in comparison to, say, building a
custom StarCraft map with triggers and conditions or a Quake level, that you
can go play with your friends after.

~~~
bluejellybean
I'll second games being a great gateway drug.

My first introduction to programming was game development. Around early 2007,
Bungie (the game studio that made Halo) released a bunch of devcasts in which
a handful of the developers would all sit around, shoot the shit, and talk
about building the game. I naturally thought it was awesome and looked for
tutorials on Youtube. Unfortunately, I didn't stick with it due to the steep
learning curve and decided software development was too hard (if only I'd had
a mentor/teacher at the time).

A few years later, though, my friend and I wanted to bot the game Runescape
to make some money. The most ban-proof way at the time was to use a program
called Simba, and you wrote scripts in a Pascal-like language. Everything was
based on the pixel color of something, and most of the bots were simple
procedural programs. Due to the simple nature of the scripts, you could easily
write something without a ton of prerequisite knowledge and see if it worked
or not. The biggest problem with this method was color and object detection:
how do you know if what you're clicking on is a tree or some grass? At the
time, the hotshots in the community were mostly people enrolled in CS programs
and building neat algorithms to detect stuff. The only way to compete with
them was to learn the fundamentals of CS. So of course that's exactly what I
did!

Gaming provides a fantastic playground for learning because it's fun and
there are very clear rewards for learning more (learn matrix manipulation, get
better object detection, make more gold). I think teachers would do well to
get games into the classroom. They really do open the imagination when paired
with programming.

------
justifier
hello world is outdated

i think this alone probably turns away the majority of would-be
computationally literate individuals

as i was being taught programming i was so ridiculously underwhelmed by a
program writing out hello world

here i am on this machine that can play my favourite movie, entertain me with
a beautiful interactive gaming experience, and connect me to the world's
information and i am supposed to be wowed by 'hello world'

i only understood the power of hello world when i became interested in
embedded systems

so much time setting up the ide and connections and circuits and somehow i
managed to make this previously idle pile of organised metals print hello
world

that was a profound experience but this was years after i had become a
competent programmer and had the understanding of physics to appreciate what
was happening

if you want to teach programming then you have to show people that the process
can help them do something they want to do

there used to be classes to solely teach typing without looking

everyone i knew hated it, it was a pain for everyone in the class, then
instant messenger came out and suddenly everyone was a professional typist
banging out 60wpm

if social media was only available to people through bash scripts i assure you
everyone would be able to program in a few weeks

because that is what people want to use a computer for right now

but that would be regressive at this point

so i'd suggest the future of programming education should be folded into
mathematics education

learning times tables? write a script to express that table

even better would be to pull problems from a web api, solve, and post
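
for the times table idea, a trivial python sketch is all it takes (numbers
picked arbitrarily):

    
    
        # print the 7 times table
        for i in range(1, 13):
            print(7, "x", i, "=", 7 * i)
    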

~~~
chris11
I disagree. "Hello World" as a problem is completely uninteresting. But it can
be a part of a great introduction. I'd expect most "Hello World" tutorials to
quickly tell me how to go from reading text in my browser to running code on
my pc. And that is really useful.

~~~
chjohasbrouck
It's definitely useful, but if we can do that and _also_ be interesting,
wouldn't that be better?

If this was a marketing course and the assignment was to sell people on the
idea of learning to program, "Hello World" introductions would get a failing
grade.

What's distinct about Hello World is that it's a ritualized version of
learning how to write a print statement. Teachers sell it as an important rite
of passage, but that just isn't how students feel about it (and they
shouldn't). I think this disconnect only causes students to disengage.

~~~
nikdaheratik
"Hello World" is basically the bare minimum you need to demonstrate that you
got the computer to run the program and do something (so it's setup and
configured correctly, and you understand some of the syntax). It also
demonstrates how much of the way programmers actually communicate with each
other is through conventions and not more formal standards. AFAIK there isn't
a programming language with a specific built-in "Hello World" function, but,
like Goldilocks and the Three Bears, everyone Just Knows About It.

Granted, if the rest of the course was just like it, I agree it would be
terrible, but the first lesson itself is actually not that bad.

~~~
justifier
i really like this mythos argument that the first lesson is about
communication to enforce the convention that languages should communicate

this is interesting

and trying to dream up languages that fail, or refuse, or do anything else but
communicate, is fascinating.. what would a language like that look like, what
would it be capable of?

i do like the idea that conventions supersede standards, but hello world seems
even counter-intuitive to the programming convention of efficiency

    
    
        print("hello world")
    

demonstrates communication, but also has the individual typing 20 characters
to print 11 out of the same editor they typed into

    
    
        print(2**71)
    

demonstrates communication and running code, as well as having the potential
to elicit awe at the capabilities of a collaboration with this machine

i said in the original comment that i think programming should be folded into
mathematics education, in time parallel

instead of trying to compartmentalise education into disparate parts we should
be showing everything's interconnected nature

~~~
chris11
Yeah, I think that print(2**71) might be just as useful as
print("hello world"). But I don't really think it is any more useful than
hello world.

And how connected do you think computer science and mathematical education
should be? I think they should definitely be different subjects, but they are
definitely related. I'd be really surprised if I found a cs student from a
decent program who graduated without realizing how connected they are. I'd
expect the average cs student to have more programming experience than the
average math major. But I think most math majors have some of the same general
skill sets, and would have done well in cs classes.

------
20years
I agree with this. I personally think you need really good critical thinking
skills in order to be half decent at programming. The education system is
failing at teaching critical thinking. I would love to see more focus on
teaching critical thinking at a young age, which would give kids the tools
needed to be good programmers and problem solvers.

------
stepvhen
I was hoping for a much more in depth post after reading the title. It doesn't
really go into much detail past an anecdote.

~~~
mch82
The article doesn't scan well, but the main ideas I got from it were that (1)
the general population has different needs than people who intend to make a
career in software, and (2) adapting the delivery and content of general
programming education to address those needs may improve the accessibility
and learning experience for non-career programmers. I agree with you that I'd
hoped for more depth to back up the title.

------
dclowd9901
I was surprised to see a lack of emphasis in this post on philosophical
education, which places great weight on logic. To me, it is a keystone of
programming.

~~~
conceit
Philosophy is _the_ keystone to any knowledge, by the claim of the name.

