Can Simplicity Scale? (2012) (regehr.org)
66 points by minthd on Jan 9, 2015 | hide | past | favorite | 15 comments



At the end of the day, programming is about mapping physical typing into (stored) changes in electrical signals (in transistors, capacitors, etc.).

The 'theory' that we can somehow reduce this complexity to 'thousands of lines of code' profoundly understates the problem. At the lowest level, the possibly vast [assembly | machine] code will still have to exist.

Instead, a better viewpoint or name for this article could be 'Building rigorously specified and well-tested high-level libraries to reduce programmer risk' - or something of that nature.

When you think about it in this way, it's no different from any other library that you work with - except that maybe its behavior is better understood.


>The 'theory' that we can somehow reduce this complexity to 'thousands of lines of code' is simply an understatement...

Apollo 11 managed to land on the moon with about 145 thousand lines of code, so it can be done. Whether that kind of thing would be a good idea these days with vastly more capable hardware is another matter.

http://www.itworld.com/article/2725085/big-data/curiosity-ab...


Whether something can 'scale' or not in this case is a question of whether a piece of software can go from 'tech demo' to 'industry ready'. When it comes to so-called 'real-life' practicalities, there are presently two salient factors that limit our ability to practically introduce simplifying abstractions: either we have to speak in terms of the underlying machinery, or there is some sort of irreducible, intrinsic complexity in the problem itself. The first case is still very common; the second would be a pretty rare anomaly, AFAIK. So I'll just address the first issue: the necessity to acknowledge the underlying computational processes goes away as our machinery becomes powerful enough to support more levels of fundamental abstraction (i.e. our starting points for building things) that further and further disguise how computations are performed, so we can focus increasingly on expressing ourselves in the simplest manner possible.

I can think of two limitations with the above, however. The first is that in some cases, we're prevented from laying down higher level abstractions not just because it would be computationally inefficient, but because we haven't yet discovered a great abstraction that everyone wants to adopt. For example, dealing with user interaction or network events. I'm tempted to say concurrency presents a similar issue, but really concurrency is just a performance concern too—an aspect of the underlying computational process, not intrinsic in problems we want solved. The other issue (pointed out in the article) is that these simpler approaches tend to require more mathematical sophistication of potential developers. My feeling is that that's mostly an incidental state of affairs: I don't think there's anything intrinsically more difficult about the more mathematical (i.e. simple, abstract, and drawing from a store of concepts from mathematics and language theory) approaches to coding, it's just that the routes to acquiring the required knowledge have a scary status in our society (at the moment).

Doing a project with fairly intense performance requirements and dipping into expressing everything in terms of specially formatted data buffers to be shipped off to the GPU has given me a new perspective on present, practical limitations of abstraction in coding...
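To make the parent's point concrete, here is a minimal sketch (in Python, using only the standard library) of the kind of "specially formatted data buffer" a GPU API typically expects: interleaved vertex attributes packed into one flat byte array. The vertex layout (three floats of position plus four bytes of RGBA color) is an assumption for illustration, not something described in the thread.

```python
# Hypothetical sketch: flattening structured vertex data into the contiguous,
# fixed-layout byte buffer a GPU upload call would consume. The layout below
# (3 floats position + 4 unsigned bytes RGBA) is an illustrative assumption.
import struct

VERTEX_FORMAT = "<3f4B"                       # little-endian: x, y, z, r, g, b, a
VERTEX_SIZE = struct.calcsize(VERTEX_FORMAT)  # 3*4 + 4*1 = 16 bytes per vertex

def pack_vertices(vertices):
    """Flatten [(position, color), ...] tuples into one contiguous buffer."""
    buf = bytearray()
    for (x, y, z), (r, g, b, a) in vertices:
        buf += struct.pack(VERTEX_FORMAT, x, y, z, r, g, b, a)
    return bytes(buf)

# A single triangle, one colored corner per vertex.
triangle = [
    (( 0.0,  0.5, 0.0), (255,   0,   0, 255)),
    ((-0.5, -0.5, 0.0), (  0, 255,   0, 255)),
    (( 0.5, -0.5, 0.0), (  0,   0, 255, 255)),
]
buf = pack_vertices(triangle)
print(len(buf))  # 3 vertices * 16 bytes = 48
```

The abstraction cost the commenter describes shows up here: the program's data model has to be bent into whatever byte layout the hardware interface wants, rather than the simplest shape for expressing the problem.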


Reframe to the analogy of biology. Are living organisms simple? Did they scale? Will they continue to scale?

On the one hand, we can look at the amount of stuff that's going on at the cellular level and say "no, life is very complicated." On the other hand, we've gained an understanding of some of the basic constructs in a relatively short period of a few hundred years (vs. millions of years to evolve it) - so maybe there are some simple things about it, after all.

And we might say that it's failing to scale because it takes so much time for evolution to "progress". But if we are simply viewing complexity of life as the metric of progression, then we've done quite a few things to accelerate this progression, just within the time anyone reading this has been alive.

Our perspective is collectively narrowed by the existence of a marketplace and its momentary demands. The construct of "scaling" is imposed through it; but it is not a survival truth that you have to scale to be alive. A small system like a bacterium can remain relatively simple.


The situation with programs is different, though. Being able to scale descriptions of various complexity (computer programs) is a different matter from the existential viability of phenomena that we must describe complexly (organisms). Nature doesn't use our descriptions to produce itself, and complexity is only an interesting metric because it says certain things about whether/how the human mind can think about things.


I just watched Alan Kay's talk about the same research:

https://www.youtube.com/watch?v=ubaX1Smg6pY

I'm happy to see this blog post today because it saves me from writing the same thing. Prototypes can be beautifully simple in ways that systems with a bunch of customers and a long legacy often can't...


I transcribed a part of that talk for another thread, where he mentions the 10KLOC system from PARC:

https://news.ycombinator.com/item?id=8860680


Was there ever a final report for the STEPS project? I can't find it at http://www.vpri.org/html/writings.php but maybe I'm just doing something wrong.


No, you didn't miss it; its absence has been much remarked upon on the mailing list.


In this thread[1] it was promised by EOY 2012, but I couldn't find it either.

1: https://www.mail-archive.com/fonc@vpri.org/msg03589.html


And AFAIK the code isn't available either! Only some small pieces are. I would be upset about the lack of availability of the code if I were an American taxpayer.


I guess complex adaptive systems (http://en.wikipedia.org/wiki/Complex_adaptive_system) are one of the topics worth exploring. I think some new advanced technologies in programming, beyond "protocols" among programmers, could help people get closer to what we want, though I'm a complete layman with those techs. Simplicity probably does not lead to successful scaling every time, but it seems to be the only way to get there. My 2 cents as a layman.


I don't worry about the number of lines of code. I do worry about lack of clarity, readability, maintainability, and clear flows of events. Unfortunately, hacked-together fragile code that produces the desired effect (as the article mentions) has been the norm at every place I have worked. A serious source of job dissatisfaction.


I think that simplicity can scale, but it requires someone or something which can see through the abstractions that your normal programmer needs and distill out just the important points.

In other words, it will require artisans of the craft, or really good new languages and compilers.


That's what the post is about. What did you think of those new languages and compilers? Are they good?



