
I shouldn’t post this.

It’s hard for me to single out specific patterns because it feels like the things I’m doing today (taking an idea from a written description, PowerPoint presentation or flowchart and converting it to run in an operating system with a C-based language) are the same things I was doing 25 years ago. It might be easier to identify changes.

If there’s one change that stands out, it’s that we’ve gone from imperative programming to somewhat declarative programming. However, where imperative languages like C, Java, Perl, Python, etc. (for all their warts) were written by a handful of experts, today we have standards designed by committee. So, for example, writing and compiling a C++ program that spins up something like Qt to display “Hello, world!” is orders of magnitude more complicated than typing out an index.html and opening it in a browser, and that’s fantastic. But what I didn’t see coming is that the standards have gotten so convoluted that a typical HTML5/CSS/JavaScript/PHP/MySQL website is so difficult to get right across browsers that it’s an open problem. Building a browser that can reliably and responsively render content written to those standards is nontrivial, to say the least.

That affects everyone, because when the standards are perceived as being so poor, they are ignored. So now we have the iOS/Android mobile bubble employing millions of software engineers, when really these devices should have just been browsers on the existing internet. There was no need to resurrect Objective-C and Java and write native apps. That’s a pretty strong statement I realize, but I started programming when an 8 MHz 68000 was considered a fast CPU, and even then, graphics processing totally dominated usage (like 95% of a game’s slowness was caused by rasterization). So today when most of that has been offloaded to the GPU, it just doesn’t make sense to still be using low level languages. Even if we used Python for everything, it would still be a couple of orders of magnitude faster than the speed we accepted in the 80s. But everything is still slow, slower in fact than what I grew up with, because of internet lag and megabytes of bloatware.

I started writing a big rant about it being unconscionable that nearly everyone is not only technologically illiterate, but that their devices are unprogrammable. But then I realized that that’s not quite accurate, because apps do a lot of computation for people without having to be programmed, which is great. I think what makes me uncomfortable about the app ecosystem is that it’s all cookie-cutter. People are disenfranchised from their own computing because they don’t know that they could have the full control of it that we take for granted. They are happy to pay a dollar for a reminder app but don’t realize that a properly programmed cluster of their device and all the devices around them could form a distributed agent that could make writing a reminder superfluous. I think in terms of teraflops and petaflops, but the mainstream is happy with megaflops (not even gigaflops!).

It’s not that I don’t think regular busy people can reason about this stuff. It’s more that there are systemic barriers in place preventing them from doing so. A simple example is that I grew up with a program called MacroMaker. It let me record my actions and play them back to control apps. So I was introduced to programming before I even knew what it was. Where is that today, and why isn’t that one of the first things they show people when they buy a computer? Boggles my mind.

Something similar to that, something that would allow the mainstream to communicate with developers, is Behavior-Driven Development (BDD). The gist of it is that the client writes the business logic rather than the developer, in a human-readable way. It turns out that their business-logic description takes the same form as the unit tests we’ve been writing all these years; we just didn’t know it. It also turns out to be fairly easy to build a static mockup with BDD/TDD using hardcoded values and then fill in the back end with a programming language to make an app that runs dynamically. Here is an example for Ruby called Cucumber:

http://cukes.info
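
To make that concrete, here’s a minimal sketch of what a Cucumber feature and its step definitions might look like (the reminder scenario, step wording, and file paths below are my own illustration, not taken from the site):

    # Illustrative sketch only -- scenario and step names are invented for this example.
    # features/reminders.feature (Gherkin)
    Feature: Reminders
      Scenario: Adding a reminder
        Given I have no reminders
        When I add a reminder "Buy milk"
        Then I should have 1 reminder

    # features/step_definitions/reminder_steps.rb (Ruby)
    Given(/^I have no reminders$/) do
      @reminders = []
    end

    When(/^I add a reminder "(.*)"$/) do |text|
      @reminders << text
    end

    Then(/^I should have (\d+) reminders?$/) do |count|
      expect(@reminders.size).to eq(count.to_i)
    end

The Gherkin file is the part the client can read and write; the step definitions are where a developer wires each phrase to code (here just a hardcoded in-memory array, i.e. the static-mockup stage, before a real back end gets filled in).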

More info:

http://dannorth.net/introducing-bdd/

https://www.youtube.com/watch?v=mT8QDNNhExg

http://en.wikipedia.org/wiki/Behavior-driven_development

How this relates back to my parent post is that a supercomputer today can almost fill in the back end if you provide it with the declarative BDD-oriented business logic written in Gherkin. That means that a distributed or cloud app written in something like Go would give the average person enough computing power that they could be writing their apps in business logic, making many of our jobs obsolete (or at the very least letting us move on to real problems). But where’s the money in that?




> There was no need to resurrect Objective-C and Java and write native apps. That’s a pretty strong statement I realize, but I started programming when an 8 MHz 68000 was considered a fast CPU, and even then, graphics processing totally dominated usage (like 95% of a game’s slowness was caused by rasterization). So today when most of that has been offloaded to the GPU, it just doesn’t make sense to still be using low level languages. Even if we used Python for everything, it would still be a couple of orders of magnitude faster than the speed we accepted in the 80s. But everything is still slow, slower in fact than what I grew up with, because of internet lag and megabytes of bloatware.

It's even slower when you write it in Python or Lua. I've built an app relying heavily on both on iOS and Android, and it's a pig. This was last year, too, with most of the heavy lifting outsourced (via native APIs) out of the languages in question.

"Low-level languages" (as if Java was "low level") exist because no, web browser just aren't fast enough. There may be a day when that changes, but it's not today.


>There was no need to resurrect Objective-C and Java and write native apps.

I still find it crazy that the world went from Gmail/Google Maps to app stores. It's beyond me why most apps aren't anything more than cached JavaScript and UI toolkits.

I think one reason for this is that devices became personalized. You don't often go check your email on someone else's computer. People don't need the portability of being able to open something anywhere, so the cloud lost some of its appeal. Now it's just being used to sync between your devices, instead of being used to turn every screen into a blank terminal.


> I still find it crazy that the world went from Gmail/Google Maps to app stores. It's beyond me why most apps aren't anything more than cached JavaScript and UI toolkits.

* because JavaScript is still not as efficient as native code

* because code efficiency translates directly into battery life

* because downloading code costs bandwidth and battery life

* because HTML rendering engines were too slow and kludgy at the time

* because people and/or device vendors value a common look and feel on their system

* because not everyone wants to use google services/services on US soil


Interestingly, the Japanese smartphone market had most of these fixed by 1997 or so, by defining a sane HTTP/HTML subset (as opposed to WAP). Less ambitious than (some) current apps, but probably a much saner approach. Maybe it's not too late for Firefox OS and/or Jolla.

[Edit: on a minimal micro-scale of simplification, I just came across this article (and follow-up tutorial article) on how to just use npm and eschew grunt/gulp/whatnot in automating lint/test/concat/minify/processing of js/css/sass &c. Sort of apropos, although it's definitely in the category of "improving tactics when both strategy and logistics are broken": previous discussion on HN: https://news.ycombinator.com/item?id=8721078 (but read the article and follow-up -- he makes a better case than some of the comments indicate)]
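
To give a flavor of it, here is a rough sketch of the kind of package.json scripts the article argues for (the tool choices, globs, and paths are placeholders of mine, not the author's; each tool would be installed as a devDependency):

    {
      "name": "example-app",
      "description": "illustrative npm-as-build-tool sketch; tools and paths are placeholders",
      "scripts": {
        "lint": "jshint src/*.js",
        "test": "mocha test/",
        "build:js": "uglifyjs src/*.js -o dist/app.min.js",
        "build:css": "node-sass src/styles.scss dist/styles.css",
        "build": "npm run build:js && npm run build:css"
      }
    }

npm run build then chains the individual steps with no grunt/gulp plumbing at all.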


But in the JavaScript model, most of the processing is offloaded to the cloud when possible, making it MORE energy efficient. Ideally you are just transmitting input and DOM changes, or operational transformations.

Timesharing.

Web ecosystems can still enforce common UI toolkits.

Gmail was just an example.


The difference between JavaScript in the browser and native code has nothing to do with the cloud. Both can offload processing to remote servers.


As soon as we have reliable and energy-efficient wireless connections actually available with perfect coverage, maybe.


I think there's an element of "pendulum swinging" to how we ended up back at native. The browser is a technology that makes things easy, more than it is one that makes things possible. And "mobile" as a marketplace has outpaced the growth of the Web.

So you have a lot of people you could be selling to, and a technology that has conceptual elegance but isn't strictly necessary. Wherever you hit the limits of a browser, you end up with one more reason to cut it off and go back to writing against lower-level APIs.

In due course, the pendulum will swing back again. Easy is important too, especially if you're the "little guy" trying to break in. And we're getting closer to that with the onset of more and more development environments that aim to be platform-agnostic, if not platform-independent.


Well now you have a new job writing regular expressions for Gherkin...


I want to subscribe to your newsletter where you write things you shouldn't write. I'm glad you wrote this. I'm not alone!


Definitely a blog theme.

Meta: I'm glad I asked about the patterns now. I had doubts about the extent to which asking that question would add to this thread.



