Walled Gardens are Driving me to JavaScript
55 points by 16s on May 27, 2012 | 71 comments
I fear that someday soon, native programs that I have written and rely on won't run on my own computers, because I won't have the money to buy a vendor-specific compiler toolchain or a vendor-provided signing certificate. Because of this, I'm going to make an effort to port all of my applications to generic JavaScript and hope that the vendors won't devise ways to stop that from running.

Here's a demo of SHA1_Pass in generic JavaScript: http://16s.us/sha1_pass/why. It's not as functional as the native desktop application, but it works for my needs. Edit: This demo only works with Chrome and Firefox.

As a kid who began writing code on a C64, it's really sad to see the move to app stores, signed code, approval processes and vendor required dev tools. What do others on HN think? Is there a future for independent developers who write native code? Will our native apps stop running on our own computers?

App stores and signed code are not a bad thing per se. I have never installed anything on my Debian/Ubuntu boxes that was not GPG-signed and coming from an apt repository (an app store, fwiw). Nothing except software I developed myself.

The problem is when the hardware manufacturer dictates which signatures are to be trusted and which are not. The user should be the one free to choose the software that runs on their hardware, not the hardware maker.

I suggest «The coming war on general computation: the copyright war was just the beginning» by Cory Doctorow https://www.youtube.com/watch?v=HUEvRyemKSg .

Apple dictates the apps you can use and restricts the OS. It's not a matter of signed packages and security, but of corporate domination of technology and restriction of use of a hardware device you legitimately own. Would you buy a PC that restricted what OS and software you could run on it? But when it comes to smartphones and tablets, we accept this. Blocking nefarious apps is one thing; having absolute control of what your computer or smartphone can run (yes, smartphones are just portable computers) is a monopoly. We wouldn't accept this in any other industry. If Ford decided that you can't modify, improve, or repair their products, based on a restrictive license you had to agree to at purchase, you probably wouldn't buy a car from them. Yet in the tech and media industries, this is somehow acceptable. Why should we accept a "walled garden" on smartphones when we own the hardware? If it were any other industry, people would be outraged.

You say on the one hand "blocking nefarious apps is one thing," but on the other that "having absolute control of what your computer or smartphone can run" is a monopoly.

Which do you want?

The harsh truth is we've tried the 'run whatever you want' model and it sucks ass for the end user as they get infested with malware.

I say, which do you want?

That is a blatantly false dichotomy. You wouldn't use a browser that did not let you access potentially malicious pages at all, but you wouldn't want one that gave you no protection either. And so you run Chrome which gives you a scary warning page but lets you go see any page if you really want.

I really don't buy that Apple's absurdly draconian control is better than something akin to Chrome's warnings. You should make it difficult to install something potentially malicious by accident or without thought, but you should definitely not go out of your way to stop determined users from installing what they want. Apple goes well beyond the reasonable and helpful and into the absurd.

There's nothing false about this dichotomy; you're just setting up a straw man.

My point stands: we tried the run-anything model, and millions of machines are infested with malware, as anyone with basic computer experience can attest. I can't count the number of times I've had to clean friends' or family members' computers. There is a very good argument for these sandboxes.

The issue isn't security. The issue is control. The Debian project uses signed packages to ensure integrity but is open regardless. Of course there should be security and a reasonable expectation of utility, but not at the cost of free use of something you bought and own. The tech industry is moving toward a model where you "rent", for lack of a better term, rather than own what you purchase. I have no issue with commercial software, but all software is useless without the hardware to run it. If I have the hardware, I should be able to do what I want with it, just like an automobile. What's the difference? If a car manufacturer decided which highways you could drive on, based on a restrictive license agreement, you wouldn't buy that car; then again, you also wouldn't buy a car without seatbelts. We wouldn't accept this concept in any other situation, so why do we accept it here?

Empirically, you're wrong. In order to install random plugins, Firefox makes you click through a dialog that you're forced to read because the install button isn't active for a couple seconds. People fill their browsers with spyware anyway, because they want the {smilies, titties, stolen movies, etc}.

To tell the story again: my brother runs a cash-intensive business that regularly moves hundreds of thousands of dollars a month. He narrowly escaped having a mid-six-figure sum stolen after his computer was hacked, and his solution now is to have a separate laptop that is only used for accessing the bank website and not a single other site on the internet. He's not stupid; he makes way more money than the majority of people reading this, and he finished a math undergrad with honors. And yet his computer got spyware on it that harvested the bank login. It's time to admit that our current security models have absolutely failed their users. At least on an iPad, it's more likely than not that an application that will run is safe to run.

> It's not a matter of signed packages and security, but of corporate domination of technology and restriction of use of a hardware device you legitimately own.

It is a matter of corporate domination of technology and restriction of use through signed packages and security. ;)

Really never? I think you're in the minority here. Running Linux, for me, meant a /lot/ of going to developer websites, downloading something, then: ./configure, make, make install.

I wrote that with binaries in mind. Yes, I, like you, have installed plenty of software via `./configure && ...`. You did check the md5sums and their GPG signatures, didn't you? ;)

BTW, many non-developers will always be bound to what their distro provides them with. I am OK with that, as long as I can override all these security restrictions whenever I want, for whatever reason.

I think that Chromebook-like operating systems could provide an interesting middle-ground. By default the system is locked down and only Google can update the software. At the same time you can unlock the bootloader whenever you want and install your own things if you want to escape the walled garden.

There's a lot of confusion between "freedom to install anything" and "freedom of not worrying about what you install".

The Android app stores have some housekeeping problems. There are apps that shouldn't be in there because they're malicious. Is it restricting "freedom" to prevent people from installing these?

Cory has been turning into a cantankerous crank lately. The future is not these general-purpose computation platforms; it is, naturally, smaller, more application-focused devices that provide a safe environment for the user.

I can imagine he'd be railing against roads a hundred years ago as "the coming war on general driving" because they restrict a person's freedom to drive anywhere they want.

At the same time, users have never had more options when it comes to free and open in other spaces like Arduino or the new ARM-based machines. People can get a very capable computer for $25-35 and run a robust open-source environment like Ubuntu on it. What more do you want?

> "freedom of not worrying about what you install".

Nonsense. Or you could as well talk about "freedom to not hear unhealthy opinions", "freedom from the burden of choosing your profession", etc.

How about "freedom to eat safe food" where you wouldn't have to bring a chemistry kit to every restaurant to test for toxins?

How about "freedom from unnecessarily dangerous professions" where you won't have to concern yourself with being just another casualty in the factory?

Usually when people "give up" freedom they're just trading it for a different form.

It is extremely liberating for non-technical people to be able to browse an app store catalog and install things without concern that it will wreck their device.

> Usually when people "give up" freedom they're just trading it for a different form.

Go tell that to the many people who are fighting for more freedom.

The people that are usually fighting are the ones from which freedom has been forcibly taken. There was no option.

You expect me to have sympathy for people that are whinging about their iPhone not being as open as their Raspberry Pi? Where's your rage about refrigerators or washing machines? Why isn't your car's firmware open-source?

You want freedom, you can get freedom. You want a polished, appliance-like phone, you can choose that.

To play devil's advocate:

a) Money is a bit of a red herring. Or rather, if Apple gave away keys for free, would you no longer have a problem?

b) If you have physical layer access, you can override any security settings. Gatekeeper et al cannot change this fact. You will be able to write/run code, just not necessarily distribute it.

c) Code signing will nuke a large portion of malware. 99% of users are not developers, so why should the default state of the operating system be configured for our needs?

d) Code signing could be implemented at a JavaScript level as well. Flash is/was an attempt at signed codebases/binaries distributed over the web. Presumably, for example, a site could be required to pull resources from an HTML5 manifest-style local cache that has been signed/verified. This could eliminate MITM-style attacks (changing ads) that are already in use. Difficult? Yes. Impossible? No.

e) There's always Linux/BSD. Until the TPM security protocols of 2018 are implemented, of course.

Don't forget UEFI, which has the potential to make a large segment of consumer devices unbootable under Linux, and which also invalidates your point b). If you think that walled-garden software signing is the future, why wouldn't you expect that signing to be implemented straight down to the hardware level?

> b) If you have physical layer access, you can override any security settings. Gatekeeper et al cannot change this fact. You will be able to write/run code, just not necessarily distribute it.

That is not necessarily true. It is a "simple"[1] matter to have non-overridable security programming arbitrarily close to core hardware, from a chip on the motherboard to etched directly into the same silicon as the processor or BIOS. This isn't just a theoretical concern: plans to do this are already underway, see http://en.wikipedia.org/wiki/UEFI#Secure_Boot

[1]: By "simple" I mean the concept is simple, the implementation is plenty complicated.

This is tangential to your arguments, but code signing doesn't enter the Flash platform until you start targeting native apps instead of the web. Flash has no code signing; only AIR does.

As a kid who began writing code on an Apple II, I can't believe you feel that today's situation is inferior to the one we had. I didn't know anyone who HAD a computer, let alone could program one, and I sure as hell never imagined that I could actually distribute my programs anywhere. Look at what kids have today: an infinite supply of resources, an infinite audience via a multitude of channels, and even the ability to really market and sell their wares (no pun...). That seems like a worthwhile trade-off for being able to go native, to me. And if you really care, fire up a virtual machine and go nuts.

Your hardship was not intentional.

Yeah, I remember not even having a diskette, and getting angry at my mother for turning off the computer and destroying all my work.

My brothers would just unplug it. Boy, did that make me fly off the handle! I would write 50-line programs in BASIC (C64), and just when I was about to kick off my GOTO frenzy, they would unplug it. *takes a deep breath*

Assuming of course, that your platform allows you to run virtual machines...

I don't think you need fear that. Unless it is made illegal you will always be able to buy a 'computer' on which you can write your own code. However, as the great mass of folks who are only users of what we now call computers (web, email, twitter, what ever) move on to the appliance that replaces them, the cost of a computer will go up because the volumes in which they are sold will go down.

In '78 I paid $800 for a computer kit which, when assembled, was waaaaay less capable than a Raspberry Pi. But there were maybe 10,000 ever made. My prediction is that the next 'gap' will be when the big players (HP, Apple, Asus, etc) move on to selling appliances for users.

Laptops might get eaten in this gap. But there will be the equivalent of a terminal program in the appliance thing you are using so you can program over the network. If you want to run code on the machine that you are typing on, you may have to have a 'deskside' type computer, which is living off the ecosystem of server machines.

Other risks are disk drives that are too smart for their own good (built in DRM as an example). Hard to build one from scratch, but flash is ok for now.

Overall you'll experience the same sort of changes car enthusiasts did where cars became less and less a motor and a transmission and a body and more all of that and a complex proprietary feedback control system to run it. Of course many kids these days don't care that they can't put 'glass packs' on their ride, they have other things to customize.

What is to prevent the browsers from also eventually becoming walled gardens, as they reach the complexity of the increasingly locked down operating systems? We already are seeing multiple threats to even the open web with things like content pay walls, cloud services, and Facebook. Sure if things got really bad, we could fork WebKit, use Linux, and run our own mail server. The problem is that the majority will use what is simple and safe - what they can understand. Whether browser, operating system, hardware, cars, or any sufficiently advanced technology, that will always end up being a walled garden of some sort.

“Once you have something that grows faster than education grows, you’re always going to get a pop culture.” - Alan Kay

The problem with computing is not any particular platform or our tools. The problem is education. As tinkerers, we learned how to make computers do what we want, and eagerly raced ahead while leaving the majority of humanity in the dust. The direction computing is moving can only be attributed to our own selfishness.

We can now either accept the walled gardens as computing for the masses, or we can work to create doorways out for those trapped inside by simplifying access to computing and improving education.

Lay fewer bricks, build more doors.

This is true, you can already see with the iDevices restricting browsers to Safari only (and similar restrictions in Windows 8) that the web as an applications platform could be easily killed by the restrictions on the device.

I sometimes think there is a false dichotomy presented between usability/security and flexibility. This seems a little like suggesting that we should install a totalitarian dictator so that the masses don't have to worry about messy subjects like voting and politics.

I think that the popularization of "walled garden" computing really came with the iPhone. Since the iPhone was more usable than any other smartphone to date there was an assumption that everything apple did was right. When really it was just that other companies did (and continue to do) an inexcusably bad job on usability.

Let's assume that the iPhone had somewhat less restrictive policies about what was allowed into the store (assuming that malware and scamware were still disallowed) and also allowed side-loading onto the phone. Would this have impacted its popularity in a negative way?

The crucial difference is that nobody can stop you from using or building any web app you want (minus government censorship). There's no single central authority with unchallenged veto power.

Openness is about allowing alternatives, not guaranteeing their success.

Your device manufacturer, OS/browser provider, or ISP can all stop you from using any web app you want.

They can either flat out block it or leave your browsing software in such a state that it would be difficult to build anything significant for it (see IE6).

In the rare cases where something like this has happened it has been met with almost instant outrage and can almost always be hacked around trivially.

On the app stores this is just business as usual and happens all the time every day.

Wise words.

What's the sense in spewing out endless chunks of code when no one (even the authors later on) can take the time to read it and understand it?

Why not review the code that's already been written (like the open source code Apple and other walled gardens rely on to build their systems)? This is the sort of tinkering that will help us develop alternatives.

To find the doors, and build new ones, we have to read old code, not simply write new code.

Why not just target Linux? That's what I'm doing.

1) Linux is going to be around for a long time. However nefarious the intentions of large corporations might be, there is now too much written on top of it for it to vanish, and too many distinct users for any one of them to control it.

2) Making portable code that runs on both walled gardens and open systems still helps the walled gardens by adding to their codebases.

I'm beginning to think like this as well. Target Microsoft Windows first (pre-Windows 8), then Linux, then Apple as an afterthought (since they're a tough crowd to please anyway).

Someone should port VirtualBox to NaCl. Then we could build web apps on top of little embedded virtual Linux machines using Qt or wxWidgets or something.

Things like secure boot make me worry about the future of Linux as well.


If I'm not mistaken, this only applies to pre-built computers that ship with Windows. We still have hardware manufacturers that only ship Ubuntu or the like (e.g. System76) and we still have DIY computer builds (which are very easy to do nowadays).

>What do others on HN think?

Every day I wake up this is staring me in the face. On the one hand I feel a sense of inevitability about the whole thing. On the other hand I really really want to stop it but I'm not sure how. I look up from what I'm doing, ponder it for some minutes, then go back to work.

And then I do it again a few hours later. It's driving me crazy.


Whatever one may think of him, it is hard not to see Richard Stallman as prophetic in this regard (see "The Right to Read", etc.). GNU and the General Public License were efforts to deal with the kinds of software "unfreedoms" that have been creeping in over the years (copyrights, patents, trademarks, walled gardens, controlled compilers, etc.).

Free software still has many warts and problems but if you are concerned with the many ways that software unfreedom is making inroads, you might consider trying to improve a free project as a way of fighting back.

As an added note: If you want to develop software for an ideological goal in addition to a practical one then you really need to be using the GPL license rather than anything like the BSD license.

Otherwise your code can simply be co-opted to build platforms which are very much closed (see OSX/BSD).

In fact it should really be argued that GPL isn't good enough and you should be using AGPL which also gives rights to users who use the program on a networked basis.

Is there really such a split between an ideological goal and a practical one? I understand the OP's concern as relating to the freedom of the software he writes. Where is the line between ideology and practicality here?

The OP's question by the way, reminds me of this discussion http://news.ycombinator.com/item?id=3802516 , in which some argue against the GPL as "unfair" and ruining their day because it prevents usage in walled gardens (http://news.ycombinator.com/item?id=3803492).

Would it be more pragmatic to keep the discussion on HN going instead of developing evasive concepts?

The GNU system was originally designed mostly from an ideological standpoint. Stallman and others wanted to use a UNIX system, but they wanted access to the source code.

A lot of GNU was replicating work that had been done by other Unixes but the goal was to have it available as GPL.

While obviously colored by memory, the standpoint was very pragmatic: http://oreilly.com/openbook/freedom/ch01.html

Next to that: I would only call it ideological if it was for some higher, intangible value in a future yet to discover. But I find the purpose of the GPL from inception until now only pragmatic.

I find that sandboxed environments have a kind of inherent quality of liberation to them - a solution may be far slower and clunkier than the hardware would ideally allow, but when seen another way, you're really just "going back in time" to a more resource-constrained computing environment.

I've been working in Haxe for several years now, and have been targeting the Flash platform the entire time, but without really trying, the escape hatch has been built for me through the NME framework - a very close, open-source implementation of the Flash APIs for numerous client platforms (including native code). As a result I feel somewhat distanced from worries about the platform; in fact, I benefit in several ways from having more platforms, because they each have strengths and weaknesses with respect to iteration times, debugging, etc. I couldn't have gotten exactly this outcome if I were working in C++, because I would have been starting from too low-level a basis. Although there's a big effort afoot to get native code compiling to sandboxed platforms, my perception is that it strongly favors the platform owners.

Ultimately I think we're actually gaining by engaging in a platform arms race. We're forcing ourselves to confront some old problems with our existing technology stack by saying "rewrite in JS." We end up with another black box in the layer of native code, but our hardware already is, in practical terms, a black box, and we at least have a good groundwork of open code in browsers and operating systems.

This issue is why I'm a believer in the web as a platform. It's not the right tool for every job (yet), and I still use a great many native applications myself. But the moral hazard is simply too high for us to be comfortable with vendor lock-in being the norm in computing.

For all their warts, Javascript and the web are the first truly universal computing platform, and their capabilities only continue to grow, slowly but surely. And without hating on the (highly profitable) walled gardens, the open jungle is where I prefer my energies to go.

Unless you run elinks or IE...

Say what you want about walled gardens, but their flowers are more beautiful and their grounds better maintained than their wild counterparts.

You might want to look into V8 as a way to unhook JavaScript from the browser while you are converting your apps. See: http://code.google.com/p/v8/ I've not investigated it myself, as I've been busy writing JavaScript apps for our projects' browser/web-app needs. :) Good luck!

For what it's worth, this is exactly why Mozilla is helping to build out the missing pieces of an app ecosystem for the web. We're trying to take the good bits from other platforms and make open web alternatives.


Correct me if I'm wrong, but I've heard you can't install apps you made on your own iPhone without a $99 a year developer account, even if you're not going to publish anything on the appstore. That seems unnecessary. Anyone know why that's the case?

That is correct, only people with developer accounts can install apps on an iPhone. I couldn't say why this is the case.

If all you needed to install custom apps on an iPhone was a toolchain, people would embed it in apps sold outside of the App Store. Next thing you know, there's a massive end run around the Apple revenue model.

Surely there's other ways to prevent that? Like making the fee one time if you don't plan to publish to the app store.

Ah well, I guess I'll have to fork over that $99 when I want to install my own apps.

Also, if I install an app with a developer account, but don't renew it the next year, will my app still be usable, and will I be able to update it?

That was my thought but I couldn't articulate it as well as you did.

Off-topic: By all means, charge the $99 for distributing on the App Store, but I am not really sure why Apple doesn't allow for Apple ID level verification for personal apps. Something similar to the mobile provision they use right now, e.g. having Xcode generate a binary that is associated with the same Apple ID as on the iPhone.

I don't think that would work because setting up a shared Apple ID using a throwaway webmail account would be trivial. Each one would be good for multiple devices, and running a build script to generate multiple binaries for multiple accounts wouldn't be all that hard.

They used to ask for a credit card, but I believe that's no longer required (if it were, you could just use a prepaid gift card).

I never did cross-platform desktop applications, but from reading Python in a Nutshell, I was under the impression that it was pretty stable and should have long-term support. Or maybe you are specifically talking about iPhones and Androids?

It seems Windows is moving in that direction as well.

I placed the wrong URL in the description and I did not want to edit it. Here's the correct url: http://16s.us/sha1_pass/js

I fail to see why you chose JavaScript, though. Surely a C-based approach would be the most platform-independent: it works out of the box on most machines and is much easier to port to Java/C#.

Also you need to spend some time without the tinfoil hat. App stores have resulted in a lot more independent developers because it provides them a cheap distribution channel. Likewise vendor required development tools have always been around. There really isn't anything to be afraid of.

Yes, I do C++. Perhaps you missed that bit. That's the native code I was referring to. I'm concerned that my native code will no longer be able to run in the future. I chose JavaScript because of the ECMA standard and because vendors seem to be allowing it to run in browsers unhindered for now. I'm hoping they'll leave it be, but I'm betting they won't. I continue to write native C++ code, it's just that I'm concerned that one day it won't be allowed to run.

Agree with the other comments. Choice of programming language is in some ways tangential to concerns over vendor lock in since other people can always come along with a different implementation of the language, unless of course programming languages end up being found to be patentable (http://tech.slashdot.org/story/12/04/13/1646215/oracle-and-g...).

"Edit: This demo only works with Chrome and FireFox."

What was it you were saying about walled gardens?

That's standard JavaScript (to me at least). I don't know why it would not work with IE. I'm a systems programmer by trade. Being newish to JavaScript, perhaps I made a mistake, so let me know any ideas you have to make it work with IE, or edit it yourself. It's GPL code.

Edit: Thanks to timb it now works with IE 9. Thanks!

In the HTML of http://16s.us/sha1_pass/js/ add <!doctype html> to the top. This will prevent "quirks mode" in IE, and String.prototype.trim will reappear.

If you want to support IE 8 and lower, you'll have to add a shim for String.prototype.trim (see http://kangax.github.com/es5-compat-table/ ).
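For reference, such a shim is the standard few-line pattern (this is the usual polyfill, not code taken from the demo):

```javascript
// Polyfill String.prototype.trim for IE 8 and earlier, where it is missing.
if (!String.prototype.trim) {
  String.prototype.trim = function () {
    // Strip leading and trailing whitespace with a regex.
    return this.replace(/^\s+|\s+$/g, '');
  };
}
```

In modern engines the `if` guard keeps the native implementation in place, so the shim is harmless to ship unconditionally.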

Thank you. Adding that line made it work in IE 9.

Walled gardens, which are just another annoyance in a long line of progressive manipulation of the computer user for profit, together with an increasing lack of responsiveness (despite ridiculously powerful hardware), drove me back to UNIX, assembly, FORTH and C. In some strange way I'm thankful for the annoyances of "modern computing" because I realised there is still so much more to learn about the "old stuff". I don't consider this move back to the old school to be "tinfoil hat", although it certainly can be used for that purpose. I just like being able to understand and control the machine, and having it be consistently responsive.

I can definitely sympathise with the OP. But I have little motivation to learn Javascript. It is a workaround to get a little control through the browser, but it is still browser-centric and has so little power relative to lower level languages. It is not a long-term solution to the problem the OP describes.

Have you tried MonoDevelop? Mono is a free, open-source implementation of C#, and it works very well on Windows, Mac, and Linux.

If the poster is paranoid about the future then Mono isn't a good choice. Microsoft could easily revoke their license agreements and sue everyone for patent infringement.

That could happen with any language, as Oracle's (mostly failed, I might add) suit against Google over Java proves. Even for languages that were completely and fully OSS-developed, there's a near certainty their implementations are violating some patent or other that someone could come out of the woodwork and start suing over.

The Mono situation is actually pretty ideal practically, because much of what would be patented is known and not submarine patents, and it would be far harder than you suggest for Microsoft to start suing over that stuff after issuing their legally binding patent promise.

The probability of it happening with C# is much, much higher.

It's a given that Objective-C is entirely within Apple's domain and C# is within Microsoft's just as Java is now Oracle's. These languages are not independent of the organizations involved.

Other, more standards-based languages, which have traditionally included things like C, C++, JavaScript, and now Ruby, are less likely to be disrupted because of shared ownership.

I'm worried about all the people chugging the C# Kool-Aid when Microsoft wasn't the least bit concerned about taking Silverlight out behind the barn...

I don't really think there's too much risk of Microsoft taking C#/CLI (particularly non-classic Asp* stuff) behind the barn and killing it. It's both a bad move considering how many of the successful cornerstones of their business run on it or use it extensively (SharePoint, Reporting Services, BizTalk, OWA, Powershell etc), and would be breaking from a tradition of supporting existing versions of everything for far longer than is useful.

Silverlight could only be killed like it was because it's on the client. Classic ASP, a 15 year old technology, still runs easily on IIS 8 / Windows 8. Yes, Microsoft could kill CLI on the desktop and stop distributing .net frameworks (or by removing the desktop mode from Windows 9 and removing .NET support from metro etc), but it's almost as safe a bet as there is that you'll be able to use it on the server for at least the next decade.
