1. XML databases will surpass relational databases in popularity by 2011.
We got NoSQL (esp. JSON based ones) instead of XML. Close enough though.
2. Someone will make a lot of money by hosting open-source web applications.
Github is cool but with all the cheapskate web devs (no offense, I am one of them) I don't think it is _that_ profitable.
3. Multi-threaded programming will fall out of favor by 2012.
Not really. We are already in the multicore age, and it will only get moreticore.
4. Java's "market share" on the JVM will drop below 50% by 2010.
Not in 2010, not even yet, but we are certainly on the road. No surprise considering how shit Java is.
5. Lisp will be in the top 10 most popular programming languages by 2010.
This failed hard.
6. A new internet community-hangout will appear. One that you and I will frequent.
This is such a vague prediction it's not even worth mentioning.
7. The mobile/wireless/handheld market is still at least 5 years out.
Happened sooner.
8. Someday I will voluntarily pay Google for one of their services.
You don't say? They are one of the biggest players out there, even if they are not primarily known for payware.
9. Apple's laptop sales will exceed those of HP/Compaq, IBM, Dell and Gateway combined by 2010.
No way, thank God.
10. In five years' time, most programmers will still be average.
This makes no sense, but anyway, we are getting better. Slowly, though: version control, more languages, and testing are infiltrating even some of the dumber dev companies.
--
I give this guy a mental thumbs up for points 1 and 4, and the cynicism in point 10.
> We got NoSQL (esp. JSON based ones) instead of XML. Close enough though.
JSON is basically XML with 50% less bullshit. It does less but it does it so much more elegantly than XML - and has the further benefit of a syntax similar to that of structured data in such languages as Javascript (duh), Python and Ruby - that it was a shoo-in to squeeze XML out of most non-enterprise uses, at least on the data transfer format side.
However, to this day most data still resides in SQL databases, which have the net advantages of speed, stability and reliability that come from long lead time. What has changed is that object-relational mapping has gotten vastly better than it was in 2004, so programmers can mostly enjoy the best of both worlds.
There's also the fact that SQL is an easy enough DSL to understand, read and write that it isn't a barrier to use even in edge cases where ORMs fall down.
It's true that JSON document databases very closely follow the model set out by earlier XML document databases, and have succeeded for the reason Yegge spelled out.
The big lack is the equivalent of XQuery and XPath, in my opinion. Javascript plays this role in some systems, but I don't think there is anything for Javascript matching the power and flexibility of XQuery/XPath for querying and transforming data.
I would be happy to learn about query languages for JSON document databases that prove me wrong.
JSON is traditional: dictionaries, arrays. OO languages have built-in syntax to deal with those, via [] and dot notation, and the array objects nowadays have ways to do map(), apply(), grep(), etc. on the collections.
The main thing missing is that XML has attributes, so you can query against those.
I won't say that the JS stuff is better, but it's one less thing to learn, and what you learn is directly applicable to writing apps in JS.
> object-relational mapping has gotten vastly better than it was in 2004
Maybe so, but it still seems like it's only good for prototyping. With each new version of ORM I get excited and try to implement it, but sooner or later it comes back to bite me in the ass, and long-term maintenance becomes more painful than not using ORM, what with all the low-level hacks that need to be added and the tricky management of the mapping config.
I think the key difference, at least among most developers I work with, is that ORM has evolved from a complete abstraction of all relational querying, into more of a productivity tool that can be useful in certain circumstances.
The differences between something like Hibernate and something like ActiveRecord is pretty vast, at least in terms of usage, and I think one of the key learnings we've made since 2004 is that it's not practical or feasible to treat ORM as a silver bullet.
Honestly, no one should be surprised that ORM isn't a silver bullet. NoSQL isn't either - there's plenty of caveats that should be considered before deciding on using NoSQL. I'd still argue that about 80% of database interactions can be solved by a decent ORM. Anything that lets me ignore 80% of my trivial problems and focus on the 20% that's actually challenging is a net win in my book.
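A minimal sketch of that 80/20 split, using SQLAlchemy purely as a stand-in ORM (the thread's examples are Hibernate, ActiveRecord and Django); the Order model and numbers are made up:

    # The trivial majority of queries go through the ORM; the awkward ones drop to SQL.
    from sqlalchemy import Column, Integer, String, create_engine, text
    from sqlalchemy.orm import Session, declarative_base

    Base = declarative_base()

    class Order(Base):
        __tablename__ = "orders"
        id = Column(Integer, primary_key=True)
        customer = Column(String)
        total = Column(Integer)

    engine = create_engine("sqlite://")   # throwaway in-memory database
    Base.metadata.create_all(engine)

    with Session(engine) as session:
        session.add_all([Order(customer="alice", total=40),
                         Order(customer="alice", total=70),
                         Order(customer="bob", total=15)])
        session.commit()

        # the trivial 80%: the ORM is perfectly pleasant for this
        big_orders = session.query(Order).filter(Order.total >= 50).all()

        # the awkward 20%: fall back to SQL instead of fighting the mapper
        per_customer = session.execute(
            text("SELECT customer, SUM(total) FROM orders GROUP BY customer")).all()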
I've been working with Django for 5 years, and I would say Django's ORM is pretty pedestrian, when not downright painful. It fails at simple things like allowing you to GROUP BY without falling back to SQL.
I don't get this. XML has the capacity of being as simple as you want it to be. You don't need schemas, namespaces, xpath, and so on, and can make your XML life every bit as easy as JSON. In many ways easier given that JSON doesn't even have a bloody date or time type.
But if you do want strong-type validation, schemas and namespaces are there for you. If you want an easy, standard way to query, XPath/XQuery are there for you. If you want to wholesale completely restructure entire documents/data blobs, XSLT is there for you.
I have never gotten the hate for XML. I suspect -- and this is not some sort of slur against those who don't like something I like, but rather is an observation about general tech trends -- that many programmers were simply too lazy to understand all of XML, so rather than work in an area where they felt ignorant (again, nothing stops you from making bespoke XML at your desire), JSON and things like it seem so appealing.
Of course, I don't know what you're talking about "JSON doesn't even have a bloody date or time type" when XML doesn't have anything but strings. In order to parse this, I'd have to have code which looks for <int> tags and runs my language's atoi() on the content.
Maybe there are libraries that do this automatically. If there are, the XML community does an excellent job of hiding them from developers; I've worked a fair bit with XML, and haven't found them.
2. Easy interface with popular languages. You can just copy-paste a JSON string into a Javascript / Python app. (Warning: If you do this programmatically, it's going to be a security vulnerability.) Whereas if you want to use XML, you have to jump through a lot of hoops. Especially with Java; AFAIK the Java standard lib only ships a specification for XML parsing. So while the org.w3c.xml namespace is there (or whatever it's called, it's been a while), you have to install some Apache lib called Xerces or Xalan or Xavier (or whatever it's called, it's been a while) in order to have XML support.
This is Java's fault, not XML's; but XML was a shiny cool thing at about the same time Java was, so a lot of peoples' main XML experience (or at least mine) is in Java.
3. Is a subset of many popular languages. A lot of people -- especially web developers -- only know one language. If you know JS, then you know JSON. But XML syntax and semantics is very different from most programming languages, and that's a barrier. (General XML is much different than HTML. In particular, you can always see what HTML "does" by loading it in the browser because everything has default styles; not so with a general XML format, unless you go through the pain of writing a complete style yourself.)
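Going back to the types point above, a small stdlib sketch of the difference being described (element and field names made up): json.loads hands back typed values, while ElementTree hands back text you convert yourself.

    import json
    import xml.etree.ElementTree as ET

    data = json.loads('{"id": 42, "price": 9.99, "active": true}')
    print(type(data["id"]))                      # <class 'int'>, no atoi() needed

    root = ET.fromstring("<item><id>42</id><price>9.99</price><active>true</active></item>")
    item_id = int(root.findtext("id"))           # every leaf is text until you convert it
    price = float(root.findtext("price"))
    active = root.findtext("active") == "true"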
Your ideas about Java and XML are very dated. JAXB makes it essentially transparent to convert an object to XML and back, and with JAX-RS I can give you a REST API that returns XML or JSON, your pick, for basically no effort on my part. These are part of the Java EE 6 web profile, so basically everybody gets them for free. Most of us don't "install libraries" anymore so much as add four lines to our Maven POMs.
Point #2 isn't very strong. Every language has good support for both technologies these days.
Point #3 is absurd. If you know JS and you know HTML, you're not far from the basics of XML. If you're contemplating JSON as a replacement, you're either assuming our poor web developer is really stupid or you're comparing apples to oranges.
In general, your argument here is that JSON is shorter and more popular. Sure. But unless your top concern is bandwidth (hey, maybe it is) there might exist other technical reasons to make the choice which are less childish.
1. What's a good tutorial to learn JAXB and JAX-RS?
2. Maven is a nightmare too. Every time I've used it (usually because I'm compiling someone else's code and they wrote the build in maven), it downloads hundreds of megabytes of who-knows-what from random Internet sites. I want to pin versions of dependencies (and dependencies-of-dependencies) so my builds are repeatable, and I want to be sure everyone who's forked my git repo can still build the code if the Maven URL's for the deps have gone out of business. I recall trying to read the Maven manual, but both the manual and the concepts were incomprehensible -- there's so much automagic, when things break, it's difficult to even come up with a hypothesis of what might be going wrong. (And anyone who has any experience with any build system knows that the build always breaks.)
I usually just use Ant. It's a bit of a pain shuttling jars around (symlinks help), but at least I can understand the build.
3. Part of the problem is that I don't want this alphabet soup of technologies. If I'm running in Python, and I'm developing a prototype or a one-off hack, I just write a file called whatever.py. (I can "import json" in whatever.py if I don't want to use hacks like eval()/repr() or copy-and-paste for parsing/serializing.)
I want to be able to build a quick one-file prototype to process my XML, without enabling the "Java EE 6 web profile" (whatever that is) and writing a Maven build file. I want using XML to be this simple:
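Something in the spirit of that one-file simplicity, sketched with nothing but the Python stdlib (the data below is invented for the example):

    import json
    import xml.etree.ElementTree as ET

    print(json.loads('{"name": "whatever", "id": 7}')["name"])     # the JSON experience

    root = ET.fromstring('<thing id="7"><name>whatever</name></thing>')
    print(root.get("id"), root.findtext("name"))                   # the closest stdlib XML equivalent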
1. The Java EE 6 Tutorial is a good place to start for these things.
2. You may not like Maven--I hate it too--but it's how things are done. In practice the only people complaining about installing libraries and managing jar dependencies are people using ant without ivy. 90% of folks just use Maven and get on with it.
3. You're turning your complaint from "Java's XML support is weak" to "Java's XML support isn't Pythonic like Python's." This is a weak argument. You're saying you want to live the Python dream in Javaland. Well, it isn't going to happen, no matter what the Play guys tell you. But if you have to use Java--and sometimes you do--adding four lines to your maven config--which you will have--so you can use EE 6 or JAXB is not a big deal, and neither is putting a few annotations on a class.
I don't love Java. In fact I dislike it. I would rather be using Haskell, Python, or Ruby. Or any number of other things. But you can't be mad at Java for being Java and not Python.
> I want to pin versions of dependencies (and dependencies-of-dependencies) so my builds are repeatable
That's by default in maven - you have to do something really stupid to get non-repeatable builds (usually involves specifying open version ranges or some weird plugin magic)
>everyone who's forked my git repo can still build the code if the Maven URL's for the deps have gone out of business
One of the main advantages of maven is just the opposite of that - you don't have to bundle your dependencies and get them to other people - they can just ask maven to fetch them. And maven is not going away.
> That's by default in maven - you have to do something really stupid to get non-repeatable builds
I think they changed it now but for years the default behaviour was to get the latest versions of every plugin, making builds non-repeatable by default.
The appeal of JSON isn't that it is shorter, but that it is easier. Easy to read, easy to type, easy to parse. XML may be easy "enough", but I'll choose JSON anytime I don't need the extra things that xml offers. The verbosity of XML may sometimes be necessary, but it just isn't very much fun to work with.
You are a bit outdated; the tooling is better nowadays.
Java integrates everything you need to parse and generate XML. Most APIs are XSD driven (either through annotations in the code, or with the mapping code generated from the XSD), so you get plenty of types like date, int, ... You also get streaming parsers (pull or push) and DOM.
Obviously, that is no longer plain XML, that is XML + XSD + Namespaces.
The J2SE (not JEE) also has built-in support to expose and consume web services, without requiring you to worry about WSDL and the rest.
At the end of the day, that does not mean that dealing with XML is a great experience. Most real life XML is awful.
And although the basic tooling is there, it is in the usual Java fashion, with factories of factories of creators of generators and very little "high level" utility.
It's an alright data / interchange format and especially suited to certain multi-vendor use cases, but the disadvantages cost more than the benefits compared to "simple" anonymous json.
> XML has the capacity of being as simple as you want it to be
Unfortunately in practice it never is. I've seen programmers do too many dumb things with data structures in XML, which then becomes "part of the spec" so it never goes away.
Take an xml api in a fortune 500 and run it through 10 developers over the course of 5 years, then do the same thing with a json api, I guarantee you the json one will be more standardized and easier to consume.
bank of america, t. rowe price, wall st on demand, ebay, aflac, adobe, ge, nielson, navy federal...do you want me to open my entire client archive? You should see what java devs try to shove into xml structures.
I don't understand the complaint that JSON doesn't support a date/time format.
It supports numbers and strings. You can use unix timestamps, you can use ISO 8601, you can use any format that makes sense for your application.
JSON doesn't support a Date/Time object because it's a data format. XML doesn't support a Date/Time object either.
Now if you want to complain that the JSON parsers don't automatically convert ISO 8601 dates into Date/Time objects, I understand.
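A stdlib sketch of doing that conversion yourself, assuming (purely for the example) that date fields follow a *_at naming convention:

    import json
    from datetime import datetime

    def parse_dates(obj):
        # object_hook runs on every decoded dict, so convert ISO 8601 fields here
        return {k: datetime.fromisoformat(v) if k.endswith("_at") else v
                for k, v in obj.items()}

    doc = '{"user": "alice", "created_at": "2013-02-10T14:30:00"}'
    record = json.loads(doc, object_hook=parse_dates)
    print(record["created_at"].year)    # a real datetime, not a string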
To me 50% less bullshit means 50% less typing and maybe getting something wrong. JSON is easy to read and write by hand, which happens a lot.
XPath/XQuery and XSLT are nightmares in my opinion. I'd much rather type `Foo.bar` or `_.each(Foo.items, function(item) {...})` and be able to easily debug when something goes wrong.
Right, but xml "objects" are annotated with some name, whereas json nodes aren't. This is a hassle, but it's also very convenient when you want to represent something that doesn't map 100% perfectly onto the underlying structure. So as you say, you _could_ represent a date in lots of different ways. In xml that's doable, and somewhat comprehensible. In JSON it's usually jibberish. And that's just dates - good luck with anything more complex.
I still think JSON is a more promising format in the long run. But it certainly isn't great - it's just bad in a different way, and somewhat more minimalist (which is good).
Also, I can't for the life of me understand why someone would dislike xpath. It's like the one best thing about xml - it's simple, concise, and easy to read. I'd absolutely love an xpath equivalent for json, which just doesn't exist.
Just expressing obvious things like "person[name='bla']/nickname" is so much more wordy in an explicit function style. And notice how the rest of the world - including most major js(on) libs have long gravitated towards that (monadic) style of querying - it's not just me, almost everyone prefers it. CSS; SQL; jquery selectors; even a filesystem glob - every complex datastore gravitates to some variant of the same pattern, and it's something json consuming libs do not yet do very well. I'm sure they will someday - but it's a real lack right now.
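To make the contrast concrete, here is the same lookup both ways in Python, using ElementTree's limited XPath on one side and explicit function style over JSON-shaped data on the other (the sample data is invented):

    import xml.etree.ElementTree as ET

    xml_doc = ET.fromstring(
        "<people>"
        "<person><name>bla</name><nickname>blameister</nickname></person>"
        "<person><name>foo</name><nickname>foozle</nickname></person>"
        "</people>")
    # declarative, XPath-style
    print([n.text for n in xml_doc.findall("person[name='bla']/nickname")])

    people = [{"name": "bla", "nickname": "blameister"},
              {"name": "foo", "nickname": "foozle"}]
    # explicit function style over plain dicts and lists
    print([p["nickname"] for p in people if p["name"] == "bla"])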
> JSON doesn't support a Date/Time object because it's a data format. XML doesn't support a Date/Time object either.
XML doesn't support objects, but there was enough thought put into the implementation that it has an easy way to document and define the types, with validated ways of representation. ISO 8601 for the xsd:dateTime, for instance. There is no ambiguity or uncertainty, no hand-rolled varieties of crazy, quite unlike JSON world where there are a hundred ways to represent a date or date/time, all of them broken in some way or other.
I think the 50% less bullshit figure is pretty generous. Let's not forget that, unless you opt out of it, most XML parsers, when given user-generated input, will do lots of crazy stuff. In the best case scenario, the machine that parsed the input will fall victim to DoS. In the worst, we have good old remote code execution. Typically, an attacker will be able to read lots of files from the servers and make arbitrary network connections, many times from the viewpoint of a machine inside a corporate firewall. As a spare-time security researcher, let's say I absolutely love XML. As a developer, I despise it.
corresation: I can't answer your post directly, so I'll answer my own. It's absolutely not vague handwaving. HTML or JSON parsers can't make arbitrary network connections. XML parsers often can.
> XML remains the lingua franca of most enterprise systems and interchanges (meaning the ones that people are most interested in trying to compromise)
That is one of the reasons attackers are so happy. They are not trying to compromise. They are succeeding. Show me a SOAP/XMLRPC web service and I will show you a compromised machine, with very high probability.
See, for instance, last year's BlackHat presentation about SAP. Root with 1 request. Granted, there was an overflow involved, but the entry point was XML. Many more instances of this are available (I've compromised tens of systems in the last six months through the "magic" of XXEs, but unfortunately can't talk about them).
I'm sorry, but the vague handwaving about security issues with XML is utter nonsense. There have been issues, just as there have been issues with JSON, HTML, or any other parsed or shared content. Alluding to some vast chasm of danger is absurd, made especially obvious by the fact that XML remains the lingua franca of most enterprise systems and interchanges (meaning the ones that people are most interested in trying to compromise).
XML is good when the structure of the data returned is not self-evident to the data requester. In this case XML's verbosity assists in hand walking the tree to grok what the server is returning.
XSLT is a friggin nightmare to code and I can't think of any reason why one would want to use it. It's much better to first parse the XML and then reshape it to your needs rather than trying to do both at the same time, which is what XSLT does. In short, XSLT violates separation of concerns. Furthermore, if you think XML is hard to read, just try reading someone else's XSLT.
> I don't get this. XML has the capacity of being as simple as you want it to be. You don't need schemas, namespaces, xpath
XML data model (trees with attributed nodes and special kinds of leaves) is not native to any language: XPath/XQuery and XSLT exist to bridge the alien world of XML to regular programming languages. Taking them out doesn't make XML simpler.
JSON doesn't have them because it can be represented with maps and lists, object types fundamental to many programming languages (it practically maps almost exactly to every scripting language created in the last 20 years). That's why it doesn't have, and doesn't need, XPath or XSLT equivalents. And that's also why it feels more natural to most programmers (ie. 50% less bullshit).
You could build a stronger case for the need of schemas and namespaces but IMHO that's actually bullshit, even in XML. But you can ignore them in XML too, as you noted, so it doesn't really matter.
> We got NoSQL (esp. JSON based ones) instead of XML.
Thank god we dodged that bullet. I'm just glad people see XML for what it really is (useful, but not that useful).
> Not really. We are already in the multicore age, and it will only get moreticore.
The problem is multi-threaded programming has never really taken off, even with all the cores we have today. There is still no silver bullet for harnessing parallelism.
> Apple's laptop sales will exceed those of HP/Compaq, IBM, Dell and Gateway combined by 2010. No way, thank God.
Apple's profits on laptop sales (not other iOS devices) exceed those of HP, Lenovo, and Dell (everyone else being dead). Apple has been in no hurry to capture the unit shipped crown (and if we count iPads, they are already there).
> In five years' time, most programmers will still be average.
> The problem is multi-threaded programming has never really taken off, even with all the cores we have today. There is still no silver bullet for harnessing parallelism.
MMmmm..... yeah it has. I'll agree on the silver bullet, but I don't think they need one.
Threads are all over the place, and easier than ever to use. They do require intelligence to use well, and people to think about things like locking strategies, but they really are not all that hard. It amazes me that people seem to be so scared of them for some reason.
Perhaps it's the problem domain I work in (back-end server systems) but I come across them all the time. When dealing with lots of distinct tasks for lots of distinct clients you can make good use of threads without needing to do the kind of hard-core mathematical analysis that you would apply to splitting single tasks across parallel systems.
(And no, I'm not suggesting that a one-thread-per-client model is a silver bullet, there are more efficient async ways of doing things)
I really don't see threads being used explicitly more today than they were 10 years ago. The number of programmers who are proficient in dealing with locks, semaphores, etc. doesn't seem to have grown much (perhaps not outpacing the overall rate). If anything, we have a lot more code these days that is not explicitly multi-threaded but is implicitly concurrent in some other way that may or may not use threads under the covers (async, job managers).
Exactly, in fact I'd say there's a lot more multithreading going on today, it's just that it's hidden behind whatever JEE framework people are using. This isn't a bad thing, enterprise developers (i.e. 90% of us) shouldn't have to worry about mutexes any more than they have to worry about interrupts or sorting algorithms.
Out of interest, how does an enterprise developer (whatever that means) ensure consistency of data and good performance, the two competing aims of a decent mutex/lock strategy? Is it taken on trust that someone else handles this, and therefore hidden in 'safe' data structures or... ?
One place you see a lot of good "under the covers" stuff going on is with .Net 4.5. The task parallel stuff they have is really decent. You still have to think a little bit about concurrency issues, but if you are careful to use their thread safe structures, they have taken care of a lot of the harder parts for you. It is a neat model that just plays out pretty nicely.
Another cool product is Disruptor - they have figured out a good, fast way to do message passing that minimizes the need for concurrent data structures between worker threads. (Provided of course you can model your app in that way). Under the hood, they have some crazy deep memory-barrier-based stuff, but for most of the programmers on a team, thinking about that stuff is just not needed. (As an aside, if Beautiful Code 2 were being written, I think Disruptor should be included; that source code is downright inspiring.)
The point being, once you have a good, trusted and vetted set of primitives, and are willing to put a little effort into working in a paradigm someone else sets, you get a lot of benefit. This is no different than using RoR or Django instead of rolling your own web framework.
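A toy sketch of that shape with Python stdlib primitives - not Disruptor, just the same message-passing-over-a-queue idea, so worker threads never share mutable structures directly:

    import queue
    import threading

    tasks = queue.Queue()

    def worker():
        while True:
            msg = tasks.get()
            if msg is None:          # sentinel: shut down
                break
            print("processed", msg)  # stand-in for real work

    threads = [threading.Thread(target=worker) for _ in range(2)]
    for t in threads:
        t.start()
    for i in range(5):
        tasks.put(i)                 # producers only ever touch the queue
    for _ in threads:
        tasks.put(None)
    for t in threads:
        t.join()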
If threads or processes or distributed computing is being used under the covers, does it still count as multi-threaded programming? I could go out and design a machine/OS that didn't use what we would think of as threads (think Atari's Transputer [1]), and all these implicit threaded schemes would still work with my new construct. Now, we might not have transputers, but some of us get to use GPUs and such (still niche, but growing), where the unit of parallelism no longer resembles a thread very much.
My point was that multi-threaded programming hasn't taken off because it hasn't needed to. We just get some really smart experienced guys to write our infrastructure that encapsulates thread use and all of us get to reuse that.
Somebody still needs to be the smart experienced guy. Somebody still needs to write the language VMs, low level libraries and interfaces, device support etc etc.
I really don't get on with this mindset that someone else can fix it.
I'm talking about higher level abstractions than concurrency frameworks integrated in Java or .Net . In an enterprise world, if you need to parallelise tasks between multiple consumers, you just add them to a JMS queue/topic and tweak some obscure xml to say who consumes it and how.
In fact, using any thread or concurrency mechanism in JEE code is considered a code smell; you should be handling it with the Spring/JMS/JBoss setup.
Never done any JEE, or 'Enterprisey' coding in my life, so I wouldn't recognise code smell if it was waved under my nose :)
I've had the sort of career that involved highly-optimised in-memory databases, high-volume, low-latency payment processing and stuff like that. So I'm used to dealing with this stuff direct.
Can we, as a group, just agree that we're all sufficiently aware of the fact that yes, average (by which we all mean "mean", don't be pedantic) does not require that 50% of the referent group is above and below that number for all sets of data, but, since we're almost always talking about a more-or-less normally distributed data set, it's close enough that the deviation isn't really worth talking about?
Because I'm somewhat tired of seeing this exact comment every time the word "average" is mentioned.
If we take your very narrow definition, the chances of any programmer to be (exactly) average are zero, if we relax a little bit we're back at tautology.
No, it's not a tautology at all. For instance, one sensible interpretation of the prediction would be that > 50% of all developers are within a range of ±10% of the average performance score. This may or may not be true.
> Of course most programmers will still be average, by definition. The problem is that the average level of quality and productivity probably won't have changed all that much.
It would appear not.
It's always funny how people take a serious person who has otherwise written a very insightful article, take a sentence out of context, and proclaim it silly.
>> We got NoSQL (esp. JSON based ones) instead of XML.
>Thank god we dodged that bullet. I'm just glad people see XML for what it really is (useful, but not that useful).
XML, JSON, MUMPS-storage are semantically equivalent.
> A tautology if I ever heard one.
In fairness, that statement was clarified: "In five years' time, most [future] programmers will still be average [by today's standards]."
With all respect, I have a hard time seeing Apple laptops outside the US.
Indeed. And where I am (also outside the US) I have a hard time seeing non-Apple laptops. Anecdotal evidence isn't really that useful when we're talking about global trends.
Not counting some fanboys who acquired the taste through some snob blogs.
Making statements like this causes you to come across as overly emotionally invested in your platform, and makes others less likely to take your arguments seriously.
No. I happen to use anything. I am not against Apple, I am against the Apple fanboyism. And any type of fanboyism.
'Fanboy' is a pejorative term used on the internet to try and imply irrationality in others. It's like hipster. It doesn't make you magically qualified to judge others' opinions or motivations just by using it.
"For example, most desktop and laptop computers use Microsoft Windows"
That is not anecdotal, though it's also not the fact being discussed. Note that I'm not trying to argue that Apple sells more laptops combined than the other main manufacturers, I don't believe they do.
Apples and oranges (no pun intended). We were talking about unit sales, not market share.
Turns out that Apple's laptop sales do not exceed those of HP/Compaq, IBM, Dell and Gateway combined, but Apple's iPad sales were already doing it one year ago: http://www.computerworld.com/s/article/9223707/iPad_sales_be... and from a profit POV I bet it is the same.
> With all respect, I have a hard time seeing Apple laptops outside the US.
Maybe Africa? I live in China and I see a lot of them here; China is Apple's second largest market now.
The fanboy argument is funny to no end. I mean, I still can't find a freaking non-Apple laptop that isn't a huge piece of crap. Maybe the X1 will change my mind, but they are on hold right now due to faulty touch screens.
I'm scared to death that only Apple is willing to produce non-crappy laptops these days, that the other PC vendors have given up and are content putting out cheap crap (and expensive crap for the enterprise). Depressing.
Half the programmers and web developers/designers I know here in Sweden have Apple laptops (including several Windows devs). Peeking into random coffee shops and walking past university campuses also seem to indicate that they're quite popular overall. Of course all those people could simply be fanboys who acquired the taste through some snob blogs, but I'm not sure how to tell.
That is an unusual extrapolation. So people outside the US buy Apple laptops because they are fanboys? I understand you have complaints/grievances against Apple but this is not r/technology.
>With all respect, I have a hard time seeing Apple laptops outside the US. Not counting some fanboys who acquired the taste through some snob blogs.
Well, I see tons of Apple laptops in places ranging from France, to poverty-stricken Greece, to SE Asia.
If you live in some Eastern European country, Latin America or Africa you might see way less. Macs are for the richer part of the computing buying public.
Worldwide Apple commands around 4-5% of laptop shipments, but in places like US and Japan that can get to 15-20%, and for specific categories like ultraportables it can get to 60%.
Recent trips (last 6-8 months) - Sydney, London, Germany/Continental Europe - I saw Macs all over. Were they the most numerous? Not really, but they were quite visible.
3: He explicitly mentions message passing and multi-process as opposed to multi-threaded, and that's dead on right. F#, Clojure, Scala, Erlang and Node are all approaches to concurrency that don't involve the developer dealing with threads.
2. Someone will make a lot of money by hosting open-source web applications. Github is cool but with all the cheapskate web devs (no offense, I am one of them) I don't think it is _that_ profitable.
I think he means more like wordpress.com, WPEngine and their ilk. Those are currently the most lucrative, but there are other examples of businesses monetising open source by hosting it.
I'm not sure if that's what he means. The discussion of the point specifically mentions applications like bug trackers and wikis - not software that tends to be developed in house. Heroku is generally used for hosting applications that you wrote, rather than for hosting open source stacks developed by others (e.g. phpBB or MediaWiki).
Saying that, it all falls under the general trend of providing system administration as a service. Given how much ongoing effort is required to maintain secure, up to date and efficient servers, this is probably a good thing.
I agree. And I think more and more open source web applications are using SaaS "supported" versions in addition to their open source/privately hosted options. I think this prediction is pretty spot on as well.
Yeah, it may seem obvious to us now that there is one major social network where people use their real names, with a total user base approaching 1 billion, but that was not at all obvious in 2004.
Most people have never heard of Reddit. Facebook is a global community that everyone has heard of. And, trust me, while people like you and I might hang out on HN/Reddit - most of the rest of the planet spends so much time on Facebook that they don't even use email any more. Conferencing, chatting, socializing is all done on FB.
Yeah, if you're living a decade ago. By 2005, MySpace was such a big deal that if you were a student without one, you'd feel like you were missing out on something important socially, and by 2008, Facebook had pretty much everybody, including grandparents, as a user. Nowadays the center is shifting again, and most of the people I know have started using Tumblr as their social locus, but it probably won't overtake FB. I'd predict something new coming up in the next year or two that ends up overtaking Facebook in market share, but its approach will have to be highly insightful and unusual, nothing like what I've seen yet online. (Path comes the closest, but Path's reliance on smartphones kills their audience; I still have friends without smartphones and I'd never use a platform that shuts out friends for their consumer decisions.)
> by 2008, Facebook had pretty much everybody, including grandparents, as a user.
Not in my circles... Facebook didn't even begin to pop up on my radar until 2009. Social media adoption is extremely dependent on where you live and what type of people you know. Generalizations based on personal observation are invariably going to be wrong.
To add some actual data to this: in February 2009, Facebook had recently surpassed 175 million users[1]. It's now got a billion. So 2008 is definitely too early to place your "pretty much everybody" statement.
Read the details of the prediction. It sounds to me like he's describing a post-apps API Facebook. Not in so many words; I doubt he was thinking along those exact lines. But his prediction doesn't cover MySpace or Twitter or other social sites quite as well, while he describes Facebook to a T.
The company I work for pays $250 a year for Github Enterprise per developer for about 400 developers... That is just one company. 100k/year! They are thinking about extending it to all 2,000 developers.
Announcing the investment on the GitHub blog, CEO Tom Preston-Werner admitted it wasn’t that they necessarily needed it: “Our company has been profitable for years, is growing fast, and doesn’t need money. So why bother? Because we want to be better. We want to build the best products.”
So, you were assuming that they weren't making a profit based on some sort of gut feeling that they were "still in start-up mode", but you used quite absolute language suggesting personal knowledge of the state of their books.
Why the heck would GitHub take $100 million in investment and still push for earning a profit every quarter? Does it make any sense to you? Now you can see why I'm confused.
The idea that taking investment automatically means you would actively forgo earning a profit is kinda odd. You take investment to grow, not because you have an allergy to profits.
Not really. Say you take $100 million in finance, so you are now $100 million in debt and you are probably not going to put the money in the bank, but spend it. When you spend (invest) the money, that is taken away from earnings and you can spend more than you earn thanks to the financing. Even if the company's valuation goes up because of these investments, they are still taking a loss now. You have to take a hit now to grow and earn more later.
I'm not an accountant, and this is just my uninformed understanding.
> Github is cool but with all the cheapskate web devs (no offense, I am one of them) I don't think it is _that_ profitable.
It must be. Github raised 100m from investors led by Andreessen Horowitz in 2012. You don't get that kind of funding from that level of quality investors without a seriously good financial trajectory.
> 5. Lisp will be in the top 10 most popular programming languages by 2010. This failed hard.
I don't know about hard. I can now imagine a day where Clojure is the predominant JVM language--whereas if you told me five years ago a Lisp would be even as popular as Clojure is now, I would have scoffed.
Now, the by 2010 part is wrong, indeed. But I see these predictions as being more about acceleration than current velocity.
I think you're overestimating the popularity of Clojure.
While it's not as totally obscure as many other Lisp, Scheme or Lisp-like language implementations, it's not very widely used at all. Its penetration is minimal, even compared to Scala (which has seen much more significant usage, but still pales in comparison to Java).
While Clojure is a step in a better direction, I don't think it's gaining traction all that quickly. There's a whole lot of existing Java and Scala code that would need to be completely discarded or ignored in order for Clojure's usage to be considered as having any impact.
>> 6. Oh come on, facebook, youporn, whatever. As I said, too vague.
Yeah, so he didn't call out a specific "type" of socializing, or draw up the wireframe for facebook here. But he just recognized there is a huge trend of people wanting to share their lives and communicate in near real time. I think he nailed it. Facebook and twitter are just that. Even look at his movie example. These days, you watch your movie or TV show, then laugh and comment on it with your friends not in the theater, but on facebook or twitter. I'd say he NAILED that prediction.
Perhaps, but they were all in college, it was a very different FB. This article was written my sophomore year of college, and I hadn't even had a FB account yet.
I think for #1, there are a number of hosting providers that match this description, such as Dreamhost, that offer a number of open source web apps that you can install with a single click. They manage them, upgrade them, back them up ... all you gotta do is pay them.
That being said, GitHub does indeed match the description, though I think people are thinking about the code they are hosting for other people ... but I think where they fit the description is with the fact that they are hosting Git for you (and making some money through enterprise services).
3. Multi-threaded programming will fall out of favor by 2012. Not really. We are already in the multicore age, and it will only get moreticore.
A lot of things server-side are moving back towards multiple processes rather than threading, driven by "cloud" based scaling: if your code is designed to handle being spread over many nodes then why complicate things by having two concurrency models on the go at once (threads at host level, multiple processes spread over the hosts)?
8. Someday I will voluntarily pay Google for one of their services. You don't say? They are one of the biggest players out there, even if they are not primarily known for payware but still.
While most individuals just use the free service level, you can pay for the use and support of Google Docs, and some groups do. Same with Google Drive: while many won't need more than the free 5Gb, it costs a monthly fee for more than that (and if you are using it to store music and video, 5Gb isn't hard to fill).
10. In five years' time, most programmers will still be average. This makes no sense, but anyway, we are getting better. Slowly though, version control, more languages, testing is infiltrating even some dumber dev companies too.
It makes perfect sense in a pointless way: most of everyone is average at everything. As people get better/worse the average shuffles around too. If he is talking about following best (or at least good) practices, I can tell you there are a lot of average people out there despite the tools available to make it easier to do things right.
> (threads at host level, multiple processes spread over the hosts)
Concurrent programming isn't all "jump off a cliff into the atomic<T> and mutex lock sea and worry about deadlock or run everything in serial".
There is a middle ground where you take apart the easily discretized parts of your task at hand, that have little or no communication between them, and thread those. For example, if you have a system to scrape a user profile, processing the personal information, hobbies, and pictures in separate threads only requires sending work to threads in a thread pool saying scrape X. That X gets scraped, database calls are asynchronous and work up to the thousands of workers, and you don't even need to wait - pass off the profile, let the threads run and wait for a new connection.
Threaded parallelism only becomes dangerous when you start using shared data. That is where you need to ask if it's worth the cost. You shouldn't be afraid of threading entirely just because you can back yourself into deadlock hell if you don't plan ahead.
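A sketch of that pattern in Python, with the scraping and fetching as made-up stand-ins for the real work:

    from concurrent.futures import ThreadPoolExecutor

    def fetch(section, user):
        # placeholder for independent network I/O (personal info, hobbies, pictures, ...)
        return f"{user}:{section}"

    def scrape_profile(user):
        # independent sub-tasks, no shared mutable state between them
        with ThreadPoolExecutor(max_workers=3) as pool:
            futures = [pool.submit(fetch, s, user)
                       for s in ("personal", "hobbies", "pictures")]
            return [f.result() for f in futures]

    print(scrape_profile("alice"))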
I wasn't meaning to suggest it was (or should be) considered dangerous: just that supporting two concurrency models at the same time is a complication many would prefer not to bother with unless their application really needs the efficiency boost threading may offer within a single process on a single node. Unless you can justify working with both due to some measurable and significant efficiency gain, or you don't need multi-host scaling, you go with the option you have to have (multiple processes).
Supporting both (multiple processes for multi-host scalability, threads for speeding up CPU intensive operations and such) might not be much more hassle, but it is at least some more hassle.
> Lisp will be in the top 10 most popular programming languages by 2010. This failed hard.
On the contrary, for me this one was one of the strongest predictions. In my opinion, predictions should be taken in spirit and not by letter. Looking at the rise of Clojure, I dare say that was great foresight.
Edit: http://www.tiobe.com/index.php/content/paperinfo/tpci/index.... appears less focused on web development and startups (it puts VB ahead of Ruby and leaves Clojure and Scala out completely) and even this ranking puts Lisp (presumably CL) at #13 overall.
> 3. Multi-threaded programming will fall out of favor by 2012. Not really. We are already in the multicore age, and it will only get moreticore.
What that prediction missed was that Moore's Law finally broke down for single-threaded performance. Single cores got only about 3x faster from 2004 to today, rather than continuing the historical trend with four more doublings. Moore's Law is alive and well but now requires parallel architecture to exploit. But you can't blame an armchair pundit for missing this one. The 3 GHz clock speed barrier was hardly something a layman could have predicted; it's an artifact of extremely sophisticated manufacturing processes and infinitesimally detailed subatomic physics. A layman couldn't have said whether the physical limits of clock speed would be reached at 1 GHz or 100 GHz and in 2005 or 2025.
> 6. A new internet community-hangout will appear. One that you and I will frequent.
This succeeded way more than anyone thought. More than "you and I", our moms and grandparents even hang out there. It is Facebook.
We're not that far from the speed of light barrier. There's a fixed amount of time it takes for any information to get from one end of a chip to the other and back. Only shrinking the die will decrease that.
1. XML databases will surpass relational databases in popularity by 2011. Yup, close enough insofar as the point was made but some of the names were changed and improvements applied.
2. Someone will make a lot of money by hosting open-source web applications. Ya know, he might be onto something there. In retrospect, it relies on fulfillment of #7.
3. Multi-threaded programming will fall out of favor by 2012. It's just too obvious a technique to not use. Alternatives may be cleaner, but not as clear. "Get it done" trumps "make all the pieces play in perfect harmony".
4. Java's "market share" on the JVM will drop below 50% by 2010. That assumed robust use of JVM. iOS & Android exploded onto the market, changing the landscape he was addressing.
5. Lisp will be in the top 10 most popular programming languages by 2010. Nope. But I assume he never expected Objective-C would be.
6. A new internet community-hangout will appear. One that you and I will frequent. Facebook. EVERYBODY is on Facebook. Wasn't so important that "you and I" use it, but that our mothers would.
7. The mobile/wireless/handheld market is still at least 5 years out. Odd that he was more predicting something not happening; we'll say he was right insofar as what constitutes "happened" is squishy and took a few years to get going, but wrong insofar as when it did happen it did so to a far greater degree than he expected. Half of the then-absurd snide "promises" he listed as others making have pretty much come true: HDTV pixel counts on a pocket device, LTE data rate 10x faster than T1, etc.
8. Someday I will voluntarily pay Google for one of their services. He couldn't grok what Google would be able to offer for free.
9. Apple's laptop sales will exceed those of HP/Compaq, IBM, Dell and Gateway combined by 2010. Had to invent the smartphone and tablet markets to do so, but insofar as an iPhone 5 is on par with notebooks back then, we'll count this as a big win.
10. In five years' time, most programmers will still be average. More a tautology than anything. What's "average" is pretty friggin' capable given a cheap modern toolset. For $99 and a Mac darned near any competent idiot can whip up a practical app and have it available to the world in a few days, no need for mad skillz.
The thing about predictions is the difficulty of articulating how discoveries along the way will affect manifestation of the prediction. One may be right in the spirit of the prediction, but not the letter thereof.
> Half of the then-absurd snide "promises" he listed as others making have pretty much come true: HDTV pixel counts on a pocket device, LTE data rate 10x faster than T1, etc.
Shame we still don't have microsecond latencies, though.
"5. Lisp will be in the top 10 most popular programming languages by 2010. This failed hard."
Tiobe has "Lisp" ranked 13, not clear what they included as "Lisp". (Specific Lisps like Scheme, Common Lisp, and Clojure rank much lower, so must be some kind of aggregate.)
Interestingly, the "Very Long Term History" table has Lisp going from rank 20 in 2008 to 13 in 2012. So Yegge clearly missed his top 10 prediction, but may have been on to something in regard to Lisp increasing in popularity (depending on how much credence you give Tiobe rankings, of course).
You are perhaps aware that the Death Of Lisp happened in between the first date and the current date? The Golden Age Of Lisp was during the 1970s and 1980s when the USA Defense Department put substantial funds into AI research, and the favored language for AI research was Lisp. And, to a degree, the Lisp community became dependent on that money. If you look up some of the famous companies that put together Lisp-oriented machines circa 1980, there was the clear assumption that money from the Defense Department would continue forever. However, Communism collapsed in 1989 and from the Reagan build-up of the 1980s to the demobilization under Clinton in the 1990s, the USA spending on the military shifted from almost 9% of GDP to roughly 4% of GDP. There were cutbacks in research spending. And this led to the Great Death Of Lisp.
However, good ideas are resilient even in the face of death. People such as Paul Graham kept the flame alive. People like Rich Hickey brought it into the modern age. There is no question that Lisp is on the comeback trail.
There is also, of course, the broader victory of Lisp: the impact it has had on all other languages. Paul Graham wrote about that here:
>2. Someone will make a lot of money by hosting open-source web applications. Github is cool but with all the cheapskate web devs (no offense, I am one of them) I don't think it is _that_ profitable.
Err, that prediction talks about something like Amazon AWS and Heroku, which very much happened. Not about hosting open source CODE.
>3. Multi-threaded programming will fall out of favor by 2012. Not really. We are already in the multicore age, and it will only get moreticore.
Nope. The multi-threaded programming he talks about has long fallen out of favor. Multicore programming in 2013 is all about message passing, STM et al --where threads are hidden from the programmer.
>5. Lisp will be in the top 10 most popular programming languages by 2010. This failed hard.
If you count Clojure, it didn't fail that badly. But the general trend towards functional programming and its concepts means that the prediction has sort of been proven true, despite Lisp not being the language that brought it.
>6. A new internet community-hangout will appear. One that you and I will frequent. This is such a vague prediction does not even worth mentioning.
Are you kidding me? This is all about Facebook, before there was facebook.
>7. The mobile/wireless/handheld market is still at least 5 years out. Happened sooner.
Well, the iPhone appeared on 2007, so not that much sooner. And he is correct that the whole market exploded a little afterwards (the first iPhone wasn't even 3G, remember?).
>9. Apple's laptop sales will exceed those of HP/Compaq, IBM, Dell and Gateway combined by 2010. No way, thank God.
> 5. Lisp will be in the top 10 most popular programming languages by 2010. This failed hard.
It peaked[1] in 2010 and has now settled in 13th place, so I got to give him some credit for this prediction. If you include other dialects, including newcomers such as Clojure, we might be talking at least close to the 10th most popular.
Has it failed? Or have people just stopped trying to shoehorn it everywhere?
There are cases where XML is a great format. For example in the past I wrote a system for interfacing with a government department.
The thing was an epic of about 30 different schemas; however, being able to actually look at a concrete schema as the specification and actually be able to test against it was invaluable.
Trying to build that thing with JSON would have driven me to tears.
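For illustration, the validate-against-the-schema workflow looks roughly like this with lxml (a third-party Python library); the schema and message below are toy stand-ins:

    from lxml import etree

    schema = etree.XMLSchema(etree.fromstring("""
    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
      <xs:element name="claim">
        <xs:complexType>
          <xs:sequence>
            <xs:element name="reference" type="xs:string"/>
            <xs:element name="amount" type="xs:decimal"/>
          </xs:sequence>
        </xs:complexType>
      </xs:element>
    </xs:schema>"""))

    doc = etree.fromstring("<claim><reference>A-123</reference><amount>oops</amount></claim>")
    if not schema.validate(doc):        # "oops" is not a decimal, so this fails
        for error in schema.error_log:
            print(error.line, error.message)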
Well, perhaps it's not fair to say XML itself has failed, but the idea of XML as the data format to end all data formats seems to have fallen flat on its arse.
I too like the ability to validate XML against a schema and found it invaluable when dealing with a set of (needlessly) complex ant-plugins a couple of years back.
That brings up an interesting question of where the line of what is a database gets drawn. Does a JSON interface to a CRUD application that uses a relational database behind the scenes count as a JSON database? A part of me thinks it should. CouchDB positioned itself with a JSON API that allowed you to skip the CRUD middleware, so from the API consuming point of view there is essentially no difference.
>> 2. Someone will make a lot of money by hosting open-source web applications. Github is cool but with all the cheapskate web devs (no offense, I am one of them) I don't think it is _that_ profitable.
My initial thoughts on #2 were AWS and the PAAS companies (Heroku, EngineYard, etc.) but maybe they're too broad to fit. The people who mention WordPress have a good point.
Actually 4 is almost entirely backwards. He's saying that Java the language will be less popular than other JVM languages. That's not true at all. Obviously what actually happened is that the Sun/Oracle JVM dropped below 50% market share of "VMs that run Java", losing badly to Google's Dalvik. Java-the-language remains dominant on both, however.
I think on point 2, he meant a hoster like heroku or AWS? On point 6, its reddit (BBS for this gen), point 8, google drive, android apps, play store etc, point 9, I think the iPad counts :)
I'm sorry, he got (1.) wrong. Let's be really generous and say that he means databases that aren't relational (which is totally different from an XML database); even by that definition I'd argue that there aren't more people (developers) using non-relational over relational. How about end users? Yes, probably more users, but even in 2004 more people were using non-relational databases than relational (file systems, LDAP, Lotus Domino, VSAM, IMS, AD). So perhaps you could interpret surpass to mean 'better than', well that's just vague.
* both Dell and HP have been in the doldrums in terms of PC/laptop sales and they both desperately have tried to milk the enterprise software and services business
Apple's total revenues come to about 69% of the combined revenue of HP, Dell, Acer and Lenovo ... so a miss since the iPhone/iPad have been what boosted Apple rather "laptops" per se (and certainly not to complete equality in revenues by 2010), but it's an interesting miss.
>5. Lisp will be in the top 10 most popular programming languages by 2010. This failed hard.
I don't think it did. Lisp itself didn't become popular, but a lot of functional ideas are being implemented in more mainstream programming languages, and some functional languages are also becoming popular.
I don't think Lisp is a functional language like Haskell is. It's more of a language with no paradigm. You can do objects, you can do pure functions, you can write your own language easily. And the mainstream languages, like Java, C#, Ruby, Python have also been mixing up different paradigms. You have some kind of lambdas, maybe even closures. They rely heavily on objects and having mutability everywhere is still too common.
> We got NoSQL (esp. JSON based ones) instead of XML. Close enough though.
NoSQL still stores tables, not trees (or at least the NoSQLs I've paid attention to do). That's the change he was predicting. You still need a tabular schema for your NoSQL database.
Hmmm - not really. You're storing a complex object (or document or xml or json). In other words, you're not using the db schema to destructure your model, but just using it to get at the bits you want.
@friendly_chap: No surprise considering how shit Java is.
You must be a shitty developer for hating on Java. Didn't you read thru the article about how useful and awesome Java is? The world's top websites and mobile apps heavily run on Java and the JVM.
9. Apple's laptop sales will exceed those of HP/Compaq, IBM, Dell and Gateway combined by 2010. No way, thank God.
Well, he wasn't that far off.
The high-end laptop market is utterly dominated by Apple: already in 2007 they had more than 30% (growing by 8% from 2006) of the high-end laptop market and now they're at more than 50%. So their sales there exceed those of all the other players combined.
They're also the uncontested winner in the ultra-thin laptops market.
Granted, there's a market for IBM Thinkpads but you simply cannot say he was far off regarding his Apple laptops prediction.
On the Lisp thing, looking around the programming world in 2004, you didn't see proliferation of functional concepts like you do today. In those days, "functional" meant Lisp, at least popularly. I think the big lesson of Lisp has been disseminated, though, and that is that it is more productive to write code-that-writes-code.
But I also think we learned a broader lesson that it's better to not pigeon-hole people down into a single paradigm. So what if Company A uses only the functional bits of your language and Company B uses only the object oriented bits, thus making their code "incompatible". Their code would be incompatible even if they could 100% agree on programming paradigms, naming conventions, API patterns. Because programming is so much NOT the syntax of the language, as long as it is not specifically restrictive.
To whom did 'functional' mean the same thing as 'lisp'?
I'm always astonished when people conflate these two things. Lisp encourages you to put the forms that evaluate to a value in the position where you would otherwise put a variable that has been set to that value, sort of code-in-place. That is kind of like functional programming. However, lisp does not avoid side effects or mutability; in fact virtually all operators in lisp are either destructive or have a destructive equivalent. The destructive equivalents are there because if you are limited to the nondestructive versions only, you can't implement lots of algorithms without doing a bunch of consing and copying. In lots of ways it is easier to implement a performant functional language if it is pure and without side effects, because you can make strong assumptions everywhere.
Lisp was much more interesting 10 or 20 years ago than it is today. Today, most languages have stolen most of the good parts of Lisp. Strong but dynamic typing (or the gross weak but dynamic typing), pass by reference only, generic data structures, garbage collection, first class functions, ability to introspect objects at runtime, and most other features it pioneered are just taken for granted in modern languages. What is left as an advantage is the lisp syntax, and advantages that come from that, such as data-as-code and therefore powerful macro systems. That is also the main disadvantage to lisp, as much as people will claim that eventually you can see through it, I have never seen anyone do math either on paper or on a whiteboard in anything but infix notation. There are infix and postfix calculators, but I have never seen a prefix calculator.
TL;DR: Lisp isn't really that functional unless you use a restrictive and inefficient subset of the language.
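(If it helps, here's the same destructive/non-destructive split in Python terms -- a rough analogy, not Lisp: sorted() allocates a fresh list the way reverse does, while .sort() mutates in place the way nreverse does.)

xs = [3, 1, 2]
ys = sorted(xs)    # non-destructive: copies, xs is untouched -- costs an allocation
xs.sort()          # destructive: mutates xs in place, no copy
print(xs, ys)      # [1, 2, 3] [1, 2, 3]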
You are also not stating the point you complain about quite right: it is not that "Lisp isn't really that functional". Lisp != Common Lisp, and being a Lisp and being functional are to some extent orthogonal these days - Clojure is a very functional Lisp, for example; similarly Racket; Common Lisp much less so. The original McCarthy Lisp paper was to a large extent a paper in the theory of computation, using Lisp as a vehicle for things like proving program correctness, so that Lisp was purely functional -- otherwise you could not do much formal reasoning with it. I guess to be completely clear one has first to define what one means by "Lisp".
That's an interesting point about infix and postfix calculators but no prefix calculators. Though I wouldn't go so far as to say there aren't any (I feel pretty sure I've seen one before, though I don't care enough to go looking), they do seem pretty rare. I wonder what lisp/scheme/whatever-I-need-to-say-to-get-you-to-stop-being-pedantic would look like with postfix evaluation.
I think it would be not that much better than prefix.
Infix is easier to read, because it is just more natural to parse. An expression like:
sin(1 + (5 + x + y) / (n + k)) is just really easy to understand
(sin (+ 1 (/ (+ 5 x y) (+ n k)))) is inscrutable, at least to me.
The second thing is, intermediate variables make code much more readable, but lisp strongly discourages this. The only way to create a lexical variable is with LET. But this causes indenting, which is pretty ugly. In fact, the use of indenting for both logical nesting and variable creation is a really nasty thing:
so this:
def qualifies_for_free_shipping(item_price, weight, shipping_factor, category):
    if category in FREE_SHIPPING_CATEGORIES:
        return True
    item_cost = item_price * TAX_RATE
    shipping_cost = weight * shipping_factor + 2.00
    total_price = item_cost + shipping_cost
    if item_cost >= 80:
        return True
    if total_price >= 100 and category in ELECTRONICS_CATEGORIES:
        return True
    return False
becomes, in some lisp dialect I am making up but which is like Common Lisp:
(defun qualifies-for-free-shipping (item-price weight shipping-factor category)
  (if (in category FREE-SHIPPING-CATEGORIES)
      t
      (let* ((item-cost (* item-price TAX-RATE))
             (shipping-cost (+ (* weight shipping-factor) 2))
             (total-price (+ item-cost shipping-cost)))
        (if (>= item-cost 80)
            t
            (if (and (>= total-price 100) (in category ELECTRONICS-CATEGORIES))
                t
                nil)))))
I could have cleaned up the code and used a better conditional than nested if's or something, but this is my point. I can't just glance at the code and see what is going on, I have to parse it and keep track of where I am as I move around the s expressions. In the python version I can just jump in the middle and move around without keeping a mental bookmark, because I always know from indentation how I can get to where I am now.
Your argument amounts to "I know Python better than Lisp." And it may be true that because most languages and especially most popular languages are like Python that you have a head start. But it doesn't mean that Lisp is insensible or harder to read. It just means you have less experience reading it.
I say this as a decent Haskell programmer, and a very poor Lisper. There was a time I found Haskell code so unbelievably befuddling I thought it was a prank. Now I know it, so I know what to expect and I can read it as easily as Java. Lisp remains harder for me, like you, but it's a matter of practice.
I think you’re trying to use Lisp as if it were Python, and that’s going to make things more difficult for you than they need to be.
(A lot of my college classmates ran into trouble in a class where we programmed in Scheme [since renamed to Racket] because they tried to use it like Java, the standard curriculum language. If/when they grokked Scheme, they became just as proficient in it as they were in Java, but until they did they spent a lot of time effectively complaining about how Scheme wasn’t Java.)
I know it's not real code but I honestly find the second example way easier to read. It's indeed very common to use a 'let' like that and I don't see what's wrong with a "two spaces" indent (between your 'let' and the next 'if'). If you're talking about the indent after the 'let*' then, if anything, I think it makes the scope just so much more obvious.
Being a "Lisp" is a very broad category and I always wondered about people calling Lisp a functional language. I would love to see a list of things that are today considered functional that originated in Lisp, I think there is not that many of them in the end. However, there is a ton of things that have nothing to do with being functional that originated or got popularized by Lisp: garbage collection, lists as the fundamental data structure, dynamic typing,... One could just as well consider it the first "scripting" language, a spiritual predecessor to Python, Ruby, maybe Smalltak to a extent? Many Lisp dialects do not stress the functional part all that much and lots of Lisp code is written in a way that is very imperative. In the end I think many other factors contributed much more to the popularization of functional programming.
And if you take Common Lisp as an example of a Lisp, there are still tons of interesting things that are far from being mainstream. Multiple dispatch, conditions system, macros, ...
I think the common programmer considers first-class functions and closures as 'functional' features. This is from the bad old days when procedural languages didn't usually have these things.
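(A throwaway Python sketch of that baseline, i.e. what people usually mean by "functional features" -- a first-class function plus a closure; the names here are made up.)

def make_counter(start=0):
    count = start
    def bump(step=1):
        nonlocal count     # the closure captures (and here mutates) its enclosing variable
        count += step
        return count
    return bump            # functions are first-class values: return them, pass them around

c = make_counter()
print(c(), c(), c(5))      # 1 2 7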
> Being a "Lisp" is a very broad category and I always wondered about people calling Lisp a functional language.
I don't understand how anyone who has studied programming language history could not understand that Lisp, and APL to no small extent, are the languages that initially defined what it means to be "functional".
Scheme, which is a hugely influential dialect of Lisp, is unambiguously functional, and Lisp in general is clearly derived from lambda calculus, which is the origin of functional programming.
I understand this, see my comment above, but when you speak of "the programming language Lisp", the way you speak about Python or Ruby, it most commonly refers to Common Lisp, which is and was the most popular implementation, and its common usage was hardly functional. Part of the confusion is also that what today goes as "functional" is not only about functions as first-class objects but also very much about minimizing mutable state, and this isn't something that was emphasized by the first Lisp implementations.
That's not my experience at all. "Common Lisp" is used to refer specifically to Common Lisp. "Lisp" refers to the entire Lisp family. When I took SICP at MIT, Sussman and Abelson referred to Scheme as "Lisp" more often than they referred to it as "Scheme". Check the videotaped lectures online if you don't believe me.
Additionally, lists in Lisp are the definitive persistent functional data structure. Sure, you can use rplaca and rplacd to modify a list, but this is very rare (not to mention dangerous) to do. Also, courses taught using Lisp typically focus on recursive solutions.
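(For the "persistent data structure" part, a tiny cons-list sketch in Python -- hypothetical two-line helper, not any real library. Prepending shares the old tail instead of copying it, which is exactly why mutating a cell rplaca/rplacd-style is risky: you'd be editing data another list still points at.)

def cons(head, tail):
    return (head, tail)        # an immutable pair; the empty list is None

xs = cons(2, cons(3, None))    # the list (2 3)
ys = cons(1, xs)               # the list (1 2 3) -- its tail *is* xs, nothing copied

assert ys[1] is xs             # structural sharing: mutating xs would silently change ys too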
..."I always wondered about people calling Lisp a functional language"
Even more interesting: why do people call a family of languages a language? elisp, for example, certainly doesn't put the emphasis on FP. But Clojure does (although you can use Clojure in a non-functional way).
But it's not that much of a surprise: nowadays talks about FP are everywhere, so programmers who think they understand FP use the term for everything.
For example, you can have an Excel spreadsheet with cells that always update themselves automatically from the network: a cell containing the current date, say, or a cell containing the current exchange rate between this and that currency.
Yet recently on HN in every single thread about Excel you had armies of retards explaining that "Excel was the ultimate functional programming language".
People are really that retarded. Even on HN. And it's frankly sad.
"Important Note: the predictions themselves don't matter! Most of them are probably wrong. The point of the exercise is the exercise itself, not in what results. You should try this experiment yourself — it may show you things you're thinking that you weren't really aware of. And other people might find it interesting reading, even if there are factual errors or whatever"
Lisp will never be popular. Everyone likes to kiss the ass of Lisp, and talk about how transformative and powerful it is, then they go and write a bunch of Python or Ruby or Lua or Perl or anything other than Lisp, because infix is just much much more readable. When was the last time you went to a math class and the professor wasn't using infix notation? Even the Common Lisp Hyperspec has to resort to infix notation to explain things. Look at all the infix notation: http://www.lispworks.com/documentation/HyperSpec/Body/f_car_...
Otherwise, this is pretty spot on. I would say that XML databases should be replaced with JSON or schemaless databases (there was no json back then) and then this prediction is spot on as well.
The mistake here is the "everyone." I agree that infix is easier for some math, but otherwise I genuinely prefer prefix, and I'm not the only one.
So the question now is, are those of us who have no problem with prefix notation merely a random minority of the population, like people who are left-handed, or are we a minority like say the minority who are good at math, or the minority who used the Internet in 1995?
I'm not saying for sure we are. Even after all these years, I still can't say for sure whether ordinary programmers will ever be able to deal with prefix notation. But I also don't feel it's safe to dismiss the possibility.
I've always thought that the ease with which I picked up prefix notation had to do with the amount of time I spent using HP RPN calculators as a teenager. Postfix and prefix are not that different.
But RPN calculators are harder to find these days, I think.
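(Postfix really is easy to evaluate mechanically, which is part of the RPN calculators' charm. A minimal sketch in Python, assuming binary operators and whitespace-separated tokens:)

import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.truediv}

def eval_rpn(expr):
    stack = []
    for tok in expr.split():
        if tok in OPS:
            b, a = stack.pop(), stack.pop()   # right operand comes off the stack first
            stack.append(OPS[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()

print(eval_rpn("3 4 + 2 *"))   # (3 + 4) * 2 = 14.0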
I got a new job in December. This company mostly uses Ruby and PHP. I was given an assignment for which they scheduled 2 weeks. I wrote the whole thing in Clojure, my new favorite language. I got it done in 1 week (6 days). After I was done, I asked if it was okay if I used Clojure. The folks I work with had no problem with that. They were impressed with my speed, part of which I attribute to Clojure.
I am the first person in this company to use Clojure, (although there is a small team in London that now uses Scala, and there is an official goal to move toward JVM technologies). I use it because it lets me work very fast. I have spent many years writing Ruby and PHP. I used to use them because I thought they were fast, and Java was slow. But one's point of reference shapes one's preferences.
Let me put it like this: I started using PHP in 1999. At that time, I felt strongly that this was the easiest way to write web apps. Java was verbose. The Struts framework was monstrous to the point of being offensive. Perl was ugly. I had never heard of Ruby. I became an evangelist for PHP, trying to convince companies that its use would make web development much more pleasant and faster.
Later I did some Ruby On Rails development. I like this language, though I have never understood the smug sense of superiority of some Rails developers, especially in their attitude toward PHP. In my mind, these are 2 similar languages with slightly different strengths and weaknesses, but at roughly the same level of power.
PHP was for a long time a Wild West of development. Every developer built their own CMS and then used it for everything. At some point after 2006, this changed. Ruby On Rails was a huge influence. By 2008, the only jobs I could find, for either Ruby or PHP, were using monolithic frameworks: Rails, Symfony, Drupal or Cake. I found myself bogged down reading endless documentation. I recall losing whole days trying to track down a bug in Symfony and then finding it was because of a setting in an obscure YAML file. The situation was not as bad as Struts, but I felt that somehow the world had taken a wrong turn. Once upon a time lightweight scripting languages offered freedom from Java and Struts -- and when you make that comparison, both Ruby and PHP seem like heaven. Bruce Eckel summed up the spirit of the change here:
"But for someone who has invested Herculean effort to use EJBs just to baby-sit a database, Rails must seem like the essence of simplicity. The understandable reaction for such a person is that everything they did in Java was a waste of time, and that Ruby is the one true path."
http://www.artima.com/weblogs/viewpost.jsp?thread=141312
To me, Clojure is interesting for 2 different reasons:
1.) the language
2.) the eco-system
The 2 work in combination of course.
When I got done with my 2 week assignment, which I did in 1 week, I ran "lein uberjar" and gave a single binary to the sysadmin, so he could roll it out to the production machines. He asked where the other files were. I said "There are no other files, all the HTML, CSS, Javascript, images and code are in this one file." He asked how to start it. I gave him a simple 1 line startup command. He started it up and was amazed. He said, "That's it?" I said, "That's it." He said "Why can't they all be this easy?" He was used to dealing with complicated Capistrano and Jenkins commands for roll-outs.
To me, Clojure has that simplicity, speed and elegance that I was looking for in 1999, when I stumbled on PHP and decided PHP was the wave of the future.
I suspect the Great Age Of Lisp is still ahead of us. Stuff like Clojure makes me think there is a vast potential here, still waiting to be unlocked.
So, if I were to do your two week assignment in one day, in C, would that mean that C is much better than Clojure?
What if I gave your sysadmin a makefile that would build a Debian package or RPM package rather than expecting him to maintain some complex and silly deployment system? The sysadmin would be my new best friend, but would that imply that the language I used was better?
You are doing some seriously sloppy thinking here.
I have to disagree - he was talking about the eco-system of clojure as well as the language.
We do get mixed up with complicated frameworks at the expense of simple (dare I say composable) solutions. Clojure is small and simple enough that it has not yet had the weight of working in a thousand different environments imposed on it.
Sure, it's the same; both imply a test for equality, or in this case an assertion of equality. However, '==' is not part of this language, and the reason to use it is that the in-language alternative would be less obvious and clear.
It's pretty universal to use syntax that's outside the domain of the language being described to describe a language in its specification; otherwise your head would explode trying to distinguish the example from the rule.
The C++ standard uses tables for the equivalent purpose, not C++ declarations, like at §23.2.1 in C++11. Does this mean that C++ should move from header files to an RDBMS?
This really is an impressive display of foresight. It appears that all the reading Steve mentioned doing paid off. The act of discussing his predictions here on HN is foretold in prediction #6. And those that were off, were still pretty warm.
Facebook already existed when this was written. HN was just around the corner and fits a lot better, imo. I figured "you and I" referred to developers and entrepreneurs, not the general public.
Facebook didn't launch 'til February 2004. Initially it was only available at US Ivy League colleges and, then, in 2005 at some international universities, but it was not "unavoidable" in 2004.
Perhaps I spoke too broadly, but I signed up (according to my Facebook account) September 6, 2004 and only did so after being nagged by friends and acquaintances. So, for me when I look back, it felt unavoidable.
Note: I went to a university in the US but not an Ivy
But how many of us thought it would become as widespread as it is, as a global community? At the time it was locked down to each college (you had to have an @college.edu address to even register), and it wasn't until 2005 or '06 that high school kids got access, and then their parents, and Facebook tore down the dividers of their walled gardens.
Then again maybe he was just a big fan of his MySpace page at the time.
In mid-2006 I was hearing about facebook, and it was being described as: "kinda like myspace, but for college kids - something universities can buy into and keep the creepers[1] out".
[1] Creepers being defined as people who just trolled social networks looking for friends/sex/whatever, that was new and terrifying to a lot of people then.
When he said "whoever creates AOL for real people [...] is going to be really, really rich" I'm pretty sure he was talking about something closer to Facebook.
That point was one of the easier predictions and vague enough to fit a number of currently popular services, but I'd have to agree that Facebook fits it best.
I think it's Twitter more than Facebook. Facebook is oriented towards connecting friends, while Twitter is oriented towards people with similar interests.
Prediction #6 is not very ambitious. The only way several new "internet hangouts" would not emerge would be if the internet suddenly had ceased to exist.
I'm similarly impressed. While we can argue about the specificity of several predictions, I can say that I would not have predicted any of them in 2004 other than Programmers Will Be Average and We Will Have A New Online Meeting Place.
That really stood out for me too. Even more so, the fact Paul Graham is referred to in the prediction just before. PG forms the conclusion of Steve's lisp prediction and also founded a type of community mentioned in the prediction after it.
This is a pretty incredible list of predictions. I'm not just impressed at how 'correct' some are, but also that these predictions turned out to be so important to the tech industry.
It's interesting how everyone has a different take on #2. When I read it, I thought of Heroku and I think that's pretty close. With the git integration many people do `git push github && git push heroku` which is pretty cool.
As for #10, that's not good news for all of the "Rockstars Only" job postings.
Does anyone know why XML was so hyped when it came out? I remember reading article after article about how great it was and how it was going to revolutionize everything.
The Java->XML->XSLT->HTML or Java->XML->Webservice framework was pretty big news in 2001. It meant you could transfer data between any number of services using the same data set. You could take that XML and send it to a java servlet or you could render it to html using xsl and give it to the users. You could also flat out dump XML and have other services use it. It was the first generally accepted webservice standard done over http.
Before XML exploded, information was transferred between companies (that I worked for) by moving CSVs around, putting the CSV into the database, then reading the database back out... compared to that, XML was an absolute godsend.
It was that great. It got us away from the era when everybody would make up binary formats to store data, into the era where you can debug by eyeball instead of with a hex dump utility, and use off-the-shelf parsers as a bonus. That's plenty enough revolution for one generation of technology. Everything after that is gravy.
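(The "eyeball it, then hand it to an off-the-shelf parser" point, in a few lines of Python with a made-up order document -- standard library only:)

import xml.etree.ElementTree as ET

doc = """
<order id="1138">
  <customer>Ada</customer>
  <item sku="KB-01" qty="1"/>
  <item sku="MS-02" qty="2"/>
</order>
"""

root = ET.fromstring(doc)
print(root.get("id"), root.findtext("customer"))                         # 1138 Ada
print([(i.get("sku"), int(i.get("qty"))) for i in root.findall("item")]) # [('KB-01', 1), ('MS-02', 2)]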
With threads, as the complexity grows, you basically re-invent the OS kernel/scheduler inside your app. You might as well use the real kernel. Multi-process is the way to go.
One weird thing: the site is hosted on sites.google.com and the content is dated 2004, yet Google Sites launched in 2007 out of the acquisition of JotSpot, which was itself launched in 2006: http://en.wikipedia.org/wiki/Google_Sites
1. Well we have the NoSQL bandwagon. Obviously with JSON in a lot of places instead of XML.
2. Well we have GitHub and Bitbucket, but also cloud application platforms and EC2, and Heroku for taking application deployment and infrastructure away from being the pain point it was.
3. Green threads.
4. I would say Java is still No. 1 on the JVM by a long shot, well above 50%, but I don't have the figures to back that up.
5. Lisp. I agree, hasn't happened
6. Hackernews, Reddit, Facebook
7. I think he was close to the mark there. 2009? Well, there was a complete reinvention when the iPhone was released in 2007 and Android in 2008. It took the networks some time after that to get their 3G together.
8. A lot of people are starting to pay for Google Apps for Enterprise, and Google AdWords is as popular as ever, but personally? No.
9. Apple is apparently more profitable, but I don't think there are good figures anywhere for this.
10. Most people are probably around average at.. ;-)
That doesn't really make a lot of sense. Cell phones existed in 2004. So did "smartphones". Really, the only major difference that you could use the word "market" to describe is the notion of a platform for general-purpose mobile computing, the key part of which is a general market for apps, instead of just baked in features on a phone.
Actually I took it to mean more like wordpress.com and WPEngine - ie. things that people can self host if they want to deal with the hassles of it all.
Because, outside of the minuscule market segment of geeks and techies, the tablet fills many of the requirements that people were purchasing laptops for 8 years ago.
Curious, not to be rude, why wouldn't you do that?
The tablet, as we see it today, really wasn't even a thing back when this article was written. Certainly not in the way that it is today. It's been a while since I've been in a 2004 mindset; but, if we consider a tablet as a 'large (as in not phone), mobile computer', then tablets certainly fit the bill. If we're thinking of a laptop as mostly a consumption device, then it still fits the bill. Even now, with the Tablet PCs that are now coming out; as well as all the productivity software that exists on the various tablets (including ssh clients), tablets are continuing to fill the place that laptops used to.
(General response to a trend in the comments.) I believe #2 refers to things like Wordpress.com or CloudFoundry.com. Wikipedia.org might qualify as well. If Discourse.org offers hosted instances, (and they make a lot of revenue), they could qualify too.
As far as I know, GitHub never opened their source.
Crazy to see someone talking about PG in 2004 that way (see the comments section). I assumed that comment must have come from an HN'er in 2004, but nope. Apparently it was just someone talking about PG and LISP on Stevey's blog in 2004. Weird.
Not sure why I got the downvote - I am serious - Laser guns will change the geo-politics of the world - and business has always followed those changes:
I simply assume that with "type system" you mean "statically typed language" since lisp is strongly typed, which implies having a "type system": bigloo & typed racket demonstrated that static typing & lisp/scheme aren't incompatible. That prediction was rather unlikely to come true, anyway.
Well, Clojure is actually a Lisp dialect, and more generally speaking, functional programming languages get more attention nowadays than back in 2004. Since 2004 we have F# and Clojure, and Scala was released in 2003...
Considering how hard it is to predict the future, I would say it is a pretty accurate prediction.
I came late to this thread, but I want to add an item to Stevey's list of "trust tests" for prediction #2, "someone will make a lot of money by hosting open-source web applications":
SOVEREIGNTY: I would strongly prefer outsourcing to large companies with C* executives, boards, and/or founders who really "believe" in the mission of the company, have complete financial control, and are generally unlikely to even need or want to sell out, and generally unlikely to ever be vulnerable to a buy-out.
When making investments of any kind in an entity that is made up of individuals, I'm increasingly wary of the cupidity of those individuals. Very soon I'm going to have to decide whether to send my kids to private schools or public schools (and switching costs are high, so I want to get it right the first time), and it strikes me that private schools may be less likely to change drastically over the course of 8-12 years than a public school.
Well, _paying_ for it is expensive in terms of money, it's not the switching that costs money generally.
The switching costs I'm referring to are "social" ones. I don't want to shop my kids around schools if I can avoid it, because each time they'll have to make new friends and be "the new kid" for 1-2 years.
In the public system, you change schools twice, elementary -> middle and middle -> high school (or just once, k-8 "middle school" -> high school), and each time you generally know lots of other kids because schools are arranged as "feeders", and a given high school class is made up of cohorts from each of the feeders.
> public schools change slow and ploddingly, after long tedious debates.
Public schools are beholden to state budgets, and I don't like the choices that were made a couple decades ago when _I_ was in public schools (no music or "arts" education at all in my k-6 school), so I shudder to think of what's going to happen nowadays. State budgets fluctuate along with the political climate.
Wow, I really was expecting at least one of these comments to have meaningful discussion of each of his predictions with facts/figures. Instead, half of the comments are arguing about Lisp and the other half about XML / JSON.
10 years ago I wrote desktop software with UI threads, background worker threads, network layer threads and various helper threads you spawned on a whim. And so did everybody else. Back then the big innovation to multi-threaded programming was the use of RAII Mutexes[1] to avoid livelock/deadlock scenarios.
Fast forward to today. We write complex software for the browser with Javascript front-ends that are completely asynchronous. On the server we have a share-nothing-architecture where hundreds of incoming requests are divided over independent workers. There is still complex multi-threading going on in the database and in the web browser, but most programmers don't have to deal with it anymore. The problem of locks and deadlocks is pretty much a responsibility of the operating system now, much like virtual memory and writing data to the filesystem.
We are doing more "multi-process" programming than "multi-threading".
Our systems are designed to scale by increasing the number of processes performing the task at hand, more than having one process with multiple threads to increase concurrency.
Programmers try not to think about thread management at all and delegate it to the OS, by using multiple processes with an MQ in between.
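(A bare-bones version of that shape in Python -- independent worker processes, a queue in between, no shared state to lock. The squaring "job" is obviously a stand-in.)

from multiprocessing import Process, Queue

def worker(jobs, results):
    for job in iter(jobs.get, None):   # None is the shutdown sentinel
        results.put(job * job)         # stand-in for real work; nothing shared, nothing locked

if __name__ == "__main__":
    jobs, results = Queue(), Queue()
    procs = [Process(target=worker, args=(jobs, results)) for _ in range(4)]
    for p in procs:
        p.start()
    for n in range(10):
        jobs.put(n)
    for _ in procs:
        jobs.put(None)                 # one sentinel per worker
    print(sorted(results.get() for _ in range(10)))
    for p in procs:
        p.join()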
> I'm not sure what you mean with that one, what kind of alternative?
I'm guessing one which better handles layout and modularization.
I'm not hopeful though; CSS is way too "good enough". It's probably here to stay, and actual layout features will slowly get (awkwardly) fitted into it[0], while modularization will likely be left to components.
Nr 8) is a manifestation of my wish that there must be something better "out there"; I don't know what it will be.
Some of the best front-end developers I know rant about CSS regularly (and with a passion), and some have already started dreaming of something better. Let's hope they fix it by 2020. (7 years from undefined dream to market should be possible.)
Steve Yegge is usually a talented and entertaining writer. Having said that, his strengths lie in writing about computer science and far less in actually creating real systems. These predictions aren't very good (as time has shown) and are on par with the poor judgment he exercised in bashing his former employer in a 'private' post.
> Prediction #5: Lisp will be in the top 10 most popular programming languages by 2010.
Every time I hear that, I always think of "abcd will be the year of the Linux desktop". I want it to happen, but at the same time, I realize how absurd I am being.
Thank god JSON has taken over from XML. I'm sure XML still has its uses, but it simply is way too verbose, and frankly I have yet to find a data structure whose representation fits better in an XML-like format (besides HTML, obviously).
I believe point #3 describes a paradigm shift from thread-based programming to a more event-based approach (which still uses threads/co-routines) rather than implying that everything will be single-threaded.
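(Agreed -- and today's Python gives a decent minimal illustration of that event-based style with coroutines; the delays below are made up and sleep() just stands in for network I/O.)

import asyncio

async def fetch(name, delay):
    await asyncio.sleep(delay)     # stands in for a network call; yields to the event loop
    return f"{name} done after {delay}s"

async def main():
    # Three "requests" interleave on a single thread -- no locks, no shared-state hazards.
    print(await asyncio.gather(fetch("a", 0.3), fetch("b", 0.1), fetch("c", 0.2)))

asyncio.run(main())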
This is really impressive. Soooo close on the XML database thing - just substitute JSON (which hardly anyone knew anything about in 2004) for XML and he nailed it. Just one great call amongst many here.
Well, not really. Say we rank programmers by skill level, numbering that skill level from 0-9. If you have 100 "5ers", then the average is 5, and yes, then most programmers are average.
If you have 50 "1s" and 50 "9s", the average will still be 5, but no one is average.
"Of course most programmers will still be average, by definition. The problem is that the average level of quality and productivity probably won't have changed all that much."
-- from the article.