Whatever happened to programming? (reprog.wordpress.com)
157 points by skorks on Mar 3, 2010 | 111 comments

My biggest gripe with modern programming is the sheer volume of arbitrary stuff I need to know. My current project has so far required me to know about Python, Django, Google App Engine and its datastore, XHTML, CSS, JQuery, Javascript, JSON, and a clutch of XML schema, APIs and the like.

Don't get me wrong, I'm grateful for all of it, but it just doesn't seem like what I was promised when I followed SICP for the first time. It just feels like I spend most of my time scouring through documentation and trying to remember umpteen different sets of syntax and class names rather than actually thinking in code.

Back in ye olden days, most programming tasks I performed felt quite natural and painless, just a quiet little chat between me and the compiler. Sometimes longwinded, sometimes repetitive, but I just sat and thought and typed and software happened. The work I do these days feels more like being a dogsbody at the Tower of Babel. I just don't seem to feel fluent in anything much any more.

We talk about 'flow' quite a lot in software and I just have to wonder what's happening to us all in that respect. Just like a conversation becomes stilted if the speakers keep having to refer to their phrasebooks and dictionaries, I wonder how much longer it will be possible to retain any sort of flowful state when writing software. Might the idea of mastery disappear forever under a constant torrent of new tools and technologies?

I kind of like it.

The thing is, most of the languages you mention are pretty good at what they do. Python is a good general purpose language, and Django is just a Python framework, right? (Not a Django developer.) XHTML does a good job describing hierarchical content to be displayed. CSS is a good language for styling content separately from the structure on a site wide basis. JQuery and Javascript can be looked at as a single thing, and if you know the syntax for Javascript data structure literals, you pretty much know JSON. Google App Engine seems to potentially offer a lot for the learning curve required (not an App Engine developer, just read some tutorials).
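The JSON point above is easy to demonstrate: a JSON document is, with minor exceptions, a JavaScript object literal, and any language with a JSON parser turns it into native data structures. A minimal Python sketch (the data here is made up):

```python
import json

# A JSON document looks almost exactly like a JavaScript object
# literal; parsing it yields plain native data structures.
doc = '{"user": "skorks", "points": 157, "tags": ["python", "django"]}'
data = json.loads(doc)

assert data["points"] == 157        # plain dict access, no classes needed
assert data["tags"][1] == "django"  # nested lists come along for free
print(json.dumps(data, indent=2))   # and round-tripping back is one call
```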

XML, well, there's just no excuse for that. :)

Some people complain about all the parts you need to know to build web applications as compared to desktop applications. But I think that the different languages each doing their thing well makes for a better development experience. The other great thing is the diversity of choices for the backend technology stack, like Viaweb deciding to build web apps just so they could avoid Windows and use Lisp (...and some Perl and C++, I think).

I know exactly how you feel. Having to pick up all those different knowledge sets is something else. However, I balance the fundamental desire to build things against a fundamental desire to build things people can use.

It's the pride in me. The closest I came to it a long time ago was when I animated a bunch of search algorithms for a class project in college. That was one of my favorites because I could show my non-geek friends what I was doing and they could appreciate it.

It's a big reason why I am now focusing on my web app. The web is at a point where I can build something and get (potentially) millions of people using it. I'll sacrifice some hands-off-ness in my programming for that.

My current project has so far required me to know about Python, Django, Google App Engine and its datastore, XHTML, CSS, JQuery, Javascript, JSON, and a clutch of XML schema, APIs and the like.

This is the price you pay for being able to work with other people. If you never wanted to reuse anyone else's code or talk to anything else on the Internet, you would not need to know any of this. To stand on the shoulders of giants, you need to speak their language.

I think you missed the main thrust of his complaint: back when everyone was hacking in FORTRAN and COBOL, you really didn't need to know much more than the language, its standard library, and stuff specific to the problem you were solving. At worst maybe you'd have to learn BLAS and LAPACK if you were doing numerical stuff. The APIs/overhead/junk memorization situation has been steadily worsening since then.

But at least now we don't have to take a stack of punched cards to the Priest with an Offering (bribe) and wait a day just to get a printout saying that the damn thing dumped core.

You still have the option of using FORTRAN and COBOL with BLAS and LAPACK, if you're satisfied with the result.

The problem is that today we demand much more than what those earlier tools can give us. Such demands result in an inevitable (?) increase in complexity of the aggregate toolset. The tools are capable of more, and give you more options. So it seems that learning and using those tools effectively would be more challenging.

But it's not a one-way street. Along with this increase in complexity have been efforts to simplify, streamline, and structure these tools and ease their use. But both the demand for increased ease of learning/use and the demand for power are moving targets. Hopefully with time both could be maximized. We're just not there yet. Remember, computer science is still in its infancy.

Nobody is forcing you to want to interact with the real world.

The reality is that "back in the day", computers weren't useful to very many people. Now they are.

You're taking a narrow definition of the "real world" by including webapps but excluding ATM machines.

I agree that all this "overhead" does have utility, I'm not saying we shouldn't have it -- but it's less pure/fun compared to the old stuff (IMO).

"Standing on the shoulders of giants" is exactly what's wrong with programming, and technology in general. If we keep doing this, we'll eventually build up so much information that when the foundations (the "giants") need repair or improvement, nobody will know how to fix them. You should never rely on someone else's work without fully understanding it yourself, and in programming that usually means it's best to write things from scratch. There needs to be more praise for re-inventing the wheel.

Please tell me you are trolling.

Why would I be trolling? Sure, it's an unpopular opinion, but I'm expressing it sincerely. The idea that you should continually build on the work of others is insane--do you build buildings like that? Constantly tacking on parts as they're needed, never re-analyzing the foundations? Of course not. Why, then, should the same work for software or science? All that results in is an excess of information; I'm suggesting that it's better to trim that excess by constantly rebuilding and refining the basics, rather than building monstrosities upon monstrosities.

Do you build buildings like that? You certainly do!

Please don't tell me each building project reinvents concrete, or steel girders, or cranes, et cetera, et cetera. These are all technological wonders, and using them means standing on the shoulders of giants.

It just looks different in physical construction.

This is precisely why I think I enjoy developing for the iPhone more than web development.

Yes, Cocoa is a massive library, but it has 99% of the things that I need. I think in Objective-C and Cocoa.

When it comes to web development I'm doing hundreds of tiny little things in 2 different languages with 2 different markups to get something done.

Tone and I were discussing just this subject over lunch yesterday.

We were talking about why some people love working in Objective-C and why others abhor it and came to the general conclusion that it is more of a philosophical issue than a technical one.

That the bits given to you by Cocoa provide the minimum functionality required while the frameworks of other environments try to cover every piece of functionality imaginable.

The former suits the makers and the latter, the assemblers.

I think there is room for both in this world (although I admit I belong to the first camp) but treating both types of "programmer" the same rubs both the wrong way.

This actually helps sum up why I like doing Flash work in Actionscript. AS3 has a single fairly narrow standard API with consistent structure and conventions. Flex has a much larger API, but still with consistent structure and conventions. When I stray from the comfort of my (admittedly user-unfriendly) marketing microsites toward the wooly harum-scarum of HTML/PHP/non-jQuery-Javascript, I feel slightly sick to my stomach.

On the other hand, Django/jQuery is pretty sweet. I think it's a matter of working with orthogonal and well-designed systems, more than the number of systems.

Well ... you should be grateful that the stack you're using doesn't suck so much. Python/Django is a piece of cake to work with, and it has many extensibility points (stays out of your way), while also giving you lots of stuff for free.

Ever worked with Spring / Hibernate / Struts / EJBs / Oracle? If pain should have a definition when related to programming, then that's it.

I would rather have decoupled components that improve in quality independently and are the best at what they do.

If you really want a more unified development method, look into Pyjamas ( http://pyjs.org/ ). It's a port of Google Web Toolkit to Python. You just write everything in Python and then it compiles the JavaScript front end for you.

And another port of GWT for .NET is http://dotweb-toolkit.com. It uses a decompiler in order to translate .NET assemblies into javascript.

> My biggest gripe with modern programming is the sheer volume of arbitrary stuff I need to know. My current project has so far required me to know about Python, Django, Google App Engine and its datastore, XHTML, CSS, JQuery, Javascript, JSON, and a clutch of XML schema, APIs and the like.

The situation seems ripe for a disruptive technology to emerge.

Curl tries to be this. I remember reading articles about it years ago, but it never really took off: http://en.wikipedia.org/wiki/Curl_%28programming_language%29

Here's a project that's taking a crack at disrupting this situation: http://cappuccino.org/ I know there's a few others like this, but it's the first one that popped into my head.

It all depends on the context - sure on some platforms (mostly embedded) it's just your binary and the hardware.

Anything else, where you start having dependencies on components sourced from elsewhere then you are in this situation.

Personally, I don't really see what the problem is - looking at and using other peoples software is usually a great learning experience.

This is why I'm an embedded engineer. I don't want to muddle through all of the 'bajillion' different frameworks, languages, etc. Need a window manager? Wrote one. Need a function not implemented in our custom graphics driver? Implemented it. Want to toy around with speeding up software bitblt? Go for it, and good luck.

Though that doesn't mean we don't use third party libraries, I just think we have far more flexibility in not using them.

Well, I don't know about you, but I suspect that my efforts to write an IDE, OS kernel, database engine, virtual machine host environment are probably going to be rather worse than the ones that already exist. So I'm quite happy to use existing ones.

The obvious solution is to move up to a higher level of abstraction. Create some sort of new language or tool that allows for creating the entire application in a single environment, and then compile that down to Python, XHTML, CSS, JavaScript, etc. as "object code". There have been a few rough attempts at tools like this already, but nothing that has really worked yet. Seems like a good opportunity if someone can figure out a good abstraction that isn't too leaky.
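As a toy illustration of that "compile down" idea (not any existing tool; every name here is invented), a declarative spec can be lowered to HTML the way a higher-level language compiles to object code:

```python
# Toy "compiler" from a declarative form spec to HTML "object code".
# Purely illustrative; real tools in this space (GWT, Pyjamas,
# Cappuccino) are far more sophisticated and cover the JavaScript
# and CSS layers too.

def compile_form(spec):
    """Emit an HTML form from a declarative description."""
    fields = "\n".join(
        f'  <label>{label}: <input name="{name}" type="{ftype}"></label>'
        for name, label, ftype in spec["fields"]
    )
    return f'<form action="{spec["action"]}" method="post">\n{fields}\n</form>'

spec = {
    "action": "/signup",
    "fields": [("email", "Email", "email"), ("pw", "Password", "password")],
}
print(compile_form(spec))
```

The leaky-abstraction risk shows up even here: as soon as you need a field type the spec language never anticipated, you are back to writing HTML by hand.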

If you like inventing wheels, go ahead; maybe you'll be the creator of the next tool or framework. If you like stacking wood, then be happy there is so much wood already in the treasure-house of computing's libraries. Whichever you choose, happy programming is what matters most.

jdietrich, you have EXACTLY nailed what I was trying to say (but in about 1/10th as many words as I needed). Thank you. I have linked to, and quoted, this comment in my followup article, http://reprog.wordpress.com/2010/03/04/whatever-happened-to-... -- hope that's OK.

Thank you for your kind words. You are most welcome to quote me.

Wow. I sympathize with this very, very strongly. I think this captures the root of the career frustration that I experience 70% of the time.

I seem to spend way too much time on what I call "guessing the magic password" - tracking down that one bit of punctuation in some obscure value in some xml configuration file that makes my whole system fail silently in one environment. I might spend 5 hours beating my head against a wall, only to finally figure out that this brittle system over here needed me to put some proprietary prefix on some host name in this one context. And at the end of it, I don't feel accomplished. I feel like I'm wasting my life! Figuring out which parameters to pass to which APIs in what order is a step above "guessing the magic password" - but it's not that big a step.

A few things come to mind. You can try to make sure you are one of the people writing the libraries. You can implement the next platform and use all the aesthetic sense you have in you to try to make it a joy for others to use. Put yourself in the position of the programmers who write the systems that make the rest of us age faster, and try to do a better job. Or content yourself with getting your programming joy from your elegant side-projects.

I seem to spend way too much time on what I call "guessing the magic password"

It's always been this way, but lately it's gotten worse and worse.

You need to fix a problem. Your friends tell you about technology X which seems exactly what you need. So you pick it up, take a quick look at the samples, and wham! Within minutes the thing is working exactly like it is supposed to!

Well almost. There's this little part that's broken that ruins the whole thing and then you spend hours or days (yes, perhaps weeks) digging up obscure configuration documentation and tweaking APIs in ancient Klingon, sacrificing turtles to Ba'al or whatever else it takes to get the damn thing working like it's supposed to.

Then it's on to something new.

The funny part of this whole story is -- when your friends ask you about how programming in X was, you are more than likely to brag about how easy it was to solve your problem. After all, the bulk of the problem was fixed almost instantly. Don't want to look like a dweeb. And so the cycle of pain continues.

Libraries were created so that you could focus on mastering new and unexplored territory, not to replace programmers.

As a beginner, I didn't appreciate big libraries (like Cocoa). I couldn't wrap my head around the idea of having tons of prewritten code and stitching it together - I understood programming as writing code - so I wanted to write my own damn code, not just manipulate massive libraries.

So I turned to PHP and developed my own webapp (a stripped down Google Docs), from scratch. I quickly stumbled upon the purpose of frameworks and libraries. After writing my 20th SQL query I realized I was wasting time. About 10 hours later, I made my own light PHP framework. Now I didn't have to worry about my SQL syntax (it was automated based on the data model variables) and I got to start working on really cool/interesting programming issues rather than the mundane blah code.
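The sort of repetition being automated here can be sketched in a few lines. The commenter's framework was PHP; this is an invented Python toy, and real code would also need escaping, joins, relationships, and so on:

```python
# Toy CRUD statement generator in the spirit of the commenter's
# framework: the SQL is derived from a data-model description, so no
# boilerplate query is ever written by hand twice. Names invented.

def insert_sql(table, columns):
    """Build a parameterized INSERT from the model description."""
    placeholders = ", ".join("?" for _ in columns)
    return f"INSERT INTO {table} ({', '.join(columns)}) VALUES ({placeholders})"

def select_sql(table, columns, key="id"):
    """Build a parameterized single-row SELECT from the model description."""
    return f"SELECT {', '.join(columns)} FROM {table} WHERE {key} = ?"

user_model = ("user", ["name", "email", "realname"])
print(insert_sql(*user_model))
# INSERT INTO user (name, email, realname) VALUES (?, ?, ?)
print(select_sql(*user_model))
# SELECT name, email, realname FROM user WHERE id = ?
```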

Now I've started to investigate Cocoa again and I realize that libraries and frameworks have not hurt programming, but dramatically catapulted it into the realm of science rather than plug-and-chug. Today, our software is more complex than ever and it will only get more complex in the future.

I think if you see programming as just "stitching libraries together", then you've lost your passion or never had any to begin with. Programming has evolved into a science where you have to learn the foundations (languages) AND the pre-existing solutions (libraries) in order to figure out what's been done for you so that you can focus on exploring new and more complex issues.

Where does mathematics come into play in your evaluation? I just spent last quarter on discrete mathematics and digital design and it has given me a different perspective on programming in my learning experience.

I wouldn't call myself a programmer yet ( Joe Hewitt has made that apparent to me: http://twitter.com/joehewitt/status/9813494038 ), but it seems like modern software engineering is a form of alchemy now. I thought a software product is a combination of different mathematical techniques, but translating that into code seems so cumbersome (taken into consideration that I've learned and had experience in my choice of programming language and framework, which is also a major task on its own).

Math has always been a part of technical design. I think "programming" is starting to become a field where the actual authoring of code and the design of logic fuse completely. So the future of programming combines the design (engineering, mathematics, etc.) with writing code.

From my somewhat inexperienced standpoint, I think programming has gone from instructing the computer to complete a task, to telling the computer to create a logic and handle events based on a complex set of conditions. The former was simple - 1, 2, 3, done - while the latter requires you to design a system, then make that system function through code.

I see old-style programming as following a map: go straight, turn right, turn left, and then stop at the destination. Modern programming is more like telling the computer to use this logic to find its own way on the map, then follow through.

Basically, I see programming slowly morphing into the creation of artificial intelligence.

I think you have the wrong idea about programming. Programming has always had algorithms (the ability to find its way around a map). This "old-style programming" is called procedural programming, but even that paradigm had capabilities to implement algorithms. This is the computer science aspect of programming. Actually, it's not an aspect at all, it's the foundation of programming.

No, programming is a result of mixing math and linguistics to achieve information processing. Of course, that's my inexperienced evaluation, and I bet a veteran of this field would quickly tell me that I too have an incorrect notion. But see, that's the thing: it shouldn't be this hard to comprehend! Sadly, as Scott Rosenberg says in his book Dreaming in Code, the reason why programming is imperfect is because people made it that way, and people are imperfect.

I think you need to really learn what SQL is. It's a 4th-generation language, higher level than PHP or other OO/procedural/functional languages. If you're finding PHP and a framework to be better, you're underestimating the power of SQL.

The SQL statements are not the problem. The problem is turning the list of tuples you get back into a usable data structure.

Also, abstracting away SQL doesn't mean you abstract away set algebra. It means you write "$current_user->messages( text => { -like => 'foo%' })->recipients" to get the list of people to whom the current user sent messages whose text starts with "foo", instead of writing some 200-character-long embedded program in another language. A good ORM is an essential abstraction.
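The "usable data structure" problem from the comment above, and the row-to-object mapping an ORM automates, can be sketched with Python's standard library alone (schema and data invented for illustration):

```python
import sqlite3
from collections import namedtuple

# Without an ORM, every query hands you back anonymous tuples; the
# tedious part is mapping each row onto a named type so the rest of
# the program can say msg.sender instead of row[1].
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE message (id INTEGER, sender TEXT, text TEXT)")
conn.executemany("INSERT INTO message VALUES (?, ?, ?)",
                 [(1, "alice", "foo bar"), (2, "alice", "hello")])

Message = namedtuple("Message", "id sender text")
rows = conn.execute(
    "SELECT id, sender, text FROM message WHERE text LIKE ?", ("foo%",))
messages = [Message(*row) for row in rows]

assert len(messages) == 1
assert messages[0].sender == "alice"   # attribute access, not row[1]
```

An ORM generalizes exactly this step, plus relationship traversal, for every table in the schema.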

By that example, I meant to highlight the DRY aspect of programming and how I realized, as an early beginner at the age of 15, that frameworks and libraries provide a good foundation so you can focus on more complex issues.

The SQL I kept repeating in that application was simple CRUD functions on MySQL tables. I know that SQL is so much more than that.

SQL isn't more than that. That's the whole beauty of relational programming: you only have to write queries and views, and this simple abstraction does most of the work you need to do (with PL/SQL or T-SQL it is Turing-complete, and you don't even need to leave the RDBMS).

I don't get why some people keep praising SQL like this. SQL is great for complex queries that contain valuable logic, but it's not expressive at all when you want to do something trivial. Getting all the attributes of user X is so much more work, and harder to maintain, in SQL than it is in any ORM.

"select name, pass, hash, email, realname from user where id = ?"

versus

session.get("User", x)

SQL is not particularly nice for complex queries, either. The underlying model is good, but the particular syntax they chose is verbose, confusing, and non-portable.

I kind of agree, but I was conceding the most reasonable argument to the parent.

'SELECT * FROM user WHERE id = ?' OR 'SELECT getUser(?)' presuming you have a stored procedure/function getUser that returns the data you need, and you should (or a view, at least)

SELECT *? A function in the SELECT statement?

Oh dear god, a blasphemer!

The former can cause performance problems if you have blobs, as well as being an unnecessary security hole; the latter can cause performance problems over large datasets, as it often forces the execution plan away from a set-based solution.

The fact that using the most trivial of examples triggers a discussion in and of itself helps to prove my point.

Nobody debates session.getUser(id)

Relational programming may be beautiful, sure, I guess in some ideal world.

But SQL syntax is a dog. Especially when it's sitting next to real code, wrapped up inside strings and making everything feel dirty.

If you're using an SQL database from within another language primarily just to dump objects and pull them out, and especially if you're doing a lot of it, I have trouble thinking of a situation where you wouldn't prefer to abstract away the SQL bits. Maybe if you're working in a language with built in SQL syntax support? Even then, it's not exactly compact, and you've got quite a bit to do even after you get your results back from the database...

SQL, or more accurately, relational algebra, is a thing of pure mathematical beauty.

The problem is all the other crap that you have to layer on top of it for it to be really, really useful.

What about the very common cases where you need to do very simple things, like common CRUD operations? Abstracting the SQL required for these simple queries makes a lot of sense to me.

SQL is designed exactly for CRUD. Almost by definition there's no better way to do CRUD than SQL. And about CRUD: most software should rightly be CRUD, because CRUD represents the highest level of abstraction in programming that you can get (below a natural-language interface, of course). SQL CRUD abstracts away all the hardware and software layers of the computer (the only hardware/software efficiency decision you have to make is where to declare indexes).
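The mapping is direct enough to show in a few lines: each of the four CRUD operations is a single SQL statement. A self-contained sketch using an in-memory SQLite database (table and data invented):

```python
import sqlite3

# Create, Read, Update, Delete: one SQL statement each, with nothing
# about storage layout, caching, or disk layers leaking through.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user (id INTEGER PRIMARY KEY, name TEXT)")

conn.execute("INSERT INTO user (id, name) VALUES (?, ?)", (1, "ada"))        # C
name = conn.execute("SELECT name FROM user WHERE id = ?", (1,)).fetchone()[0]  # R
conn.execute("UPDATE user SET name = ? WHERE id = ?", ("grace", 1))          # U
conn.execute("DELETE FROM user WHERE id = ?", (1,))                          # D

assert name == "ada"
assert conn.execute("SELECT COUNT(*) FROM user").fetchone()[0] == 0
```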

It doesn't abstract away the fact that your application is written in terms of polymorphic objects, rather than opaque relations. It also doesn't abstract away the fact that people like to represent their data with data structures richer than "set". SQL works reasonably well for CRUD, but most web applications are not CRUD, they're online transaction processing, where relational databases just get in the way. A good object database is a much better fit. (And no, I don't mean CouchDB.)

The author captures the sentiments of a generation of programmers quite well. In the 80s, at the beginning of my 'professional' programming career, I wanted to not believe those who foresaw our careers as being like engine and automotive engineers of the early 20th century. That is, destined to become "parts changers".

Early-phase technology workers with those sophisticated machines were true 'engineers', and were paid well. Over time, things evolved (of course). That history confirmed and taught that as technology matures, it becomes a commodity. Economies of scale emerge, markets develop, and so on, such that the technology practitioners ultimately become installers and configure-ers. I think it started to go awry, in my timeline, when the term Application Programming Interface (API) hit the streets. :-)

However, the above is really only true in the 'application development' space. Most software development opportunities are in that space; and therefore, yes, most opportunities to develop software are boring, non-engineering jobs -- rather blue-collar, in fact. To one steeped in a true CS background, that's a real disappointment when considering a lifetime career. This is true for those in their 20s as well as their 40s.

So, I agree with other posters here: get creative! Reject the direction of the herd, if it seems fitting. Do it anyway, just to experience and learn.

It seems the author of the article is stuck in a problemscape where all the problems he would find interesting are already solved.

One solution is to move to a new area of software development that offers challenging problems.

I am currently, like many others it seems, working on creating enterprise CRUD apps and integrating them. I try to find joy in reaching for a skill level where I can produce solutions quickly: solutions that are easy to change, that immediately let you know where and why an error has occurred, that are secure, that scale, that run in the cloud instead of on servers that take IT operations months to set up, and so on. There really aren't that many people out there who can do all of the above well (me included), and there is a lot of money to be saved for your employer/clients by doing those things well. I think we would see a lot fewer attempts at off-shoring if more people in enterprise IT strove for that kind of knowledge.

I guess what I'm trying to say is, maybe one shouldn't have too high expectations of the technical challenges in problemscapes that have been mostly solved already but that, at least in my area, there are still challenges to be found. They are just of another kind and maybe you can find them interesting? They keep me interested enough at least.

*sigh* Why am I posting on hacker news again?

I think to do actual "programming" you need to work for a software company. Google and Facebook and other gargantuan sites have as much interesting CS stuff to work on as you could want. I'm sure many at Apple and Microsoft still get down and dirty with OS internals. There are a lot of companies springing up focused on machine learning over massive datasets.

If you are building applications for end users, though, it's hard to justify not leveraging libraries and frameworks that already supply much of what your users want. Especially if the software does not directly make money for your organization.

What happened is that we finally hit "the dream" of building software out of large building blocks instead of having to reinvent screws and bolts for every project.

The lack of fun is, in my opinion, a side effect of using weak languages that force us to write boilerplate and ceremony along with using half-baked non-Turning complete XML languages (don't forget the "L" in "XML" stands for "language").

"screws and bolts... non-Turning (sic) complete"

Pun intended?

Heh... Pun not intended; just typing too fast.

I suspect he meant Spring anyway.

This is the promised land. Remember hearing how good code re-use is? Now it's here.

Hurray! Now we can focus on what's really making our application different. And it turns out to be boring.

Maybe we should be more creative in our application's core functionalities, should we not?

Is this an indictment of frameworks or merely an indictment of bad frameworks? I've worked with both. Trying to manhandle JAX-RS into marshalling your objects in exactly the right fashion such that the XML parser doesn't barf on them is not exactly uplifting work... but it did get something done for my customer.

Working with well-designed frameworks, though, is a true joy. It means I spend a minimum of time on plumbing and spend the majority of my efforts solving problems for my customers or otherwise doing high-value work. I understand there are folks out there who love socket programming, but I've done socket programming, and I have no desire to ever touch it again. Good HTTP clients, REST, etc. make that totally unnecessary for the types of problems I routinely have to solve. (Speaking of good frameworks versus bad frameworks -- even in Big Freaking Enterprise Java there is a titanic gulf between ways to implement web services. RestEasy is probably the best-architected framework I've ever seen -- it is so good you can forget you're using a web service at all.)

My next business will need to make telephone calls. There is a whole lot of very ugly coding necessary to make that happen. Happily, Twilio has apparently already done it for me. I'll just snap together Twilio and Rails to be able to make the customers' phone ring, and concentrate on delivering the smart part of the service, which is that making sure what happens after the phone rings solves a real problem that is preventing their business from making money.

I think it's more an ode to "plumbing". From your tone, I get that you don't see much value in or get much enjoyment from writing a parser, or doing pixel-level image manipulation, or writing extensions to your editor to get it set up just perfectly, or trying to get your code to fit in a 64k segment, or debugging hardware drivers. But some of us do.

We're just different. For you, and most modern programmers, there isn't much value in that stuff, and you don't want to have to do it. And, in the modern world, the truth is we don't have to except in the rarest of circumstances.

In the modern world, developers (they aren't called "programmers" anymore, it seems) have the freedom to worry about prioritizing "high value work" to solve their customers' problems. But it wasn't always that way.

For some of us, that kind of sucks.

Everyone is tainted by their own experience, I suppose, but I feel like I get to work on plenty of interesting problems. What bothers me (and this author it seems) is having to spend your time learning arbitrary things, such as an XML format for a configuration file, or nuances of how different browsers handle different conditions.

However, we (web developers at least) are writing software that is supposed to run on just about any hardware / OS / browser combination, and making that happen and doing complex and interesting things in that context is going to involve lots of arbitrary details.

That being said, if I think back 10 years, this type of work has become dramatically more pleasant thanks to a lot of open source software that has matured in that time.

If the complaint is that we can't all write our own virtual memory system or our own b-tree, I suppose I can't disagree that that is becoming less of a reality. However, solving interesting problems still requires an understanding of these things, and using existing libraries in my experience saves me time and allows me to have a bigger-picture view and solve larger problems. It is still very satisfying. If this article is written from the perspective of someone automating simple and mundane business processes, putting together forms, etc, then I believe the complaints are misdirected.

The problem is not that libraries exist, but the incredible mental investment required to use them. Early libraries were just bundles of functions, but modern libraries are vast, complicated beasts that require years of experience to fully grok. But don't take too long, because they all exist in a state of perpetual churn, and hitching yourself to the wrong train could well destroy your career.

the current trend of plugging together frameworks and libraries does lead to some pretty boring, frustrating work. And the unfortunate thing is that in many cases they don't even reduce the complexity of the solution. (They certainly don't simplify the /problem/)

There's a fascination with shiny new things and it's tempting to think that the latest and greatest download from some smart guy solves the problem you are solving. But after a couple years working in that manner I'm not so sure.

I think it's tempting to view frameworks and/or libraries as an implementation of software patterns, and also as a way to avoid learning your trade. There's probably no conscious thought behind it, but you see it in so many things... if you just learn this framework or use this library, you won't have to learn HTML, or CSS, or sockets, or synchronisation, or searching and sorting, or...

This has happened to many industries, not just programming.

A pharmacist (not sure what you call them in the US) spends years at uni learning all about drug manufacture, how the body works, etc., and now all they do is type the script into the computer and hand over bottles of tablets.

What does a signwriter do now? It used to be a skilled craft; now all it requires is typing text into a computer that then cuts the letters from a roll of vinyl sheet.

Or carpentry, or any number of other trades?

The Internet happened. It has made sharing code extremely easy and promotes code reuse, which has led to the current situation.

On the other hand, nobody is forcing you to follow this "glue libraries together" method. Disconnect from the internet and just start coding. As a bonus you'll get rid of a lot of distractions (email/IM/twitter/whateva). It can be quite refreshing, and who knows, maybe you'll even create something new and awesome.

If you're just plugging libraries together it's a sign that the area of computing in which you're working has ceased to innovate.

This was maybe the main reason why I switched to Ruby. The largest part of my time was spent on integrating other people's work. Don't get me wrong: it would take me more time to write the code from scratch, and it would have introduced redundancy. But it just didn't feel right spending so much time on getting other people's stuff to play together. I think the problem is that C/C++ and Java don't scale semantically.

My comment I left on the post:

I guess it depends on your motivation. I don't program because I love creating complex data structures or innovative new algorithms; I program because I want to create innovative new applications that change how people live, think, work, learn, etc. I signed up for programming so I could change the world for the better in my own little way -- not improve how an electronic machine inputs and outputs binary data.

But to each their own. It seems everyone likes to work at different levels of abstraction. I think we can both agree that those electrical engineers are weird :)

The whole point of libraries and higher level languages is that they allow you to work at a higher level of abstraction.

As the languages and libraries improve, difficult problems become trivial, while impossible problems become merely difficult. Given enough time, problems eventually become trivial enough to be solved by non-programmers.

There will always be a frontier - problems that have only just become solvable thanks to improvements in technology - and these 'very difficult' problems will be what the best and brightest are drawn to.

Maybe the author is working on problems that aren't challenging enough.

I'd take the view that the emphasis has shifted from 'can we do this? / is it possible' to 'how shall we do this? / what will make our users happiest'. Both of these are great aims, but the former is probably more fun. There will still be edges where the emphasis is still on 'is it possible' but as the technology matures, you'll probably have to seek them out, rather than be there automatically.

A very interesting observation. I tend to agree that the sheer number of frameworks and libraries turns programming into coping with flaws or badly documented "features". The other thing is that the programming world is much like an archipelago, where different islands develop their own programming culture, like .NET or JEE or PHP. And the islands are floating apart: it will become increasingly difficult to establish the better choice because of long learning curves. It is important to admit there are other cultures out there that might even do a better job in particular areas.

However, as much as frameworks get more sophisticated and cater to the programmer's ideal of elegant and abstract code, managed code, reflection and even OO have an overhead cost in cycles and bytes, particularly when scaling up. As a 45+ programmer I sometimes weep about "elegant" code that drives your machine to the edge but only does a fairly simple job. Stacking APIs and frameworks certainly does not always contribute to efficient and simple executables. Sometimes we just need to get back to the roots, or at least understand what happens in the machine when we use a certain framework or library. For programmers that will remain a challenge, and fortunately it will force us to reflect occasionally...

Programming for the sake of programming and programming to solve real-world problems are two different things. The first is more fun, the second is more profitable. Since most people want money rather than fun, most programming is of the second type.

If you want to program something fun, nothing is stopping you. Thanks to the libraries, you can concentrate on what interests you, rather than the details of how some standards committee thinks the word "referrer" is spelled.

I think your comment hits the nail on the head. I've worked with web apps for a long time and I quickly learned that all I was doing was writing software for some business process. This isn't the sexiest work or the most interesting, but it has never stopped me from doing interesting things.

For example, I constantly try to make less work for myself. So much of the code we write is boilerplate, so I have various tools that generate as much code as possible. I have written mini languages that specify the important parts and then generate all of the plumbing around them. I won't run out of interesting problems until I can produce an entire application just by specifying the parts that matter to the business.
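To make the "mini language" idea concrete, here's a minimal sketch of the technique: a one-line spec that a generator expands into boilerplate. The spec format, the `generate_class` function, and the example spec are all made up for illustration; a real in-house tool would generate far more plumbing (persistence, validation, forms).

```python
# Hypothetical mini-language: "entity: field:type, ..." expanded
# into a Python class so you never hand-write the __init__ boilerplate.
SPEC = "user: name:str, email:str, age:int"

def generate_class(spec: str) -> str:
    """Turn 'entity: field:type, ...' into Python source for a class."""
    entity, _, fields = spec.partition(":")
    pairs = [f.strip().split(":") for f in fields.split(",")]
    lines = [f"class {entity.strip().capitalize()}:"]
    args = ", ".join(f"{name}: {typ}" for name, typ in pairs)
    lines.append(f"    def __init__(self, {args}):")
    for name, _typ in pairs:
        lines.append(f"        self.{name} = {name}")
    return "\n".join(lines)

print(generate_class(SPEC))
```

The generated text is itself valid Python, so the same approach scales to emitting whole modules of plumbing from a short, business-readable spec.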

There is fun and exciting work to be done still. The fact that most of us chose to work on boring plumbing has little to do with programming having changed; it became a more diverse field, the majority of which is 'commoditized'.

So why complain? Go do something interesting. If the problems you are solving as your daily routine are themselves boring why would the process of solving them be any more fun?

Years of research and countless dollars have gone into making programming a systematic, controllable process. Old school managers frequently wish that a dev team would output like an assembly line.

Simultaneously, we're expected to be into the newest and coolest technologies.

Put these together, and we're supposed to systematically glue together everything that's new.

Seriously, it's not all bad, though. I like being able to generate a test UI in under an hour or a basic web spider in one python file. The art's just different. It's no longer packing your entire logic into beautiful lines of BASIC. It's now more about choosing the right platform to do something that the user will appreciate.

If you're into the pre-library styles, I recommend looking at alternative hardware. When I wrote my first CUDA project, I had to figure out how to get random numbers with sufficient randomness for our physics simulation. You might also look at robots or embedded devices. There's plenty to tinker with on devices that don't have libraries available.

It's a pity that so many friends of mine gave up programming after starting out with Java... But at least MIT didn't ditch Scheme for Java at the intro level. Better Python than Java any day :-)

I've been feeling like this lately. Every now and then there are some interesting problems to solve, but more often than not it's all just plumbing...

Every now and again? What?! If you can't find problems with today's internet technology, you aren't looking hard enough.

GCC?! Real programmers use punch cards.

More seriously, abstraction is not the issue. The issue is poorly written libraries and languages that require you to worry about the small details. Your brain can handle many small details or a few big ones, but not both. I recommend picking the latter, and using libraries and languages that allow you to do so.

Hint: Lisp is really pretty swell.

Lisp has libraries?

I think the core of the problem the OP is experiencing is that most APIs are really bad. By bad, I mean the API gets in your face and becomes more hindrance than help. I say, choose your APIs wisely: find what helps you and discard the rest.

I want to make things, not just glue things together. When people ask me what I like about my job, I always say the same thing: that it's the thrill of starting with nothing and making something.

I couldn't agree with this more.

Yep. I've been going and digging up old ACM and other programming contest problems just so that I don't forget how to actually code something from (near) scratch. I was disturbed to discover how rusty I was.

I started doing Project Euler problems for the same reason last year. I even sat up with friends one night working on a few problems, just to recapture that college feeling.

But I know algorithmically interesting problems are not likely to appear at work, and I've made my peace with that, along with the paycheck I get for writing enterprise systems.

Boo hoo, the wild west days of roll-your-own-everything are gone. Except they aren't; you just have to be good enough that companies working on the forefront of not-yet-standard hardware will hire you.

I weep for the software industry.

Our lives are going to suck in 10 years.

Computer Science is the 5th highest paying undergrad degree right now.

Also, my friend has a math and econ degree, never programmed, got a new job at a trading firm, guess what the first thing they want him to do is: learn C++.

Even if the industry goes in the shitter I guarantee programmers will be better off than 90% of the economy.

I'm pretty sure this complaint has existed as long as there have been programmers.

If you want to program as they did in 1985, go work for a game company. Or find an investment bank looking for C++ developers; they're out there.

Last I checked, BOA in ATL couldn't find enough super good math/finance/C++ programmers, for instance.

Of course to do that you have to live in ATL. And work for BOA.

Those were two excellent reasons that I had no interest when their recruiter came calling to ask if I was interested in applying to work there.

And write C++.

It's because the people that are good with math, finance, and programming don't want to use C++ (or live in Atlanta). Remove the C++, and it becomes much easier to find people. Look at how successful companies like Jane Street Capital are in recruiting, for example, even though the pool of people that know OCaml is much smaller than the pool of people that know C++.

(My experience with people that know math and finance is that they do all their work in Excel. That's because simple programs become 100-file boilerplate monsters in C++ and Java, and someone told them that Perl and Python are not real programming languages. Sigh.

OTOH, turning spreadsheets into production software is profitable and enjoyable. Let the programmers program and let the finance people do finance.)

>My experience with people that know math and finance is that they do all their work in Excel

We are not talking about the same sort of people. We're talking about the type of people who require huge supercomputers to run their algorithms.

I'm good at math and programming and all the stuff required to work as a quant and I'd much rather write C++ than anything else.

I know a bit about JSC. They are brilliant at recruiting and have some amazingly smart people.

So where would it be in 2010?

It's just one of those things

Javaschools, PHBs, and other failed attempts at the commoditization/regularization of the software engineer brought this about.

There are as many interesting problems today as there were in 1960 or 1985 or 1998. There will be demand for new languages (and new compilers for those languages). Operating systems may or may not be "solved" for now, but VMs are going to be of interest in the next 10 years.

The problem is that, outside of high-risk startups and possibly academia, it's very rare to get permission, resources, and support to attack the interesting problems. Blue-sky research is gone. This is because of the next-quarter mentality; there's a significant likelihood of failure if you're doing something interesting, and interesting projects cannot easily be done to deadline. And everyone wants to be on those projects. So a first-year programmer at an average company is not very likely to get to do an AI research project that may or may not lead to anything.

"Gluing" projects, on the other hand, can usually be completed in a predictable amount of time and have a high likelihood of achieving success for sufficiently mediocre definitions of "success".

> Javaschools, PHBs, and other failed attempts at the commoditization/regularization of the software engineer brought this about.

Personally I am glad the same thing happened to other industries like shoemakers and car builders. They have been commoditized: so what? It's good for people who buy their product. The same is true of typical software engineering work.

Although I disagree about the extent to which our work has been commoditized. I don't think we're anything like factory workers yet, although it may someday come to that.

The problem with the software industry is that it isn't productized.

Probably the majority of programmers are working for internal corporate IT departments cranking out CRUD apps because the company mistakenly believes it's a precious, unique snowflake that couldn't possibly make do with off-the-shelf software for routine business processes, even though they derive no competitive advantage from custom software. Most of these jobs simply shouldn't exist.

The larger problem is that too many people profit from the dysfunctions of the software industry. From the empire-building CIO to the outsourcer in India, there are plenty of people who benefit from shovelling bad programmers at bad software, and would lose out if companies bought cheap, standardized software from independent companies.

The assumption that cheap, standardized software is either cheap or standardized is a false one. Take the content-management space. For a complex site serving millions of viewers with 100k+ pages, off-the-shelf solutions can be quite expensive (when you account for the initial investment, the time-to-learn, the internal development work required to implement customizations) and often you need to customize the solution.

In the end, you end up with a custom solution layered on top of a specific implementation (so, no standardization even if you started from the vanilla off-the-shelf). Implementing a tool that's specific to the needs of the company can turn out to be a reasonable decision.

How many internal corporate apps are serving millions of viewers?

Depends on how you look at it. I work on an internal CMS that has <100 users but powers large websites serving millions. The app is internal but the product goes to the external-facing websites.

Cool stuff isn't operating systems and compilers, cool stuff is getting your computer to do your job for you.

Why don't you use the knowledge you gained of Prolog and Lisp and metaprogram the tasks you're given to get your work done even faster?

This talk of "CRUD is boring, libraries are boring" is almost ludicrous. There are 200 million jobs in the United States alone that can be automated by software. Try to automate those jobs - that's cool and lucrative stuff.

Well, I think systems and languages are pretty cool. Knowing how they work is good practice even if it only demystifies them for you.

Also, every program is either an interpreter or a compiler if you squint just right.
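The "interpreter or compiler" squint can be shown in a few lines. This is a toy sketch (the AST shape and function names are invented for the example): the same arithmetic expression is evaluated directly by an interpreter, and "compiled" by translating each node into a closure ahead of time.

```python
# Toy AST: tuples like ("add", left, right), ("num", 3), ("var", "x").

def interpret(ast, env):
    """Walk the tree and evaluate it on the spot (interpreter)."""
    op = ast[0]
    if op == "num":
        return ast[1]
    if op == "var":
        return env[ast[1]]
    if op == "add":
        return interpret(ast[1], env) + interpret(ast[2], env)
    if op == "mul":
        return interpret(ast[1], env) * interpret(ast[2], env)
    raise ValueError(f"unknown op {op!r}")

def compile_(ast):
    """Translate the tree once into nested closures (a tiny compiler)."""
    op = ast[0]
    if op == "num":
        value = ast[1]
        return lambda env: value
    if op == "var":
        name = ast[1]
        return lambda env: env[name]
    if op in ("add", "mul"):
        left, right = compile_(ast[1]), compile_(ast[2])
        if op == "add":
            return lambda env: left(env) + right(env)
        return lambda env: left(env) * right(env)
    raise ValueError(f"unknown op {op!r}")

# 1 + x * 3, both ways:
expr = ("add", ("num", 1), ("mul", ("var", "x"), ("num", 3)))
print(interpret(expr, {"x": 4}), compile_(expr)({"x": 4}))  # 13 13
```

The dividing line is only *when* the tree-walking work happens: per evaluation (interpreter) or once up front (compiler).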

Grabbing libraries isn't fun. When you first start out programming you really don't learn much that way and really can't do anything, and that seems to be where everything is headed.

If you are programming for work, sure, it's going to be easier to grab a library to do a certain task for you, even if you have to mangle your task a little bit to use it. But when you are learning, or even when you are programming for fun, that is dull and really, really not fun.

I think part of the problem is having too many options. When all you had was moveto and lineto, simple programs with simple graphics were enough.

Now such programs seem lame and pale in comparison to the state of the art, while the amount you need to know and learn to match that standard is daunting. There must be a cost in terms of programmers lost to the profession at an early age.
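For readers who never saw that era: the entire graphics "API" really could be just those two calls. A rough sketch (the `Pen` class is invented here; output is rendered as an SVG path so there's something to look at today):

```python
# A pen with only the two classic primitives: moveto and lineto.
class Pen:
    def __init__(self):
        self.ops = []

    def moveto(self, x, y):
        self.ops.append(f"M {x} {y}")   # lift pen, jump to (x, y)

    def lineto(self, x, y):
        self.ops.append(f"L {x} {y}")   # draw a line to (x, y)

    def path(self):
        return " ".join(self.ops)

# Draw a 100x100 square -- a complete "graphics program", circa 1985.
pen = Pen()
pen.moveto(0, 0)
for x, y in [(100, 0), (100, 100), (0, 100), (0, 0)]:
    pen.lineto(x, y)
print(f'<path d="{pen.path()}" fill="none" stroke="black"/>')
```

That's the whole surface area a beginner had to learn before they could make pictures appear.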

"For me, there is a sort of metallic flavour to most raw tuna;" - yes, it is called Mercury

If you can taste the mercury, you're in trouble.

That was the point! :)
