"Calling oneself a C Programmer is like saying you're a Hammer Carpenter" (archlinux.org)
180 points by fogus on Oct 25, 2010 | 200 comments



The "programming language as tool" metaphor is terrible.

I don't have a clue how pointer arithmetic works. I'm vaguely aware of it, but it just never comes up in my professional life. I'm absolutely fine with that, just as I'm fine with the idea that a cardiac surgeon may be vaguely aware of how to perform a kidney transplant but doesn't know the details.

Let's be realistic. The days of wizard programmers single-handedly designing an operating system are gone. We're all specialists now. An embedded systems programmer has absolutely no need to understand the intricacies of XML schema or the finer points of CSS parser bugs. Beyond a broad sketch, I have no real need to know the fine details of memory management.

I'm happy to call myself an ECMAScript programmer because that's what I do, that's what I have done for some time and that is what I expect to do in the future. I know other languages, though none well enough to produce work I would be happy to put my name to (apart from Scheme, but who's going to hire me to do that?). Like it or not, I'm not a computer scientist. I'm a working programmer and my view of the world is inevitably coloured by the raw material I work with every day. The way I think about code, the way I design problems is an imprint of the capabilities and limitations of my usual languages. I imagine a sufficiently literate computer scientist could smell it on my breath, or at least infer it from the way I might sketch out a diagram or pseudocode solution.


Every average programmer with a passing knowledge of computer architecture should be able to pick up pointer "arithmetic" in a couple of hours.

It's not rocket science, really; it has more to do with the "I don't know what it is and I'm absolutely fine with that" kind of attitude.

As an electronic-engineer-turned-embedded-C-programmer, I never had anything to do (and likely won't in the immediate future) with things like closures, prototypal inheritance or web frameworks, but at least I should be able to know what they are and the concepts they are based upon.

By willingly limiting your toolbox to one tool, you limit your possibilities and usefulness. Maybe you'll come across a problem that another tool solves better than the one you are used to, and you'll never know.


Speaking of pointers: a pointer is just an abstraction of an address, which is itself an abstraction of the bit pattern generated to access a memory location, which is an abstraction of the analog electrical voltages on those physical wires, which is an abstraction of ......


abstraction of... yeah, go on, I really want to know what's beneath that, if there is anything. I would be very happy if you could recommend a book that goes that deep (deeper than that, preferably!!)


By willingly limiting your toolbox to one tool, you limit your possibilities and usefulness. Maybe you'll come across a problem that another tool solves better than the one you are used to, and you'll never know.

Correct me if I am wrong, but any one of these "toolboxes" will take enormous effort to master fully. Someone may really want to do web development for his/her whole life, and there's nothing wrong with that. But to do that they will have to master many, many skills to even say "I know everything about web development".


Computers store information in parking lots. Each space has a number. Most cars fit in one space, but some RVs and buses take up two or four spaces. If two cars are parked next to each other, one might be in space 4, and the next in space 5. But if two RVs are parked next to each other, one takes up spaces 4 and 5, so the other takes spaces 6 and 7.

A stupid parking attendant might count cars by asking: "what car is in space 4?" "An RV." "What is in space 5?" "The same RV."

But a smart parking attendant would obviously just ask "what car is in space 4?" "An RV." "OK, so what car is in space 6?"

There, now you don't have a toolbox but at least it's a squeaky toy hammer.


When you want to upgrade to a solid rubber toy hammer, see:

http://baetzler.de/humor/paging_game.html


The core argument of phrakture is that the toolboxes are more complicated than the underlying mechanism they hide. If he's right, learning them is worse than useless.

Kurtz isn't talking about those. He's talking about basic knowledge about programming. Even if he only does C, he at least has an escape hatch when he needs more powerful thoughts than what C alone can give him.


This is one of my beefs with ORM -- in order to use the database layer properly, you need to expose at least as much complexity as the database itself provides. We use Sequel, which is fine as far as it goes, but it ships with a stupid DSL that is a) as complex as SQL and b) significantly less capable. You end up with shitass queries and many more database round trips and for what? Method call syntax?

Sorry. Rant.


Speed of development. I know perfectly well how to write SQL, but I would rather write "Post.where(:published => true).first.comments.count" and move on to the next thing.

If I need to later on I can go back and write a raw SQL query, but why do it unless I need to?


Complexity/simplicity isn't everything. If an abstraction layer is more complex than its substrate but is also more consistent, intuitive, bug-resilient, maintainable, etc. then it may be a net win.


Do you have any example of such an abstraction layer?


A CPU is a lot more complex than wiring up your own NAND gates, but it's easier to write programs for the former.


It's impossible to "know everything about web development". That's not the point; nobody is demanding you become supreme master of every programming discipline.

But by being afraid of dabbling outside your comfort zone (because mastering things fully is "hard"), you inherently restrict the solutions you can possibly even conceive of to an incredibly limited set.

It's a sort-of Dunning-Kruger: You don't even have the basic knowledge to evaluate how often a better solution than "throw more javascript at it" was available to you and you didn't know it.


Limiting yourself to a domain is one thing. It's like having narrow goals, which is fine. Limiting yourself to a set of programming languages is different. It's like using nothing but a given set of means, which is silly. So, you're a front-end web developer? Fine. You're a JavaScript programmer? Stupid.

Programming languages are tools, in the sense that they are mostly means to ends. But they are not ordinary tools, because they have a tremendous influence over the way you think. I reckon the sentence "PLs are tools" is improperly used most of the time, but not here.

As a side note, if you actually don't have a clue how pointer arithmetic works, check if you really understand plain ordinary variables. Pointers are just a second level of indirection away.


I'd say saying "I'm a C programmer" is more analogous to saying "I'm a helicopter pilot." You can probably fly a commercial jet or fighter, but they are very different aircraft, with very different use cases and operation.

And there will always be a market for helicopter pilots.


Training a pilot on a new aircraft is extremely expensive. Teaching a competent programmer a new language is dirt cheap in comparison (a week for the basics, 2 or 3 more for decent proficiency). So your analogy doesn't apply.

Plus, I bet many pilots would get a kick out of learning a new aircraft, but can't afford it.


Sometimes language specialization is a result of domain specialization. If you're an embedded programmer or a game engine programmer, there is a good chance that you will use very little in your job other than C or C++. People don't necessarily set out to become proficient in a particular technology, it just happens that the technology is the best tool in their domain.


Yeah, it is interesting how choosing metaphors changes the outcome. For example, change 'carpenter' to 'artist' and 'hammer carpenter' to 'painter', 'musician' or 'sculptor' and you have a different outcome. Or change carpenter to scientist. Or even change it to mathematician: a set theorist is really dealing with different things than a number theorist. These metaphors really have their limitations.


Metaphors are necessarily colored by our perceptions and preferences. For instance, I'd say a set theorist is more equivalent to, say, an area specialist like an embedded systems developer, whereas a "javascript programmer" is like a mathematician who insists he's "only an addition guy".


I think the claim in the title still stands, though. Carpenters never call themselves hammerists, but there are cabinetmakers. The inherent point is that there's some truth to saying that a true programmer's skillset is only informed by, but not demarcated by, the languages he or she is fluent in. It's the Sapir-Whorf hypothesis in another form!


I'm an embedded programmer, and I had to write an XML parser for work. I've also written operating system code, 3D graphics, whatever. And ECMAScript too. Having such a narrow view limits where you can go, and what you can do. I specialize in a few things, but there's no programming task I'm truly afraid of, given at least a little time to read some documentation. "Specialization is for Insects."

I know so many people who've just done basic CRUD business programming. Smart people, really smart people, but they limit themselves; don't try to do anything deep and interesting, and it's sad.


I know so many people who've just done basic CRUD business programming. Smart people, really smart people, but they limit themselves; don't try to do anything deep and interesting, and it's sad.

I don't know much about CRUD business programming, but I do know in every field there are many levels of difficulties. Maybe these smart people are trying to solve those problems? If a system has simple atomic parts, it does not mean that every problem will be easy.

Edit: "Specialization is for Insects." Wow, if you believe that, why do you live in a society? If you know every general thing, then you must be able to make your own meal, right? It will include farming, harvesting, cleaning of wheat, making your own bread (with equipment that you make on your own) and then slicing it and serving it. And that is just bread ;)


Re: "Specialization is for Insects"... Two extremes in one thread. :) There is a middle ground, where we have different levels of knowledge on a great multitude of things.

The other day, for example, I patched a hole in my own tire, rather than taking it to a shop. What if (a hypothetical) I, a self-proclaimed C programmer, needed a bugfix in a tool written in python? I can get my job done much quicker if I make a quick patch on my own, rather than wait for someone else to fix it for me.

Specialization is great; it is nice to have experts. Knowing many things is great; it is nice to have people who can do more than one thing effectively.

Just as a basic life survival thing, btw, you probably want to learn the basics of hunting, gathering, and farming. Some things are more important to know than others. :) Insects survive well as a species and much less well as individuals.


I do occasionally make my own bread, and I grew up on a farm. So? I'm not an expert in either, but I can dabble. Why keep yourself from learning something new?


An embedded systems programmer has absolutely no need to understand the intricacies of XML schema or the finer points of CSS parser bugs

They may not have a need, but they should be able to dig through all that because it's all just functions + data + dealing with some other idiot's code.


I have been witness to many JavaScript programmers who could have greatly benefited from good old fashioned pointer arithmetic.

Take the following code for example:

  /** BEGIN **/
  var f = new Array(3000000);
  console.time(1);
  for (var i = 0; i < f.length; i++) {
    if (f[i] === 'foobar' ||
            f[i] === 'foobaz' ||
            f[i] === 'baz' ||
            f[i] === 'barfoo') {
        // Do something
    }
  }
  console.timeEnd(1);
  /** END **/
On my comp, in Firefox, this executes in 7478 ms.

Now, if there is one thing that pointer arithmetic has taught me, it is to be especially aware that array index lookups are expensive, usually regardless of the language. With this knowledge, I am empowered to refactor my code to look like this:

  /** BEGIN **/
  var f = new Array(3000000);
  console.time(1);
  for (var i = 0, _i = f.length; i < _i; i++) {
    var x = f[i];
    if (x === 'foobar' ||
            x === 'foobaz' ||
            x === 'baz' ||
            x === 'barfoo') {
        // Do something
    }
  }
  console.timeEnd(1);
  /** END **/
On my comp, in Firefox, this runs in 6235 ms. Now, I have achieved almost a 20% performance increase due to my expanded world view.

This is a very simple example, but the list goes on. One certainly should not limit oneself to one language.


It's a good example of how coming from a different language can both help and inhibit a programmer's thinking. Knowing C certainly helps one understand that repetitive array size and index access is likely to be more expensive than using local variables. At the same time, a C programmer is more likely to adopt an imperative programming style, which is in this case more error-prone and verbose, and doesn't allow for code reuse.

More idiomatic JavaScript would look like this:

    Array.prototype.foreach = function (f) {
       for (var i = 0, _i = this.length; i < _i; i++) f(this[i])
    }


    /** BEGIN **/
    var f = new Array(3000000);
    console.time(1);
    f.foreach( function (el) {
           if (el === 'foobar' ||
               el === 'foobaz' ||
               el === 'baz' ||
               el === 'barfoo')  // Do something

    })
    console.timeEnd(1);
    /** END **/


I think the point is, you'd have no problem learning pointer arithmetic if you needed it. It's not hard.


No, the point is, he doesn't need it right now and, as he has projected his interests, won't in the future either.


No, the point is, he can never evaluate if he'll ever need to know it, now or in the future, as long as he doesn't know anything about it.

It's impossible to evaluate the utility and applicability of knowledge you don't have.


While this may be strictly and pedantically true, the fact remains that to declare ignorance of something, even complete ignorance, you must have minimal knowledge of it (even if that knowledge is merely existential). Paradoxically, this minimal knowledge is frequently enough to deduce potential utility and applicability (particularly with a descriptive name such as "pointer arithmetic").


Well, describing oneself as a working programmer (as opposed to a computer scientist) is a valid position, but why does it invalidate the "programming language as tool" metaphor?

A basic message of CS is that most general-purpose programming languages are to a large extent isomorphic in their expressive power, and that many computing problems can be solved efficiently in the mathematical language of sets, matrices, or abstract nodes sending each other abstract messages. The translation to a real language is then the easier step (though it still can be time-consuming and error-prone, for sure).

If only we all worked this way - there would be fewer language wars, and less terrible code resulting from "thinking in the language" rather than "thinking about the problem".


The reason is that the programming language, in this usage, is not analogous to a single tool (like a hammer).

People build all kinds of complete things with just "C" -- where by this term, I mean the compiler, standard library, build and debug system (make/gdb, say).

There are few things of interest you can build with just a hammer. The evidence for this is that no craftsman builds with just a hammer.

The better analogy to the concept of "C" above is "wood shop" (as opposed to, say, "metal shop", "polymer shop", "electronics shop"). You still get the point, which is that everything out of the wood shop to some degree "looks the same" but it's not so clearly leading the reader to the hoped-for conclusion ("the statement is stupid").


So you're saying that the "language is a tool" metaphor breaks down because C is "complete" in a way that a hammer isn't. True, but that's taking the metaphor too far; its original meaning is that the tool matters less than what you're building.


I was saying that the OP ("hammer carpenter") is taking the metaphor too far, which is why the phrase "hammer carpenter" sounds silly, and we seem to agree on this.

I have no problem with the conclusion that the tool matters less than what you're building.

I'd go farther and say that there is a certain tendency for C projects to "look the same" and that lispy projects "look the same" in a different way (e.g., surprisingly extensible, incorporating a language).


Yes, because that's all Rails (and the other frameworks) do. They obfuscate HTML and put a thin wrapper around SQL

You always have the option of dropping down into straight HTML. This is one of the things I love about Rails. Also, he forgot all the other things these frameworks handle, which constitute 95% of what they do.

Yes, there are people out there who couldn't write straight HTML if they had to, and have never heard of an inner join. We call them bad programmers. Guess what: they're everywhere.

EDIT: Obviously this only applies to people who work in web development, or other domains where SQL and HTML are important parts of their domain. If you do low level systems development, write device drivers, do kernel development, etc, you're not a bad programmer cause you don't know HTML.


Actually, at work I am currently engaged in a bit of a war to convince people that outputting HTML is not easy and in fact is quite challenging to get correct manually. Because people who think outputting HTML is as simple as printing out the tags write cross-site scripting attacks by the dozens. Arbitrary command injections and SQL injections follow just behind.

Outputting something that looks like HTML enough to please a browser is easy. Outputting correct HTML that isn't a CERT entry waiting to happen is actually very hard.

Though this still boils down to an argument that you ought to know what you're doing at the base level of what's going on, when I see people just slamming out HTML in string concatenations and variable insertions I generally consider that evidence that they only think they know what's going on, not that they actually do. If you aren't using some sort of safe+sane HTML generation wrapper you're suicidally betting on having superhuman levels of discipline if you expect to not write security holes.


While one can't count on superhuman levels of discipline, correctness (i.e. discipline) is still a trait to look for. Rapid development goes faster if programmers are not rapidly iterating over their logical/structural mistakes.


You'd call someone a bad programmer just because they can't write straight HTML? You'd call someone a bad programmer just because they've never dealt with databases?

Interesting.


C'mon HN! This is DH3 at best. Don't let your egos get the better of your words.

The obvious domain generalization is a much harder (but hardly impossible) statement to argue:

Yes, there are people out there who couldn't <use basic building block of their field> if they had to, and have never heard of <some atomic concept they implicitly use every day>. We call them bad programmers. Guess what: they're everywhere.

So the question becomes "if you only know how to function within some high level of abstraction are you effective at what you do?" I'd suggest that this holds pretty well if the level of abstraction you're using is too leaky. I don't have the slightest idea how to time an HD seek operation, but I can write to a file with a great deal of robustness. I think the op has a point though that if you operate mostly with RoR you might be ignorant to a non-trivial amount of detail which will relegate your work to being lower quality.

My personal belief is that "web frameworks" aren't a sufficiently compartmentalized level of abstraction. RoR holds the opposite philosophy (evidenced by marketing and the opaqueness of Active Record, for instance) which causes a great deal of impedance when you have to dive into lower level concepts which were supposed to have vanished via RoR. So I agree with the op in that if you can only create things using RoR abstractions, you'll probably be in trouble before too long.


It's certainly possible to be a good programmer but to not have picked up HTML, but come on - making a simple web page is easy. A competent programmer should be able to pick it up quickly. (Note the "simple", though.)


Maybe they never felt the need to do it?


Hence "pick up".


In the realm of web development (which the OP is in), most developers have probably done at least some HTML and SQL. Sure, there are folks who work at larger organizations where the SQL may be done by a different group than the web stuff, so maybe "bad programmer" is making too many assumptions. Maybe "programmer with very limited scope."


Lots of embedded programmers I know don't consider web development as "programming". Just sayin'.


Lots of people think the sun revolves around the Earth. Just sayin'.


It doesn't? This is news to me; do you have any more information for me? I am intrigued, as that would change the entire model of the universe I have right now.

</sarcasm>

I've seen some web development in the past that was absolutely horrid, and I hope that after that project not a single programmer has to EVER touch code like that again.


> I've seen some web development in the past that was absolutely horrid

What's your point?


That web developers don't necessarily fall in the realm of software development/engineering.

I should have finished that post, I just kinda left it hanging. Something else came into mind at the time and I mindlessly hit submit.


The web is just one kind of user interface. I think those guys more generally refer to applications which are mostly user interfaces to a database and do not involve too many algorithmic challenges.

As far as I am concerned, I like algorithmic work better, but most jobs out there are mostly UI- and database-related, so I have to deal with that.


And lots of programmers I know would call them closed-minded star-bellied Sneetches. Just sayin'.


Web development covers a lot more than writing markup.


That's funny, I don't consider twiddling bits of some gizmo programming.


I would have mercilessly flamed you for saying this a few years ago, but then I met someone who has made millions from writing his own BIOS for embedded systems since the early 80s. I love him, he is a good friend, his work runs industrial equipment that powers the world. But ..

Talking to him about the most primitive constructs in computing: say, variables, or calling conventions, or the simplest data structures, has been a teeth-pulling experience. One of the tricks he invented involved saving the registers for interruptible code to fixed locations in memory: he reasoned that saving X registers at Y clock cycles would meet his soft-realtime requirements. He also kept an index that stored the last instruction that was executing when the code was interrupted, so he could restore the "handler" later.

He "invented" this ~25 years ago.

Let that sink in.

My friend invented context-saving, all by his own, really, and there is no way in bloody hell to tell him that it EXISTED since before he was born.

I am mightily impressed by his work, all of it self-taught and wildly profitable. But it's just sad that I, a two-bit paper-hacker with nothing to his credit, can look at his Magnum Opus and have a name for every invention, not to mention research references, and suggest alternatives.

Embedded hackers are competent, but FUCK, sometimes they need to see past board specs and stupid timings. College freshmen can out-produce them, and those kids are running 100% simulated stuff in Java and Flash. Something has to be said for rigor, depth, and breadth of knowledge, but most importantly abstraction. Who cares if you can operate industrial electronic equipment, if some kid with PLT Scheme can create a cycle-accurate simulator of your equipment in two weeks, and out-codes you after that?


With those actual examples, it would be highly unlikely for them to be good programmers.


I know, personally, at least 3 exceptionally good programmers who can't write HTML and don't know what an inner-join is. They are gifted at system design and architecture, they write code that's clean, fast, understandable, and has very few bugs, and their code is delivered on time and to spec.

But they've never written a web page, never bothered to look behind web pages, and don't know the syntax. I'm sure they'd be great if they chose to learn the syntax, grammar and elements, but they've no need.

They certainly know about mathematical logic, and almost certainly know the concept of an inner-join, but they've never dealt with relational databases, and certainly not SQL.

I'm pretty sure you can't be a good web developer without being able to write at least a little straight HTML, and without knowing the differences in use and performance of the different types of join, but even then I'm not convinced.

Just speaking from my experience. It seems to me that sweeping generalisations such as "Can't write HTML => bad programmer" says more about the speaker than the subject.


And they're web developers? I don't think you can read the OPs message in any way other than that it applies to web developers even before his edit.


No, they're not, and I didn't intend to imply they were. My comment was originally intended to point out that there are other types of programming than web programming, and that sweeping statements about people being bad programmers are misplaced when they're not properly qualified and contextualized.

The "discussion" got out-of-hand, and it's too late to try to write something balanced and conciliatory.


> Just speaking from my experience. It seems to me that sweeping generalisations such as "Can't write HTML => bad programmer" says more about the speaker than the subject.

I will yield to your experience, and certainly do not suggest that it is impossible to be good without a grasp of the two technologies.

My instinct, however, is that not knowing HTML (which I define as understanding the structure and basics, not complete mastery of XHTML 2 tags or something of the kind) shows an alarming lack of curiosity especially given how dominant web-based technologies have been in the last 6-7 years or so.

Understanding databases and SQL (again, no mastery needed), on the other hand, seems like an exceptionally useful thing to have in your toolbox. How do you evaluate data storage options without having some working knowledge of databases?

They are also both relatively cross-cutting for the field of software development, not specialised niches.

Now, the guys and/or gals you know probably are great programmers: but I think a very small time investment in the two subjects would be quite beneficial.

Edited for grammar and added 2nd last paragraph.


I know how to do the <i> and </i> tags, does that count as knowing basic HTML?

Seriously, I think people might be showing a certain myopia in this thread, believing that the subfield that they're working on is all there is, or at least all that matters. If you want someone to write an iterative eigenvalue solver for a large sparse matrix in Fortran, I'm your guy... but my HTML is at the level of circa 1997 "Look at my awesome <blink>web page</blink>".


"My instinct, however, is that not knowing HTML (which I define as understanding the structure and basics, not complete mastery of XHTML 2 tags or something of the kind) shows an alarming lack of curiosity especially given how dominant web-based technologies have been in the last 6-7 years or so."

The only thing it shows is a lack of interest in HTML. A lot of people take a problem-first approach to career development and will only learn a new technology if it helps them solve a particular class of problems. For all you know a programmer not interested in HTML could be dabbling in graphics or compiler writing in his spare time - domains where HTML or web technologies are not really needed.


I spent the first 4 years of my career programming video games for the GBA and NDS. I knew basic HTML at that point, but I had barely ever worked with databases. I knew how to select something from a database but that's it (didn't know about joins, for example).


Well, not HTML, but I would maintain that any programmer who hasn't dealt with databases is either a newbie or a bad programmer. Any web programmer who can't write straight HTML is most definitely a bad one.


OK, so I'm either a newbie or a bad programmer. I've been programming since 1978, so I guess I'm a bad programmer.

Hmm. I write safety-critical software for embedded processes and distributed systems. Perhaps I should be concerned at your judgement.


I think you have misread me. And while my knee-jerk reaction was to meet snark with snark, I shall resist the temptation. If none of your software has ever read from or written to a source of persistent data, I think you should be concerned. :)


My comment wasn't intended to be snarky, but when someone questions my credentials, albeit inadvertently and through ignorance, I find it a little hard to be entirely objective.

I do read and write persistent data stores, I've just never had to use SQL or a relational database. I've just gone and looked it up. I do know the concepts of inner, outer, left and right joins, and I use similar constructions every day.

But this is counter-productive. I've made my point that there are areas of programming that don't use SQL or relational-databases or web technologies. Making sweeping statements about the competence of programmers based on their knowledge of technologies they don't use is blinkered.

Yes, programmers who work in web development most likely should know about databases and HTML. If they don't, then they are most likely either inexperienced or limited in their capabilities. It may yet be that the code they write is clear, clean, well-designed and bug-free, but databases and HTML are strongly correlated with productivity in this field.


My comment wasn't intended to be snarky, but when someone questions my credentials, albeit inadvertently and through ignorance, I find it a little hard to be entirely objective.

Huh. Well, you've been programming for longer than I've been alive, so you're almost certainly a better programmer than I am.


I suspect that largely we've been talking (arguing?) past each other. My point was/is that there are areas of programming that really, really don't need SQL or relational databases, and don't touch web interfaces. I've just been talking with a friend who does financial simulations as well, and he's heard of joins, but never used them. He's also never written HTML, but has got a clue about it.

There are also many programmers out there who claim 30 years of experiences, but who actually have 1 year of experience 30 times over.

At its heart I think we'd all agree that narrow definitions and narrow judgements don't do anyone much good. There are more programmers than just web programmers, or embedded programmers, or kernel programmers, or simulation programmers, etc.

We should all do each other the courtesy of recognising each others' skills and knowledge.


We have definitely been talking past each other. Mostly because we understand the term "database" differently (see my other reply to you). I am actually mostly in agreement with you, since I don't do that much web development these days either. And narrow judgements may not do anyone good, but I think precise definitions would have helped us in this case. :)


I've never done "database" stuff either, but I've "read from persistent data sources" (Fortran formatted read ftw). But if I tried to claim this counted as "database experience" on a job application I don't think many people would agree.


Plain old files are perfectly valid sources and sinks for persistent data, if you don't need all the other features of a database.


I interned this summer in a genetics lab, and was shocked the extent to which this is the case. Both the input and output files are absurdly huge, so I'm not sure anything else would've been viable, but still, at least as a student it always seemed like there should be "some other way."

After a week or two of getting to know the system, for clarification I asked: "You mean this entire sophisticated system is just some Perl, with a database that keeps track of the flat files?"


A lot of companies still do all the processing on text files as a nightly run (particularly those companies getting once-a-day data files, e.g. credit card rewards processors). Some put it in an actual database, but you can really churn through text files faster than a lot of relational databases can do their queries.

The most interesting example I saw of this was a company that had an automated process to add a header with time/date/source and check it into code control (subversion I think). The nightly batch job checked what it needed out and did all the processing including generating some control spreadsheets.


Plenty of developers work on embedded systems, libraries, GUIs, adapters, etc. that never need to touch a database. Not to mention many developers may work on applications where data is provided by middleware layers or data services rather than databases directly.

I think you're letting your experience of development cloud your view over what software development involves.


I think you're letting your experience of development cloud your view of what a database is. Almost every program imaginable requires one.


No, most embedded programmers will never deal with any form of database unless you stretch the definition of database so far as to include lookup tables. I am pretty sure arrays are not databases.


I don't need to stretch the definition for that to be included, since the definition of a database is an organised collection of data. Ephemeral or transient data is still data.


By that definition a JPEG file is a database.


But, for example, a dictionary in python, while containing data, is not usually referred to as "A Database". It doesn't have inner-join, outer-join, left or right join, select, etc. It's data, it's not what most people would call a database.


Ugh. Having horrible flashbacks here. :-D

I once worked for a company that had a version control system where they checked in all the data by project. You could go in and see the history of a project by reviewing the documents (emails, spreadsheets, CSV files, etc) or binary data files that were associated with the project. When checking something in, you were asked to classify the data being checked in. You had options like "Document", "Text file", "Spreadsheet", and one of the options was "Database". Much to the dismay of the programmers who had to deal with the data, users insisted on checking in spreadsheets and CSV files and classifying them as "Database". No form of rational discussion would persuade the users to classify the documents correctly (like I said, there were classifications that covered Spreadsheet/CSV directly). In their mind, a spreadsheet was a database, and it was just stupid that the programmers insisted that it was not. In the end, the programmers just gave up trying to persuade them, and we just grumbled to ourselves every time we came across it.


Whilst I would mostly agree with that, the general modern definition of database has grown out of the common form of the popular varieties.

I don't think that something has to have inner-join, select, etc. to be a database, it's just that the most common do. Similarly I don't think something has to have 4 wheels, an internal combustion engine and a sunroof to be car - it's just that most of them do.


It seems this has degraded into an argument about semantics. When I learned the term in school, it was not synonymous with RDBMSes. When I was trying to learn lisp from the book Practical Common Lisp, there was a chapter containing an example where one went about building a database that resided in memory for the duration of the program's run. At my current job, the app I am writing processes a collection of csv files which my colleagues and I refer to as a database. I don't think the term "database" should be conflated with "database management system", much less a "relational database management system". I mean, nowadays there are so many different types of databases out there other than those (loosely) based on relational algebra.


The comment that started this discussion was here:

http://news.ycombinator.com/item?id=1829481

in which it was said:

    > Yes, there are people out there who ... have never
    > heard of an inner join. We call them bad programmers.
You said:

    > ... the definition of a database is an organised
    > collection of data. Ephemeral or transient data is
    > still data.
That's not the point. Your sense of "a database" being "an organised collection of data" is actually more commonly (in my experience) referred to as "a data structure." Usually it's not "a database" until it gets some sort of query language or manipulation primitives.

Yes, people refer to a collection of data as "a database," but I return to my original objection to the original comment's claim that not knowing about inner joins makes you a bad programmer.

As I said earlier, we have, to some extent, been talking past each other.


No, that's a data structure.


Files are not databases.


Is the file-system?


I wouldn't say so. In my eyes, databases (or a DBMS, if you will) are sufficiently more complex and incorporate sufficiently more and/or different functionality for me to separate them from a filesystem.


There are loads of applications which have no need for databases. There are loads of application DOMAINS where you never THINK of using databases. Lots of embedded stuff, especially. Drivers. OS work. Any of those, you can easily carve out a career without a single database write.


Boy, C programmers really get riled up when you suggest that anything with a higher level of abstraction is good (heaven forbid you mention an interpreted language). I wonder if assembly programmers do the same thing when you bring up C.


You might more correctly say that Rails "abstracts" HTML, JavaScript (on multiple levels) and SQL (via an ORM).


What's with the downvotes? That's the purpose, it's to provide an abstraction layer, not obfuscate.


I can't write HTML off the top of my head.

Go ahead, judge my work.


That analogy only holds if you can build any object of a carpenter's skill with just a hammer, or just a screw driver, etc. C is a general purpose tool, compared to the task-oriented tools of carpentry.

There is a very strong difference in signal from someone telling me "I am an expert C programmer" compared to "I am an expert PHP programmer."

Leaving out the name of the programming language altogether removes salient content without adding anything else of value. If I want to make use of your programming skills, I'll still need to ask "Okay, what languages?"

So, saying that "Calling oneself a C Programmer is like saying you're a Hammer Carpenter" is like saying that when there is a choice between tool sets, the choice doesn't matter.


Yup, programming languages are (generally) Turing-complete. Hammers are not the equivalent in the woodworking space, good luck cutting that plank of wood into two equal pieces, lengthwise, with a hammer.

Saying you're a C programmer is more like saying you're a carpenter specialising in furniture, as opposed to say a carpenter that specialises in houses.


> cutting that plank of wood into two equal pieces, lengthwise, with a hammer.

Sounds like one of those "round manhole" interview questions.


I think the solution to that one is to use a hammer to make a saw out of metal and use it.


Build a forge. Construct a saw mold. Melt down the head of the hammer into the mold. Meanwhile, carve notches into the handle to measure the board. After the saw cools, find a rock to sharpen it with. Saw the wood in half.

This actually sounds like a process that's more fun than some work I've done in the past.


Saying you're a C programmer is more like saying you are a carpenter that only uses non-powered hand tools. You can accomplish any task and you take pride in being a master of your simple tools, but getting a job done might take you multiple times longer than the carpenter who shows up with a table saw and cordless drill.


This is an even worse analogy for so many reasons, not sure why it hasn't been voted down into oblivion. Must be all those people using that great new OS written in PHP.


I don't know many folks that call themselves "PHP programmers" or "Python devs". I know a lot of "web developers" that know a bunch of technologies, though. The few people I do know that are "$LANGUAGE developers" usually aren't that good.

As far as frameworks go: I use frameworks because I don't want to re-write the same shit over and over again. Yes, I can write an <a> tag, and I can do complex JOINs. Now I use Haml and ActiveRecord so I don't have to spend time hand-crafting every SQL statement I need.

I've never understood the whole "frameworks bad" ideology. Just because some folks don't learn anything outside of their framework doesn't mean all frameworks are bad. (See: US Supreme Court, Baby v. Bathwater)


I identify as a Rails developer out of expedience. It is what I've worked in for the last five years or so, and where most of my marketable proficiency lies. You could call it my specialization. I know PHP, C, Java, Perl, and others, but why mention them? I haven't used them for work in years.


Carpentry metaphors for programming continue to suck: news at 11.


Not doing C is like a carpenter not using a hammer?

Programming languages are like hammers in that they often end up in metaphors.

Hammers are like C in that they are bad metaphors for women.


Bravo.


I think what the author of the forum posts means is just that it sounds like saying "hammer carpenter" when you say C-progammer, i.e., he argues that it sounds strange to put the name of the tool you use in your job title. I don't think that he's trying to make the analogy "C is to a programmer like a hammer is to a carpenter", because that analogy is obviously flawed (as shown by other comments in this thread).


"CNC machinist", "forklift operator", "machine gunner", "truck driver", "helicopter pilot"--it seems like a lot of job titles say more about the tool used than the task accomplished, and in fact it would be less informative to replace these job titles with vaguer terms like "metalworker", "pallet mover", "soldier", "long-distance cargo transport specialist", or "long-distance aerial cargo transport specialist". And frankly, C is just as tricky to learn to use properly as at least half the tools mentioned here.


Having recently (and not a little inexpertly) bashed a shed together, my thumbs and I would contend that using a hammer isn't exactly a walk in the park, either.


Yes, but no one would ever use a hammer by itself. At a bare minimum you're going to need wood, nails, and probably a few saws. C isn't a hammer. Pointer arithmetic is a hammer, and there are other tools at your disposal in C.


People have said that it is flawed, but they haven't shown that it is flawed. Can you?


I'll try my hand at it. C is not like a hammer, and being a "C Programmer" is not like being a "Hammer Carpenter."

Programming languages are not tools, they're MEDIA. They're a notation for thought, so they are the material you work, not the tool you use. Therefore, if you must compare programming to carpentry:

Calling yourself a C Programmer is like calling yourself a carpenter.

Calling yourself a polyglot programmer is like calling yourself a construction contractor.

Calling yourself a Visual Studio programmer is like calling yourself a Hammer Carpenter.

I wouldn't use any of these expressions, I think that the examples I give suck less than comparing a language to a hammer, but they are still terrible metaphors and easy to dispute. Remember:

    Programming is an unnatural act.
http://www.cs.yale.edu/homes/perlis-alan/quotes.html


I could think of many salient differences between hammering nails and programming in C, but the simplest one is:

It's impossible to do any kind of real carpentry job to completion using only a hammer. Thus, a 'hammer carpenter' is derogatory because you would not expect them to produce anything useful. C developers produce, and continue to produce, complete, useful software.

There are also things like "A hammer carpenter doesn't have to spend a significant amount of career development time keeping up with the state of the art in nail compilers or wood instruction sets." C programming is a specialisation in a complex field, and such specialisations often deserve their own title.

Calling someone a neurosurgeon is like calling them a hammer carpenter. Everyone should be a GP.

A previous commenter mentioned 'helicopter pilot'. You wouldn't see a topic saying "Calling someone a helicopter pilot is like calling them a hammer carpenter." It's likely that a helicopter pilot has experience flying fixed-wing planes, but their _value_ comes from specialising in a particular craft, so they're likely better at it, and that's what they advertise themselves as.


Well, I think the objection is not that C is a tool for programmers like a hammer is a tool for carpenters, but more because a hammer is an indispensable tool for carpenters, but you can do a whole lot of programming without writing in C.

If I were to attempt another tortured analogy I'd say C to a programmer is more like a router to a carpenter. You can do a whole lot of carpentry without using a router, but for certain tasks you absolutely need one. Not to mention the blade is sticking out the bottom and totally scary, ready to bite your hand off at the slightest provocation, and the slightest deviation from true carves a big dent in what you were trying to do. Using a router requires a steady hand and some thinking beforehand, but if you know what you are doing you can make short work of many tasks.


"If I were to attempt another tortured analogy I'd say C to a programmer is more like a router to a carpenter."

Or like the Hole Hawg?

http://www.team.net/mjb/hawg.html


Meh, I hate analogies with a passion, but here's my useless rewrite:

Calling yourself an Emacs Programmer is like saying you're a Hammer Carpenter.

Calling yourself a C programmer is like saying you're a framer. Calling yourself a Ruby developer is like saying you're a brick layer.

Both can be used to build a house and it's usually better to work with what you know.


So I'm a newbie python/django person that has been using them both for about 6 months now. I've had multiple rants directed my way by web developers telling me that I was making a big mistake by using a framework - for all the sorts of reasons listed in this rant.

So apparently I'm ruining the universe by employing tools I don't fully understand. I must admit this particular complaint I find a bit perplexing. I'll drop down a level of abstraction if I find the universe demands it of me. Otherwise - I really don't care about what's going on in bowels beneath me. Maybe Gandalf is getting it on with a Balrog for all I know... but for my needs, it just doesn't matter.

I mean - I'll learn assembler if I come up against a use case that demands it of me. But that's just not likely to happen for the silly little things I'm working on.

I also happen to believe that it's a waste of time to teach kids the algorithm that you work through on paper for long division. Give the tykes a calculator and let society progress at the faster pace that technology allows. I mean - claiming that you need to use an abacus to do basic sums because that's how we old folk did it, is just ridiculous. All algorithms are just effective procedures which depend on a certain level of technological sophistication to be employed. The introduction of Arabic numerals meant that we didn't have to use abacuses any more. Hooray!

Besides that - I actually know people whose applications I have aped for learning purposes - and I notice that I wrote them in about a tenth of the time, with less than a tenth of the experience, and they also seem to run at ten times the speed with similar functionality. I hear yarns about how they were doing dumb things like not setting primary keys, or making loads of unnecessary calls on the database etc... and I think about how Django really helps me to steer clear of many of the basic mistakes. I learn about these as I go because I continue to read the awesome documentation as well as digging into the guts of how Django works. Through this I get a lesson on how at least one group of professionals think web development should be done.

I personally feel much better placed going forward than all the folks who've had to roll their own over the past 10 years. When I feel more comfortable with Django, probably the next thing I'll do is learn another framework - so as to get another perspective from the professionals.

I think it's a great way to learn - personally... and folks who think otherwise I'm going to keep ignoring.


You don't think it's important for people to learn long division? Other than being one step away from total technological reliance, these kinds of exercises are a way of teaching the relationships between numbers, which is INCREDIBLY important.


Bah. Long division is a rather unenlightening algorithm that doesn't teach most people anything at all about how numbers interact.

That's assuming they even remember how the algorithm works. Which they don't, as a rule.

ROI-wise, there are far better places to spend time learning the relationships between numbers than long division.


I think it's relative to the use to which you intend to put that knowledge.

There are a zillion different algorithms that could be applied to do long division. I could, for instance, apply an algorithm that works for first-order classical logic extended with the Peano axioms and a successor function. Certainly someone who knows THAT algorithm understands something more about division than folk who just use paper and pen with Arabic numerals. But do most people need to see the relationships between the two algorithms to do all the stuff in life that they need to? Clearly not.

So to be convinced of your position I'd need to see why the particular knowledge of the particular algorithm you think is so important is really necessary to the work that most people go on to do with long division.

I'm not saying the issue is completely cut and dried - but I personally can't see what your argument might be.

- edit - I might just add... that it's ALL technological reliance to some degree. I mean - what if they don't have pencil and paper? What if they don't have arabic numerals? These both were important technological advances that made long division as we know it possible. It simply couldn't be done with an abacus in any way that wasn't horribly time consuming. Technological reliance doesn't hold as an argument against employing new and faster algorithms except where it may still be difficult for many to get access to that tech. In the case of calculators in the western world - that's clearly not an issue.


> Other than being one step away from total technological reliance

I rely on technology to cook my food for me. Personally, I think that should be a little more alarming than relying on a calculator. My point being: embrace technology, don't fear it.

> these kinds of exercises are way of teaching the relationships between numbers, which is INCREDIBLY important.

One could learn about relationships between numbers without knowing long division.

Honestly, the only time I do non-trivial division by hand is when I'm doing napkin math to entertain myself. If I didn't know how, I'd just rely on my phone. It's fairly unlikely that I'm going to be in a situation where I need to do long division and don't have access to a computer/phone/calculator.

Now, I'm not sure that I agree with not teaching long division but, at the same time, I'm not convinced that it's terrifically important.


How does long division help at explaining the relationships between numbers? All it was for me was a mechanical set of steps to arrive at the answer.


My inclination is to believe that knowing the basics of writing code with C is slightly more difficult than figuring out the basics of hammer usage.

Also, the range of things you can do with C likely exceeds what you generally utilize a given hammer for.

Programming languages are not like hand tools.


I'm going to go off at a tangent here and rant a bit.

Firstly, "C" is not a difficult language. When you compare it to the difficulty of say elementary calculus, you will notice that the concepts are simple.

Any person who studied computer science and do not know how a pointer works (or how to represent an access an array with a pointer) is grossly incompetent. C does not hide the basic programming concepts behind a nice IDE which you double click to type your portion of code.

I recently had another problem (with one person who was busy with a Master's degree at a fairly prestigious technology university). The project involved video updates. I wrote a demonstration that performed the computation in a “while(1)” loop (i.e. fetch frame, do processing, display frame). The person was involved in writing a simple GUI program.

I tried to explain to him that he cannot use a “while (1)” loop in the event handler of the button that should start the processing. Even drew nice graphs on a board explaining all about the thread of execution and how event handling usually works (e.g. a loop that handles events and calls functions).

After 30 minutes of explanation the person came up with a very bright idea. He said that “Maybe we should set a sleep(1000) command in the loop to give the program time to react”. WTF?


It's not so much the C language that is difficult; it's just that it tends to be more difficult to construct large programs with it, because it is a lower-level abstraction than other languages. This gives more flexibility in memory management, etc., but also requires you to actually do it. (C is to higher-level languages as ASM is to C.)

I, for one, enjoy languages (and even frameworks) with sensible defaults, so the normal case is handled for you, but that let you override that default behavior or even drop down into a lower level language when needed.
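As an illustration of that abstraction gap (a sketch of mine, with made-up names): a growable buffer that a higher-level language gives you for free means allocating, growing, and freeing storage yourself in C:

```c
#include <stdlib.h>

/* A minimal growable int buffer: in a higher-level language this is just
 * list.append(); in C you manage the storage yourself. */
typedef struct {
    int *data;
    size_t len, cap;
} IntVec;

int intvec_push(IntVec *v, int x) {
    if (v->len == v->cap) {
        size_t new_cap = v->cap ? v->cap * 2 : 8;
        int *p = realloc(v->data, new_cap * sizeof(int));
        if (!p) return -1;          /* the caller must handle allocation failure */
        v->data = p;
        v->cap = new_cap;
    }
    v->data[v->len++] = x;
    return 0;
}

void intvec_free(IntVec *v) {
    free(v->data);                  /* forget this and you leak memory */
    v->data = NULL;
    v->len = v->cap = 0;
}
```

None of this is conceptually hard, but it's the part a GC'd language does silently on your behalf.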


That is not the biggest problem. My problem is that people do not understand basic things (which higher-level programming languages hide). One thing is pointers (e.g. how a matrix is represented in a language) and another is the flow of execution.

I understand that the utility of C is low when we are talking about web-applications. But the fact is that the concepts should be known. Everyone who writes event-based programs should at least have a rudimentary idea of how it works.


Any person who studied computer science and do not know how a pointer works (or how to represent an access an array with a pointer) is grossly incompetent.

Maybe so, but such folks exist in droves. C isn't hard, but it looks simpler than it is, and this clever disguise catches a lot of people off guard. And even so, there seems to be a practical difference between understanding C and having the discipline to use it correctly in the midst of a real project.


> Firstly, "C" is not a difficult language.

Compared to other programming languages - most of which are significantly more complex than hammers. My 2 and a half year old daughter knows what a hammer is and has an idea of how to use it. I'm proud that she can point and say "C for cat!", but that's about the extent of her knowledge of the C programming language.

Also, yes, C is a fairly simple language, but that doesn't mean that it's necessarily easy to use compared to other languages, that do nice things like GC.


I don't understand... don't you get a "ButtonClickedEvent" when the button is clicked? Why would he think he needs a while(1) loop in the button? Am I missing something?

UPDATE: After rereading it... you guys are actually implementing the button. Although surely you're not implementing the whole mouse stack... so I'm still confused :-)


I think the guy copied the video processing loop into the event handler.

So the event handler goes into a while(1) loop instead of starting a new thread/process to do the video processing.

That will lock the GUI and make the app unusable.
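A hedged sketch of the fix being described (all function names here are made-up stand-ins, not the actual project's code): the handler spawns a worker thread for the loop and returns immediately, so the event loop keeps dispatching:

```c
#include <pthread.h>
#include <stdbool.h>

/* Stub frame pipeline -- stand-ins for whatever video library was in use. */
static int frames_done = 0;
static bool fetch_frame(char *frame)   { (void)frame; return frames_done < 10; }
static void process_frame(char *frame) { (void)frame; frames_done++; }
static void display_frame(char *frame) { (void)frame; /* real toolkits usually
                                           require marshalling draws back to
                                           the UI thread */ }

static void *video_loop(void *arg) {
    (void)arg;
    char frame[64];
    while (fetch_frame(frame)) {   /* the while(1) loop lives here now */
        process_frame(frame);
        display_frame(frame);
    }
    return NULL;
}

/* WRONG: running video_loop() directly in the handler blocks the event
 * loop and freezes the GUI. RIGHT: hand it to a worker and return at once. */
pthread_t on_start_button_clicked(void) {
    pthread_t worker;
    pthread_create(&worker, NULL, video_loop, NULL);
    return worker;   /* a real handler would detach or track this */
}
```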


OK, so from looking at the code, only DrawNow needs to be on the UI thread (unless DrawNow is actually not drawing to the screen). That makes sense.

One thing to note though, the grad student may not be familiar with a model where UI is on a specific thread. I know some of the past systems I used had the rental model (even WPF had it early on), so maybe, giving him the benefit of the doubt... this is what he was thinking about (and assumed you had acquired all the correct locks in your code)?


Here is a very bad pseudo-code demo illustrating the problem and the proposed solution:

http://pastie.org/1247590

I'm beginning to suspect that the willful incompetence on their part is maybe just a scheme to get others to do the work. I expected a bit more professionalism from people who are about to graduate with master's degrees.


Programming languages are not like hand tools.

Particularly when you consider how much different languages' capabilities overlap.


The real danger isn't calling yourself a C programmer, it is calling yourself a programmer. That has far more dangerous career consequences, particularly for freelancers.


You need to explain that for it to make any sense.


If you call yourself a programmer, sell yourself to management as a programmer, and go about getting jobs by applying for the ones that say programmer in the title, you are boxing yourself in to a career role as a lumpen resource (literally, they will call you a "resource") and cost center, which the MBAs are going to do their level best to either eliminate or replace with a substitutable resource at a quarter of the price. That resource might be a younger resource, or it might be a resource with an accent, but either way they'll check your department's resource box.

Instead, you want -- particularly as a consultant -- to be the guy giving measurable, predictable, huge impacts either to costs or, ideally, to driving revenue. If you cannot quantify your worth to the company, it will be quantified for you, and the default guess is going to be -1.5 * $YOUR_SALARY. If you can quantify the value of your work for the company -- by being the guy who makes them money using his bag of magic tricks that management frankly does not understand and does not give a shit about -- your market value will be far, far higher, whether you choose to take it in terms of dollars, flexibility, working conditions, or what have you.


If you accept a job that requires such mindless bullshit, then you're going to get constant mindless bullshit in return, no matter what kind of pretense you cook up. The only winning move is not to play.


Gotta disagree on this one. This is all about the higher-ups' opinion of you. In most businesses technology is not a competitive advantage, and the brass doesn't understand how it could be; they just use industry-standard templates for applying technology. If you come in as someone whose job it is purely to implement such a template then you are simply a replaceable cog, no different than the 20 bookkeepers that the new accounting system replaced.

On the other hand, if you come in able to really talk about the business and offer solutions at the level of upper management, then your value will be much more obvious to the people who matter.


If one happens to stumble upon such management, how should one introduce himself? Senior Software Engineer, Senior Software Architect, or something else?


A friend of mine said it best. If you see something really interesting (concerning software) and you have the immediate desire to share it with someone else but there is no one around who will understand it, it is time to add Architect to your resume.


c.f. PG: "To get rich you need to get yourself in a situation with two things, measurement and leverage."


Wait, it's not dangerous!

You just have to add that: you are a programmer that is able to handle a project from requirements gathering to deployment in an agile fashion, provide help with SEO, copy-writing and scalability issues :)


What do you call yourself?


He forgets what the main purpose of frameworks is: doing all the boring repetitive bits, so I can get down to writing the new and interesting parts of my web app. For example, doing form validation and displaying the errors back again is really common, and annoying. Any good framework makes it painless.

And what about running on multiple databases? What happens if I need to be able to support MySQL and Postgres? Really hard without a framework.


Exactly. Pain in the ass does not imply hard to understand.


I like Django as much as the next guy, but the framework-obsession that has developed over the past few years does indeed tick me off.

There are certain instances (i.e., full-blown web applications) that benefit a lot from employing abstraction tools such as RoR or Django. Equally, whenever there's a lot of variation in the underlying technology, give me a framework! jQuery is a wonderful example for this.

However, there's a time and place for such frameworks, and that is not "always, everywhere". Really simple CRUD stuff doesn't require Rails. CSS grid frameworks are a ridiculously inefficient mess. (Aye; even in prototyping.) Most template languages are silly and unnecessary. Et cetera.

Sure, it's a rant. But the man has a point. Web frameworks are enabling a whole generation who can't even use a scripting language without relying on plenty of magic. A modicum of abstraction, when sensible? Bring it on. Relentless black box programming? That has its limits.


"Web frameworks are enabling a whole generation who can't even use a scripting language without relying on plenty of magic."

I don't really think people program like this. When I tried using rails 2 years ago, all the magic it was doing behind the scenes actually made it impossible for me to use it, as I had no idea what magic was there and how it related to the underlying system. After some time of writing web apps "by hand", then trying out sinatra and padrino, I just recently looked at Rails again and suddenly understood how to use it.

What I want to say is that I don't think that the magic Rails does leads to stupid programmers. It will only help those that actually know what they want to do and make the repetitive parts easier. Honestly, I don't want to write a login system with hashing etc. ever again. I don't want to think about how to save old versions of the entries in the database. I don't want to write my own form helpers. I am happy that Rails does this for me and I can spend time thinking about the interesting parts of my program.


I would argue the opposite -- that really simple CRUD stuff really does need a framework. Why waste your time doing mind-numbing work when you can let the framework do it for you?

On the other hand, with full-blown web applications that do something out of the ordinary (long-running requests, etc) it might be better to drop down a level of abstraction.


That entire comment is very biased. Sure, the "pure" way might be amazing and fantastic, but if I used raw SQL queries for my projects, I'd be in a lot of trouble when I needed to do something extra when saving an object.

Sure, I could use stored procedures, but then it's database-specific. What happens when I need to move from MySQL to postgres, or from AppEngine to my own server?

The poster just ignores a million things frameworks are good for. Sure, if your project is large enough you'll eventually grow out of them and need to do more complicated stuff than what they make simple, but for 90% of projects they are a godsend.


Reading forums where PHP Programmers hang out and ask about other languages is like hitting yourself on the head with a Carpenter's Hammer.


"Non-transferable API knowledge" sometimes saves a lot of time you'd otherwise spend debugging your newly invented low-level hexagon-shaped wheel.


Agreed. But the big problem with RoR that I've encountered is that the whole has_and_belongs_to_many mess feels like an idea that got out of hand. You can imagine that with a very simple schema writing the associations would be easy enough, but you end up with two problems:

1. You have to keep the schema and the associations in sync. I simply do not understand why in RoR the associations are not derived from the schema using foreign key relationships.

2. You end up knowing how to do something simply in SQL and then having to translate that on to the Rails way. This Rails way seems like a useless translation of a perfectly workable underlying system.

Put 1 and 2 together and I end up with the frustration of "Oh, I have to keep the schema in sync with all this association stuff so that I can use Rails methods to access the schema". It feels circular and I'd be much happier with reverse-engineered methods.

Also, I find the Rails magic functions where you can just make up a function name find_by_nozzle_color() to be infuriating. I'd be really happy if the reverse engineering component spat out something like a header file containing sensible methods I can call.

Perhaps I lack imagination. Or experience. Or both.

Also, while I'm moaning. I wish Rails had proper support for database views. It's as if no one has any real experience with databases. Doing that would eliminate a lot of my troubles because I could keep a view (even an updateable view) in the schema and have a handy object accessor for it.


1) I don't think I've actually used has_and_belongs_to_many since 2008. Generally, most people seem to use has_many :through, which is somewhat clearer and allows more complex join models.

2) All that has_many :through does is simplify the syntax of accessing objects through many-to-many join tables. If you know how to use many-to-many joins, has_many :through is just a simpler syntax that enables you to explicitly mention the join table only once, and have it used implicitly thereafter. Nothing all that complicated about that, and there's nothing that needs to be "kept in sync" any more than there would be if you were doing pure SQL.
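As a sketch of what that looks like (the model names here are the usual textbook example, not anything from this thread, and this obviously assumes Rails/ActiveRecord is loaded):

```ruby
# Hypothetical models illustrating has_many :through -- the join table
# (appointments) is named exactly once per side, then used implicitly.
class Physician < ActiveRecord::Base
  has_many :appointments
  has_many :patients, :through => :appointments
end

class Appointment < ActiveRecord::Base
  belongs_to :physician
  belongs_to :patient
end

class Patient < ActiveRecord::Base
  has_many :appointments
  has_many :physicians, :through => :appointments
end

# Afterwards the join is implicit: physician.patients, patient.physicians
```

The join model (Appointment) is a real model, so unlike has_and_belongs_to_many you can hang extra columns and validations off it.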

3) find_by_* methods are hardly complicated. Have a read through the source of ActiveRecord::Base's method_missing and you should get the idea of how they work pretty quickly. Give yourself half an hour, because it is pretty tight code that does a lot and does it efficiently, but it's not difficult to follow if you know ruby:

          if match = DynamicFinderMatch.match(method_id)
            attribute_names = match.attribute_names
            super unless all_attributes_exists?(attribute_names)
            if match.finder?
              options = arguments.extract_options!
              relation = options.any? ? construct_finder_arel(options, current_scoped_methods) : scoped
              relation.send :find_by_attributes, match, attribute_names, *arguments
            elsif match.instantiator?
              scoped.send :find_or_instantiator_by_attributes, match, attribute_names, *arguments, &block
            end
          end
From: http://github.com/rails/rails/blob/master/activerecord/lib/a...
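The core trick is plain Ruby. Here's a toy illustration (not the actual Rails code -- the model and records below are made up) of how find_by_* finders can be dispatched through method_missing:

```ruby
# Toy sketch of dynamic finders: unknown class methods matching find_by_<attr>
# are intercepted and turned into a lookup, here against an in-memory array.
class TinyModel
  RECORDS = [
    { :name => "alice", :color => "red"  },
    { :name => "bob",   :color => "blue" },
  ]

  def self.method_missing(method_id, *args)
    if method_id.to_s =~ /\Afind_by_(\w+)\z/
      attribute = $1.to_sym
      RECORDS.find { |r| r[attribute] == args.first }
    else
      super
    end
  end

  def self.respond_to_missing?(method_id, include_private = false)
    method_id.to_s =~ /\Afind_by_\w+\z/ ? true : super
  end
end

TinyModel.find_by_color("blue")   # => the "bob" record
```

Rails adds attribute validation, arel relation construction, and find_or_create variants on top, but the dispatch mechanism is this simple.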


In all these frameworks, there's probably one guy who learnt a bit of SQL and thought "this is too complicated for anyone but a programming god like me" and built a Rube Goldberg abstraction layer on top of it. There's just no need for an "ORM layer" if you understand that a table is NOT a class and a row is NOT an object; you can't fake it except by jumping through a lot of entirely unnecessary hoops, and what do you get for that? An app that is 1% more "object oriented" than it would have been if you'd just written it directly.


That's a good point about the mapping of a row to an object. I've always found that whole "table is class" but "row is object of same class" thing weird. It's like the people who created these things don't understand objects either. Would it have killed people to have a row class and a table class (perhaps with iterators)?
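A minimal sketch of that suggestion (names and API invented here for illustration, no real database behind it): a Table holds Rows and iterates over them, instead of each table becoming its own class.

```ruby
# Hypothetical Row/Table classes: the table is an enumerable container of
# rows, rather than "table is a class, row is an instance of that class".
class Row
  def initialize(values)
    @values = values
  end

  def [](column)
    @values[column]
  end
end

class Table
  include Enumerable

  def initialize(name, rows)
    @name = name
    @rows = rows.map { |r| Row.new(r) }
  end

  def each(&block)   # Enumerable gives us map, select, etc. for free
    @rows.each(&block)
  end
end

users = Table.new("users", [{ :id => 1, :name => "ann" }, { :id => 2, :name => "bo" }])
users.select { |row| row[:id] > 1 }.size   # => 1
```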


Rails takes a very clear and opinionated view that the database is just a dumb storage layer for the application code, which sits in the application layer.

If you have very strong feelings against this view, I suggest you stay away from Rails. If you can adjust to it, though, Rails is very powerful, flexible, and efficient.


This isn't really true, particularly with Rails 3. If you don't want to use the ActiveRecord database pattern, don't use the default ActiveRecord ORM that ships with Rails. Drop in DataMapper instead, and use whatever pattern you wish.

Rails != ActiveRecord these days.


> 1. You have to keep the schema and the associations in sync. I simply do not understand why in RoR the associations are not derived from the schema using foreign key relationships.

...

> I wish Rails had proper support for database views. It's as if no one has any real experience with databases.

Recall that Rails was created, originally, for use with MySQL, which is the sort of environment where concepts like foreign keys were, perhaps, a bit foreign to some users.

I think your complaints are valid ones, but on the whole, I think Rails has been very good for web applications, because you can do more with less code, and yet keep it fairly clean. Once you've got things set up, it becomes pleasantly simple to interact with the database in straight up Ruby, while still being possible to do more complex things should the need arise.


All of this "Rails makes you do X; or, well, you can do Y" is conflating Rails as a framework with Rails' ActiveRecord library, which is the default ORM. Since Rails 3, you can easily, and effectively transparently, use DataMapper instead, which is a general ORM that supports whatever pattern you wish, not simply the ActiveRecord pattern.

Mind you, prior to Rails 3 this was a heinous world of pain, so it's understandable to overlook it.


I hate those auto-generated find methods as well. So I don't use them. I don't understand why this negatively impacts your experience.

+1 on the lack of support for views. You can kludge things together, but it's not pleasant.

You're making the associations sound much more complex than they are. Once an association relationship becomes too complex, you always have the option of declaring simpler associations and stitching together the complexity manually. Maybe I'm completely wrong, but in most places I do feel that Rails degrades reasonably, so that you always have the option of doing something the more verbose/manual way. In something like ASP.NET, I find that it either just works, or you're up the creek.


The main reason they used those find_by / find_all_by methods is that there's no concise way to express a one-field finder.

Compare

    DM: Article.first(:author => person)
    AR: Article.first(:conditions => {:author_id => person.id})
    AR shortcut: Article.find_by_author_id(person.id)
    AREL: Article.where(:author => person).first # not sure about that


This is one thing that surprises me about Rails 3:

AREL: Article.where(:author => person).first

Why the sudden departure?

Arel looks similar to SQLAlchemy, Hibernate Query Language, and Sequel. And far from everything is an object in ActiveRecord.

And why reinvent the wheel?

http://sequel.rubyforge.org/ has existed for a while now.


It's been a long debate, and yes, it's almost like Sequel. Actually, ARel positions itself in the same niche as Sequel, i.e. a low-level toolkit to build frameworks on.

In my opinion, Sequel is an absolute kick-ass sort of thing; it's just ultra-cool if you have the time and the will to hack.

Further reading:

http://sequel.heroku.com/2010/02/06/arel-sequel-differences-...

http://sequel.heroku.com/2010/02/10/arel-sequel-differences-...

http://sequel.heroku.com/2010/02/23/the-benefits-without-the...

http://magicscalingsprinkles.wordpress.com/2010/01/28/why-i-... -- note the comments about Sequel


I agree with this point for sure. Frameworks are great if you are going to leverage a lot of the libraries/modules, but often in practice you never do. So then you end up taking on the issues of using a large codebase: when you find a bug, it's very hard to trace through the code to figure out what's going on.

You are investing in learning a specific library/framework rather than the low-level details, which are on average what you'll need more, because in practice you don't get to use the same framework all the time; if you switch companies or projects, they tend to have entirely different frameworks or even languages on the backend.

I like how Marcin Wichary thinks in this ajaxian.com article: http://ajaxian.com/archives/web-ninja-interview-marcin-wicha...

He basically builds all his libraries from scratch for each project which he can do quickly because he knows what he's doing and can of course leverage his other code for reference. Each iteration simplifies what turns out to be bloated and creates a new code base more matched to the problem he's trying to solve.


Actually, I would have characterized this more as:

"Calling oneself a C programmer is like saying you're a finish carpenter."

As in, it means you normally work in a particular set of circumstances, which is all we're aiming to say when we denote the language we work in (because language is often closely allied with the type of project). Does it define you as a person, or even state that you don't work in other types of projects, no, it doesn't.

Can we move on from this now?


Reducing this analogy, you obtain the simplest form of "C is like a Hammer". I really want that hammer. With it I could as easily manipulate the molecules in the wood I'm hammering as I could secure the roof to the frame of a house.

Buffer overflows would be a problem of course, at least for large buildings with lots of traffic-- after the 4,294,967,296th person the whole thing would just collapse.
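The number in the joke is no accident -- it's 2^32, where an unsigned 32-bit counter wraps around. A one-line sketch (the counter width is the only assumption here):

```ruby
# A visitor counter kept in 32 bits wraps back to zero at exactly 2**32.
MASK = 0xFFFFFFFF                 # 2**32 - 1, the largest 32-bit value

counter = MASK                    # 4,294,967,295 people have entered so far
counter = (counter + 1) & MASK    # one more person...

counter                           # => 0: the building "collapses"
```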


I think "machinist" is a better analogy than carpenter for programming. A C programmer is like being a lathe operator, a ruby programmer like a milling machine operator, and so on. And the various frameworks and APIs are like jigs and fixtures, they let you turn out some work more quickly and accurately, at the cost of overall flexibility.


You're implying that C is akin to a lathe and Ruby is akin to a milling machine. AFAIK, lathes do come with an optional milling attachment, while milling machines don't have a lathe attachment. Thus C > Ruby? ;) After a certain point all analogies are flawed.


The Arch Overlord has a point but he has chosen to make a stand on an issue that is shades of Grey and not Black or White. At the end of the day abstraction can be very useful, even if you don't understand the underlying technology, depending on the problem being solved.

Today it might be useful to do SQL queries and Form handling via OO and tomorrow it might be more useful to go hack some raw HTML. The answer is, almost as always, "it depends".

One could take his argument to ridiculous levels and say that if you don't fully understand machine code and every layer of abstraction above it, you are doing something wrong. Clearly a ridiculous argument, although I have no doubt that if you did understand all these layers you probably could safely call yourself a good programmer :)


"make a stand on an issue that is shades of Grey and not Black or White"

There is much more of this type of discourse in the programming community than I can bear. I think it happens when a young programmer gets just enough experience to gain confidence but not enough experience to gain perspective.


The author may not have as much experience on a large web project as using something like RoR would help with, but even so I generally agree with him.

Specifically speaking, what made me a better programmer was making my own game engines from scratch using DirectX, SDL, and OpenGL. Yup, I wrote 3 different engines on three different technologies (even though SDL uses DirectDraw underneath). What made me a better web developer was writing my own MVC framework for PHP, and then I moved to Rails to be productive.

Hiding those details helps certainly, but only if you know them. You still need to know them, which is where a lot of Rails programmers get it wrong, but hiding it will still help you be productive.


I couldn't agree with this rant more. I use frameworks because they streamline my code, and provide basic abstractions for simple tasks. One of my biggest issues with Rails (to be fair, it's more about people new to Rails) is the way it sees abstraction as a form of magic. Simple tasks should be abstracted, but abstracting complex tasks is a recipe for disaster down the line, when you need to operate outside of the confines of the abstraction. This can be mind-numbingly annoying for experienced programmers, and frustrating for new programmers.

I do want to address specifically the idea of abstracting out things like HTML or SQL. These are my two pet peeves. I look at something like HAML, and I can't understand why anyone would consider that to be a good idea, even putting aside the fact that your designer will have to learn a whole new syntax; the idea that markup should look like a programming language defeats the point of a markup language.

Here's how I did it with Appleseed, using the simple_html_dom library. Views are dumb, very dumb. You create a view which is only markup, for instance:

http://github.com/appleseedproj/appleseed/blob/master/compon...

So you properly class and id your markup, and in the controller, you populate the data, repeating elements for lists or tables, by targeting the DOM.

If I want to set the title on this view, I do:

    $View = $this->GetView ( 'friends' );
    $View->Find ( '.profile-friends-title', 0 )->innertext = "New Title";

That's it. No template pseudo-code, designers only have to care about the markup itself. On the front-end, I do the same with Javascript. Think of it as unobtrusive PHP.

I also stay away from ORM's. I understand all the arguments in favor, but in the end, I'd much prefer writing complex joins than using an ORM.

Separation of concerns is so important, and yet, sometimes I think frameworks, which are supposed to facilitate that, actually can make it worse, by trying to reinvent the wheel in abstract ways.


The analogy fails. Have you ever seen a carpenter never ever use a hammer? They have to use one at some point, but I've seen many successful programmers never use C in their life.


All of these fancy web frameworks like Sinatra, Web.py, and Compojure are making us soft. The details that they hide are simpler than the frameworks themselves. HTTP is easy.


I view ORM and MVC frameworks as a stepping stone. I first started programming using these wrappers/layers because they were simple to use. I was able to get the same data I could have if I had hard-coded SQL queries into my code, and I didn't care if I executed 1 or 50 queries for each page that loaded. Who cares if SomeGuyJoe's site does this? If what his site does works, who cares how it is done.

When I decided I wanted to learn more, I started to learn what the abstractions did, and hacked and changed it to be better for my particular use. Then I moved on to lower level languages, and now I have a decent understanding of how to write my own abstractions, yet I still use abstractions because they let me focus more on what I am wanting to do, instead of how to do the things that let me do it.

This post could be changed to say that anyone that learns a higher level language is harming themselves. The skills and syntax you learn from Python cannot be used in C# or Java, so you should just learn assembly. It is all machine code in the end and these languages are just facades. Only real men program by moving bits to registers.


I don't get it. Using these frameworks has nothing to do with the user either not being able to write markup, or just generally sucking at doing so. The abstraction isn't 'useless'. It gives you a choice, though. If someone is feeling more comfortable writing raw SQL then do so, ActiveRecord isn't stopping you. That's the great thing about Rails. If you want to write raw markup, sure, go ahead; just don't use any of the helpers provided.

Regardless of the means of transport, we're achieving the same goal. How we achieve this is usually personal preference. And for that reason rants like this will always come and go, just as responses like mine.


I like my template language: it lets me write less repetitive HTML. I like my SQL wrapper and migration tool: it lets me change the database without worrying about doing the wrong ALTER TABLE. And I like my automatic form validation; it's just much less annoying with it than without. If you know all the tools I'm talking about, then they are merely abstractions you'd build anyway if you were using no frameworks!

The details beneath the abstractions are indeed simple, and likely simpler than the abstractions themselves.

Why do we use abstractions if they are so goddamn complicated?

You know the answer.


Calling yourself a C Programmer isn't anything like saying you're a hammer carpenter. The analogy breaks down so quickly I don't know where to begin. Maybe here:

A programming language is a tool. Each one fits its own problem-set.

And what would that "problem-set" be, exactly?

The boundaries between the utility of a hammer and the utility of a saw are quite clear. The boundaries between the utility of PHP and Ruby are very blurry. Strictly speaking, both languages are Turing-complete and are equally capable of solving the same "problem-sets".


That article is from 2006. Somewhere the Arch Overlord states: "CherryPy is more restrictive than web.py - web.py uses the RoR Routes technique (I think Django does too)", but nowadays CherryPy has a 'RoutesDispatcher' too, among other types of dispatchers, which can be chained. http://docs.cherrypy.org/dev/refman/dispatch.html#cherrypy._...


You only get to write this rant if you built your computer yourself, smelting the metal and crafting the circuits by hand, without recourse to off-the-shelf parts, and then wrote every bit of code for it on your own, from the lowest-level drivers to the highest-level interfaces.

And for bonus points, you have to use it by toggling ones and zeroes manually, since all those wrappers hide things and get in your way.


I used to make all my websites in pure Perl, and got along ok. When I discovered web frameworks (specifically Django), I just about pissed myself. I hate needless abstractions as much as the next guy (more, probably), but I strongly believe that web frameworks will make a programmer orders of magnitude more productive.


needless?


In my experience, the only people who actually prefer writing web apps in raw SQL/HTML either 1) have never written a real web app or 2) tend to write highly insecure code without even realizing it.

A few things that web frameworks/libraries help with:

Forms:

  - Validation & displaying errors & input sanitization.
  - CSRF checks.
  - Safe file upload handling.
  - HTML generation is useful for removing a bunch of boilerplate.
ORMs:

  - Default SQL injection protection.
  - Query objects help you construct complex queries without having to do a bunch of string manipulation.
  - Database abstraction (e.g. moving from MySQL to PostgreSQL is a lot easier).
  - Database migration tools to help you upgrade your schema.
Templates:

  - Inheritance & replaceable blocks reduce a lot of boilerplate code.
  - Tools to help you safely escape user generated content & transform it for web presentation.
  
Misc:

  - User authentication. Rolling a secure implementation on your own is no small task.
  - Secure cookie and session management.
  - Providing an organized architecture with sensible separation of concerns.
  
Now, to be a solid web developer I agree that at some point you're really going to need to understand the fundamentals of how each of these things works under the hood, but you're crazy if you'd rather write all that code by hand instead of using solid, well-tested, existing libraries for it.

Note: That's not to say that using a web framework will automatically make your code secure (the recent Diaspora debacle clearly demonstrates otherwise). However, you've already got so much code to worry about securing in your own application, why would you want to have to hand roll everything else too?
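The "query objects" and injection-protection bullets above can be made concrete without any framework. A minimal, hypothetical sketch (the `Query` class is invented for illustration; column names are assumed trusted, and no real database is involved): values are composed as data and only ever bound as placeholders, never interpolated into the SQL string.

```ruby
# Minimal "query object" sketch: conditions accumulate as [fragment, value]
# pairs; to_sql emits placeholder SQL plus the bind values separately.
class Query
  def initialize(table)
    @table = table
    @conditions = []
  end

  def where(column, value)
    @conditions << ["#{column} = ?", value]
    self   # return self so calls chain
  end

  def to_sql
    sql = "SELECT * FROM #{@table}"
    unless @conditions.empty?
      sql << " WHERE " << @conditions.map { |frag, _| frag }.join(" AND ")
    end
    [sql, @conditions.map { |_, v| v }]
  end
end

q = Query.new("users").where("name", "o'brien").where("age", 30)
q.to_sql
# => ["SELECT * FROM users WHERE name = ? AND age = ?", ["o'brien", 30]]
```

Note the quote in "o'brien" never touches the SQL text -- that separation is the injection protection the list refers to.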


These types of rants always strike me as being ignorant. It seems like it's just taking this one person's view of software development and trying to apply it to everything. To me, it's on the same level as saying "real programmers use X".


The title was promising but I found the point of the post moot. The title made me think about the fact that a large segment of the market tends to categorize developers on the tools they use rather than the things they build.


I agree that web frameworks make things more complicated than they really are. In other words, they make things simple in the beginning (when you are creating the project), but they get more and more complicated when you try to customize the application.

Despite all the problems with PHP, it still gives you a middle-of-the-road approach. You have all the tools to do the stuff you need to do, but at the same time you have access to how things are really done. For example, you can handle SQL by hand or create your own objects to do what is necessary. I think this is really useful, and would like to have something like this in other languages as well. I realize, however, that the reason PHP allows this mode of operation is that it was created as a web language itself, not a set of libraries on top of an existing language.


I got news for you.... these "details" that are hidden, they're less complicated than these frameworks.

Amen. Insight of the century with regard to Rails, J2EE, BPEL, and a number of other boondoggles.


It should be something obvious: a language is just a tool, and a good programmer should be able to adapt and learn to use a similar tool in little time.

A pity that for many in HR it's not that obvious.


Alas, in most cases learning the language is nothing compared to learning and mastering the libraries. And then there is domain knowledge. HTML and CSS can be very easy if you ignore browsers, their rendering modes, accessibility requirements, etc.


If you were a carpenter and all you used was a hammer then it would make sense to call yourself a "Hammer Carpenter" to distinguish yourself from other well-rounded carpenters.


Sure, it's easy to get hurt if you do not use a hammer with care, but you just can't be a good carpenter without knowing how to use a hammer properly.


I can drop down to plain HTML without using any Rails helpers, and I can also use standard SQL for queries in ActiveRecord. This is probably the case for most other frameworks as well.


I think it's more like wearing hammer pants.


What if I wrote the wrapper myself?


I am an Information manipulator. I happen to be very fond of using C. I sometimes just use my fingers.


I love that a four year old programming rant is still poignant.


I, for one, welcome our new Arch Overlord.

I kind of agree with him. But I bet he wrapped his "select count(*) from ..." in some sort of function like get_row_count(table). Add a few more and before you know it, you're halfway to a new framework that concatenates HTML fragments built from SQL queries.
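A sketch of that slippery slope (everything here is made up for illustration; `db` is a stand-in object, not a real connection):

```ruby
# Stub "connection": pretend every count(*) query reports 3 rows.
db = Object.new
def db.query(sql)
  [[3]]
end

# Wrap one query for convenience...
def get_row_count(db, table)
  db.query("select count(*) from #{table}")[0][0]
end

# ...add a few more helpers and the "no framework" app is growing one
# that concatenates HTML fragments built from SQL queries.
def html_list(items)
  "<ul>" + items.map { |i| "<li>#{i}</li>" }.join + "</ul>"
end

get_row_count(db, "users")   # => 3
html_list(["a", "b"])        # => "<ul><li>a</li><li>b</li></ul>"
```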


Perhaps people call themselves a C programmer because they are advertising their skills for a job. Calling themselves "a programmer" might get them fewer jobs.

But to assume the argument has merit, let's follow it to its natural conclusions: since when was SQL a low-level language?! SQL servers are vastly more complicated than Rails.

Another assumption the author makes is that, for any programmer, SQL is the constant, and time spent learning Rails is wasted if one has to switch to Django. This just shows the limited world view of the author. For many, Rails is the constant, and SQL, Couch, or Mongo are the choices.

To stick with the tool analogy, C is the giant processing equipment that skilled operators used to print my photos in an hour (http://bit.ly/aCoA0K), while Rails is the little Canon Selphy sitting on my desk. Both produce identical results. One of them does it a lot faster, for a lot less money, with fewer failures and less maintenance.


Calling oneself a C programmer is like saying you work with wood but not metal.

FTFY?


a web programmer should perhaps not speak of things he does not understand


c is god



