Using unix as your IDE (sanctum.geek.nz)
233 points by mgrouchy 1777 days ago | 127 comments



On the one hand, I'm pleased that someone is posting this in the new-fangled bloggy tutorial style popular among the youth of today. It's nice to see my preferred environment evangelized.

On the other, it's a giant facepalm that the simplest, best documented, most powerful development tradition of the last 40 years actually needs this treatment. That there is a whole generation of developers who come out of school thinking that "building" is the action of pressing a button in an IDE GUI (and that "writing a program" means launching a big wizard that pukes a dozen files into a new directory) is just depressing.

Oh, and get off my lawn.


My problem with make and autoconf, as a young student, was the difficulty of finding quality, intermediate examples. Textbook examples were trivial, professional examples were too dense to understand. There was nothing to bridge the gap.

Trying to read the configure scripts and Makefiles for software like the Apache httpd server was overwhelming. There are obviously many man-months of work there, as well as unknown styles, conventions, and tricks.

How (and why) would I go from "gcc -lexample -o program csci201.c" and maybe a simple Makefile to a 10,000-line ./configure script? I never had any idea.


The files you want to be reading are configure.ac (formerly configure.in) and Makefile.am. configure is compiled from configure.ac via m4 (no, really), and Makefile.in is compiled from Makefile.am by automake (which is written in Perl). Makefile.in is used to generate Makefile at the end of ./configure (by config.status, actually).

Some configure.ac files are a bit hairy, usually because they depend on heaps of optional packages or because they test for all this obsolete stuff that autoscan told them to.
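
If you want to see the whole chain in miniature, a toy project like this exercises every step (a sketch only; all names are made up, and it assumes autoconf and automake are installed):

  mkdir hello && cd hello
  printf '%s\n' '#include <stdio.h>' 'int main(void) { puts("hello"); return 0; }' > hello.c
  printf '%s\n' 'AC_INIT([hello], [0.1])' 'AM_INIT_AUTOMAKE([foreign])' \
    'AC_PROG_CC' 'AC_CONFIG_FILES([Makefile])' 'AC_OUTPUT' > configure.ac
  printf '%s\n' 'bin_PROGRAMS = hello' 'hello_SOURCES = hello.c' > Makefile.am
  autoreconf --install   # configure.ac -> configure (m4); Makefile.am -> Makefile.in (automake)
  ./configure            # config.status writes Makefile from Makefile.in
  make                   # builds ./hello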

Have a look at libfake437[1], MudCore[2] or libtelnet[3] for some in-between examples.

[1]: http://code.google.com/p/libfake437/source/browse/#svn%2Ftru...

[2]: https://github.com/endgame/MudCore

[3]: https://github.com/elanthis/libtelnet


The reason you couldn't find a quality, intermediate example of autoconf is that there is no such thing; it's an obtuse and painful tool. As someone else mentioned, cmake is a much better choice.


Fortunately there are better build systems available nowadays. CMake is a lot easier on the beginner, and yet it is used by large projects like KDE.


Sadly, my experience with CMake makes me miss the otherwise annoying make.

This is mostly from trying to compile tarballs that used it. Its error messages for missing dependencies didn't seem as clear as plain make's.


Does it even make sense to use a GUI IDE to write and build something as cross-platform as Apache? All that boilerplate from the IDE is great, but much of what one would consider boilerplate in the Apache code base was built up over time as features were added and it was ported to new platforms.


Right, I basically picked apache randomly. Pretty much any significant open source project from the era I was a student (1997-2001) has the same problem, from a student's point of view anyway.

The problem was that for an intermediate student who could use gcc and make but had questions like "what's the best way to organize my source code into directories?", there were no good answers if you were stuck at the command line with man pages. If you didn't have a good mentor, you were figuring it out for yourself, or trying to emulate a random open source project with hundreds of source files organized into multiple subtrees with helper files, shell scripts, dozens of libraries and modules, and huge configure files that bear no resemblance whatsoever to any problem you're actually trying to solve. Or, maybe, if not a big project, then another intermediate developer who was just as clueless as you and whom you don't actually want to be imitating.

Meanwhile, GUI-heavy IDEs were offering sane default answers for those intermediate questions and offering tools to help manage a moderate number of dependencies, build steps, and platform targets. Unfortunately, they were also hiding important details and too often became crutches for people who didn't understand how compiling and linking actually works.


I contend that things like "the best way to organize my project into directories" are discovered, and change, over time while building a project, and are specific to that project and the people working on it. Obviously, students don't know this, but they should be writing code, and working toward this discovery, rather than fretting about what to name the directory where library routines are stored as The One True Way™.


> thinking that "building" is the action of pressing a button in an IDE GUI (and that "writing a program" means launching a big wizard that pukes a dozen files into a new directory) is just depressing.

I use UNIX as my IDE almost exclusively. When it comes to Java however, I don't think there's any way to avoid using Eclipse or Netbeans or something. Any nontrivial Java project is going to be convoluted enough that using the traditional tools is an exercise in futility.


I grudgingly agree. It is frustrating, but developing inside an IDE tends to create specific ways of thinking.

For example, most IDE developers tend to love their debuggers... want to learn something? Just run the debugger! Want to debug something, hey run that debugger and watch 'dem breakpoints!

When I code without a debugger I tend to write smaller, more easily testable code because I lack a real way to step through a convoluted process. I feel this encourages a simpler application (but I have no study to prove that).

Another example is most Java/C# work tends to be in the center of the unknown universe. You are editing something twelve layers deep in a mess of public/private/protected objects and you aren't sure if the item you should call is the MarginalFlowStateController or the ControllingMarginalFlowState... with an IDE you would just open up both of them, their unit tests, and any documentation until you discover which one you should be calling.

It bothers me because I really enjoy my vi/tmux development environment where every time I :w a file and run a test I am running on virtually the exact same environment as dev/qa/pre/prod.


While writing small, easily testable code is a good practice it isn't sufficient. Running your code in a debugger and actually watching things happen exposes a lot of bugs and inefficiencies that tests don't tend to catch. Getting that dynamic view of control flow and state is a huge advantage for improving quality.


Meh, I think you are over-inflating the benefits of a debugger. Everything you described can be achieved through proper testing.

Want to know if something is efficient? Then build a test to gauge it. That way if it changes and becomes less efficient in the future your test will break. That's a much more solid approach than simply walking through the code a couple times and noticing something out of place.


> Meh, I think you are over-inflating the benefits of a debugger. Everything you described can be achieved through proper testing.

You're falling into the trap of "one tool to solve all problems." Why waste your time writing tests to "gauge efficiency" when a profiler tells you more, more effectively?


The test will persist and always be there to run again, whereas a gauge will be a one-time thing.

What happens next week when someone does an x += hugeBlockOfText in a loop?

I'd rather have a set goal and a test/process that validates it than a one-time event where human error is involved. You want X to process a million records in under three seconds? Build a test.

I'm not advocating one tool. I'm simply saying that everything described so far is better solved with a test first approach over a debug first approach. Build a test to replicate the problem. Solve the problem. Keep the test to prove that the problem is solved.
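
To make that concrete, the sort of test I mean is tiny (a rough sketch; ./process and records.dat are hypothetical, and %N needs GNU date):

  start=$(date +%s%N)
  ./process records.dat
  end=$(date +%s%N)
  elapsed_ms=$(( (end - start) / 1000000 ))   # nanoseconds -> milliseconds
  if [ "$elapsed_ms" -gt 3000 ]; then
    echo "FAIL: took ${elapsed_ms}ms, budget is 3000ms" >&2
    exit 1
  fi

Check that in next to the code and it yells at whoever breaks the budget.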


> What happens next week when someone does an x += hugeBlockOfText in a loop?

Then it will show up in your profiling? If you're doing something perf-critical it is absolutely insane not to be running it through a profiler suite on a regular basis. Your continuous integration system can (or should) be completely capable of replaying real or synthetic activity in order to demonstrate real-world hotspots.

Tests only find what you already want to find. Performance concerns are much fuzzier than that (unless you want to be writing "performance tests" for literally-literally everything, which you're welcome to do but I have better things to do than that).
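
And the cost of getting that data is tiny. A sketch with Linux perf (./app stands in for whatever you actually run; gprof shown as the older route):

  perf record -g ./app   # sample where the time actually goes, with call graphs
  perf report            # interactive breakdown by function
  # the classic alternative:
  gcc -pg -o app app.c && ./app && gprof app gmon.out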


Except that it's tough to make performance and efficiency tests actually persist. The expected test results have to be keyed to the particular test environment and so those tests can't really be portable to other computers. And then every time you upgrade or change the test environment you have to modify the tests with different expected results.

The only way to make such tests really persist is to build in some kind of fixed known benchmark to evaluate baseline performance in the test environment, and then evaluate the software under test relative to that benchmark. This is a huge extra effort and hard to get right.
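
The shape of it, roughly (a sketch only; ./baseline and ./process are hypothetical, and bash is assumed), which looks deceptively simple until you try to pick stable thresholds:

  # time a fixed reference workload, then the system under test
  t_base=$( { time -p ./baseline >/dev/null; } 2>&1 | awk '/^real/ {print $2}' )
  t_sut=$(  { time -p ./process  >/dev/null; } 2>&1 | awk '/^real/ {print $2}' )
  ratio=$(echo "$t_sut / $t_base" | bc -l)   # compare ratios, not absolute times
  echo "relative cost: $ratio"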


And when the test tells you the problem has come back? How do you debug it then?


You're missing the point. Sure, you can build a test to gauge whether your code meets some arbitrary level of efficiency. However, the process of dynamically stepping through your code in a debugger engages a different part of your brain than inspecting static code or writing tests.

I can't offer any hard evidence to prove this. But personally I've found that keeping the discipline of always stepping through my code in a debugger — even when it seems to be working correctly — leads to improved results. Try it for a few weeks with a good interactive debugger and I think you'll see what I mean.


I don't reach for a debugger often, instead relying on sanity checks and traces. But when I do reach for a debugger, it's because I am so completely confounded by what is going on, and I have started questioning even the most basic of assumptions, that I just want to watch the damn thing execute.


> When I code without a debugger I tend to write smaller, more easily testable code because I lack a real way to step through a convoluted process. I feel this encourages a simpler application (but I have no study to prove that).

I was talking to a programmer who used to have to hand in punch cards and wait a day to get results. He said people tended to get it right first time in those days.


I suspect they've just blacked out the pain of the times they didn't.


I don't think anyone is saying that IDEs like Eclipse or Xcode or Visual Studio do not have a place. They absolutely do.

HOWEVER: (1) From my experience of working with people fresh out of school who were familiar only with the IDE way of building programs and who knew little to nothing about the command-line tools -- such people were curiously unaware that many tasks (testing, for example) could be automated. People who used the command line extensively, on the other hand, would be the first ones to write a simple script that performed testing of their program. Testing is just one example (I'm sure there are many IDE-tailored solutions for it). There are other things that are done repetitively; using an IDE can mold one into thinking that such tasks are impossible to automate and can only be performed by clicking buttons.
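
The kind of script I mean is nothing fancy; something like this (myprog and the tests/ fixtures are hypothetical):

  for t in tests/*.in; do
    expected="${t%.in}.out"    # each input has a recorded expected output
    if ./myprog < "$t" | diff -q - "$expected" >/dev/null; then
      echo "PASS $t"
    else
      echo "FAIL $t" >&2
    fi
  done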

(2) (Again, this is about people either in school or fresh out of it -- certainly not about highly professional IDE users.) People who were taught IDE-first and who are not familiar with the command line will often assume that running command-line utilities is old-fashioned and that they are more "modern" (therefore somehow "better") programmers just because they are using a GUI. Such people will then go on to write programs that completely ignore the standard way of interacting with the shell, even if their programs do run from the command line. One example: one programmer (who used GUIs exclusively for building programs) would write command-line utilities which, when outputting a text file, would place '\n' at the beginning of each line and not at the end, which then resulted in parts of the output being truncated when I ran them through sed/gnuplot/whatever. Furthermore, even though his programs were quite complicated (read: painful to debug), he had no idea what the difference between stderr and stdout was, and would write any errors that came up to stdout. The same programmer was then curiously defensive when I told him that his way of doing it was wrong (his answer basically was: "I've been programming for a long time, and I've always done it that way, and I don't care that my programs don't play nice with that sed thing with which I'm not familiar").
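
For reference, the two conventions he was violating fit in two lines of shell:

  printf 'record 1\nrecord 2\n'    # a newline ends each line; it doesn't begin one
  echo 'error: bad input' >&2      # diagnostics go to stderr, data to stdout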


I have encountered the inverse:

1) People who exclusively use the command line, especially when recently graduated, have extreme difficulty thinking in an OO fashion, because they have not had the tools to support exploring inheritance and call stacks the way someone with a visual OO development environment has (Smalltalk is ideal for this.)

2) People who have used the command line exclusively are content without visualizations, which can be massively more effective at conveying information, especially about performance and architecture.

3) People who have used the command line exclusively are less likely to employ refactoring tools and more likely to tolerate long, poorly factored code because vim/emacs makes it easy to navigate and ignore the ugliness.

4) People who have used the command line exclusively, especially when they are recent graduates, believe that their approach is inherently better and necessary in order to be a good developer. They get curiously defensive if you expect them to be able to instantly jump to the definition of a method in question or actually use the Strategy pattern.


1) Don't really have a defense for that. Then again, not everyone who uses an IDE and claims they know OOP has had experience with Smalltalk.

2) I have encountered people who used UML/SQL schema diagrams in a cargo-cultish way to prove to themselves and (especially) others that what they were doing was well thought-out when in fact the opposite was the case.

3) That reads more like an argument in favor of Vim/Emacs as superior editors. Also, it seems silly to blame a lack of refactoring on the editor when it's more an issue of whether the programmer has a sense of code elegance and clarity or not.

4) I guess hubris is not exclusive to either side.


Having just picked up Android development (and my first real usage of Java), I've been pretty happy using eclim (http://eclim.org/). It bridges the gap nicely: I get Eclipse's error checking, decent completion, automated import statements, refactoring, etc., all inside vim. The only two features I've found myself bouncing into Eclipse for are the debugger and Android GUI layout previews.


ant was built specifically for building large Java projects, was it not?


Needing an IDE for Java, in my experience, has to do with two things: the amount of boilerplate, and really good refactoring tools/autocomplete.

Building the project is not really that difficult, and make can be made to do it just fine.


I thought it was written to prove that the supposedly declarative data format of XML is still vulnerable to imperative misuse. :)


Understanding the process of building has zero to do with knowing how to type make, or knowing the exact command line parameters GCC needs. Confusing a few cli commands with the process of building is just as bad as believing in the big magic button.


Of course. But one environment scales and the other doesn't. An IDE user who only knows how to modify projects by adding files and clicking checkboxes for optimization settings will do that for the rest of their career. A CLI user who needs to edit makefiles will eventually have a vastly more powerful tool in the shed.


I'm using the Qt Creator IDE right now, in Linux of all places.

The Desktop GUI I'm making involves about twenty libraries, multiple threads and other complexities.

I could make my job even more complex by diving in and learning a command-line way to do everything. But it seems to me that my personal approach is to move ahead by dealing with the necessary complexity and letting the unnecessary complexity fall away.

I mean, I'm sure there's a way to use ls with wildcards that doesn't puke all subdirectories out at you when you're trying to find a single file in a large directory. But the fact that ls does this by default seems like strong evidence that the interface wants you to remember random bs. Indeed, I've read stuff here where people were saying essentially "vi is good because it makes me remember arbitrary commands, strengthens your brain". It might even be true that it strengthens your brain, but I think only a subset of even geeks want to relate to their computers like this, and this subset isn't the same as the geeks who can create useful and powerful programs, or the most productive subset.

Also, I don't see any inherently more powerful tools here. Qt Creator can show me the meaningful references to variable foo, excluding a "foo" in a comment. I can't see grep doing that (I do use grep for some things, but I still have to memorize ten cli options). All the tools are pretty much Turing-complete if you dig, so it's hard to claim one thing is more powerful than another.


First off: clearly you're right. IDEs are very large, featureful software, and there are lots of specific tasks they accelerate that you won't find addressed in the common CLI workflow. Finding "references to foo" is a good example.

But here's some stuff Qt Creator can't do (or rather, provides no particular support for -- you can always use it as a text editor, of course): iOS apps; Android apps; sending email automatically; kernel drivers; languages other than C++ or Javascript: Scala, Haskell, TCL, C#, sh, perl, prolog, lisp...; build and deploy a node.js app; fire up an AWS instance; build a binary for an AVR microcontroller and download it to a device; Verilog or VHDL editing/synthesis/verification.

Obviously I could go on and on, but you get the point. The command line environment? Yeah. It "does" all that stuff. And it does it in the same way. A Makefile for your VHDL project is going to look like a Makefile for your web app. The build scripts for your machine learning rig and your AVR hackery work the same way. You write your email in the same editor you use to write your C++ and CoffeeScript (and that you used 20 years ago to write your TCL!).

So sure: if you want to spend your career searching for "foo" in your Qt C++/QML projects, Qt Creator is a great choice. And the next environment you work in will have a new tool, that does all the same boring stuff in mostly the same ways, and you'll need to learn it again. Pick up the command line and get better at it and you'll find you've opened up a much bigger door (and you won't need to learn to use a new editor every year, to boot).

Is it clear now why there are some of us (with decades of make and shell and ssh in our toolbelts) who sit here and scratch our heads at the perpetuation of this nonsense?


> Is it clear now why there are some of us (with decades of make and shell and ssh in our toolbelts) who sit here and scratch our heads at the perpetuation of this nonsense?

It's no more clear than it was before--that is to say, perfectly so. But here's one for you, in turn: I'm glad that it works for you and that you like it, but if you want to insist it's objectively superior and I don't know what's better for me (which is what you've been doing throughout this thread), I cordially invite you to go fuck yourself. Impolite of me, sure, but telling you to go fuck yourself is considerably less impolite than patronizing people who decide they don't value working in the same way you do. I've been where you say to be, I've done what you say to do, and I got the T-shirt that you're rocking out in. I put it in a drawer because I think it wears poorly.

I have plenty of experience with "make, shell, and ssh"--I'm neither new to nor afraid of working in shells. Doesn't mean I want that to be my primary work environment and certainly doesn't mean I'm either ignorant or wrong for disliking it. For me, a phrase which I will italicize a few more times because you seem unable to internalize this, the UX of the command line abjectly sucks for tasks beyond basic file manipulation and quick text editing. It sucks! For me. I don't care about its hallowed consistency when it makes me feel like shit to be using it all day and makes me feel like I'm wasting my time. Learning new tools is, for me, a smaller cost than hating what I do. I'd rather go learn a new tool, one that's specialized to its use case, because in every case I've encountered it is both a more productive and more enjoyable experience. If I'm not going to value doing my job in an enjoyable fashion, I might as well go be a plumber.

I'm more than happy to use the command line to handle deployment tasks and other automation, but for actually creating things, writing code and doing development work? I'd rather have a tooth pulled than work in vim all day. But I recognize that that's the case for me. Your decisions to use the tools that are right for you but not for me are your decisions--if they make you happy, awesome! But your patronizing "what you do is nonsense" crap (and your consistent implication that not adopting your methods is born of ignorance rather than experience) leads me to invite you to GFY.

Preferring consistent tools to specific tools for different tasks is not better; it is preference. And, quite literally, nothing more.


I stare at Vim all day long and I can say that I don't give a shit what tools you use. Use whatever works best with how you think. Personally, I don't need to write a text editor because a group of people who more-or-less think about how to use computers the exact same way as me already did that.

I don't know why you took the parent post so personally; they never attempted to convince you that the command line is superior all the time for everyone. I can relate to it already. I saw a post on here a few days ago about shortcuts for some random new-ish editor (Sublime text 2?) listing awesome shortcuts that people should use. So what? None of that can compare to what is possible in Vim and it never will. I'm sure it's helpful for those who use that tool, but sometime soon there will be some other new tool that replaces it as the hot new editor for language x and those people will have to relearn it when they switch tools. However, that new tool will still use <ctrl-g> to find by line number.

Learning new tools on Linux is as easy as reading the man page or reading online and failing that I would dive into the source code. For example, I learned how to use rsync last night, but only enough to do what I was trying to do. Luckily, that knowledge will build as I require new uses for it. What makes this all worthwhile is that I can now make my computer do whatever I want and I am not limited to what others have built for a specific use case because these tools can handle almost anything.


> I don't know why you took the parent post so personally; they never attempted to convince you that the command line is superior all the time for everyone.

Characterizing a well-reasoned decision not to use the command line and command line tools for everything as "nonsense" certainly is implicitly placing your chosen tool stack as objectively superior. It isn't.

The rest of your post is more of the same--I'm glad it works for you (and for text editing I use MacVim quite often, but not for development because for that task it sucks for me) and its reasons are great for you, but it is not objectively superior. Which is why I responded to the post to which I responded in the first place--because it was the sort of sneering arrogance that's so common in these parts, where my way is the best way. It is the best for you, and I'm happy that you enjoy it, but taking the tribal rock-throwing ("what you have chosen to do is 'nonsense'") attitude that ajross did is stupid.


But, but, UNIX command line tools are tools :D. If you enjoy learning new tools why don't you learn a set of tools that is applicable to any future task/programming language you will encounter?


I don't enjoy learning new tools. I enjoy the user experience of using the right tool for the job more than I do using the same tools for every job.

For example: I use MacVim and TextMate for text editing (writing markdown files, etc.) because they're good at it. And that's about where it ends, for me; I would under no circumstances use them to write more code than a one-off Python script, because I find it displeasing to do. I feel annoyed and angry because writing code in them is, for me, a slog; I consider ctags basically defective, and it's not as good as, say, IDEA's Python (or Java, or Scala) support (or Visual Studio's for C#, or VS's and Xcode's for C++).

Similarly, and heading back towards the command-line realm: while this thread is full of people extolling the virtues of make (and my eyes made a credible attempt to roll out of my head at the idea, because make is one of the build tools I swear at most often at work...), it would be utterly stupid of me to use make for most of my personal projects because xbuild (for C#) and Maven (for Java) exist--but, oh the horror, they're specialized to their task, what if I want to flip waffles instead? And to that I respond, I don't care about using xbuild or Maven to flip waffles, because their job is not to flip waffles but instead to build C# and Java code. If I need to write a platform-independent build script for C or C++, sure, I'll use make (although today I'm more likely to use cmake), but that doesn't make it a wonderflonium-packed delight for everything else under the sun. I know this from personal experience; I deal with a make-powered Java app with makefiles best termed "monstrous" and if I could turn back the clock and make it a Maven application we'd have about a tenth of the headaches that we get from the current system. Because it's made for the job.

I would rather adapt to tools better suited to a given task than use consistent tools across all problem sets. I don't find picking up new tools to be daunting or particularly time consuming (I picked up IDEA in about two hours, despite it being markedly different in behavior than previous IDEs I'd used, and I'm significantly more productive) and I don't find value in the "elegance" of bolting 30,000 things into vim to make it almost as good as my IDE was, for me, a decade ago.

But what I have said above, if you listen to ajross, is head-scratching nonsense. Which is patronizing and insulting to people who've decided that their priorities aren't his. Such was the attitude to which I replied.


Very well written post.


This is one of those cases where you should have calmed down before typing.


Ah, the "which way is the logical way" stuff is so arbitrary and yet so weirdly satisfying...

I could learn to write emails with vi, and I could learn to write python extensions for Qt Creator to do just about anything, too. I could add an email-sending make step to my project if I happened to want to send emails when I check changes into git. I don't know which one is better, and I don't do either right now because I don't have to.

As far as doing things "the same way" goes, the same way as what? I've actually done lots of command-lining in my time. When I need to formulate a twenty-argument command, I usually edit it in a text file and paste it to the prompt. And the thing is, the argument syntax of shell commands isn't consistent. The only unifying factor is the various pipes, and that's hardly a panacea. I mean, fricken grep only searches directories recursively for a pure wildcard search (just "*"). You can pipe the results of this search to a further search for, say, ".cpp", but if you've got lots of non-cpp files that happen to have the "cpp" string next to the string you want, you're hosed. I'm sure someone has some other trick for this somewhere. But the situation seems to boil down to "you can do anything if you take the time to learn everything" - I mentioned you could write custom IDE extensions, right?


The best of all possible worlds?

emacs.


> An IDE user who only knows how to modify projects by adding files and clicking checkboxes for optimization settings will do that for the rest of their career.

Sorry, but seriously that is BS. I started coding with Basic on a Commodore 64. By the same reasoning, I should be mystified by a mouse.

People learn, and tools and languages evolve, especially over the whole span of a 40-year career.


Indeed. But these tools can (and some might argue should) be learned outside of school. On what basis do you make the claim that make should be taught in school? I can count the number of times I've had to edit makefiles as part of my job on one hand, and I don't think I'm odd.


Everything is best learned by self-study. School should expose you to the different choices in life. We expect doctors to have a good grounding in the practice of chemistry even though they'll never do a lab test themselves.

Also: I'm not following. The context of your post implies that you agree with me, but the facts seem to say you're an IDE user. How can you possibly be doing development without modifying your build system? If you're not using make, you're using something else. My guess is it's an IDE...


Actually, quite the opposite. I'm not sure I agree that students need to be informed of how to write build files, but I don't use an IDE myself. That said, I did learn how to program in an IDE, and I don't feel as though I missed out on a whole lot in doing so.


>A CLI user who needs to edit makefiles will eventually have a vastly more powerful tool in the shed

That is true if the user didn't already give up and leave programming completely due to the obtuseness and complexity of autotools, makefiles, m4, macros, and workarounds like cmake. I survived it, but many others in my Masters class did not. If even ESR finds them troublesome (http://esr.ibiblio.org/?p=1877), what hope does a beginner have?

>Of course. But one environment scales and the other doesn't. An IDE user who only knows how to modify projects by adding files and clicking checkboxes for optimization settings will do that for the rest of their career

Using an IDE does not mean that you cannot do advanced configuration, including setting options that change the commandline of the compiler used. Don't almost all GUIs call commandline compilers on the backend?

Visual Studio even shows you the command line being used when you hit the Build button. SQL Server usually has a "Script" button to generate a script for an admin GUI option, and you can do anything you can do in the IIS GUI with PowerShell.

The power should be there for when you need it, but making everyone go through arcane options has the unfortunate side effect of making the tool less popular, even if it does show the tool's power to the novice user.


This is the kind of discourse that shows the disconnect. I'm sorry, but programming is complicated. You're seriously claiming that someone who can intelligently navigate the rat's nest that is the J2SE library environment can't handle autotools (and for the record: autotools really isn't what I'm talking about)? Please. It's just laziness and aesthetics. One looks shiny and new, and the other crufty and old. People like to play with shiny new toys.

And, sigh, "advanced configuration" is certainly something more than setting compiler flags. That was sarcasm, intended to show the limit that IDE users hit trying to extend their tools. For a practical example, how about this: conditionally build the modules of your C/C++ project such that they can be either linked statically into the main binary or loaded dynamically, and generate both versions in a standard build. Someone who knows how the linker works and how to write a makefile can do this in a day, easily. Someone who only knows the IDE will be lost; there's no button for that. They'd probably hack up some monstrosity involving thousand-line boilerplate, like building all the modules as separate subprojects or some nonsense.
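
Sketched in shell for a single module (mod.c and main.c here are hypothetical; a real makefile would express this as a couple of pattern rules):

  gcc -c -fPIC mod.c -o mod.o
  ar rcs libmod.a mod.o                  # static archive
  gcc -shared mod.o -o libmod.so         # shared object from the same .o
  gcc main.c libmod.a -o prog-static     # module linked in statically
  gcc main.c -L. -lmod -o prog-dynamic   # resolved at runtime via libmod.so
  # run the dynamic one with: LD_LIBRARY_PATH=. ./prog-dynamic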


> programming is complicated

I'd like to hide in the corner and applaud the recognition of this reality. I grew up with Windows, I've been learning cli and vim and ... for 5 years and I still suck. But as Kennedy said, "don't wish for easy lives, wish to be strong [wo]men."


>> Of course. But one environment scales and the other doesn't. An IDE user who only knows how to modify projects by adding files and clicking checkboxes for optimization settings will do that for the rest of their career

> Using an IDE does not mean that you cannot do advanced configuration, including setting options that change the commandline of the compiler used. Don't almost all GUIs call commandline compilers on the backend?

That's really illustrating his point, rather than countering it. If all you think of in terms of "advanced configuration" is "setting options that change the command line of the compiler", you're missing entire worlds of flexibility and productivity wins because of this GUI-centric view of things.

What about, say (and this is a very trivial illustration of flexibility), compiling part of the project, running a tool that analyzes those .o files along with some network resources, which generates yet more source files, which are then picked up by another step in the same build process and linked into the final executable?

I had to build essentially that kind of build system a few years back, for two different ecosystems, so that they could share some common infrastructure in a Big Org™ (a command-line/unix/ant-based Java build system, and a Windows/Visual Studio 2003/2008 based C# build system). The Java version was straightforward, but the Visual Studio version was an exercise in frustration.

The IDE Gods grant you no "custom build step that will generate an unknown number of files with unknown names which you will then pick up and compile as part of this compilation process" button to press. They give you a build scripting language, MSBuild, but it's so crippled by the baked-in assumptions of its IDE-bound design that it won't get you anywhere (to be specific, it assumes that you can hardcode the name and path of every file to be compiled by the project. It will let you insert a wildcard, but it expands them when the IDE opens the project file, which is useless when files are generated during the build, and if anyone deletes a file from the project in the IDE, it will "helpfully" rewrite the wildcard into separate entries for everything the wildcard matched back when the IDE opened the project file. Pure crap).

And this is a really trivial way to use the flexibility that the unix build environment gives you. There's so much more that can be automated than any IDE could hope to present a GUI for.


I call standard makefiles as a pre-build or post-build task in Visual Studio. The pre-build task even generates some source files that are subsequently used in the build.

Just make sure you have make.exe in your PATH, and maybe defer the call to a batch file if it is too complex for VS. Or use the batch file to set up the PATH altogether.


Pre-build tasks wouldn't cut it, IIRC. The issue was that compiling the project required compiling part of it (part a), then generating source based on the results of (a), then building that generated source (which required the presence of the compiled results from (a)), then compiling the rest of the project (b).

Unless I'm grossly misremembering, pre-build tasks would only have worked if the project had been broken up into separate projects such that the generation script could run post-(a) or pre-(b).


Exactly. With my typical setup, a makefile looks like this:

    BIN=my-program
    OBJ=file1.o file2.o file3.o
    DEPS=../foo/libfoo.so

    -include ../config.mk
    include ../mk/c.mk
    include ../mk/lexyacc.mk
It's not hard to modify, it's flexible, and if I need custom scripts to generate files, it's very easy to extend. I don't have to futz with compiler options, command line parameters, etc. I just add the next .o file to the list, and get back to coding.

There's room for improvement, but it works for me. And it's implemented in less than 100 lines of make.


Most makefile projects end up with a simple list of files to add to. But each one does it in a slightly different way, because make is basically an unstructured shell-scripting language.

I can barely stand to work on projects that don't use maven any more. Declare your dependencies and your sources in a standard format that every other project uses, and then get on with actually writing code; the build system is not the place for arbitrary complexity.


Make is a way of defining dependencies. It's relatively far from a free-form shell-like language.


> That there is a whole generation of developers who come out of school thinking that "building" is the action of pressing a button in an IDE GUI

Why? I mean, the fact that a newbie developer just needs to know how to press one button to build their code is actually pretty amazing. Plus, how many school projects are complex enough to require much more than that?


I've been developing under Linux since 1993, and I would love it if building were pressing a button in an IDE GUI (without having to drag the rest of the IDE development process along). It seems like a genuinely superior development model to me: write your code, have your dependencies figured out automatically, and hit a button to build. Writing makefiles by hand is just a sign my toolset is not powerful enough, or my language insufficiently well designed to automate dependency analysis.


Funny, I find it sad that we need so much tinker-knowledge about obscure command lines of build tools to program stuff.


I take it from reading this and other posts that cutting my programming teeth by learning to code in gedit and the command line, sans dev tools, is a good approach then?


I wouldn't go that far.

The best way to learn something is through whatever method encourages you to learn.

For some that is starting from zero, others want to start at the highest level and work backwards, others start in the middle and jump around. Whatever can keep you interested.


I agree with x1 that you should start with whatever will keep you going. If the command line appeals to you, check out Zed Shaw's "Learn C the Hard Way." It is not at all sans dev tools, but it will have you using sophisticated CLI tools rather than a GUI. I have found it to be a good fit for me personally, and I suspect I'm not alone.


gedit's not bad, but try to work your way into vim/emacs. That will allow you to use the same development/debugging environment on remote servers as on your laptop, which will vastly improve your earning potential.


Not to sound like a jerk, but I would really like to see the math you used to determine that learning one text editor versus another increases your earning potential.


I might not have phrased that very well. I was trying to claim that being comfortable with command-line tools (the unix "IDE") will increase your earning potential as an engineer vs. only knowing a GUI IDE.

Being familiar with command line tools for debugging allows you to troubleshoot issues in production, saving the company money, and troubleshooting issues in production generally means working with command line tools on linux servers. This is valuable even if you're not in operations.

2 examples to back this up (one from a backend developer and one from a frontend developer):

My former boss used to run the Customer Order Workflow (COW) team at amazon. He related to me how useless/slow he felt the first time a production issue popped up and he was trying to get some ungodly combination of WinSCP and notepad working. Obviously he learned vim very quickly after that.

Even if you're not on the "backend", you may find yourself needing to ssh to a server which is reverse proxying to something else, and running curl/wget to try and figure out why the CDN isn't serving the latest version of the CSS file you just created, even though you're using a cache-busting query string.
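
Concretely, that debugging session is mostly a one-liner (the URL here is made up):

  curl -sI 'https://cdn.example.com/site.css?v=20120501' \
    | grep -iE '^(cache-control|etag|age|x-cache)'

That shows you what the CDN is actually serving and why.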

Sadly, saving the company money doesn't always translate into more for you, but experiences like this generally end up making you a go-to guy/girl, which can lead to being a lead engineer.


Thanks. (to all the commenters under this thread).

I was thinking about vim just a bit ago (I hear emacs pinky is a real bitch). Maybe I'll put Eclipse on my Ubuntu box too and check it out (dunno if it has support for Python or not; I tried IDLE and didn't like it ...).

I have kind of stalled out a bit lately. Maybe it's time for an IDE anyway.


I like geany; it is a lot more powerful than gedit, but still a lot simpler than most IDEs/mega-editors. You can hide the features you don't use.

Despite the advice that you must use a terminal editor to edit files remotely, nautilus (& others) let you mount remote folders over ssh and edit them locally with the editor of your choosing. I find it very convenient. I hit a bookmark and start editing, keeping the terminals reserved for commands instead.


I don't want to dis IDE makers. There are folks such as JetBrains that make an amazing product. But at the end of the day, for us linux-type professional developers, it's all text. You have maximum leverage with unix text-editing tools. Refactoring 50 files at once by doing something like:

  grep -lR 'old_method' . | xargs sed -i -e 's/old_method/some+other()+expression/g'
is just plain awesome. Want to find out what libraries a process is using?

  cat /proc/$pid/maps
Want to find out where that process is logging to?

  ls -hal /proc/$pid/fd/
Basically, it's often hard to use a GUI IDE to accomplish an arbitrary task if the IDE wasn't built for it, or if no one has taken the time to write plugins. At the risk of gushing, the command line is just one of the most extensible, composable environments ever invented, and its success over the past 40 years proves it.


"It's all text" is the lowest form. Renaming a method by mass-replacing text won't always yield the correct result. A tool that understands the language represented by the text can do a much better job. Also, right-clicking on a method name, clicking rename, and getting a preview I can manipulate seems far more efficient.

But you are right: if an IDE wasn't built for a task you're SOL, whereas you can likely compose something together on the command line to solve it.


I really love the console, but I don't think that command is so awesome (I haven't tried it, so I'm asking just out of curiosity): what if you only want to refactor the Class1.Foo() method while not changing Class2.Foo()? That is, could you refactor taking context into account at the command line?

I use Visual Studio to develop for WP7, and refactoring is even more awesome: change any name, click the little popup (not intrusive at all) and select refactor. Done. Not only in that file, but also in your project and any other project you have opened.

I'm not trying to bash command line tools (I'd really love to know them better), but I think that there are things that are far easier with a graphical IDE.


"I'm not trying to bash command line tools" hehe

"if you only want to refactor Class1.Foo() method while not changing Class2.Foo()"

vim has functionality for scope in most languages (see http://vimdoc.sourceforge.net/htmldoc/eval.html#internal-var...). If vim doesn't have the functionality, it is a one-liner in your config file.

Additionally, there are language-specific vim refactoring options such as eclim (eclipse functionality in vim and vice-versa) for java, and rope/bicycle repair man for python

edit: on a side note, these religious discussions are great for unearthing great links and cli tricks!


I'm not very opinionated, actually. IDEs for static languages do a great job at refactoring, especially for things like Java where the language was designed more for tooling.

When sed fails, and I don't have PyCharm open, I stick with something like:

  grep -lR 'thing_i_need_to_change(' | xargs emacs
To get a listing of files and pull them all up in buffers.


While I generally use sed sparingly because of testability issues, I agree with the general point. The UNIX console tools are extremely powerful for text processing, and graphical IDEs don't even come close. I have never found a graphical IDE that could match even VIM in text processing.

Sure maybe it adds a file browser or a couple of menu commands for building and running. But in terms of actual coding, VIM rules (I am sure EMACS fans feel the same way about EMACS and for similar reasons).

The selling point of an IDE is generally that it's "integrated", whatever that means. The selling point of UNIX or Linux is that you can string the tools together, loosely integrated, to accomplish a better job, but it's not integrated in the same way out of the box.

To my mind, there is no question that the UNIX approach leads to better productivity. The issue is that, like all high-productivity tools, it has a learning curve.


This is more or less exactly my approach: gnome-term, vim, and, since this is a web app, a testing environment of Apache, Chrome and/or Firefox, etc. Between proper usage of vim, shell tools (grep, find, etc.), svn, and so on, I can put together best-of-breed tools to form a really, really good development environment.


I agree that many underestimate the power of bash for development. Make, in particular, is much more widely usable than it gets credit for. However, there are some things that I find very convenient in an IDE that are harder to do at the command line: finding usages of a name (variable or function); finding the definition of same; renaming same throughout the code base. Yes, I know about "find | xargs grep", but it's a bit more convoluted than the IDE way. Maybe I just don't know enough. Can anyone point to command line or vim tools to accomplish these tasks? Code analysis and warnings are another thing IDEs seem to do better.


I'm going to be a tad pedantic and note the following:

- bash and make are completely separate programs. As a matter of fact, I spend half of my time developing with ksh and Solaris make. The two programs you mention don't go 'hand in hand'.

- `find | xargs grep` is a terrible construct. 99% of the time (if not a lot more) you'd prefer using something like `find -exec`, I have yet to come across a version of `find` that wouldn't work that way.

My point is not to be an arrogant prick and criticize the parent post. These kinds of misconceptions usually indicate a poor understanding of the larger Unix philosophy. By her own admission, my parent "just do[es]n't know enough".

I believe that in order to truly use Unix as an IDE, one has to go beyond the (arguably bad) reflexes taught by common GUI IDEs. It's not simply a matter of "What command can replace feature X from Netbeans/Eclipse/Visual Studio/...?" Modern IDEs weren't conceived with sed/awk in mind, aren't (for the most part) scriptable like most Unix shells are, and aren't sitting on top of a freaking Lisp machine like emacs is (you know, the whole enlightenment deal, lisp is superior, bla bla bla).

I am sounding like an evangelist; Unix tends to do that to me. I am trying to make a simple point:

It's normal to feel that something is missing if one is simply transposing her knowledge from point-and-click IDEs to the Unix shell.

(To answer your question, a combination of sed and awk does wonders for renaming across the code base. And SO much more ...)
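
For instance (a sketch; old_name/new_name are placeholders, and \b plus -i.bak assume GNU sed):

  find . -name '*.c' -exec grep -n 'old_name' {} +                       # preview the hits
  find . -name '*.c' -exec sed -i.bak 's/\bold_name\b/new_name/g' {} +   # rename, keeping backups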


Uh oh. After a few decades of Unix I _still_ have a poor understanding, sniff. I have in years past used sed and awk and perl -e and other such tools to rename, as well as for other things. However, the regular expressions needed to ensure you're not including things you don't want involve a bit of effort. I'm in the Unix choir, but I still like the shiny new IDEs which save me time (especially with Java, as noted elsewhere).


There needs to be far better evangelism/PR to tell people the right way to do all these things, so that they understand how to do all the IDE stuff. The truth is that the number of use cases people need is not unlimited. Of course it is good to know the full range of flexible tools, but it's far from obvious to most people whether find | xargs grep is good or not, or how to do a method rename.


> Of course it is good to know the full range of flexible tools, but it's far from obvious to most people whether find | xargs grep is good or not, or how to do a method rename

It's also far from obvious whether a for or a while loop best fits the problem at hand. How does one figure these things out? By learning about the tools/constructs and using them (experience). Furthermore, it is expected of software developers to know these things.

It's all well and good to be an IDE jockey, but being a software developer entails more than clicking through wizards and filling in some logic. Learning tools (such as UNIX CLI tools) opens up a whole other world of untold power for accomplishing not just programming tasks but everyday tasks as well (how many people know that there is a CLI unit-conversion program that not only has more units than you can shake a stick at, but is scriptable and has a tiny footprint to boot?). Not to mention that these tools have been around and been ported to just about everything, will probably continue to be around forever, and eat less RAM and CPU while being more flexible and powerful than their GUI counterparts.
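
(The unit converter in question is presumably GNU units, and it's a good taste of the genre, assuming it's installed:

  units '100 mph' 'km/hr'             # -> * 160.9344
  units '1 furlong/fortnight' 'm/s'

Scriptable, tiny, absurdly comprehensive.)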

Edit: I didn't mean to come off condescending and scolding, but I forgot to insert some helpful links in response to your statement that there should be better evangelism/PR for shell programming; hope these links fit the bill:

http://www.bashoneliners.com

http://www.commandlinefu.com

http://www.shell-fu.org

http://noone.org/blog/English/Computer/Debian/CoolTools

http://noone.org/blog/English/Computer/Shell


You don't have to sell me on using the shell at all and I have seen one-liner databases (which are really only fun once you have a pretty good level of mastery)

I find that IDEs make everything so needlessly complex and inflexible - I don't want 2000 little icons in 200 drawers managed by 54 XML files that I will eventually be expected to edit.

The observation is that they are winning anyway. In fact, people even think they are easier, AND they think their IDE does things that can't even be done otherwise! These people are smart enough to develop software, yet they are opting for what we think are dumb tools. Either we are just wrong, or there is just a misunderstanding about the relative easiness of IDEs.

I think that commercial platforms and products aimed at consumers (including developers) tend to have people paying careful attention to marketing and experience, to add a layer of glitz and wow and accessibility. It isn't that it could not be done but no one is bothering, once you know the efficient way then there is no point dressing it up.

I also think we have built up a culture which is somewhat punitive to newbies. Too many people treat programming and composition of command line tools as some kind of dick swinging competition rather than the inherently simplest and most straightforward way of doing things, which is SUPPOSED TO make your life easier and let you do things you couldn't otherwise do.


`grep -r` works pretty well, too.


This is a significant reason that many Unix hackers prefer languages like C or Common Lisp where the same symbol means the same thing system-wide. Plain text grep, sed, and ex commands make refactoring quite simple and don't require fancy plugins to figure out what a name means in a given context.


> "This is a significant reason that many Unix hackers prefer languages like C or Common Lisp where the same symbol means the same thing system-wide."

I don't know about lisp, but that's not even close to true for C. typedef struct A { int A; } A; is an easy counterexample, and that doesn't even take into account macros or any of the other tools that play horribly with grep and friends.

The reason Unix hackers prefer C is because it's rock solid, extremely well supported and plays nicely with hardware. And it doesn't hurt that it's old enough and popular enough that a huge number of people have extensive experience with it.


Not true for Lisp (Clojure) either: (defn a [] (let [a 1] (+ a a)))


You're looking for ctags or cscope, the first of which is mentioned in the blog. As for code analysis and warnings, surely that's a property of the compiler being used?
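
Getting started is a command or two from the project root (the :cs queries need a vim built with cscope support):

  ctags -R .      # build a tags file; in vim, Ctrl-] jumps to the definition under the cursor
  cscope -Rqb     # build a cscope database; query callers/callees from vim with :cs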


Thanks for the reply. ctags is a great tool, but it is not as easy as an IDE. IDEs keep track without my having to explicitly rebuild the tag database. IDEs have plugins for different languages. The dynamic flagging of errors and warnings without having to explicitly compile is a time-saver. Refactoring tools can also save quite a bit of time. Renaming a function becomes trivial in an IDE.

Of course, there are many things for which vim and bash (":%! sort -n" anyone?) are far superior to an IDE. I tend to use both.


There are some things IDEs do better, but these Unix tools give you more choice.

It should be pretty simple to add a hook to vim to rebuild the tags whenever you save a buffer, for example. That should take the load off your mind. The issue I find with IDEs is that this stuff is automatic, with no way to turn it off, and then I find at the worst time that Visual Studio is crawling to a halt as it runs a load of complex parsing in the background on my 500k lines of C++, just when all I wanted to do was write some code.
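
Something like this does it (the autocmd is the point; echoing it into ~/.vimrc from the shell is just for illustration, and the globs are yours to adjust):

  echo 'autocmd BufWritePost *.c,*.h silent! !ctags -R . &' >> ~/.vimrc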

Think of it as separation of concerns - you wouldn't write a program where all the code was in one tightly bound monolithic system. Why do so for your tools?


As quickly as you can turn your back on Visual Studio. (Sorry, just kidding ... sort of.) Eclipse and NetBeans are a couple of IDEs (there are others) that use a module system to build what can appear to be a monolith. Look into OSGi, which is an attempt to standardize modularizing Java applications. Of course, I haven't run these tools on 500k lines of C++, just half that of Java code. P.S. I used "find . -name \*.java | xargs cat | wc -l" to count the lines of Java code. I did not use the IDE for that, giggle.


Plus, ctags isn't really that smart (or maybe it's the way it is used). For example, I have a struct "foo" that has different, independent declarations in several files in my source tree. Whenever I try to open that tag, the list of all tags shows up, regardless of the files included in the current source file. This is quite irritating.

Autocompletion (in C++ at least) and refactoring are other not-so-smart areas.

However, I can live with these missing features; once you get used to something like Vim or Emacs, it's quite frustrating to work in bloated IDEs.


How does ctags do when a variable name is repeated in a different scope? I do rather wish for a really good vim-style replacement for the editor in NetBeans, though. I keep inadvertently entering vim motion commands in my Java text - sigh.


It picks the "best" one (typically the one in the scope of your current file), and lets you change which one to go to, without ruining your tag stack.

For example, if I Ctrl-] over `open`, and it pops me to the wrong one, I can then type :ts and pick the proper entry (typically with info about the file, type, and container). I can then Ctrl-t back to the previous context as normal.


Well, if you're using ctags through vim it guesses by default, or you can use :ts to get a list of the possibilities. Not ideal, I know, but often good enough.



Thanks a lot for that link; it led me to this very interesting talk by Bret Victor, as kind of a counterargument to that principle: http://vimeo.com/36579366. And holy shit does he have a point. Many of today's programmers seem to be trapped in the thinking that the unix and vim principles are the holy grail, forever to be pursued, never to be changed. Isn't it time we stand up and reassess the tools we have? And I'm not just talking about the teletype-generation tools like unix and vim, but also today's IDEs, which have basically evolved as extensions to those tools.

I also recommend the following talk about "subtext": http://www.subtextual.org/subtext2.html.


I would argue that very few of today's programmers really follow the Unix philosophy, but those I know who do would likely agree that our tools should be reassessed. But you seem to be conflating the idea of reassessing our tools and reassessing our principles. In my opinion the core principles [1] are eternal, however transient the programs themselves.

[1]: http://en.wikipedia.org/wiki/Unix_philosophy


I'd argue that tools (if they're 'good' in themselves) reflect the principles with which they were designed. And no, I don't think these principles are eternal. We should look at the context in which they were formulated or evolved, compare it to today's context, keep the principles that still apply and throw the rest out. For example, building small programs that do well-defined, easily testable tasks and chaining them together is certainly still a good design principle. Strings as the universal interface, however, is not, in my opinion, since it just doesn't reflect how most of today's UIs are built on objects - thus we have today's disconnect between the terminal and the GUI.


You believe that text as a universal interface isn't a good design principle because it doesn't reflect how other UIs, built upon completely different principles, behave? Is it not possible that it remains a good principle that WIMP GUIs simply do not adhere to? (Also related, "everything is a file.")


Unformatted text (as in text that doesn't follow a machine-understandable format like xml or json) is not a good interface today; I absolutely believe that. Imagine if "ls" returned an xml table that you could easily reuse in other programs, including graphical editors. Imagine that the OS understood what every column represents and gave it a name. You could do something like

ls | showtable in.modified in.filename --sort:1.

Of course you can do this today with sed, but in a very unintuitive way, not easily accessible to beginners.
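
You can get part of the way there today with column positions instead of column names, which is exactly the fragility being complained about. A rough sketch with GNU coreutils (the field numbers depend on ls's textual layout, and it breaks on filenames with spaces):

    # long listing sorted by size, printing size and name only;
    # tail skips the "total" header line
    ls -l | tail -n +2 | sort -k5,5 -n | awk '{print $5, $9}'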


That's a great talk (see also Chris Granger's "Light Table", inspired by it), but it's an example of a key truth about tools: the right "tool for the job". Like any product, a tool is more or less appropriate for specific usages.

IDEs are great for navigating huge codebases and massive libraries with long informative names (IntelliSense), integrated debuggers that can step into libraries, refactoring tools, etc. The visual gui builders are also really pretty cool - a wysiwyg example not that far from Bret Victor's demos. They require less effort (and skill) for these specific common tasks.

Bret's use of time is great for games, but not appropriate for all programming tasks. There's been extensive discussion of Light Table (here and on proggit) about how it seems more suited to smaller, experimental coding than to large codebases - but this remains to be seen, as the tool's not yet available.

I use vim myself, because I'm writing parsers and related code - an example of pure code (non-library, non-gui, non-web), leaning toward the algorithmic, which vim grew up around and was shaped by. Vim/emacs are still the deepest, most fundamental level of coding - but that's only appropriate if you need to be working from first principles. Most of the time, for most high-value development, you're not. I think that for specific domains, where custom tools can get some leverage, it's more productive to use those tools. Tool for the job.


You'll probably be interested in Light Table: http://www.chris-granger.com/2012/04/12/light-table---a-new-...


It's interesting because usually HN detects the link and redirects you to the previous discussion. This one escaped the parser simply because the url has a "?" appended. I even added another question mark and submitted it fine (deleted it afterwards, of course).


I think there's some time limit that expires, after which old links can be resubmitted.


I don't think so because I resubmitted the old link (the normal url) and it redirected me to the discussion that took place a year ago.

Edit: As mentioned in the comment below, it was less than a year - in fact, less than six months - so you guys might be right. I just saw a big number and assumed it was a long time ago. My bad.


119 days is a year? What planet do you live on? In fact, I think the expiration is 6 months, though I'm not sure.


By becoming dependent on IDEs you are only making it easier for companies to control you. As developers you are important, and companies want some control over what you do. That was Microsoft's original plan, and it has worked beautifully.

They were quick to provide an OO environment where someone could create something "impressive" very fast. A little eye candy is all it takes to impress many developers and users. And that's all she wrote. They beat everyone else to the punch. To this day, IDEs and OO programming still rule. And MS's mysterious APIs are still very much a competitive advantage, tying programmers to the MS IDE and platform. They control you.

The unfortunate result of this over a long period is that today's developers are a lot less knowledgeable about how to build things from the ground up. Take away their IDE and they can't really do much.

If you want to create something truly "new" and push the envelope (think systems programming), then you need to start at a lower level than the IDE where someone else has handled the low level details for you. (Who built those objects anyway?) A recent post here on HN said that the most talented programmers are the ones who can move from different levels of abstraction effortlessly. From very high to very low and back again. The IDE keeps you in a box. One level of abstraction. Dirty low level details are scary to many developers.

IDEs are useful; they can increase productivity enormously. But if we take away your IDE and you are helpless without it, that's a problem from a progress standpoint (e.g. new systems development). Companies like MS are controlling you and controlling the rate of progress. And they have an incentive to maintain the status quo. The envelope does not get pushed.


My understanding is that Emacs is not really Unix in flavor or mentality. GNU's not Unix, remember. :-)

To the best of my knowledge - i.e., I've read somewhere - Emacs is a stylistic derivative of the Lisp machines at the MIT AI Lab. It's not really a Unix program.


Unix is an aesthetic. It's squishy. Emacs is a text editor. Editing text is very much a Unix Thing. Emacs does a lot of other crap too, most of it (email, terminal emulation, IRC...) intended to replace or augment some other core Unix Thing. Is putting all those Unix Things into a single box (that isn't unix) a Unix Thing? Probably not, but most of us forgive emacs that transgression. Don't use the crap that doesn't fit your world view, and it remains a unix tool.

Emacs predates the Lisp machines, btw. The original was written (as TECO macros) on ITS. The Lisp machines came out of the same culture and thus had their own emacsen. But the GNU Emacs we are all using today was written on Unix from day one.


However, Emacs still holds on to all the unique ITS key combos, which is why I always hit ^W and wonder why that word didn't get deleted.

There's a public-access ITS system over in Sweden, or you can grab an emulator and run it yourself. I recommend trying it just so you can see how Emacs and Info have remained essentially the same (RMS's sad devotion to that ancient religion, etc.).


A key indicator of that is that it treats ASCII LF as a line separator, not a line terminator, creating text files that don't end in LF.


Oh, neat. I've always wondered where those messed-up no-eol files came from. Emacs users!


The author mentions that ack "allows you to use Perl-compatible regular expressions (PCRE)" in contrast to grep. I want to point out that grep does too, or at least the versions on Linux and Mac OS X, when the -E flag is used. I don't know if it actually uses the PCRE library or if it even uses the same character class shortcuts that Perl does, but it's at least modern in the sense that it uses (...) instead of \(...\) for grouping and {n,m} instead of \{n,m\} for specifying the number or range of occurrences, as well as + for {1,}, etc.

Edit: Slight fact correction. The author of this article led me to believe that ack uses Perl Compatible Regular Expressions, which is a C library that implements a regular expressions system whose pattern syntax is very similar to what's found in Perl. Now that I've read ack's website, I see that ack is written in Perl and actually uses Perl's regexps. The author must have just meant that ack's regular expressions are literally "Perl-compatible."


grep -E is just POSIX extended regex, which is also a million years old. grep -P, on some builds of GNU grep, will get you the experimental perl-compatible regexes.
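
The practical difference, one line each (GNU grep assumed; -P isn't compiled into every build):

    # ERE: classes are spelled out; no \d shorthand, no lookarounds
    echo 'order 42' | grep -E '[0-9]+'
    # PCRE: Perl shorthands and lookaheads work
    echo 'order 42' | grep -P 'order(?= \d+)'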


For a time, Unix was specifically developed as an IDE; the PWB/UNIX system was basically a version/distribution of Unix intended for development, even of programs for other machines like the IBM System/370.

Time marches on, however; and modern IDEs offer rich tools like refactoring capability, in-place documentation (IntelliSense) and fully integrated debugging that are a considerable evolution from what was available in the 1970s - an evolution that could only have come from making all the tools aware of the language being used. From dumb, simple tools that munge streams of text to smarter, context- and semantics-aware tools that operate on program representations - that's progress. Accordingly, Unix itself is no longer an acceptable IDE.


I always think of unix as the Disintegrated Development Environment, and I'm happy to keep it that way. Every time I go down the rabbit hole of trying to integrate build, debug and edit into a single tool, I eventually go back to using individual commands.


Vim, but ... no cscope? I am disappointed. cscope is a power tool when it comes to editor-integrated development.

http://cscope.sourceforge.net/cscope_vim_tutorial.html

I have cscope+vim+make/build+gdb+logging windows on auto-open for my developer desktop terms, and spend a lot of time navigating large codebases with cscope in all its incarnations. If you like this article, but haven't heard of cscope+vim, give yourself 30 minutes to check it out ..
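
If you haven't used it, the setup is only a couple of commands, run from the source root (the database files land in the current directory):

    # build the cscope database: -R recurse into subdirectories,
    # -b build only, -q add an inverted index for faster lookups
    cscope -Rbq

Then inside vim, ":cs add cscope.out" loads the database, and ":cs find c some_function" lists every caller of some_function.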


cscope is great. It would be even better if they supported more languages.


"X as your Y" is a sign of good architecture in X.

You can consider Unix as a collection of hierarchically organized entities that can act as nouns and verbs. To develop in this system, all you need to do is add verbs for text editing and compiling.

BeOS/Haiku has this versatile property as well. In fact, you can use BeOS as your media player/mail client with no app at all.

Python and Ruby are built with this quality. There is tremendous power in their REPL. The language is somewhat its own advanced debugger.

Smalltalk is much like Unix, with its structure of nouns and verbs and a REPL everywhere. This is precisely why the Smalltalk compiler is just another first-class Smalltalk object.

Any app like an IDE is really something like a design pattern: a symptom of something lacking in your language/substrate. This isn't necessarily bad or wrong, because kitchen-sink architectures have their own drawbacks. It's also possible for a system to be too generalized.

Rather, the sign of something wrong (or conversely, of something right) is the extent to which you are building everything yourself versus composing pieces that are already there. To what extent are you building your own, and to what extent are you exploring powerful tools that already exist and putting them together to get things done?

Being able to do the latter is the definition of power in most contexts. Too much need for the former is pathological.

Unwieldy IDEs, app servers, dependency management, etc -- this is all a sign that something needs to be improved.


Powerful as the CLI might be, what are the CLI ways of:

- replacing Foo just in specific modules/classes

- finding the call stack of a specific function/method, even across closed-source binaries

- getting a visual overview of module/class dependencies

- designing GUIs

- refactoring code

- debugging with hot code replacement on the fly

- browsing symbols in object files / binary modules


I'm in the process of completely switching over to the development style described in this article (I read it last weekend). I started typing a response to this and accidentally ended the first sentence with a semicolon, if that gives you any indication of where I am in this process. I'm at a PHP/.NET shop doing LAMP development, and everyone else here simply installed Ubuntu or Windows XP/7, then Eclipse, checked out the code from SVN and started working. I installed Ubuntu fairly quickly but then got fed up trying to get Eclipse and a bunch of PHP plugins working together. That was a complete waste of time in my opinion; at best I would be learning Eclipse and Ubuntu. I switched to Vim for all of my editing needs about 16 months ago when I was taking a useless Perl class, so I started using that daily at work. Eight months of 40-hour work weeks later, I'd say I'm pretty decent at it now. Two months ago I got fed up with the awful window management in Ubuntu and switched to Xmonad. Configuring my window manager by writing Haskell and recompiling it is preferable to watching some other piece of garbage break or crash constantly, and to wasting time moving windows around with a mouse for the rest of my life. Shortcuts didn't help; working with the default keybindings in Ubuntu for web development on a single screen hurt my hand enough to buy a different keyboard, and it was still glitchy enough to be a daily annoyance.

My development environment currently consists of Ubuntu/Arch Linux (haven't switched at work yet), Tmux, Vim, Xmonad, Urxvt and Zsh. The only sane way to start moving towards these tools is to adopt one new tool at a time, see which defaults annoy you, and then customize it to your workflow. I would definitely not start with anyone else's default settings (Janus for Vim or Oh-My-Zsh) unless you just want to see what these tools are capable of. Last week I completed a full productive workday without touching my mouse except to move it out of the way, and over the weekend I had to work from home at this job for the first time. I had my ssh tunnels and keys set up already, so it was as simple as attaching to my tmux session from work and continuing where I left off.
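
That last step really is one command once the keys are in place (the host alias and session name here are made up):

    # force a tty and reattach to the named session on the work machine
    ssh -t workbox tmux attach -t main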

The reason I am going through this process is to learn more about Linux/Unix, but more importantly I am sick and tired of setting up programs on computers and learning settings and configurations through guis that change at the developers' whims. I don't need or want a system that anyone can use; I want a system that works the way I think, with my exact hardware. The best benefit of doing all this is that I can actually learn and understand what my tools are doing and remove as many abstractions as I can handle. Most recently I learned about strace; I always wondered how people came up with answers to Linux questions about random dependencies.
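
For anyone who hasn't met strace, a typical first use is watching which files a program actually tries to open (the binary and log name here are just examples):

    # log every open() attempt, then look at what failed
    strace -e trace=open -o trace.log myprog
    grep ENOENT trace.log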


I like the article, as I am also a command-line junkie, but I have an issue with it. I might be being pedantic here, but the fundamental premise of the article is a falsehood.

UNIX is in no way, shape or form an IDE. IDE stands for Integrated Development Environment and means exactly that. All your tools under one roof.

I do not deny - in fact I would also assert - that the separate-toolchain, everything-is-a-stream-of-bytes philosophy amounts in combination to an incredibly powerful methodology for software development, but that doesn't make it an IDE.


I typically tweak my rc files for a particular task ($language development, packaging, communications, system configuration, etc) and spawn a screen session with my $HOME pointing at a context-specific directory. This makes everything integrate a little better around the task at hand, while still giving me the full expressibility of my shell. I don't know whether I'd quite call it "Integrated", but I find it useful.
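
In case that trick is new to anyone, the whole mechanism is one environment variable; any tool that reads its rc files from $HOME follows along (the directory layout is just an example):

    # spawn a task-scoped screen session whose tools read
    # rc files from the context directory instead of ~/
    HOME=~/contexts/packaging screen -S packaging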


> UNIX is in no way, shape or form an IDE. IDE stands for Integrated Development Environment and means exactly that. All your tools under one roof.

agreed. if anything, it's a UDE: Unintegrated Development Environment. which is its strength and its weakness. the initial learning curve is steeper than with an IDE, maybe, but you don't run into the painful low ceilings an IDE gives you. so much synergy and integratability in the CLI paradigm. pick and choose. customizable UX, custom workflows, automation, and typically much less opaque configuration, lower idle cpu and memory use, etc.


Surprised tmux wasn't mentioned in the list of core tools; the author has written about it before, it seems. tmux is the key to building an IDE in *nix for me.


I've been using GNU screen for years and am happy with it. A brief bit of Googling suggests to me that the main advantages of tmux over screen are that its code is designed better and that it's actively maintained and, therefore, less buggy. It also seems like its configuration is more logical. As someone who has already configured screen to my liking and who hasn't run into noticeable bugs, I don't have any real reason to make the switch if those are the only advantages. Are there others? (e.g. better features, etc.)


For me, tmuxinator[1] is a huge advantage. I can define environments suited to various tasks and projects, launch them easily, and lose none of tmux's power in the process.

For example, my 'work' setup is very IDE-like; when it starts, it mounts my dev VM (nfs), starts vim, and opens windows for logs, a database client, SSH to the VM, and source control (tig and git).

[1] https://github.com/aziz/tmuxinator
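
The config behind a setup like that is just YAML; roughly this shape in recent tmuxinator versions (field names vary between releases, and every name and command below is invented, not the parent's actual file):

    # ~/.tmuxinator/work.yml
    name: work
    windows:
      - editor: vim
      - logs: tail -f /var/log/app/error.log
      - db: mysql -u dev app
      - vcs: tig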


> Are there others? (e.g. better features, etc.)

Apart from active development and easier configuration, vertical splits (though screen has had those for some time now).



Yeah. The POSIX shell is my IDE of choice too, so I agree with it. Although there is one exception - development in C#/.NET, the only environment in which I wouldn't code with anything other than Visual Studio. :) But maybe it's just a matter of the way I was taught to use it, with all that one-click, hundreds-of-LOC scaffolding. So I can't imagine coding in .NET without it.



