Using Unix as an IDE (2012) (sanctum.geek.nz)



It's not for everyone, but the Unix/BSD/Linux shell is still my favorite command invocation environment. Despite their simplicity, pipelines are a really handy way to pass data between programs when solving certain kinds of problems. They also provide a cheap way to get some parallel processing.


The other thing I absolutely love about pipes is the ease of writing custom utilities. I'd refactored a filetype for use with a game last week, and needed to insert several lines into ~1.5k ASCII files. Half an hour to write a quick C++ program to make the changes, and to pass the output of grep to it.

I'm unfamiliar with using pipes to provide parallel processing, though. Could you elaborate? If you can pass work to another thread I will be kicking myself for missing that.


> I'm unfamiliar with using pipes to provide parallel processing, though. Could you elaborate? If you can pass work to another thread I will be kicking myself for missing that.

You didn't miss it; each stage in the pipeline runs in its own process, so you're already using it. With, say, "grep -r foo ./ | do-something | do-something-else", the second and third stages will be operating on the data from grep while grep is still running.
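
For instance, in a pipeline like this all three stages start at once, and the downstream commands consume grep's output as it streams (the sed and sort filters are just placeholders):

    # three separate processes; sed and sort consume grep's
    # output while grep is still scanning the tree
    grep -r foo ./ | sed 's/foo/bar/' | sort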

In addition to that, you can get more parallelism pretty simply with xargs (http://offbytwo.com/2011/06/26/things-you-didnt-know-about-x...). If that's not enough, check out GNU parallel (https://www.gnu.org/software/parallel/).
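
For example, a sketch using xargs -P to fan work out across processes (the file pattern and job count are arbitrary):

    # compress log files, running up to 4 gzip processes at a time
    find . -name '*.log' -print0 | xargs -0 -P 4 -n 1 gzip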

While we're talking about Unix as an IDE, I want to put in a plug for inotifywait (https://linux.die.net/man/1/inotifywait). Combining it with a makefile, you can get very fast builds with near-instant feedback every time you save a file. I have one that runs the unit tests and valgrind on every save for instant memory-leak detection. Another great use case is getting WYSIWYG-like features with LaTeX files.
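
A minimal sketch of such a loop, assuming a src/ tree and a "test" target in the makefile:

    # rebuild and re-run the tests every time a file under src/ is saved
    while inotifywait -qq -e close_write -r src/; do
        make test
    done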


Using make -j $num is another way to get parallelism that is sometimes convenient.
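
On GNU systems, nproc gives a sensible value for $num:

    make -j "$(nproc)"    # one job per available CPU core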


Just make sure $num is set, otherwise make tries to do everything it can at once. I found out the hard way compiling gcc last week: the system was unresponsive so I reset it, but I'm curious whether it would have finished eventually.


This is a pet peeve I have had for a while now. I still have a definition for MAKEFLAGS (with -j [# of CPUs - 1]) in my profile and rc files, and had to learn that if you allow make to start too many things at once, the whole machine can hang. Even switching to a different TTY took ages.
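
For reference, a sketch of such a MAKEFLAGS definition, assuming nproc from GNU coreutils:

    # default make to (number of CPUs - 1) parallel jobs
    export MAKEFLAGS="-j$(( $(nproc) - 1 ))"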

To me, it sounds like a tremendous scheduling issue if a build job can drown a whole system like that. Shouldn't the scheduler favor non-waiting threads/processes that haven't done much over those that have hogged the CPU in the past, without me having to manually adjust the niceness of every build task?


I can’t offer specifics, but it’s probably at the point where you’d compile in a different scheduler?


IIRC make -j is equivalent to make -j256...


From the man page:

> If the -j option is given without an argument, make will not limit the number of jobs that can run simultaneously.

Although other implementations might differ, of course.


It's not a notion that I carried into my implementation of redo, for example.

    % redo --jobs
    redo: ERROR: jobs: missing option argument
    %


What made you go with that rather than trying to pick a sensible default like the number of cores available? Or was it just a general goal to require explicitness?


It was simplicity, of design, implementation, use, and indeed explanation. There's only one default, the default if the --jobs option is not used; rather than there being two defaults, one for not having the option and another for having the option but not the option argument.

More generally, I try to avoid optional arguments to command options, in line with the guideline in the Single Unix Specification. (I actually picked such ideas up many years ago, before the first POSIX specification, from a book by Eric Foxley titled Unix for Super Users, where there was an appendix on command line option syntax.)

* http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_...

Of course, if the user wants to use the number of cores available, then xe can work that number out and pass it as an argument; and indeed I have done that very thing in some of the package/make scripts that I have published.
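
For instance, something along these lines (the "all" target is illustrative):

    redo --jobs "$(nproc)" all    # pass the core count explicitly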


You mean function composition in a Lisp Machine REPL or the Smalltalk Transcript, with the benefit of structured data.


I've used Guile to do pipeline-like transformations of data using multiple function calls. It has the advantage of a more consistent API for the transformations on the stream of data, and surely better type checking. The syntax is different, but I think of it as similar.

Despite its messiness, doing ad-hoc integration of command line tools to transform text is really handy. It also spreads the cpu utilization and memory usage across the commands that you use in the pipeline.


I'll add that I enjoy using Common Lisp and Scheme REPL environments. I also really like Squeak as an interactive programming environment. Neither has replaced the old-school Unix shell as my preferred command invocation environment, at least in my professional life.


I have the same experience, and I suspect it's because we haven't replicated the utilities in these environments. No one uses them as a replacement for shell because no one has done the work to use them as a replacement for shell.

Personally, I think Prolog is the right way to go, but I have this idea that I don't want to spend the rest of my life typing into a teletype emulator.


We kind of do on Windows, thanks to PowerShell and the PowerShell ISE.


> It also spreads the cpu utilization and memory usage across the commands that you use in the pipeline.

For example, with Clojure you can use pmap and pcalls to achieve that.


That's cool. I'll have to try it out.


Which is your favourite shell? bash, oh-my-zsh, fish?


Maybe this is implicitly assumed, but a terminal multiplexer such as tmux is also a useful utility for using Unix as an IDE. For example, you can have a pane for your code in an editor like vim, a pane for running your tests next to it, and maybe another pane for an interactive shell of your language. Lacking a consistent local development environment, I find tmux indispensable when I ssh into my remote development environment.
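
Such a layout can even be scripted; a sketch (the session name and split directions are arbitrary):

    # editor pane on the left, then split off a shell and a test pane
    tmux new-session -s dev \; split-window -h \; split-window -v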


The killer feature of tmux and screen is that it doesn’t kill your working processes when you lose your ssh connection or your terminal emulator crashes. If you (or your boss) are into pair programming, two people can attach to a single tmux session instead of crowding around a single screen. The fact that you can create terminal tiles and sub-windows is icing on the cake to being able to persist work across login sessions.
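
A sketch of the pairing setup (same machine and user account; the session name is arbitrary):

    tmux new-session -s pair    # first person starts the session
    tmux attach -t pair         # second person attaches to the same one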


>it doesn’t kill your working processes when you lose your ssh connection

I meet folks all the time who don't know about tmux, and this blows their mind. Likewise having multiple panes in an SSH session... and with mouse control!

    set -g mouse on
tmux is bliss!


These days, you can also run terminals as panes in vim, so vim is sort of its own terminal multiplexer now.

  :help terminal.txt


I tend to just write a command in a vim buffer (with all that nice Vim magic), select it, and run it with :@"

e.g. I write and yank: r!fancy command arg1 arg2 ..

Then go to the next line and do :@" This will run the command (well, execute the yanked colon-mode command) and read back the results.

edit: The nice thing about this is that I can do this as a progressing document (command, output, command, output) and scroll back to see what I have done and what the result was.


They should just stick a kernel in vim so I don't even need to install an OS ;)


It's funny, I never really understood why people want to subdivide a terminal emulator. I want to see more lines of code, not read code through a mail slot. My visual layout has always been many tall and skinny terminals and editors. With dual 4K monitors, I've let the windows get wider. I feel hobbled on a single laptop screen, so I usually defer real work until I am back at my desk.

I do use screen for the detach/reattach feature sometimes. But, ever since I started regularly using X Windows instead of a vt220, I use my window manager to multiplex. I will open multiple xterms and emacs X windows ("frames" in emacs terminology). I will never sub-divide one xterm or one emacs frame, and I only learned the command to undo an accidental windowing subdivide, much like I learned to abort from vi/vim if I accidentally get dropped into one due to a missing VISUAL environment variable.

If I can launch emacs through ssh with X forwarding, I will. If not, I'll open multiple xterms and multiple ssh sessions and run many emacs instances in -nw mode. Once in a while, I'll mount the remote files via sshfs and use my local workflow. Even locally, I am just as likely to have multiple emacs instances open as multiple frames from one instance, since I prefer to find files and open them from the shell prompt than screw around with file-opening dialogs in the editor.

Even back in the vt220 days, I was much more likely to use shell job commands to background and foreground for multiplexing rather than want to subdivide the already small console.


Same result, but you are tied to your wm key bindings for navigation. With tmux you have the same bindings everywhere, even on a machine you're seeing for the first time. Lost connection? tmux a -t 0 and all your panes are restored. (I also use a tiling wm, but not for terminal splitting.)


Perhaps that is the difference. I don't float around and use different consoles. I would never enter my SSH credentials on some machine that isn't my own, so I'd never get to the point of using an SSH session from a foreign keyboard, screen, and window manager.

My workstation and my laptop computer are my interfaces to the world for 99.9% of my interactions. Without them, I am not working. The only exceptions might be touching a KVM console on a server in our machine room to see diagnostics (otherwise I would use SSH from my office) or a lab computer where I'd only be running a local browser or demos.


On a similar note, if you like tmux, give the i3 window manager a go!


Manjaro i3 (https://manjaro.org/category/community-editions/i3/) is a very nice pre-riced one. Despite a lot of unnecessary bling you wouldn't get with a stock install, it gave my crappy old Dell laptop a new lease on life.


Manjaro i3 has a few good ideas but had I started using i3wm through that, I'd probably have been turned off it. It's just not my cup of tea.

Fedora Magazine has a pretty decent guide for setting up i3 to a functional state and I can recommend it: https://fedoramagazine.org/getting-started-i3-window-manager...


If you're using LLVM, you may want to debug with lldb instead. Also, with gdb you can use the TUI mode.
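
For example (the binary name is illustrative):

    gdb -tui ./a.out    # start gdb with the TUI source window
    # or toggle the TUI inside a running gdb session with C-x a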

To read the environment variables used by a program you can use

    cat /proc/{pid}/environ
update: Some of this overlaps with reverse engineering. I posted this comment yesterday about useful reverse-engineering resources on Linux: https://news.ycombinator.com/item?id=17342197


Note that `/proc/$pid/environ` has null bytes between the entries, so it's not very readable if you just `cat` it; you'll want to filter its content, e.g.:

    tr "\0" "\n" < /proc/$pid/environ


> To read the environment variables used by a program

This is only the environment variables in effect when the program launches. If the program changes any environment variables during the course of its execution, /proc/$pid/environ won't be updated to reflect that.

I expect you're already aware of this, but it's worth mentioning just in case anyone reading this thread wasn't.


I found cgdb much easier to use, and more robust when resizing the terminal, than gdb+tui. Liked it so much I made a video demonstrating its usage: https://youtu.be/OKLR6rrsBmI


I like Acme because it is built to facilitate using shell commands from within the editor. You can write scripts in any language you like and run them, which in some ways makes it the ultimate scriptable editor.

However, an IDE like IntelliJ brings so much to the table that it's hard to imagine working without it. I know I'm much more productive using it than Emacs or Acme or Vi or any other editor that integrates well with Unix.


IntelliJ only seems fantastic because it is so tightly integrated with the language you want to use; that's why the company has to release a separate product for each major language it supports. Considering this, IntelliJ isn't in the same league as a general text editor such as vi or emacs; it is instead a special-purpose Java source editor.


That is called market segmentation, a well-known business technique.

You can use IntelliJ IDEA Ultimate and all the plugins.

Eclipse, Visual Studio, NetBeans, Xcode, Android Studio and many other IDEs have always supported multiple languages.


One thing that annoyed me the last time I tried it (the WebStorm variant, if I'm not mistaken) is that you can't use it as an editor only.

You have to create a project. For everything.

And of course, multi-second launch time is unacceptable for an editor in my book. It’s just too frustrating.

But yeah, autocompletion is amazing. It even understands image dimensions.

And it understands PHP, JavaScript, HTML and CSS, all mixed together in the same file, respecting indentation and syntax highlighting like nothing I've ever seen.


I can't find the source, but I remember hearing a quote along the lines of, "What separates a text editor from an IDE is that you have to create a project to save a file."

I feel the same way as you. I spent a good amount of time with PyCharm, WebStorm, and RubyMine.


This is the type of tradeoff I have with Emacs vs. an IDE. Certain IDE-like features in Emacs are good enough, even if they were better in the IDEs. But IDEs were designed for specific purposes, and having the flexibility of the same text editor everywhere is probably more important than possibly faster autocompletion.


Highly recommend the inotify tools. They make things like auto-reloading very easy to set up.
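
A sketch of such an auto-reload loop using inotifywait's monitor mode (the watched directory and the "reload" target are hypothetical):

    # act on each saved file as the events arrive
    inotifywait -m -e close_write --format '%w%f' src/ |
        while read -r path; do
            echo "changed: $path"
            make reload    # hypothetical reload target
        done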


Though it looks unmaintained, I use watchdog [1] for automatically running build and test commands on file changes.

[1] https://github.com/gorakhargosh/watchdog


Maybe look into the very unix-y fswatch, which is cross-platform too.

https://github.com/emcrisostomo/fswatch
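
Usage is similar to the inotifywait loops above; a sketch assuming a src/ tree and a "test" target:

    # -o batches events and prints one line per batch
    fswatch -o src/ | while read -r _; do make test; done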


Midnight Commander (mc) is also a great file management tool.


ranger (or rangerfm) is also quite nice.

It provides a three-column view with either:

* parent directory | current directory | child directory

* parent directory | current file | preview of the file (if possible in text)

I found it quite nice to use because it gives you a good picture of where you are and what your neighboring files are, which can be helpful when navigating a code base.



But isn't an OS, in and of itself, the ultimate IDE? I can't say I like the idea of writing software without any system calls at my disposal.


That is the case for C, because UNIX is its runtime; hence what was left out of ISO C ended up in POSIX.

Any programming language with a richer runtime can be OS-agnostic to a certain point. The runtime is the OS.


tmux + cscope + a $CSCOPE_EDITOR script that starts $EDITOR in a new tmux window == awesome.

EDIT: Also, add nested tmux sessions to make it awesome++.
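
The wrapper can be tiny: cscope invokes the configured editor as "editor +<line> <file>", so a $CSCOPE_EDITOR script along these lines could work (a sketch, with quoting kept simple):

    #!/bin/sh
    # open cscope's "+<line> <file>" arguments in a new tmux window
    exec tmux new-window "${EDITOR:-vim} $1 $2"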


Is it possible to write man pages for tasks rather than commands?

So you could have: man search, man restart, man compile, etc.


Manpage names are not tied to command names at all. Sure, sections 1 and 8 are conventionally sorted by command name, but the other sections have different purposes. Sections 2 and 3 document system calls and library calls, one function at a time:

  man 2 fork
  man 3 fread
Section 5 documents file formats and conventions:

  man 5 passwd
  man 5 rsyncd.conf
And section 7 is described by `man man` as "miscellaneous". There is some nice high-level overview documentation hidden in there, e.g.:

  man 7 signal
  man 7 socket
My all-time favorite from section 7 is `man ascii`. Very handy whenever you're pushing bits.

NOTE: If these commands fail for you, check whether you have the "man-pages" package installed (it may be named differently in your distro).


ls -ltrapR



