Although I'm unfamiliar with using pipes to provide parallel processing, could you elaborate? If you can pass to another thread I will be kicking myself for missing that.
You didn't miss it: each stage in the pipeline runs in its own process, so you're already using it. With, say, "grep -r foo ./ | do-something | do-something-else", the second and third stages will be operating on grep's output while grep is still running.
In addition to that you can get more parallelism pretty simply with xargs (http://offbytwo.com/2011/06/26/things-you-didnt-know-about-x...). If that's not enough then check out gnu parallel (https://www.gnu.org/software/parallel/).
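A minimal sketch of the xargs approach (the -P flag for parallel workers is in GNU and BSD xargs, though not strictly POSIX; echo stands in for whatever real command you'd run):

```shell
# Split 8 inputs into batches of 2 and run up to 4 workers at once;
# each batch is handed to its own process, so batches run in parallel.
printf '%s\n' 1 2 3 4 5 6 7 8 | xargs -P 4 -n 2 echo batch
```

Swap echo for something expensive (gzip, a test runner, an image converter) to actually benefit; note the output order is not guaranteed once -P is in play.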
While we're talking about unix as an IDE I want to put in a plug for inotifywait (https://linux.die.net/man/1/inotifywait). Combining this with a makefile, you can get some very fast builds with near instant feedback every time you save a file. I have one that executes unit tests and valgrind every time for instant memory leak detection. Another great use case is getting WYSIWYG-like features with latex files.
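A hedged sketch of that watch loop, written to a script file so it can live next to the Makefile (assumes inotify-tools is installed; src/, the "test" target, and ./run_tests are placeholders for your own tree):

```shell
# Save the rebuild loop as a small script; inotifywait blocks until
# something under src/ is written, then the build and leak check run.
cat > watch.sh <<'EOF'
#!/bin/sh
while inotifywait -qq -e close_write -r src/; do
    make test && valgrind --leak-check=full ./run_tests
done
EOF
chmod +x watch.sh
```

The -e close_write event fires once per saved file, which avoids the double-trigger you can get with plain modify events from editors that write in chunks.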
To me, it sounds like a tremendous scheduling issue if a build job can drown a whole system like that. Shouldn't there be some emphasis on non-waiting threads/processes that haven't done much over those who have hogged the CPU in the past without me having to manually adjust the niceness of every build task?
> If the -j option is given without an argument, make will not limit the number of jobs that can run simultaneously.
Although other implementations might be different of course.
% redo --jobs
redo: ERROR: jobs: missing option argument
More generally, I try to avoid optional arguments to command options, in line with the guideline in the Single Unix Specification. (I actually picked such ideas up many years ago, before the first POSIX specification, from a book by Eric Foxley titled Unix for Super Users, where there was an appendix on command line option syntax.)
Of course, if the user wants to use the number of cores available, then xe can work that number out and pass it as an argument; and indeed I have done that very thing in some of the package/make scripts that I have published.
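For instance, with GNU make and coreutils' nproc (the two-target Makefile here is a made-up stand-in with no real work in it):

```shell
# Toy Makefile: targets a and b don't depend on each other, so make is
# free to build them in parallel. (\t is the tab make requires at the
# start of every recipe line.)
printf 'all: a b\na:\n\t@echo built a\nb:\n\t@echo built b\n' > Makefile
# Cap the job count at the number of CPUs instead of an unbounded -j:
make -j"$(nproc)" all
```

GNU make also takes -l to skip starting new jobs above a given load average, which helps with the "build job drowns the whole system" complaint upthread.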
Despite its messiness, doing ad-hoc integration of command line tools to transform text is really handy. It also spreads the cpu utilization and memory usage across the commands that you use in the pipeline.
Personally, I think Prolog is the right way to go, but I have this idea that I don't want to spend the rest of my life typing into a teletype emulator.
For example, with Clojure you can use pmap and pcalls to achieve that.
I meet folks all the time who don't know about tmux and this blows their mind. Likewise having multiple panes in an SSH session... and with mouse control!
set -g mouse on
e.g. I write a line like r!fancy command arg1 arg2 .. and yank it.
Then go to the next line and do :@"
This will run the command (well, execute the yanked text as a colon-mode command) and read the results back into the buffer.
edit: The nice thing about this is that I can do this as a progressing document (command, output, command, output) and scroll back to see what I have done and what the result was.
I do use screen for the detach/reattach feature sometimes. But, ever since I started regularly using X Windows instead of a vt220, I use my window manager to multiplex. I will open multiple xterms and emacs X windows ("frames" in emacs terminology). I will never sub-divide one xterm or one emacs frame, and I only learned the command to undo an accidental windowing subdivide, much like I learned to abort from vi/vim if I accidentally get dropped into one due to a missing VISUAL environment variable.
If I can launch emacs through ssh with X forwarding, I will. If not, I'll open multiple xterms and multiple ssh sessions and run many emacs instances in -nw mode. Once in a while, I'll mount the remote files via sshfs and use my local workflow. Even locally, I am just as likely to have multiple emacs instances open as multiple frames from one instance, since I prefer to find files and open them from the shell prompt than screw around with file-opening dialogs in the editor.
Even back in the vt220 days, I was much more likely to use shell job commands to background and foreground for multiplexing rather than want to subdivide the already small console.
My workstation and my laptop computer are my interfaces to the world for 99.9% of my interactions. Without them, I am not working. The only exceptions might be touching a KVM console on a server in our machine room to see diagnostics (otherwise I would use SSH from my office) or a lab computer where I'd only be running local browser or demos.
Fedora Magazine has a pretty decent guide for setting up i3 to a functional state and I can recommend it: https://fedoramagazine.org/getting-started-i3-window-manager...
To read the environment variables used by a program you can use
tr "\0" "\n" < /proc/$pid/environ
This only shows the environment variables present when the program launched. If said program changes any environment variables during the course of its execution, /proc/$pid/environ won't be updated to reflect that.
I expect you're already aware of this but it's worth me mentioning just in case anyone reading this thread wasn't aware.
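Since /proc/$pid/environ is NUL-separated, the tr trick above just turns each variable into its own line. A self-contained demo of the same transform, with made-up variables instead of a live /proc file:

```shell
# Simulate environ's NUL-separated format and split it into one line
# per variable, exactly as tr does on the real file.
printf 'HOME=/home/me\0LANG=C\0' | tr '\0' '\n'
```

The NUL separator is why plain cat on the real file prints everything mashed together.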
However, an IDE like IntelliJ brings so much to the table that it's hard to imagine working without it. I know I'm much more productive using it than Emacs or Acme or Vi or any other editor that integrates well with Unix.
You can use IntelliJ enterprise and all the plugins.
Eclipse, Visual Studio, Netbeans, XCode, Android Studio and many other IDEs of yore always supported multiple languages.
You have to create a project. For everything.
And of course, multi-second launch time is unacceptable for an editor in my book. It’s just too frustrating.
But yeah, autocompletion is amazing. It even understands image dimensions.
I feel the same way as you. I spent a good amount of time with PyCharm, WebStorm, and RubyMine.
It provides a three-column view with either:
* parent directory | current directory | child directory
* parent directory | current file | preview of the file (if possible in text)
I found it quite nice to use because it gives you a good picture of where you are and what your neighboring files are, which can be helpful when navigating inside a code base.
Any programming language with a richer runtime can be OS agnostic to a certain point. The runtime is the OS.
EDIT: Also, add nested tmux sessions to make it awesome++.
so you could have: man search, man restart, man compile, etc.
man 2 fork
man 3 fread
man 5 passwd
man 5 rsyncd.conf
man 7 signal
man 7 socket
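A minimal sketch of the project-local man page idea above (the page content and names are made up; viewing via MANPATH assumes a man-db-style man):

```shell
# Section 1 pages live under man/man1; this is a bare-bones roff page.
mkdir -p man/man1
cat > man/man1/search.1 <<'EOF'
.TH SEARCH 1 "project tools"
.SH NAME
search \- grep the project tree for a symbol
.SH SYNOPSIS
.B search
.I pattern
EOF
# View it without installing anything system-wide:
#   MANPATH="$PWD/man" man search
```

The section number in the directory name (man1) and in the .TH line is what makes "man 1 search" vs "man 5 search" resolve correctly.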
NOTE: If these commands fail for you, check whether you have the "man-pages" package installed (it may be named differently in your distro).