Unix as IDE (sanctum.geek.nz)
237 points by yarapavan on Oct 6, 2016 | 209 comments



This was how Unix was presented to me when I was first introduced to it. I was told it was like industrial machinery - designed for those who know how to use it, efficient, dangerous if you aren't careful.

Which is exactly what a good tool should be.

When IDEs (the kind that comes as a GUI'ed bundle of things thought to be the most appropriate for the purpose by a committee of someones, somewhere) started becoming popular, I didn't get it, at all. Put aside tool tips (which annoy me far more than is rational); it seemed to me that the idea was to build a toy model workshop on the work bench of a fully outfitted workshop. Why would anyone want to use that? Why bother in the first place?

I mean, I do get it. And who knows, maybe if I used one of those long enough to break muscle memory and the conceptual assumptions that don't map, I'd change my mind. But so far, vi + shell + standard utilities + what I build have served me just fine, and if for some reason I were to need "more power", whatever that means, I'd probably move to emacs.

And I still maintain that tool tips/balloon help/obscuring the very thing you're pointing at is a stupidly rude anti-pattern. Even worse is not making it possible to turn them off.


The difference is IDEs understand the underlying structure of the code, including relationships between entities, symbol types in their context, indexes for searching, and so on. That allows you to express transforms that can't be expressed if everything is just a text stream. I can grab a chunk of a function in resharper and do an "extract method", and it can figure out data dependencies, or I can do a "find usages" for a variable called "map" and not pick up every comment that has the word "map" in it, or I can move a class to a new module and let it figure out all the call sites that need to change their using statements.

Comparing unix to an IDE is like comparing a CSV file to a relational database. They both store data in tables, but you're not going to replace Postgres with grep, I hope, even though you could do very basic "queries" with grep. And unix can never be a very good IDE, comparatively, because "text streams" are not a sufficiently good data structure for representing program structure.


The article mentioned ctags, which actually gets you quite far in this regard. Combined with CtrlP (a vim plugin) and the Silver Searcher, I navigate large codebases quite quickly. I do agree that full blown IDEs probably have better support for refactoring code, but so far the advantages of using vim/tmux/unix commands are much more valuable to me:

- it's more natural for me to have a shell as a starting point and jump into an editor, rather than the other way around; it suits my workflow much better

- I can combine tools in a very convenient way: `:!ls /tmp` real quick from my vim, fuzzy search some JSON file hidden inside an atrocious directory structure, `:!git status -s` to see what I've changed, and so on (roughly sketched after this list)

- IDEs are often language specific, which implies learning a new IDE for every language; I only need to know vim, which I use for every language I use but Java

- I'm more of a keyboard person rather than a mouse person, so terminal based tools that have intuitive keyboard bindings suit me better than IDEs which often require you to learn some really awkward key combinations to do things that I do very quickly in vim; just take a look at the cheatsheets for e.g. PyCharm/IntelliJ and you'll see what I mean (yes I know about the vim plugin for IntelliJ, but as an experienced vim user, it's not that great)
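
For anyone curious, the combination above looks roughly like this in practice (a sketch only; the symbol name is made up, and exact flags and mappings depend on your ctags/ag/CtrlP setup):

    # index the project once (Exuberant/Universal ctags)
    ctags -R .

    # search the whole tree quickly with the Silver Searcher
    ag 'render_widget' src/

    # then, inside vim:
    #   Ctrl-]            jump to the definition under the cursor (via the tags file)
    #   Ctrl-t            jump back
    #   :CtrlP            fuzzy-find a file (CtrlP plugin)
    #   :!git status -s   shell out without leaving the editor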


"I only need to know vim, which I use for every language I use but Java"

Java does seem to be the big exception in that 1) it's so overly verbose it almost requires an IDE to handle all of the boilerplate, 2) has enough static typing to allow deterministic refactoring, 3) is popular enough to support massive IDE engineering efforts.

I use IntelliJ for Java (and sometimes Scala), and Emacs for pretty much everything else.


I agree with you about language-specific IDEs; however, the exceptions are sometimes very good.

I use IntelliJ; it has intelligent support for Python, JavaScript (including Node), SQL, HTML, CSS and PHP, all working out of the box with very little setup required. It's the sheer integration of things.

I'm aware I could get close with emacs (except frankly its web modes suck), but I'd have to find all those bits myself, integrate them, and hope that it wasn't fragile (you know, you updated fizz, which was a dependency of foo, but bar uses a different version), and at the end I'd still have what I would regard as an inferior, less cohesive tool.

There are no absolutes in matters of taste, and that's what this is, I think.


vim-cscope gives the vim user everything the OP thinks he needs ..


There is nothing about text streams that prohibits tools from understanding whatever language you're writing in. Unless you have a very interesting compiler, your IDE is processing text, too.

But also, you seem to believe that communication between applications is somehow limited to pipes, which makes the comparison inapt.

Ignoring the category error (text format in a data file vs. server application + storage), please explain what your IDE does that cannot be communicated over a socket? Or dbus (oh god please don't, but the point stands)?


It's not about what UNIX can theoretically do. Yeah, sure, you can refactor a method over a socket, I guess, just as you could use VBA macros in Excel to write a Tetris game using spreadsheet cells as blocks.

The question is whether there is actually good refactoring support on the UNIX command line that can compete directly with the likes of ReSharper or IntelliJ IDEA? I haven't seen anything coming close to that yet.

The fact is, there are actually a few implementations of Tetris and Breakout for Excel, and probably a few command line refactoring tools here and there. But just as no one is going to play a clunky Tetris game in Excel just for the novelty of it, I don't see myself using an inconvenient and limited command-line refactoring tool.


> The question is whether there is actually good refactoring support on the UNIX command line that can compete directly with the likes of ReSharper or IntelliJ IDEA? I haven't seen anything coming close to that yet.

There's nothing stopping people from making it. IDEA Community is open source; I haven't looked at the code but I trust it can be decoupled and made into a separate utility.

If you want an example that's out there, Go has support for refactoring via tools like eg and gorename, and you can use them in an IDE, in a text editor or in a terminal with nothing more than GNU coreutils.
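
For reference, gorename is driven roughly like this from a plain terminal (the import path and identifiers below are made up for illustration; exact usage depends on your Go tooling version):

    # fetch the tool (golang.org/x/tools, Go 1.x era)
    go get golang.org/x/tools/cmd/gorename

    # rename a symbol project-wide; call sites are updated for you
    gorename -from '"github.com/example/project/widget".OldName' -to NewName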


Then people can just go ahead and make it, instead of complaining about lazy developers locking themselves into JetBrains tools or Eclipse, or trying to do everything from within Emacs or Vim.

Right now you've got very basic refactoring tools for Go and JRefactor. Maybe a few other tools I don't know.

But in that case I don't see the point of running this tool from the CLI instead of configuring Vim/Emacs/{insert your favourite IDE or editor} to run it on the function under your cursor with just a few keystrokes.


> There is nothing about text streams that prohibits tools from understanding whatever language you're writing in. Unless you have a very interesting compiler, your IDE is processing text, too.

Well, the text stream lacks context of related entities. Let's say you're piping a single source file. The program you're piping to can certainly parse "import foobar", but how will it know what's in "foobar"? You could argue that it could parse foobar, but then it would need to recursively parse all of "foobar"'s dependencies too, and so on, which seems pretty inefficient if you're doing that a lot compared to a program keeping it in memory. Sure you could create files to cache that data, but at that point you're sort of waltzing into IDE territory anyway.

Besides, the "unix as an IDE" idea is based on using general purpose utilities, not specialized language specific ones, and those utilities make no strong assumptions about the input text.


>The program you're piping to can certainly parse "import foobar", but how will it know what's in "foobar"?

That's where the "understanding which language you're in" part comes in.

> You could argue that it could parse foobar, but then it would need to recursively parse all of "foobar"'s dependencies too, and so on, which seems pretty inefficient if you're doing that a lot compared to a program keeping it in memory. Sure you could create files to cache that data, but at that point you're sort of waltzing into IDE territory anyway.

The whole point is that "Unix is your IDE". If a specialized tool benefits from caching, it can use it. If more than one tool can share the cache, pass it as a parameter or pipe it in. If it speaks in a universal language (ideally, simple text) and it's a specialized tool, it's certainly following the Unix way.

> Besides, the "unix as an IDE" idea is based on using general purpose utilities, not specialized language specific ones, and those utilities make no strong assumptions about the input text.

That's another arbitrary line. It makes sense that awk or grep work with any type of text. It doesn't make sense that gcc or a tool like I just explained "makes no strong assumptions" about what's coming in.


> That's where the "understanding which language you're in" part comes in.

... which standard unix utilities can't do. grep and awk can't reliably determine whether a symbol is in a comment, and I can't tell them "rename symbol Foo, but only if it's used as a type identifier in my project".

The question isn't if unix could theoretically work as well as an IDE if you added a few thousand custom userland programs that understood language context. The question is if unix, out of the box right now, is competitive with a high end IDE. It's not, because it can't possibly understand your code.

Whether you like IDEs or not, the entire point of an IDE is that it's more than a dumb text editor, because it understands what you're typing into it. If all your IDE offers is text editing functionality unix can do, then it's a bad IDE.


> The question isn't if unix could theoretically work as well as an IDE if you added a few thousand custom userland programs that understood language context. The question is if unix, out of the box right now, is competitive with a high end IDE. It's not, because it can't possibly understand your code.

You need to download tools for your language just like you downloaded an IDE. The whole point of Unix is that you have specialized, composable tools for your purpose. Don't move the goalposts.


Which tools do you know of that will handle automatic refactoring in, say, Java, from the command line? Genuinely curious; I've wanted something for this.


gofmt is an example of a tool for refactoring on the command line.
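
To make that concrete, gofmt's -r flag does simple pattern-based rewrites straight from the shell; the rule below is the classic example from its own documentation, so treat it as a minimal illustration rather than a full refactoring workflow:

    # rewrite every match of the pattern across the tree, writing files in place
    gofmt -r 'a[b:len(a)] -> a[b:]' -w .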

There isn't much demand for a similar tool for Java, but I have no doubt that a few months of effort could build one out of Eclipse. Its startup time probably wouldn't be great, though. We already have eclim, which treats Eclipse as a daemon serving refactoring APIs to clients, to mitigate the configuration and startup problems.


"There isn't much demand for a similar tool for Java..."

What empirical evidence do you have for this?

At one point I looked very hard for good Java tools for Unix and/or Emacs, and found nothing. I imagine many others have done similar investigations and came up empty.

So I think it's an open question whether Java refactoring tools for Unix don't exist because there is no demand, or whether there are engineering and productivity reasons why such tools work better as part of an IDE.

(I will say, though, the existence of similar command line tools for Go indicates similar tools might be possible for Java, also. Or maybe this is just due to Go being designed to be easy to parse?)


The reason there isn't demand is because the same tool for Java would necessarily be a lot slower on startup. The version of the tool that eliminates the slowdown by daemonizing does exist: it's called eclim.

Personally I sometimes use eclimd in Eclipse from emacs. But it's a bit flakey. Usually I switch between an IDE and emacs depending on whether the current task is text-intensive or not.

IMO the entire language-specific modes thing in emacs is a collection of hacks (e.g., using regexes for highlighting! highlighting as a characteristic of the text!) and woefully under-architected; this is a much bigger problem than the lack of any language-specific tool. Until elisp is replaced with a faster language with less dynamic scope by default (Scheme would be fine), I don't really see it improving.


I am not the parent you reply to, but see my previous reply for the capabilities you miss in 'dumb' editors.

"here is nothing about a text streams that prohibit tools from understanding whatever language you're writing in"

Well, it might be true that you can deduce the language of some text stream, but you need to make a big step in order to interpret it. You are talking about a compiler. Well, a compiler takes a compilation unit, not some stream of text. To infer the semantics of a program, the program needs to be present, not just a part of it. Granted, a program might be broken up into independent modules, of course.

Make no mistake, a 'dumb' editor like Sublime Text could understand a PHP Symfony or a Java Struts project if it had the same analysis tools available in Eclipse, for example. It could be done, but it is massively more difficult than just running a syntax highlighter and a rough tagger over some text. It's sad, because I believe that fast, slim editors that know as much as a massive IDE are the best of both worlds.

I have seen the code for Eclipse -- there is so much dancing, it's a miracle that handling a key press ever completes.


> I have seen the code for Eclipse -- there is so much dancing, it's a miracle that handling a key press ever completes.

In many IDEs it barely does. I've had issues with Eclipse, Visual Studio and IntelliJ at some point regarding input latency or the total freeze of the UI while it tries to handle magic with my input in the background.

Visual Studio has gotten progressively better since VS2010, especially with Roslyn; IntelliJ-based products have made strides since IDEA 13 where they really started to focus on input latency; Eclipse was never quite as bad, but it's a pain to use compared to VS or IntelliJ so it's irrelevant that it freezes 10% less in my eyes.


Text editors do have this though, YouCompleteMe for instance does semantic analysis on several languages to generate its completions (using different backends, like Clang or Jedi). Some of the Haskell plugins for Vim and Emacs are also downright amazing with the sort of code analysis and even code derivation they can do right in the editor.


"Make no mistake, a 'dumb' editor like sublimetext could understand a php symfony or a java struts project, if it would get the same analysis tools available in eclipse for example."

And at that point, it would be an IDE.


Of course there's nothing a text stream cannot do. Because, at the very worst, you can always feed the text stream to a parser to convert it into a structured object.

But I don't want to do the gluing. I want to refactor code, not teach the tool how to understand the code.

Why do you need a text editor when everything is just bytes?


This is the equivalent of arguing that there are no differences between two languages because they're both Turing complete.


I don't see where they were saying there's no difference.


> The difference is IDEs understand the underlying structure of the code, including relationships between entities, symbol types in their context, indexes for searching, and so on.

This is only true because purpose-built IDEs embed the tooling to do so, but nothing prevents said tooling from being extracted and used in any way fit by any editor or non-editor tool. Case in point: gocode/gorename, C#'s Roslyn project.

> And unix can never be a very good IDE, comparatively, because "text streams" are not a sufficiently good data structure for representing program structure.

The article is not about Unix as pipes of streamed data (which BTW can very well be binary) but Unix as individual tools that compose into a greater whole.


This, a thousand times. Text is beautiful, but if your editor doesn't understand it, you are, quite paradoxically, limited in your editing of it. Think refactoring, think auto-completion. There is a difference between making type-correct suggestions and saying "foo? oh, what about foobar, I saw it mentioned somewhere?!"

I often even use auto-completion in eclipse to get the hang of a library and explore its capabilities.


An IDE is just a text editor with the aforementioned tools built in. Such tools are available for the terminal as well - both command line based and as vi or emacs plugins.

Saying UNIX is like a CSV compared to a relational database is, frankly, a hugely ignorant comment. The real difference between UNIX and an IDE is that the tools available in an IDE are discoverable, whereas UNIX requires a little more research and self-assembly.

I frequently flitter between using a GUI IDE and just working entirely in the terminal - depending on my mood. Generally I find the terminal to be more productive, albeit I've had 30 years of experience in command line environments so I'm very much at home in a terminal. However, the one thing I do find quicker in a GUI IDE is switching between different functions in different source files. Not that it's slow on the command line either, but that's the biggest inconvenience I have. So these days I generally use a GUI text editor for reading and writing the code and the command line for code refactoring, compiling, debugging, etc. It's a ratio that seems to work well for me.


There really needs to be a standard interface that all compiler toolchains can support for emitting program structure. Right now, all implementations deal with this problem in their own way.

For C# there is OmniSharp based on NRefactory, and also Microsoft's Roslyn. For C++ there are tools based on libclang. Rust has Racer. It would be nice if there were an interface an editor could implement which gave access to all of these languages.



Wasn't Steven Yegge working on something like this for Google?

A toolchain to handle any/most languages?


> And unix can never be a very good IDE, comparatively, because "text streams" are not a sufficiently good data structure for representing program structure.

This is a simplistic view of the system; you essentially have binary data over streams and can encode it in any format you wish.


>This was how Unix was presented to me when I was first introduced to it. I was told it was like industrial machinery - designed for those who know how to use it, efficient, dangerous if you aren't careful. Which is exactly what a good tool should be.

The problem is that this is a whole lot of cargo cult.

It does have the efficient parts, but it also has a lot of accumulated cruft, archaic limitations, arbitrariness, and plain bad decisions.


> The problem is that this is a whole lot of cargo cult.

Please do explain. Where do I use symbolic representations of technology to attempt doomed magic rituals?

As far as the rest of it, everything accumulates cruft. That's why people write new things. And yes, there have been bad decisions. (Just, IMHO, meaningfully fewer than other OSes.) But perhaps I'm blinded by my cargo-cultish ways; can you point to an archaic limitation that you've run into recently?

I'm ignoring "arbitrariness", as that is generally both highly opinion-driven and subject to correcting ignorance of the decision-making process that went into it. So usually "arbitrary" things are actually one of the other listed categories.


symbolic representations of technology to attempt doomed magic rituals

I believe that's the official tagline for GNU Autoconf?


Exactly. Sometime in the early 1990s, the first configure.ac file was created. Today, every configure.ac file in existence can trace its ancestry to that original file through a long, unbroken chain of copy-paste operations.


>Please do explain. Where do I use symbolic representations of technology to attempt doomed magic rituals?

Perhaps you are not aware that "cargo cult" has a metaphorical meaning in computing: carrying forward baggage that's no longer relevant because it worked/mattered in the past, and giving that past more importance than an objective view would.

>As far as the rest of it, everything accumulates cruft.

That's hardly an excuse not to get rid of it periodically, out of fear of losing compatibility with some 30+ year old script or device mode.

Terminal EMULATORS are a perfect example in an age where someone close to 30 probably has never even seen an actual terminal.


> The problem is that this is a whole lot of cargo cult. It does have the efficient parts, but it also has a lot of accumulated cruft, archaic limitations, arbitrariness, and plain bad decisions.

I don't think the problem is cargo culting. Replacing things with better solutions is hard. Take the venerable ANSI terminal. It's archaic and cryptic, I think most people would agree on that. Yet proposing a better, widely useful and backwards compatible alternative is extremely hard. Which is why no one has succeeded yet.

There is still a lot of innovation, though. 10 years ago, we didn't have git, or ag, or autojump. Gdb has improved A LOT in the past few years, even if it's not immediately obvious when you open it.


> Yet proposing a better, widely useful and backwards compatible alternative is extremely hard.

We keep inventing new stuff, but always end up just using it to emulate and display terminals in new and interesting ways.

Even my new, shiny 4G LTE + iOS + touchscreen mobile device is used to run a terminal emulator over ssh half the time. I've even used its web browser to run a web RDP session .. to display a terminal console.

I think there's just something soothing about a screen full of text. It's a very efficient way of displaying information.


Only certain types of information. It's not a great way to display, for example, a comparison between two similarly-edited movie trailers. Or a map of Bremen. Or the annotations on your 2016 second quarter financial sum-up spreadsheet.

The big problem with the entire Unix philosophy is that it assumes all important data is text, when in the real world very little of it is. If you're one of the few people who can get away with text-based everything, it's great. If you're the vast majority of the population, it's a terrible idea.


> it assumes

True, UNIX was designed, in part, to make the processing of textual data as easy as possible. It was never intended to be running inside a photo camera or a drawing pad.

On the other hand, from the very beginning UNIX was just as capable of processing binary data and controlling various devices, such as typesetters and graphical displays, as any other reasonably powerful OS of the time.


This is HN, so I was talking only about systems admin and nuts-and-bolts development, but I would also argue that in all of your examples, having a terminal in addition to the other display formats would help convey more information as well as offer a better command interface. A good example would be AutoCAD, which has the drawing window but also a powerful terminal (I believe it was Lisp-based) that you used in parallel.


Git is just repeating all the same mistakes all over again, just when it looked like things were starting to maybe get a bit better. I'm amazed you're citing that as a positive example.


If you're going to hate on something almost universally respected, you're going to have to explain yourself.


Git was made in, what, 2005. We knew about discoverability then. We knew about accessibility then. We knew about user testing, about UX. Git makes use of zero of those concepts. In fact, it seems to be actively hostile towards its users^.

We also had source control packages that were not only easier to use, but had more features: TFS let you put a lock on files that don't merge well, for example. Both TFS and Subversion allowed you to check-out only small portions of the repository.

Git has some nice things. It's generally fast, for example. It allows developers to work offline. But when the developer community apparently unilaterally chose to adopt it, we threw away a lot of good stuff, let's not forget that.

I'd also question that Git is "universally respected". I've worked with a great many programmers who found it just as unpleasant and unfriendly as I do. The reason people use Git is because other developers demand it as a condition of employment, not because it's enjoyable. In that respect, it's in the same class of application as, say, OracleApps or Lotus Notes.

---

^ Yes, someone's going to come in here and say, "well it's a professional tool for professionals therefore it's ok that it's difficult to use and hostile and not accessible to people who can't use a CLI due to physical limitations because it's a professional tool for professionals."

Well.

Sony Vegas is a professional tool for professionals, and somehow it doesn't have even remotely the amount of usability problems Git does. So is Adobe Photoshop. So I don't buy that argument.


> We knew about user testing, about UX. Git makes use of zero of those concepts. In fact, it seems to be actively hostile towards its users^.

I was with you all the way here, and then suddenly...

> We also had source control packages that were not only easier to use, but had more features: TFS let you

Suddenly you mention TFS as an example of something good. Mind blown.

TFS is absolutely among the worst "popular" version control systems I've worked with, and maintaining several branches and merging between them is pure hell.

In TFS a branch is something heavy which moves at tectonic speeds. Trying to maintain agility in a world of TFS is just not viable.

> TFS let you put a lock on files that don't merge well, for example.

A "key" feature which more often than not is a source of problems you only have with TFS.

90% of the failed merges I've had with TFS have been because of its completely inadequate handling of binary files.

> Both TFS and Subversion allowed you to check-out only small portions of the repository.

This has nothing to do with Git specifically. This is a difference between centralized and distributed. If you want the benefits of distributed source-control, you'll have to pay the cost.

> I'd also question that Git is "universally respected".

Yes. Universally will obviously be easily disproven. I'd say "widely respected" instead.


> Suddenly you mention TFS as an example of something good.

You need to work on your reading comprehension. The only claim I made about TFS is that it has a feature (locking files) that Git doesn't.

I cited it as an example that Git has fewer features than its competitors. I never said anything about its quality.

> This has nothing to do with Git specifically. This is a difference between centralized and distributed. If you want the benefits of distributed source-control, you'll have to pay the cost.

Fine; but that's not the point.

The point is that it's a feature I want. And now that Git has seemingly taken over the field of source control, I no longer have it. Nobody does. It's gone.

I'm not a fan of newer software having fewer features than older software.


> I cited it as an example that Git has fewer features than its competitors. I never said anything about its quality.

Fair enough.

> This is a difference between centralized and distributed... Fine; but that's not the point. The point is it's a feature that I want.

But file locking is realistically just not implementable in a distributed source-control system. This sort of mechanism only has a home in centralized architectures.

> I'm not a fan of newer software having fewer features than older software.

This may be a nitpick about the way you decided to express yourself, but just in case: Your math is wrong here.

You've had 1 capability you appreciate removed, while having tons of others added. That still yields you new software with more features, not fewer.


> But file-locking is realistically just not implementable in a distributed source-control system.

Ok. Then I guess it was a bad technical choice to pick that system if it prevents you from matching the features of your competitors, wasn't it?

I get the sense you're still missing my point here, but whatever.

> You've had 1 capability you appreciate removed, while having tons of others added.

Like what? The only one I'm aware of is "work offline". Which TFS also offers in the newer versions.

(Also: again, work on your reading comprehension. I talked about two capabilities that TFS and Subversion have that Git lacks.)

I'd like to hear about these "tons" of features Git provides over its competitors.


Git allows you to micro-manage your work by committing often and diffing against a previous known good state, with easy rollback as an option all the way.

But this is not really friendly to issue a PR for, so git also allows you to rewrite your commit history by squashing commits, by reordering commits, and basically cleaning up after the fact, before publishing for code review, public release or whatever.

If you decide that code you've written in one private repo logically belongs in another public one, you can create a new repo for the select files in your private repo which you want to share without losing version control information or any revision data.

Git has a flexible diff engine and lets you, for instance, plug in pandoc for diffing word-processing files, letting you actually handle non-VCS-friendly formats like MS Word through Git.
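
(For anyone who wants to try it, the wiring is roughly this; the driver name "pandoc" is arbitrary and this assumes pandoc is on your PATH:)

    # route .docx files through a custom diff driver (append to .gitattributes)
    echo '*.docx diff=pandoc' >> .gitattributes

    # tell git how to turn those files into text before diffing
    git config diff.pandoc.textconv 'pandoc --to=markdown'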

Git is modular and lets you plug in pretty much any thing you like.

And people like to, so there's modules for pretty much everything around.

Like VCS bridges converting TFS and SVN repos to Git (and allowing you to merge seamlessly, while committing back to the monolith).

Etc etc. The list goes on.

Git gives me the flexibility I need, to do pretty much anything I can imagine.

Except locking files. I'll hand you that one. It won't let me do that. But I can set up a central Git server where I'll reject commits which tamper with pre-agreed "locked" files. So workarounds obviously exist.
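
(That workaround is roughly a server-side pre-receive hook like the sketch below; the locked-file pattern is made up, and it ignores edge cases like pushes that create brand-new branches:)

    #!/bin/sh
    # hooks/pre-receive on the central repo: reject pushes touching "locked" paths
    LOCKED='^(assets/big\.psd|db/schema\.sql)$'   # hypothetical locked files
    while read oldrev newrev refname; do
        if git diff --name-only "$oldrev" "$newrev" | grep -Eq "$LOCKED"; then
            echo "push rejected: a commit touches a locked file" >&2
            exit 1
        fi
    done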

I'm not saying Git is perfect, but it works for me, for the needs I have. Which is more than I can say for TFS or SVN.


> Yes, someone's going to come in here and say, "well it's a professional tool for professionals therefore it's ok that it's difficult to use and hostile and not accessible to people who can't use a CLI due to physical limitations because it's a professional tool for professionals."

Oh that's nonsense. First of all, because git is a CLI application you can easily build a GUI for it, and there are a lot of them for you to choose. You can't do the reverse, and good luck integrating your tightly coupled WinForms application with other tools.

Second, what "physical limitations" put people into a position where they can code but they can't use a CLI?

> Sony Vegas is a professional tool for professionals, and somehow it doesn't have even remotely the amount of usability problems Git does. So is Adobe Photoshop. So I don't buy that argument.

Multimedia artists are among the people who are most resistant to ever switch tools due to how much they have mentally invested into the complexities of their go-to application. Your example is far from good.


> Oh that's nonsense. First of all, because git is a CLI application you can easily build a GUI for it, and there are a lot of them for you to choose. You can't do the reverse, and good luck integrating your tightly coupled WinForms application with other tools.

While I agree his example (TFS) is rubbish and all that, I'll hand it to Microsoft that they've at least considered developers.

TFS source control comes with fully scriptable command line tools, not to mention a web-based API.

You can automate TFS-based processes. The WinForms are just there to make it more usable for end-users (and obviously because nobody else is going to bother making a GUI for this source-control system).


> Multimedia artists are among the people who are most resistant to ever switch tools due to how much they have mentally invested into the complexities of their go-to application. Your example is far from good.

And here we are talking about people not wanting to use an IDE because of how much they have invested in their favorite workflow mental model.


"Second, what "physical limitations" put people into a position where they can code but they can't use a CLI?"

It's not the lack of a GUI, it's that the CLI interface is very poorly designed.


> Oh that's nonsense.

Ok...

> First of all, because git is a CLI application you can easily build a GUI for it,

Actually you can't, because Git has features like pre-commit hooks, and it's literally impossible to translate the output/messages of those into a reasonable form for a GUI to consume (since it's just freeform CLI output text). The best you can do is simply echo them back to the user in a stupid text dialog box. You'll notice all Git GUIs either don't support pre-commit hooks at all (they always commit with them disabled), or display their output using the lazy text-dialog-box method.

The other limitation, of course, is that since Git has no library, it's impossible for a GUI to be based on Git without actually using Git CLI commands in the background. Which means if the Git CLI changes its commands, it suddenly breaks all GUIs built on top of it. (GUI clients generally "solve" this problem by shipping their own Git binary and using it exclusively, even if a newer one is on the same system.)

(It's funny that you mention WinForms. What Git does with its CLI commands is pretty much exactly equivalent to a WinForms app with business logic in the event handlers. Except where any WinForms developer would say, "why are you mixing up the UI and the program logic? That's a terrible idea!" apparently Git developers think it's great and double-down.)

It's impossible to build a truly quality GUI without the product being designed with that in mind from the start. Git wasn't. And there was simply no excuse for that in 2005.

> You can't do the reverse,

The correct way of writing an application is to put all the functionality into a library, then keep your user interfaces separate from that library. So while you're right that you can't do the reverse (build a quality CLI from a GUI application), if the developers of Git had done their job correctly, you wouldn't have to. Git would have a core library that anybody could build any kind of user interface on top of.

> Second, what "physical limitations" put people into a position where they can code but they can't use a CLI?

Dyslexia. I can manage in an IDE with a large dependence on auto-complete, but I can't do a CLI.

Since I grew up on Macs and it's only relatively recently that crud like Git has been seeping into the Windows development scene, this had never been a limitation for me in the past. Worse than being user-hostile, Git makes me feel disabled in a way no other program has.

> Multimedia artists are among the people who are most resistant to ever switch tools due to how much they have mentally invested into the complexities of their go-to application. Your example is far from good.

I'm sorry my examples do not meet your exacting standards.


The fact that the git man page generator is so successful at creating pages full of garbage indistinguishable from the real thing should be a hint:

https://git-man-page-generator.lokaltog.net/


I'm a vote in the "git sucks" camp; it's not universally respected amongst the programmers I know, we just put up with it because it's dominant.

I always preferred Mercurial, but that fight is largely lost. I could use a Mercurial -> whatever bridge, but I'd still need to know git for when things break.

Git's UI is simply horrible, in my opinion.


I want to thank you for posting. I know that the groupthink around Git, especially on forums like this one, is so powerful that even mentioning that Git has flaws and isn't some kind of angelic, perfect software from heaven runs the risk of a billion downvotes.


People get silly when it comes to tools they've invested time in; lots of people don't like to think they made the wrong choice and will defend it, often to the extreme. I don't get invested in tools; I use what works for me and switch when it seems reasonable.

Git is a necessary evil basically.


Well, like I said above, I look at it as a "condition of employment" piece of software. Nobody would use Lotus Notes for email if they had their own choice in the matter. Nobody would use OracleApps for time keeping if they had their own choice in the matter. No, you have to use it to be employed.

Unfortunately, software in that particular niche has to be really good or it'll engender a lot of hate. If you're Skype, people generally don't mind because Skype is relatively easy and pleasant to use. If you're Git, people are going to hate you.


>I don't think the problem is cargo culting. Replacing things with better solutions is hard. Take the venerable ANSI terminal. It's archaic and cryptic, I think most people would agree on that. Yet proposing a better, widely useful and backwards compatible alternative is extremely hard. Which is why no one has succeeded yet.

Making a better alternative isn't hard. Getting everyone to use it is.


I forget where I read this, but someone once talked about how a new standard can't just be "better". Instead it has to be _a lot_ better and probably even bring a lot of really good new stuff to the table, otherwise not enough people will bother switching to it.


Another problem is the scope of the stack and problem space to be addressed. Someone can easily come up with a better solution to some subset of the problem, but it won't be interoperable and have the features necessary to solve the problem that all the people using the original solution expect. If you have a big enough fundamental improvement you can motivate the change to happen over time, but by the time that transition is done you've accumulated a whole different set of crufty legacy.


> And I still maintain that tool tips [...] is a stupidly rude anti-pattern

Amen, brother


And that moment when you point at it, but that f-ing pop up doesn't actually pop up...


Hey, don't ask me. I'm an emacs user. I have a fully outfitted workbench on top of my workbench, specifically optimized for text editing, which can interface with the workbench beneath it, so you never have to leave.


"I was told it was like industrial machinery - designed for those who know how to use it, efficient, dangerous if you aren't careful."

Neal Stephenson expressed this same idea very eloquently:

http://www.cryptonomicon.com/beginning.html


> vi + shell + standard utilities + what I build

That is good. Can you tell us, what exactly did you build with "vi + shell + standard utilities + what I build"?


Mostly systems tools for managing operations in companies with complex internet presences.


> ... dangerous if you aren't careful.

> Which is exactly what a good tool should be.

That's a really stupid requirement.


It lacks the I in IDE. Those different tools are not integrated.

In a Java IDE the debugger steps through your source in the same editor view that you just used for editing. You can hover over each identifier while you code or debug to get the docs in a floating panel. Editing incrementally computes compilation errors while you type. Content assist has complete knowledge of the types at your caret position and thus can make accurate suggestions about what can be inserted there. Saving the file while debugging automatically compiles the code, runs tests and splices it into the running VM, so you can continue debugging the running process with the new code while also showing you which lines you changed relative to the most recent VCS commit. And of course there's more.


The I is Emacs. It adds a heapload of tools on top of what's already there, integrates what is already there, and is a frankly awesome environment. It has most of what you described, but it starts faster than an IDE, and is generally better: I don't think an IDE will be beating Paredit and Slime/Geiser, or JS2, or gdb-mode, any time soon.

Not to mention, compared to most IDEs, Emacs is trivial to extend. You know those really simple plugins that provide a tiny amount of incredibly useful functionality? Yeah, we have those, but most of them are so trivial to implement that they're just snippets you can copy into your config. Once you get the hang of elisp, you can be writing real, useful commands in a matter of minutes. Sure, not the big stuff, but still things that matter.


I've used emacs for years. I've invested a great deal of time in learning it. I'm not sure it was worth it. When I think about the opportunity cost involved in internalizing the kb shortcuts, apis, tuning emacs configs, getting various plugins working together, setting up this or that language support - and on and on - that time could have been better spent learning more useful things.

This feeling is especially strong when you use an IDE that does more out of the box with fresh install, than you could make emacs do after 6 months of tinkering and tuning.


Same here, now I'm a disciple of http://spacemacs.org/


Difference is fitting the tool to your likes or fitting your likes to the tool. Some people prefer one of those, some prefer the other.


I have a guilty confession to make. I don't know how to use Visual Studio. Which seems absurd, because I am a heavy Emacs user. The last time I tried to use Visual Studio (which was about six years ago) I found it kept getting in the way of what I was trying to do. It almost had too much complexity. I ended up throwing my hands in the air and saying 'forget this, I'm going back to what I know', which is Unix and Emacs. At this point I think I'm too entrenched in my habits to have the patience to give anything else a try. Maybe slavish adherence to my tools makes me a bad developer, but if it isn't broken, why replace it?


If you can't debug with just print statements, you're doing it wrong :)


"My primary debugging tool is Console.Writeline(). To be honest, I think that's true of a lot of programmers." --Anders Hejlsberg

If Anders Hejlsberg does it, there's no shame in it.


There's no shame in it but that doesn't mean that it's the best or most efficient way of doing things.

I've found that stepping through code in a debugger at a human pace, and getting to really understand what's happening when a bug occurs is invaluable.


I don't trust debuggers with multi-threaded programs.

Organized tracing (with multiple verbosity levels) built right into the application can go a long way before you actually need a debugger.


One problem with this is that your code often ends up in a state where only step-through debugging works anymore. It might become too complex to reason about just by looking at the code, from the types alone, or by printing data.

Same problem happens with other methods too: If you develop solely with unit and integration tests, it might actually be quite difficult to get a step-through debugger set up to debug your application. I worked at a company where I was the only one who used a step-through debugger, and some uses of compile-time metaprogramming would frequently break the debugger.

And if you use multiple techniques (unit tests, printf or equivalent, step-through debugging), it's generally easy to use whichever is appropriate.


Like every other tool in programming, different people have different preferences and experiences. Symbolic debuggers have always ended up being a waste of my time, but I don't try to extrapolate from there to everyone else's preferences.


With prints and a backtrace function, you can make up for most debugger usage. But for tracking down heap corruption, a debugger with memory breakpoints is the bee's knees.
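
(In gdb terms that's a watchpoint; a minimal sketch, with made-up variable names:)

    # inside a gdb session: stop when the value changes
    (gdb) watch obj->refcount
    # or stop whenever the location is read (hardware support required)
    (gdb) rwatch global_table[42]
    # run until the corrupting access fires, then inspect the backtrace
    (gdb) continue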


Yeah. Sometimes I find it easier to write a program parsing log files with the line number and PID to step through.

Sometimes I think the people who have success with debuggers often must be working on a different class of problems than I am familiar with.

Numerical problems can usually be fixed with a series of targeted asserts and logs.


It's funny but, in the embedded world, it's scary how archaic things are.

For example, just a few hours ago I convinced a colleague to try using the debugger. Our hardware has had a functional JTAG-based debugging toolchain for years, but people still haven't picked up on it. I'm the new guy who spearheaded it in the team -_-


For my 200 and 300 level embedded software papers, I was essentially stuck with using flashing leds and printf to debug code.

For my 400 level embedded systems design paper (which was actually a hardware design paper, we weren't graded on the code), we built boards that we could program and debug with JTAG.

I was stuck on a pain point for hours until one of the tutors showed me how to use GDB with the boards via JTAG. It took me literally 5 minutes to fix the problem. Being able to step through the code line by line allowed me to see exactly where it was breaking, and why it was breaking.
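
(The setup being described is roughly this kind of thing, assuming an ARM target and an OpenOCD-style JTAG gdb server; the port number and toolchain prefix depend on your probe and chip:)

    # start your JTAG gdb server (OpenOCD, a vendor tool, etc.), then on the host:
    arm-none-eabi-gdb firmware.elf
    # attach to the probe's gdb stub and step at source level on real hardware
    (gdb) target remote localhost:3333
    (gdb) break main
    (gdb) continue
    (gdb) step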

If I'm ever doing embedded development again (unlikely as I'm now employed as a web developer), I don't think I'll be able to function at all without a proper debugging environment.


> I was essentially stuck with using flashing leds and printf to debug code.

Should be enough for anyone. Some people used flashing LEDs to dump an entire firmware! ;)

http://hackaday.com/2008/05/27/porting-chdk-to-new-cameras/


Back when I was working on embedded, I typically wouldn't even bother looking at the debugger. They never helped.

The general workflow, when presented with a new system, was something like:

(a) Board would arrive. Admire it for a bit.

(b) Look suspiciously at the supplied CD. Gingerly insert it into computer. Oh, look, a Windows install.exe. Insert it into the Windows computer next to mine (with its screen and keyboard slaved to mine with x2vnc, which is great). Install.

(c) Load the terrifying, buggy, proprietary IDE. Close it again. (This was in the pre-Eclipse days. You really had no idea what you were going to get here.)

(d) Search through the vast pile of useless guff which it had installed for the embedded copy of gcc. Find it. Also find the BSP libraries, and link scripts.

(e) Realise it's a terrifying, buggy, proprietary-patched version of gcc where the source package doesn't match the binary.

(f) Attempt to find whatever terrifying, buggy, proprietary tool actually downloads images onto the board.

(g) From the command line, write a tiny makefile which uses everything found in (d) plus (f) plus the terrifyingly misspelt quote documentation unquote (supplied in a PDF on the CD) and attempt to produce and run a 'Hello world' image. Download and run it.

(h) Assuming (g) worked, bolt it on to our existing gcc-and-make based build automation and actually start work.

Any debugger was usually so tightly integrated into the IDE, which was always set up to assume a particular project layout which didn't match our source layout, that it was usually more trouble than it was worth; particularly as our product had a lot of JIT stuff in it, which a source debugger couldn't help with much anyway.

The very best boards had an on-board monitor which gave you a Commodore PET-style assembler debugger. One even had hardware watch and breakpoints! The ability to single step through stuff, via a serial terminal, with no prior setup required, was amazing. It was sufficiently robust that even when the board really crashed badly it would drop into the monitor and you could examine what had gone wrong.

I have a particular hatred for the debuggers which required a slave task running on the board itself. (a) sucks to be you if you were running a different OS; (b) oddly enough, if your app crashed and scribbled all over memory, it tended to stop working...

</rant>

Edit: Oh, I forgot to say --- this was mostly before JTAG was ubiquitous, so debugging options were either a terrible, proprietary serial monitor or a terrifyingly expensive ICE unit. JTAG did come along later, and it was miraculous; you could even single-step through interrupt handlers with it! But it wasn't standardised and each board typically had its own interface, which wasn't supplied, plus its own software. Then not long after I got out of the game.

...I'm having flashbacks now. Downloading Java images onto an M-Core development board at 100-200 bytes per second with a 1 in 10 chance of a failed download for every megabyte. You think I'm exaggerating. I'm not. I still have the CPU from that board somewhere; I ripped it off once we were finished to make sure that nobody would ever have to use it again.


Fortunately, most new design starts in the embedded world these days are ARM SoCs. Most (not all) ARM chips use a standard 20 pin JTAG connector. The Segger J-Link supports a large number of parts for debugging.

The BSPs are still as scary as they ever were, though.


There there...

This is why I have a stash of whiskey at my desk ;)

(though honestly I haven't touched it in a while. Impairs debugging, really)


That's funny, at my old job we actually called whiskey "debugging fluid"


In the embedded world the quote from Mickens "I HAVE NO TOOLS BECAUSE I'VE DESTROYED MY TOOLS WITH MY TOOLS" happens much more frequently :)


I used that quote just yesterday >_>


You smile, but that's a harmful load of shit in a world that has an ever-increasing hunger for software that doesn't suck


This is just like the anti-car people. "People ought to be restricted to slow and unpleasant methods of transportation because they should not want to leave their immediate vicinities."

There's a grain of truth in both cases - there's no energy efficiency like not needing to move around at all, and there's no memory/CPU/cost efficiency like not having sophisticated functionality in the first place.

But once you are used to the functionality that comes from heavyweight tools, "do without - it's better for you" is frustrating at best and more likely engenders hostility for the person saying it.


Those are some lousy anti-car people you have been talking to, it sounds like.


Just because a thing is sufficient doesn't make it efficient or pleasant.


You can do that too; just use hot code replace to put print statements into the running code. You don't even have to suspend threads for that.


> It lacks the I in IDE. Those different tools are not integrated.

Arguably, to me, they are because making the tools produce results together through composition is a first class construct of the shell and vim/emacs.


>But I think that trying to shoehorn Vim or Emacs into becoming something that it’s not isn’t quite thinking about the problem in the right way.

...Well, this is clearly a Vim user. If you're never leaving Emacs, and you're writing plugins to do everything inside it, you're using Emacs exactly as intended.


http://blog.vivekhaldar.com/post/3996068979/the-levels-of-em...

"It was all just text. Why did you need another application for it? Why should only the shell prompt be editable? Why can’t I move my cursor up a few lines to where the last command spewed out its results? All these problems simply disappear when your shell (or shells) simply becomes another Emacs buffer, upon which all of the text manipulation power of Emacs can be brought to bear."


This is Atom's great failing: it's trying to do the emacs thing, but it doesn't treat all buffers uniformly: your settings window, your terminal window, and the various text windows are fundamentally different. Until this is fixed, it's not even in the running as a usable editor for me.


> This is Atom's great failing: it's trying to do the emacs thing, but it doesn't treat all buffers uniformly

That's definitely a great failing, but I think it has two greater still: not being written in Lisp, and being written in JavaScript.

The former could be overcome (Alpha was an amazing editor for the Mac written in TCL), and after all there are plenty of great programmes not written in Lisp (emacs, for example, has a C core …).

But JavaScript‽ JavaScript: there's got to be a better way. JavaScript: warn your friends about it. JavaScript: dissatisfaction guaranteed.


JS is fine. It's got warts, yes, but so has elisp. It's also got proper lexical scoping, and real, function-equivalent lambdas, which are surprisingly rare.


This is of course such a transparently good idea that Acme and 9term adopted it too.


I was never as much a fan of Acme. It was a neat idea, but I don't think it was executed as well as Emacs was. And it doesn't work super well outside plan9. And I need to edit over SSH a lot. And keyboard > mouse, IMHO.


I use Acme on and off. I will never be as productive with it as I am with Emacs (I depend way too much on autocompletion, contextual text wrapping, org-mode and magit), but the philosophy appeals to me.

UNIX as an IDE never truly made sense until I discovered Acme. It was truly a zen-like experience.


> And I need to edit over SSH a lot.

Try sshfs :) Surprising that someone who has tried Plan 9 hasn't come to love specialized filesystems!


I would, but in the locations I'm in, the only computer I can access is my school chromebook: no X. :-(


sshfs doesn't require X...? It's just a filesystem.


No, but Acme/Sam ports do. And emacs, vim, and sam and acme ports all require a POSIX api which the chromebook lacks.


Not all ports of Acme/Sam require X.


The mouse is pretty inherent to both designs. What TUI version of acme are you talking about? Because AFAIK, none exist.


The macOS, Plan 9, Windows and Inferno versions of acme do not require X.


...And were you not listening to the point which started this discussion, which is that I do most of my development on a Chromebook, and all that has is SSH.


It doesn't have vnc or similar screen-sharing functionality?


Yeah, but the SSH connection is slow enough as it is. VNC would be quite unpleasant. Also, this is a laptop, so I've got a touchpad. Not the best thing for Acme.


For pointing to a location in 2-dimensions, a mouse is hard to beat with a keyboard; when locating a point on a map to someone, do you say "Start at the bottom left, then up two inches and over four inches to the right" or do you just point?


For inputting and manipulating a set of characters, the keyboard is hard to beat. And it works well enough for navigation.


Well no one is suggesting that you'd input characters any other way, and acme supports sam's structural regular expression command language for keyboard-based editing; so acme supports the keyboard features you're looking for and also has, as you've admitted, better navigation, via the mouse. I suggest you give acme (and/or sam) another try without the prevailing pro-keyboard bias of those people commenting on HN.


Doing a lot of switching between keyboard and mouse will slow you down, IMHO. But that's just me. You're free to have your opinion.

But as I've asked elsewhere, does acme have the ability to send text to an external repl on systems other than plan9 (a continuously running one, not a freshly-spawned one), preferably one running in another terminal window? Does it have indentation for Lisp? does it have syntax highlighting (not necessary, but handy)? does it have colorscheme customization (you'd be surprised how many programs generate illegible text in white-on-black terminals, and I've become quite fond of Solarized)? Can it look up documentation for the functions I'm using (very handy at times)?

I've got the tools I'm used to, and they provide things that Acme never will for me. Maybe I'm not experiencing ultimate Unix Philosophy Zen. Maybe using the mouse is a bit more efficient. But at the end of the day, I'm here to edit text, and Emacs works in my environment, and has some pretty good tools for that.


But my opinion regarding the mouse being faster is supported by research; the opinion that the keyboard is faster or that context switching slows you down is factually incorrect.

http://plan9.bell-labs.com/wiki/plan9/mouse_vs._keyboard/ind...


Okay. First off, I can use the mouse with Emacs in most contexts. Second off, the time I'm wasting may well be more than the time I'm gaining from all the features I use that Acme doesn't have. Finally, I use my editor over SSH a lot, so Acme is a non-option for me, even if it is so unbelievably brilliant (it isn't).


I generally use acme, but when I need to edit files on remote computers, I use sam. There's no better editor for that.


...until you're doing remote dev work, especially in Lisp, and need a repl, autoindent, or any other feature that Sam doesn't have. Granted, it's an uncommon need, but one that I have every day.


> remote dev work

Not sure what you mean, I do remote development all the time.

> and need a repl

Why do you need a lisp repl in sam? The unix shell is a repl too, and I use it in a separate terminal, which talks to sam via the Plan 9 plumber.

I use multiple terminals and sometimes even multiple sam instances, and I consider this a big plus, rather than doing everything through a single program.

> autoindent

Sam supports autoindent just fine. But perhaps you mean some other form of "smart" indent though.

You're right that sam (and acme) don't have these features built in. The functionality comes from programs interacting with each other through the plumber or some other method.


I was not aware that Sam had as many hooks as acme. If sam can send text to the terminal, then that's alright, I guess. But if you can't send program text to the interpreter...

As I'm running a Unix (Linux, at present), I really don't have the full power of the plumber at my disposal.

As for autoindent, I was talking about Lisp code, which most autoindenters don't indent properly. This is one of several reasons that the original vi had a dedicated mode for editing lisp code. Yes, really. You can look it up.

Anyways, I'm an emacs user. Saying we do everything through one program is like saying that every Smalltalk program is the same: we do everything through one VM. It's not as elegant, true, but it works right now. And outside of plan 9, acme and sam are a little hit and miss by comparison.


Does emacs support sam's structural regular expressions language?


Not AFAIK. Care to write it?



Oh, hey. Neat.


I vaguely remember the C64 had something like this. You could move the cursor up to an old line, modify the line, then press RETURN to run it. Mind you I was like 8 years old.


xiki


Not the same thing at all.


Yeah, emacs would be a great operating system, if only it came with a decent text editor.

(ducks)


Someone fixed that. My emacs came with good modal editing out of the box. Maybe your emacs is broken. Try spacemacs.org.

(runs)


And if you don't want to deal with hellish complexity, and have a DIY mindset, you can grab evil, and build the modal system you want.

(dodges)


Modal? I want composable.


With evil-mode, which is what spacemacs is based on, you get it.


Interesting. Does it have text objects?


Yes. It includes most of the standard vim text objects, as well as providing an api for you to write your own. It also has the usual motions, plus an api to add your own. There are existing extensions for the more common vim plugin-provided motions as well.


While I don't know if it has programmable text objects, it does have the standard Vim ones, from what I saw the last time I tried it.


Just checked the docs: not only does it include text objects, you can also write your own, if you don't mind getting your hands dirty with some elisp.
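
As a rough sketch of what that looks like (the object name and the "g" binding below are just made-up examples; evil-define-text-object and evil-range are the real entry points, assuming evil is installed):

    ;; A hypothetical "whole buffer" text object. With evil enabled,
    ;; `dag` would delete the entire buffer, `yag` would yank it, etc.
    (require 'evil)

    (evil-define-text-object my/evil-a-buffer (count &optional beg end type)
      "Select the entire buffer."
      (evil-range (point-min) (point-max)))

    ;; Bind it in both the "around" and "inner" text-object maps.
    (define-key evil-outer-text-objects-map "g" 'my/evil-a-buffer)
    (define-key evil-inner-text-objects-map "g" 'my/evil-a-buffer)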


Spacemacs is a whole bunch of things; nobody needs it for a simple (evil-mode +1). (Not that I'd call it useless, but vi emulation, in the form of viper-mode, is already part of base Emacs.)


I bet Emacs has a plugin that makes it like Vim, so Emacs is not that bad.

:-)


Evil mode: it's the basis of spacemacs. If you don't want the complexity of spacemacs and you don't mind writing a bit of elisp to get your bindings working, it's worth looking at.


Am I missing something here? That's like an asthma drug whose side effect is shortness of breath.


You're missing something. Spacemacs defines a lot of keybindings for many plugins and so on. Evil only defines the basic Vim keybindings. If you want more, you have to write it yourself.
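
For example, a minimal sketch of rolling your own Spacemacs-style leader bindings on top of plain evil (the SPC prefix and the chosen commands are just placeholders; magit-status assumes magit is installed):

    (require 'evil)
    (evil-mode 1)

    ;; Make SPC a prefix key in normal state, roughly like Spacemacs does.
    (define-prefix-command 'my/leader-map)
    (define-key evil-normal-state-map (kbd "SPC") 'my/leader-map)

    ;; Hang whatever commands you like off the prefix.
    (define-key my/leader-map (kbd "f") 'find-file)
    (define-key my/leader-map (kbd "b") 'switch-to-buffer)
    (define-key my/leader-map (kbd "g") 'magit-status)  ; assumes magit

Spacemacs ships hundreds of bindings like these, pre-organized; with bare evil you write the handful you actually use.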


Of course it does! :).


Multiple.


At some point I realized Unix is basically a long-running IDE + REPL, and thought that was just so cool. And then I realized how horribly inefficient it is at being that. So I started longing for the days when we had some kind of OS where it actually is an IDE + REPL, like an OS based on Lisp. And then I learned that that had already been done and didn't turn out as cool as people had hoped. Oh well.


The C64 was like that. Not Lisp but it booted right into a BASIC REPL. BASIC was the C64's operating system. I still think there is a niche for such a system today (mainly for developing rapidly changing business applications you'd nowadays use Excel for, as well as education).


Almost all 8-bit computers were like that.

I was so glad when I left REPLs, and then debuggers and IDEs, for compiled languages, a text editor plus command-line compilation, and open source code. I just cannot fathom the trend of the last 5 years where I hear and read about people who cannot live without an IDE, a debugger and a REPL. It feels like living in a jail cell to me; a cell I left 20-25 years ago.

Incidentally, I just had to install an IDE today, for HDL development. I urgently have to find a way to turn the process into a Makefile or I will throw the computer out the window before the weekend comes.


> I still think there is a niche for such a system today

In Polish auto repair, if nowhere else:

http://www.popularmechanics.com/technology/gadgets/a23139/co...


Javascript and a browser is essentially that.


By all accounts, LispMs were awesome systems to use, but they lost out due to heavy marketing from the Unix vendors.


They were also really expensive, and there have always been people who hated Lisp for the syntax.


On Symbolics Genera they had the option of using C, Pascal, Ada, Fortran, or Prolog.


Lisp's syntax is okay, but its semantics is absolutely horrendous. Even ignoring the features that make it not so great a high-level language (such as the lack of a workable static semantics, forcing programmers to dynamically test what can be taken for granted in more civilized languages), Lisp is an even worse low-level systems language. In the 80's, the only way one could reasonably hope to use Lisp as a systems language was to run it on hardware explicitly designed to run Lisp. Of course, hardware support for features like dynamic typing and efficient dynamic allocation of lots of small objects doesn't come for free - this is what made Lisp machines expensive.


Well, some of us like Dynamic Typing. Just because you don't doesn't make it horrendous.

Granted, it can be unpleasant in certain contexts, but Lisp has really good metaprogramming support, so you can add syntax for a runtime type system relatively simply. It's not optimal, and it certainly isn't the fastest thing, but it works if you want it.

And nowadays, CL (and several of the Schemes) provide more powerful type systems, and sometimes even compile-time typing. All optional, of course.


I don't have anything against the presence of dynamic typing - it's in fact very useful. What's annoying is the absence of meaningful static typing: parametric polymorphism and exhaustive pattern matching help me prove things about what my code does and how it can be used by others, but their usefulness is reduced to zero when dynamically typed code is allowed to break the proofs' assumptions. To be perfectly clear: it doesn't bother me in the slightest that a dynamically typed module can break the internal state of another dynamically typed module, since in all likelihood I'm the author of neither, and I respect other people's right to write their code however they wish. So I'd be fine with a Racket-like contract system, where well-typed modules can't be blamed. But, if I understand correctly, what Common Lisp has is nothing like this.


I don't know. I'm not a CL user. I understand that they actually do have a type system, which can be used to verify that your inputs are of the correct type and match your assumptions. As for contracts, I wouldn't be surprised if someone wrote a macro for it: they're not exactly rocket science, at their simplest.

Don't ask me: I'm using Chicken Scheme, which provides both (to an extent: The documentation is worryingly vague about how the type system handles failure).


Checking alone isn't the point. As things stand, it's no better than C, where the type checker can approve your code, yet it still has undefined behavior.


So you want exhaustive matching? I don't know if any lisp provides that by default. One of them probably does, somewhere.

But since you don't mind Lisp syntax, and really like type systems, have you tried Shen? (http://www.shenlanguage.org) it's an interesting project.


> But since you don't mind Lisp syntax, and really like type systems, have you tried Shen?

I also want:

(0) Type inference. Shen seems to require all typed code to be manually type-annotated.

(1) A decent module system. There seems to be no mention of abstract types in Shen's documentation.

(2) A guarantee that dynamically typed code won't break abstractions defined in statically typed code.


I'm pretty sure shen has limited type inference. There might be ADTs.


Nice rant, but mostly wrong.

> Even ignoring the features that make it not so great a high-level language (such as the lack of a workable static semantics, forcing programmers to dynamically test what can be taken for granted in more civilized languages)

There are many more high-level features than static semantics. Lisp is designed for runtime flexibility, not static semantics. Runtime flexibility allows lots of interesting high-level features.

> In the 80's, the only way one could reasonably hope to use Lisp as a systems language was to run it on hardware explicitly designed to run Lisp.

Lisp ran fine on 5-MIPS machines. Current systems have several hundred times the compute power.

The hardware was explicitly designed for Lisp in the late 70s, when nothing comparable was available (single-user powerful workstations for AI/math researchers developing large systems like Macsyma) and most development took place on time-shared mainframes with many users. The market that demanded this was a high-end market, thus the systems were developed even though they were expensive.

Starting in the mid-80s, Common Lisp moved to 1-to-10-MIPS workstations from Sun, SGI, Apollo, DEC, IBM, NeXT, Tektronix, and many others. The main problem at that time was that it needed 10 MB of RAM to run efficiently, which was still expensive then.

10 MB. Today Lisp runs fine on any modern processor, and some people tinker with Lisp-based operating systems again.

> Of course, hardware support for features like dynamic typing and efficient dynamic allocation of lots of small objects doesn't come for free - this is what made Lisp machines expensive.

What made them expensive was the custom development for a small market, ten to twenty years ahead of their time. It was not the technological complexity of the hardware. People were buying future technology for lots of money, where comparable technology would only appear 20 years later - the first Lisp Machines with object-oriented operating systems appeared in the late 70s for $100,000 apiece.

Some of the Lisp Machines used 8-bit ECC to provide error checking and correction, back when memory wasn't that reliable. It made them even more expensive, because ECC memory cost even more than normal memory boards, with memory sizes from 4 MB to 20 MB.

Minor Lisp/Smalltalk support then went into the SPARC chips from SUN. Lisp ran quite well on those.

> dynamic typing and efficient dynamic allocation of lots of small objects

Every iPhone does that now, since Apple's Objective-C and the iOS frameworks are actually that: efficient dynamic allocation of runtime-typed small and large objects.

If you want to see Lisp on a small machine, buy a Roomba cleaner.

http://www.irobot.com/For-the-Home/Vacuuming/Roomba.aspx

Their software was already being developed in Lisp twenty years ago, on the tiniest of hardware.

https://en.wikipedia.org/wiki/Roomba

L – A Common Lisp for Embedded Systems https://www.cs.cmu.edu/~chuck/pubpg/luv95.pdf

It will clean your home, using Lisp on tiny hardware.


> 10 MB. Today Lisp runs fine on any modern processor, and some people tinker with Lisp-based operating systems again.

My main desktop machine has 16 GB RAM, and its processor cycle speed is probably also 1600 times as much as that of a machine from back then. Why can't I get my computer to perform 1600 times as much concurrent work as back then?

> Every iPhone does that now, since Apple's Objective-C and the iOS frameworks are actually that: efficient dynamic allocation, of runtime-typed small and large objects.

That's a luxury that we can nowadays afford. Even the cheapest entry-level smartphone is ridiculously more powerful than an 80's era workstation. But the amount of work computers do for us hasn't grown proportionally to the amount of computing power.


> Why can't I get my computer to perform 1600 times as much concurrent work as back then?

Is that a real question?

> But the amount of work computers do for us hasn't grown proportionally to the amount of computing power.

I don't know about you, but my laptop is much faster than previous machines. Software which used to compile with the Lisp compiler in half an hour, now compiles in a few seconds.


...exactly. So there's computing power to spare.


To spare, you say? I want to use that computing power for my own purposes. I suddenly decided I want to multiply gigantic matrices.


Okay. Go write some C, or some asm. It might feel unpleasant to you, but if you use most other languages, you'll be wasting too many cycles, and you won't be using that computing power for your own purposes. Don't forget to use as few abstractions as possible, and remember: every function call takes valuable time and computation.


> Don't forget to use as few abstractions as possible

Huh? I'm perfectly fine with abstractions. Just not with abstractions that are suboptimally designed and implemented.

> Don't forget to use as few abstractions as possible

Maybe you're confusing abstraction with runtime dispatch?

> every function call takes valuable time and computation.

Stroustrup's principle applies (though otherwise I don't regard him as a great language designer): abstractions must elaborate to the best possible code you would have written manually.


Every abstraction has a cost somewhere. If you want to use your computer to its full potential, avoid them all.


> Every abstraction has a cost somewhere.

It's very simple:

(0) Runtime performance degradation is unacceptable.

(1) Increased compile times are annoying, but they're tolerable in exchange for more exhaustive automatic static analysis. Anything of interest that the compiler can't prove about my code, I would have to prove myself by hand anyway, which would take even more time than the slowest automated static analysis.


> Runtime performance degradation is unacceptable

Requirements like these are not grounded in real software use.

In economic terms: cheaper/slower software can be acceptable over more expensive/faster software. Example: Java-based software vs. the same written in C++.

In reliability terms: more robust but slower software can be acceptable over faster but less robust software. Example: Erlang-based software vs. C++ software on a network switch.

And so on.

Abstract/absolutist requirements like 'Runtime performance degradation is unacceptable' are not often found in actual software development/use.

Software usually has a multitude of different important qualities; raw performance is just one of them. One then needs to say exactly what performance means and how to measure it. Performance can be measured in many ways: throughput, latency, micro-benchmark speed, and so on. Optimizing for one (say, throughput) then has effects on others (say, latency). To achieve faster throughput, some functionality can be affected.


All abstractions degrade runtime performance. Even "zero cost" ones, as they get you to think in less efficient ways, about things like encapsulation, and whatnot.

If runtime perf degradation is really unacceptable to you, you shouldn't be writing in functional languages, or even C. You should be writing in raw asm, as low-level as possible.


Encapsulation is making the internal representation of abstract data types not visible to their clients. In other words, it's a discipline for writing programs. Some languages (e.g., ML) just happen to helpfully enforce that discipline for you. It has no runtime impact whatsoever.


Not true. While the various abstractions themselves may have no runtime impact, they encourage programming in ways that, while they may be good for maintainability, aren't as fast. For example, if you're using ML, you'll likely represent a list of objects (say, structures representing addresses) by establishing a type for the objects, and keeping a list of them. A C programmer might use a statically allocated array, or actually embed a pointer to the next object in the list into the struct. The C program might be less maintainable, but it would be faster because it's doing less pointer chasing. An asm programmer might optimize further.

Now, this is a fairly trivial example, and not too hard for a compiler to optimize out, IIRC, but it's just an example. Other instances of this are far harder to optimize away.

The further up you are from the hardware, the less aware you are of the perf tradeoffs, and the less you can do to fix it. So if you want the best perf, go for asm.


> For example, if you're using ML, you'll likely represent a list of objects (say, structures representing addresses) by establishing a type for the objects, and keeping a list of them.

In ML, most of the time I don't work with objects at all. I work with values. The beauty of programming with values is that the language implementor can represent them however he wishes, as long as he respects the semantics of the operations on said values. For example:

(0) Multiple logical nodes of a recursive data structure can be represented as a single physical node, eliminating the need to store pointers between them. For example, an ML implementation can determine that `List.tabulate (n, f)` always creates a list with `n` elements, and thus always pre-allocate a single large enough buffer before actually computing the elements. In Lisp, this wouldn't be a valid optimization, because Lisp mandates that every cons cell has its own object identity.

(1) Values with large representations can be automatically deduplicated, hash-consed or whatever it takes to reduce their memory footprint. Again, this is only possible because physical identities don't matter in ML (except for mutable cells and arrays).

In other words, values give the language implementation freedom to make useful physical data structure choices. Unlike objects, which come tied to a fixed representation.

There is no tradeoff between abstraction and efficiency, when you use the right abstractions.


As much as I frequently disagree with you, you hit it on the head here.


I had a couple of them, a 3620 and a MacIvory. With all the focus on AI, maybe Lisp-optimized hardware will return in some form, perhaps a bit like GPUs have become a trend. There was a lot of clever code written on top of Genera. It would be a shame for it to get re-written due to the loss of the Lisp machine platforms.


> And then I realized how horribly inefficient it is at being that.

This depends a lot on the user. In the right hands it's extremely efficient.


It's a powerful lever, yes, but if we talk efficiency, then it very much sucks. You get a crappy pseudo-programming language (shell) that spawns a new process (!) for almost every function call, and communicates with unstructured text that has to be parsed and reparsed every step of the way. The only benefits of this system are that it's already there with batteries included, and that it became a de-facto standard.


Well, also that it's entirely dynamic. You can create and edit "functions" on the fly, which exist immediately and are persisted across "sessions". But despite how convenient this is, there's basically no way to make it a whole lot more efficient. Especially the automatic "persisting" feature: there's no other way to do that than writing to disk.

That said, you get a lot of similar behavior to this in things like Clojure + CIDER, but due to the nature of the underlying JVM, you still have to restart it every once in a while.

Part of me still wonders what we could come up with if we took the principles behind a true interactive IDE + REPL (like Emacs + CIDER) but removed the concept of "wrapping" a VM that was designed for a "programming language", and added the concept of a longer-lived "VM". Not quite what LispMs were, nor even Smalltalk (which relied too heavily on a GUI and a monolithic "image state" rather than individual persist-able files).


>but due to the nature of the underlying JVM, you still have to restart it every once in a while.

The DCEVM project had removed almost all limitations on hot-swapping code in the JVM, until Oracle hired the people behind it and then never integrated it into their JVM.


> This depends a lot on the user. In the right hands it's extremely efficient.

IMO the inefficiency is more due to the fundamental brokenness of the Unix process model. A Unix process is sort of like a VM with none of the security, and all of the stuff that has been added to it over the years (signals, IPC, dynamic linking) just makes it worse and worse. Things like full machine virtualization and now over the past 15 years namespaces/cgroups/containers are very complicated, brittle ways of trying to work around the shortcomings of Unix processes. This IMO is the biggest advance plan9 made over Unix.


Yeah, the other way around seems to work better: IDE as OS (really IDE as OS shell) a la emacs.


But as the old saying goes, it will still need a decent editor.


There are several. I prefer evil – see https://bling.github.io/blog/2015/01/06/emacs-as-my-leader-1... for some intro.


Systems such as Genera were a lot better as development environments than the Unices which replaced them.


This instantly brought to mind the classic 1984 book, The Unix Programming Environment, by Brian Kernighan and Rob Pike [1]. Note that they call Unix a programming environment, not an operating system. This book shows that the original purpose of Unix is to be a tool for writing software.

[1] https://en.wikipedia.org/wiki/The_Unix_Programming_Environme...


They're not calling Unix a programming environment instead of an operating system. The title of the book refers to the programming environment of the Unix operating system.


Do you (or anyone else) know of a book in the same spirit covering the more recent Unix tools?


I recently contributed an awk filter for doing text transformation. I think it was some kind of code for converting POSIX attributes to LDIF or something like that. I was kind of surprised when one of the replies I received was a request to re-implement it as a Python API. They didn't seem interested in using pipelines as a way to glue utilities together into something bigger. I don't have anything against Python; I just like the idea of being able to integrate wildly different implementations (C, awk, sed, m4, Perl, Python, etc.) to solve problems. It's a kind of component-based approach that the Unix/Linux/BSD shell makes really easy.



I've tried for the past couple of years to get into vim, but I can't force myself to remember the odd shortcuts; I can never seem to match them up with the actions they perform. I could if I remapped them, but then that blows when I want to jump onto a new server without my config. So it's always been nano for one-liners and Sublime over an sftp-mounted filesystem.


you've probably never:

1. been forced to use a system with nothing but vi, ed, and ex, which was common for commercial unixes until the 90's

2. had to make thousands of edits to thousands of files in minutes without access to a scripting environment or other more advanced tools

3. had to edit and debug a program under the above conditions on a super laggy dialup internet connection

vi is a force multiplier for operating a terminal under duress in adverse network conditions. that's literally why it was developed. i say it does an even better job of it under good, modern conditions and unless you look at it through that lens, it's never going to make much sense to you. and those adverse conditions still pop up now and then, especially with wireless networks and public cloud shitstorms.

"[Bill] Joy explained that the terse, single character commands and the ability to type ahead of the display were a result of the slow 300 baud modem he used when developing the software and that he wanted to be productive when the screen was painting slower than he could think."

https://en.wikipedia.org/wiki/Vi


To be honest, vi and vim are different enough that, as someone who was taught on vim, I get lost when I have to do anything beyond opening a file in vi.


yeah, it's pretty awful.


Instead of "odd shortcuts," think of it like this guy does:

Learn to speak vim — verbs, nouns, and modifiers! https://yanpritzker.com/learn-to-speak-vim-verbs-nouns-and-m...

"To move efficiently in vim, don’t try to do anything by pressing keys many times, instead speak to the editor in sentences

" delete the current word: diw (delete inside word)"


Note that Vim is pretty much identical to nano when you press the letter i. Then hit escape and use :wq to write to disk and quit. Over time you'll probably start using more and more Vim features to do text editing quicker.

It took me two years to learn Vim well enough that I could objectively say it works better than standard text editors. I'm not surprised you didn't succeed in learning it all at once. It's like using a mouse for the first time ever that comes with 50 buttons. I don't think anyone could learn that and put it to use from muscle memory (that's where it really starts to pay off) in one sitting.


iirc (very vaguely) this has been discussed on hn before. the entire series is available as an ebook here: https://github.com/mrzool/unix-as-ide


Thank you. The ebook is helpful.


An early variant of UNIX was explicitly that. It was called Programmer's Workbench.

https://en.wikipedia.org/wiki/PWB/UNIX

It introduced version control (SCCS), make, lex, yacc and more.


Unix as IDE is an oxymoron simply because it's not integrated. How about we swap the "I" with something else? Maybe Universal Development Environment?


Shouldn't there be (2012) in the title?


Yes.


I thought this was another acme post. Oh NO~~~, you don't even use a mouse?!



