The Renaissance of the Shell? (effective-shell.com)
45 points by dwmkerr 6 months ago | 68 comments

I also observe a renaissance of command-line interfaces. I guess it's a trend in a similar way to how the graphical user interface (in the MS Windows/OS X/X11 sense) was trendy in the 1990s and early 2000s. Remember that Apple banned the terminal from classic Mac OS; this was a declaration of war against complicated CLIs. Nowadays we experience the opposite: Microsoft is building the most modern Terminal emulator as well as many productivity tools such as PowerShell, and also the Windows Subsystem for Linux.

Of course this is driven by commercial interests. Microsoft is trying to close the gap to the rich Mac OS X development environment, which is rich because it is compatible with the Unix/Linux ecosystem.

As the author noted, many jobs are primarily terminal-based in a way that probably nobody would have believed 20 years ago. DevOps engineers and data scientists both enjoy powerful CLI applications, REPL interfaces, and scripting, as if the "GUI movement" had never taken place.

A question, because I wasn't a developer yet 20 years ago: were the GUI tools of back then wrappers around CLI tools? I get that impression with most IDEs I've worked with.

I think this depends heavily on the platform. Towards the end of the Amiga's life it was common for applications to be accessible through a GUI, through the command line, and through scripting, and for them to extend operating-system features for other applications via system-wide installable plugins/extensions recognized by other applications. It was common to use a mix of UI and command-line workflows, and both worlds were well integrated with each other.

For instance: a typical 2D drawing program, which would primarily be used through the UI to draw images, could also register new image-format plugins with the operating system, so that all applications that deal with image files can suddenly load and save those new formats. The same application might be started on the command line in a 'headless' mode to perform image-manipulation tasks, like ImageMagick, and finally the application could offer an ARexx scripting port for more complex automation tasks and for "orchestrating" several applications.

All of this was standardized through "best practices" guidelines by the Amiga team.

I miss the Amiga :)

PS: My pet theory for the reason why there's a "shell renaissance" is that UI usability (in the sense of making complex application features "usable") has dramatically regressed in the last 10-20 years, because the target audience for most apps has changed from active users to passive consumers. People who don't fit the "average user" have no other choice than to move to the command line because UIs are no longer designed for them. This "average user" targeted by UI designers is no longer a creative person who uses the computer to solve tasks or create art, but a passive media consumer. The computer has been degraded from the "bicycle for the mind" to basically a dumb TV set.

> People who don't fit the "average user" have no other choice than to move to the command line because UIs are no longer designed for them.

Tyranny of the Minimum Viable User


Another Amiga datapoint is Blitz Basic II, which I pretty much learned to code on back in the 90s. It didn't have any command-line tools; you did everything from the IDE. You could, however, control it remotely using ARexx.

(I actually used this feature recently when I wrote a Blitz Basic plugin for modern Emacs. It uses ARexx over the network to talk to Blitz, so you can write code in your modern operating system, and then compile and run on the Amiga running alongside in an emulator[0])

There were also a number of visual programming environments, such as AmigaVision and Scala, which were marketed as "Multimedia Authoring Systems". They were used to create information kiosks, advanced interactive presentations, and early CD-ROM hypermedia products such as multimedia encyclopedias, reference works and the like. I'm surprised there's nothing like them today; they seem ideal for the simpler end of the mobile ecosystem, or, more cynically, for those who want to turn their whole platform into unhackable (in both senses of the word) information kiosks.

Back in the 90s I looked at systems like this, or at Visual Basic, or NeXT Interface Builder (which later made its way to Mac OS X), and thought that the future would be some extension of these technologies, not the tottering stack of CLI tools we use today.

I too miss the Amiga, and consider myself a proponent of desktop personal computing and "bicycles for the mind", which can be a depressing thing to be in these times.

[0] https://github.com/richardjdare/bb2-mode

I disagree. While what you're saying is true, there's another side to the issue.

The command line tools are simply more versatile and configurable.

We can take the git clone command as an example and try to design a GUI dialog that would allow you to use all the options it has.

I think it is very hard, and you will end up with something looking scarier than a space shuttle cockpit.

I believe it would take roughly the same time to make sense of that dialog as it takes me to quickly run man git-clone to remind myself about some obscure option I don't use every day.

I don't hate GUIs; I just think it is very hard to design a full-featured GUI for the complicated tools we use every day. I would like to see an attempt to do this for Git; the key is being full-featured.
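To make the "full-featured" point concrete, here is a throwaway sandbox (the paths are invented via mktemp) exercising just two of the many clone options a dialog would have to surface; the final rev-list confirms the shallow clone really truncated history:

```shell
# Throwaway sandbox: create a tiny two-commit repo, then clone it
# with a couple of the "obscure options" a GUI dialog would need.
set -eu
tmp=$(mktemp -d)
git init -q "$tmp/src"
git -C "$tmp/src" -c user.email=a@b.c -c user.name=a \
    commit -q --allow-empty -m 'first'
git -C "$tmp/src" -c user.email=a@b.c -c user.name=a \
    commit -q --allow-empty -m 'second'

# --depth needs a "smart" transport, hence the file:// URL:
git clone -q --depth 1 --single-branch "file://$tmp/src" "$tmp/shallow"
git -C "$tmp/shallow" rev-list --count HEAD   # prints 1: only the tip commit survived
```

Multiply this by the dozens of remaining clone flags and the cockpit comparison starts to look fair.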

I think that's more because the standard git client was designed with command-line usage in mind, and UI usage was ignored at that step. Had the goal been a hybrid GUI/CLI from the start, then the entire git workflow might look different (which wouldn't be a bad thing IMHO, because the git command-line UX isn't all that great either).

This is my criticism of most git GUIs. They try to provide 1:1 functionality with the command line. But this defeats the purpose -- instead of making it more obvious what every button will do, it adds an additional layer of complexity where now I have to guess which CLI operation that button maps to. But the point of the GUI is that it's more accessible than the CLI. If you have to learn the CLI interface anyway in order to use the GUI, it is wasting a huge amount of potential.

Magit actually solves this problem, to a certain extent. It makes Git more discoverable and easier to use, IMO.

Prior to trying it I was a hardcore git terminal user. I still drop to the terminal sometimes, but magit has won me over for the day-to-day normal workflows.

You have to use Emacs for Magit, as it's an Emacs Lisp package, but it really is worth trying in its own right.


I don't think they were, the way they are now. Visual Studio 6, which was an amazing environment for its time, was all implemented as a GUI application, including the build system. IIRC you could do some automation with VBA or a complex "macro" system baked inside the UI. You could run the compiler/linker as CLI apps, but everything else was built in.

Visual Assist was a very popular plugin for it (I don't know if it's still relevant) which added functionality inside the GUI.

> Microsoft is building the most modern Terminal emulator

If it is anything like PowerShell, which is more "modern" than bash, then I don't think it will make much of an impact. PowerShell seems like it was designed by a committee which did not quite understand the point of what it was doing.

It seems that if at least someone on that committee had used a Unix terminal maybe once in their lives, they would have realized that what they were building would deliver a really bad user experience.

I read "bad user experience" but I suspect what you mean is "it's not what I am used to".

They (Microsoft) say that their inspiration was the Korn shell and languages like Tcl. When Microsoft still offered Windows Services for UNIX, they did provide a Korn shell. If you look at some PowerShell examples and some Korn shell examples, there are quite a few similarities between the two.

So they obviously were quite aware of the options in the *nix world. I really wish people wouldn't make these sorts of claims when less than a few minutes spent looking up the development history would dispel these ideas immediately.

Personally I really like PowerShell and I find it fun to program in, whereas with bash I seem to have to relearn it whenever I attempt to do something non-trivial. However, I come from a background of programming in OOP languages like Java and C#, so maybe it just suits my mental model better than bash.

In my (limited) PowerShell experience, it's a "scripting first" shell, while bash is more of a CLI-first shell.

Using the PowerShell CLI is very verbose (although pretty consistent), and using it without some sort of GUI helper, if you're not familiar with it, is pretty daunting.

I would agree with that. I think the verbosity arguments do have some merit. I normally write scripts with the ISE, or VS Code with a plugin.

Also, there are lots of extensions that you are kind of just supposed to know about, e.g. the dbatools extensions.

My assessment, from a history with a lot of bash and a little, not terribly recent PowerShell, is that PowerShell wound up in a pretty common position of having a better language but a worse UX. Picking out why (wrt UI in general, not powershell in particular) is an ongoing interest...

It looks like you don't understand PowerShell; the committee did know, and it consisted of Unix professionals.

Not everything in Windows is a file, so they created a shell based around pipeable objects and improved the shell's logic handling. A bunch of terse commands for piping text streams out of files doesn't make a lot of sense there.
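As a contrast, a small bash sketch (with throwaway file names) of the text-stream fragility that an object pipeline sidesteps: awk field-splits the human-oriented output of ls -l, so a filename containing a space silently corrupts the result.

```shell
# Throwaway demo: naive text parsing breaks on a filename with a space,
# exactly the class of problem an object pipeline doesn't have.
set -eu
dir=$(mktemp -d)
touch "$dir/plain.txt" "$dir/with space.txt"

# Looks reasonable, but awk splits each ls -l line on whitespace,
# so "with space.txt" comes out as just "space.txt":
ls -l "$dir" | awk 'NR > 1 { print $NF }'
```

An object pipeline hands the next command structured records instead, so there is no re-parsing step to get wrong.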

Learning Vi at an early stage in my career has been a life-saver for me. Having to do a lot of my work in remote shells over many years could have been more difficult had I not had some degree of command of a text editor like Vi.

And then there are other advantages too. Editing code in Vi/Vim is akin to touch typing if you are really good. Programming becomes so much easier and so much fun.

And last but not least, there are still loads of things to learn about Vi.

Plus most modern shells have a vi editing mode. Add something like vimium to your browser and you can use the same muscle memory across environments.
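For anyone wanting to try this, the vi editing mode is a one-liner in most shells; the zsh and fish alternatives below are from memory, so double-check them against your shell's docs:

```shell
# Enable vi-style line editing at the prompt.
set -o vi                 # bash, ksh, and other POSIX-style shells
# bindkey -v              # zsh equivalent
# fish_vi_key_bindings    # fish equivalent

# Verify it took effect; in bash this prints a line like: vi  on
set -o | grep '^vi'
```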

I did switch from vim to Emacs + Evil though (Evil adds a decent text editor to Emacs), with my own set of spacemacs-like keybindings.

The three points about the renaissance of the shell are spot on.

(1) Systems being written in more languages means the shell becomes more important

I wrote about that here: https://news.ycombinator.com/item?id=24083764

The idea is that I write programs in Python, JavaScript, R, and C++ regularly, and in about 10 different DSLs (SQL, HTML, etc.). And I work on systems written by others, consisting of even more languages.

in regards to http://www.oilshell.org/

(2) Convergence around Unix and Linux for the server side.

Unix won. In 2017 or so, Windows Subsystem for Linux was marketed as "Bash on Windows" !!! That is, being able to run a 30 year old shell and other programs was a new feature of Windows.

(3) DevOps

I forget where I read this, but someone quipped that "old school sysadmins didn't disappear" (that is, the experts at the Unix shell). "What happened is that they went to work for AWS and Google and then sold their skills back to you at a higher, recurring price."

I find that to be pretty spot on... The cloud companies are making a lot of stuff point and click, and cut and paste YAML, so you don't have to use shell, but I think programmers are better off learning and using shell.

Reasons: for autonomy, to avoid being locked in, to exercise the ability to create simple, sharp one-offs ... not to drag in a 200 MB cloud SDK to solve a simple problem.

> I find that to be pretty spot on... The cloud companies are making a lot of stuff point and click, and cut and paste YAML, so you don't have to use shell, but I think programmers are better off learning and using shell.

I have been using Nix for most system-level stuff for the last two years (development environments, building container images, declaratively managing systems). It is far more powerful than YAML configuration (it's a Turing-complete functional language, after all), and the functional aspect brings many benefits (no side effects, etc.).

Of course, nix(pkgs) stdenv makes liberal use of Bourne shell in its build phases, so a bit of shell chops is very useful there as well.

GUIs are using pictures to convey meaning and action.

A picture is worth 1000 words.

Why use 1000 words to do something when one or two will do?

= Master Foo Discourses on the Graphical User Interface =

One evening, Master Foo and Nubi attended a gathering of programmers who had met to learn from each other. One of the programmers asked Nubi to what school he and his master belonged. Upon being told they were followers of the Great Way of Unix, the programmer grew scornful.

“The command-line tools of Unix are crude and backward,” he scoffed. “Modern, properly designed operating systems do everything through a graphical user interface.”

Master Foo said nothing, but pointed at the moon. A nearby dog began to bark at the master's hand.

“I don't understand you!” said the programmer.

Master Foo remained silent, and pointed at an image of the Buddha. Then he pointed at a window.

“What are you trying to tell me?” asked the programmer.

Master Foo pointed at the programmer's head. Then he pointed at a rock.

“Why can't you make yourself clear?” demanded the programmer.

Master Foo frowned thoughtfully, tapped the programmer twice on the nose, and dropped him in a nearby trashcan.

As the programmer was attempting to extricate himself from the garbage, the dog wandered over and piddled on him.

At that moment, the programmer achieved enlightenment.

Source: http://www.catb.org/~esr/writings/unix-koans/gui-programmer....

  ls -al
  rm -rf /*
  git checkout -b thisIsFine
Couldn't give you a sed or awk example because, fortunately, I did not have to learn either yet.

The fact that some CLIs have terrible UX does not invalidate the idea of CLIs.

It does invalidate the parable though.

How so?

I read the parable as being about the infinite number of ways icons and GUIs can be interpreted, thus making their inherent discoverability and clarity low.

I don't see how listing CLI commands with terrible UX invalidates that point.

Because there's really no difference between terrible command line argument names and terrible icons for the same commands.

That may be true, but I think the parable is about well-chosen cases, not the bad ones.

Icons don't have an intrinsic meaning the way words do, so some percentage of people will find brand-new interpretations for icons, to a larger extent than will happen with words.

That was my interpretation, at least.

> Icons don't have an intrinsic meaning the way words do

This is completely backwards. Icons can have intrinsic meanings: they can literally depict an object or an action from the real world. Words are pure abstractions humans made up, and cannot exist outside of context.

I firmly believe that a lot of really productive work can be done with shell-based applications, and that this environment represents the true power of productive computing. With the web taking front-and-centre priority, we've lost something very important in terms of human/computer interaction. Many would say things are 'easier', but I have to laugh at this claim whenever I see someone clicking something in a list a hundred times when it could have been solved with a little script, had they known how. Too many times I've saved someone's ass with a bit of script fu, only to be granted god-like status at how quickly it was done; but if people took a month of study on, say, bash scripting or something similar, it wouldn't seem so obtuse.

The same goes for google-fu. Why is it that the arcane science of proper search semantics is not being taught to kids in school? It seems to me the devolutionary effect of "ease of use computing" has wrought its anti-pattern woes over a few generations ...

Anyway, my kids learned bash before they learned how to find the Settings panel, and there is something to be said for the teenager who knows how to wrangle his OS because his Dad taught him the basics of shell-based package management, something still not quite done well in certain environments...

You're not really wrong on many of your points, but you look at everything from a power user/developer mindset and that's too narrow of an approach, sorry. I.e. you should realize programming is not for everybody, not even close, so the claim that things are easier now is very much true for a lot of people out there. As such there's not much to laugh at, in fact that sounds borderline disrespectful to me.

> You're not really wrong on many of your points, but you look at everything from a power user/developer mindset and that's too narrow of an approach, sorry.

I sometimes wonder how people talked about 'this whole writing and reading thing' way back when. Perhaps there's a future where the average person will be what we now consider a 'power user'?

Much as there's plenty to criticize about current-day computer use, as someone who was considered a 'nerd' for chatting online in my teens, it's still amazing to see extremely non-typical 'nerds' sit at their laptops or phones chatting with others. Or to have conversations about gaming with people who fit the 'jock' stereotype.

I'd love to hear from historians how the 'common people' thought about writing back in the day when most people were illiterate.

> programming is not for everybody

I guess we (although I am not the person you replied to) have a fundamental disagreement here. I think absolutely, everyone is able to code, a little. Not to be a career programmer, mind you, and I have no expectation that they should be able to produce readable or maintainable code. I have a weakly-held opinion that people shouldn't have to code if they choose not to. However, I think a little coding knowledge is essential for digital autonomy, which is more and more synonymous with regular autonomy, and that everyone ought to have that.

I think this shell Renaissance is going to be short-lived. Scripting is a very brittle way to do automation, and that is recognized across the spectrum, especially when handling distributed systems.

The world seems to be moving much more towards APIs and declarative configuration formats, with actual scripts acting as a sort of last resort only. The author even mentions Kubernetes, which is very much designed to replace scripting-based control of your systems with an API. Even kubectl is just a utility built over the API, not a core part of the system.

And even outside the cloud-ish world, you have systemd taking over the Linux world one distro at a time, and its focus is firmly on replacing shell scripts with configuration files.

Most of one's work should be in the shell all the time, because automation tasks are everyday phenomena (or should be). Otherwise, if it's GUI-oriented, you will still have to learn the CLI variant sooner or later, and not only is it harder or almost impossible to automate, but such automation is usually flaky as well.

The shell should be considered the basic and most important interaction with a computer, with anything else deemed optional.

The big problem is that almost all of the shells are stuck in the previous century, except PowerShell.

> Otherwise, if it's GUI-oriented, you will still have to learn the CLI variant sooner or later, and not only is it harder or almost impossible to automate, but such automation is usually flaky as well.

Is it? Or is it that us compsci and computer engineering folks aren't used to it and don't know much about it?

A Japanese instructional YouTuber I watch did a tour of his setup, and in it he shows off a lot of automation he uses to make his work faster, more consistent, and less error-prone. The relevant part starts around here and lasts throughout the rest of the video (it's in English): https://youtu.be/TGgbSmCoZFA?t=390

I think part of it is that there is already a buttload of CLI-driven tools for us, so there isn't a need for us to rise above the skill floor for GUI automation. But to just assume it is hard or impossible because we don't know it is the same as people new to CLIs saying they are hard or impossible.

I don't assume. You assume stuff about me. :)

The GUI is a heterogeneous world. Linux doesn't have tools worth mentioning, AFAIK, because it didn't need them due to the CLI. Tools like xdotool are very limited.

Windows had much better tools, such as AutoIt, AHK, etc. This was because they were all created prior to PowerShell, when there was no other option that encompassed all apps. And PowerShell doesn't deal with GUIs either, although you could force it.

However, a big part of those scripts depends on the current theme, window sizing, even positions. Creating a good script is not easy.

> The big problem is that almost all of the shells are stuck in the previous century, except PowerShell.

Another exception is Nushell, which seems to be partly inspired by PowerShell, and has some very appealing ideas. However, given the time Fish needed to steal even a small market share from Bash and Zsh, it will likely take a long time before something even more radical goes mainstream. Unless of course a major player were to back this new shell, like when Microsoft pushed PowerShell and Apple pushed Zsh.

> Another exception is Nushell

I was talking about production ready systems. Nushell is far from it ATM, and will certainly be so in the next half decade.

The bad news is shell adoption. The biggest selling point of a shell is ubiquity.

Zsh for many, many years was basically Bash with more features but not endorsed by GNU.

Bash was released in June 1989.

Zsh was released in December 1990.

Yet Zsh has replaced Bash only on macOS, only about 1 year ago, and only because of Bash licensing issues (new Bash versions are under GPLv3 instead of v2, and Apple hates GPLv3 because it's against their locked-down devices).

29 years to replace the default shell on just 1 platform... despite almost bug-for-bug compatibility, POSIX support, extra features.

I wish new shells good luck!

The shell is the tool that allows us to talk to our computers, and for our computers to talk to us.

Until a better way comes out to tell the computer what to do, the command line will prevail.

I always explain it to new students as the shell being the equivalent of "under the hood", but for computers instead of cars. If you go to a car mechanic with engine problems and they won't even inspect under the hood, it's probably not a very good mechanic.

(The last few years have not been too kind to this metaphor because of computer-based analytics, but still)

I certainly hope the shell doesn't have a renaissance. Just today I was updating my dotfiles/customizing my OS, and I had to swap out several shell scripts for Python scripts because I couldn't edit shell. The scripts looked like arcane rituals, not editable code.

I remember seeing shell for the first time ~15 years ago, and it certainly looked like an arcane ritual.

Syntax like var=value and [ x -eq y ] was super weird coming from C and Python.

My Oil project fixes many of these problems:


(as well as fixing semantic problems; it's not just syntax)
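For readers who haven't hit them yet, the two constructs mentioned above really do behave unlike C or Python; a minimal POSIX-sh illustration:

```shell
# var=value: no spaces allowed around '='.
x=3
# x = 3   would instead try to run a command named 'x' with arguments '=' and '3'

# '[' is itself a command (there is typically a /usr/bin/[), so the
# surrounding spaces are mandatory, and numeric comparison is -eq, not ==:
if [ "$x" -eq 3 ]; then
  echo "yes"
fi
```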

Shell scripts have some idiosyncrasies, sure, but at least they don't rely on significant whitespace.

Not a pure shell, but Makefiles do rely on whitespace. I never understood why so many people care about that (not to mention that Python uses the curly braces as a useful syntax element for dicts & sets).

I use Python a lot so it's not a deal breaker to me, but even after years of use significant whitespace is still firmly in the "bad ideas" category for me. It's form over function.

When I code in C and move some code around I can just tell my editor "reindent this code" and it looks fine immediately. With Python I always have to double-check to make sure everything is in the right place.

Similarly, I almost never have to reindent anything manually in C. The editor always knows where I should be based on the number of open braces. In Python I often have to readjust. A common situation where that's annoying is if I want to add code after and outside a block: in C I just tell vim to open a line after the closing bracket and I'm immediately at the right location, while in Python doing this will leave the cursor at the wrong level of indentation.

It's not the end of the world and it's bikeshed territory but for me significant whitespace is in the same category as automatic semicolon insertion in JavaScript, I sorta get why somebody thought it was a good idea at some point but it's just more trouble than it's worth in practice.

Python wanted to be the anti-perl and it went too far in some places. Pseudo-code is only "pseudo" for a reason.

I get the point about moving code around, I guess it can be problematic with some tools. In my experience PyCharm gets it right, and when moving around code in vim I can just change the indent level on the whole block at once, but it is dangerous to miss a line at the end and move it by mistake one level up.

Also I feel like changing the indent of the cursor (manually) when exiting a block is more or less the same work as typing }, and I got used to it to the point it becomes automatic.

But I guess at the end of the day it's a matter of priority. Is that more important than having special syntax for dicts? I really like the fact that we have the tuple/list/dict distinction, and it would be great if we had a different paren type for sets (which would have eliminated the {} vs set() issue). On the other hand, going with begin..end or if..fi blocks would be even more antagonizing than using whitespace.

Makefiles don't generally have anything more than one level of indentation that is significant.

People care about significant whitespace because it's inferring heavy semantic meaning from something that's by default invisible, and even when shown visibly, may not actually be particularly obvious.

As I read it, the article is more about the concept of scripting in general - and not about any specific language bash/zsh/powershell/etc.

I definitely agree, I think that POSIX-shell scripts can often be completely unreadable and difficult to maintain. But I think the concept of scripting in itself is fine, even though the implementations we might use today are slightly outdated.

At the moment there's a huge number of interactive shells like nushell and xonsh being built, but they don't really focus on scripting; I'd really love to see more competitors attempting to take on the mess of bash scripts.

> At the moment there's a huge number of interactive shells like nushell and xonsh being built, but they don't really focus on scripting; I'd really love to see more competitors attempting to take on the mess of bash scripts.

Check out http://www.oilshell.org/ -- that is exactly its purpose. It runs your bash scripts, and then you can upgrade gradually to a better language.

It's also an interactive shell, but I concentrate on the language first.

> I think that POSIX-shell scripts can often be completely unreadable and difficult to maintain.

The same could be said for any language, if you're not familiar with it.

The thing is, when you restrict yourself to only what POSIX specifies, you don't even have basic data structures: no arrays, no hash maps, not even local variables, and that often leads to code that relies on multiple hacks.

For instance: you can't even slice "$@", which is the only array-like construct specified by POSIX (excluding unsafe splitting), so you end up shifting and re-enqueuing with set -- "$@" "$1", which is unreadable.

Want to read a single character? Impossible with POSIX read, but you can work around it with dd.


All those workarounds to POSIX limitations are fascinating, but they force a lot of arcane constructs, that's for sure.

And of course this is usually the point where you ask yourself if you've chosen the correct language for the task, but that's another debate.
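A condensed sketch of both workarounds described above, in plain POSIX sh (no bash arrays, no read -n):

```shell
# 1. "Slicing" $@ by rotating: save the head, shift it off, re-enqueue it.
set -- a b c
head=$1
shift
set -- "$@" "$head"
echo "$@"                # b c a

# 2. Reading a single character: POSIX read is line-oriented, but dd
#    can grab exactly one byte from stdin.
ch=$(printf 'xyz' | dd bs=1 count=1 2>/dev/null)
echo "$ch"               # x
```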

> so you end up shifting and re-enqueuing with set -- "$@" "$1", which is unreadable.

How is that unreadable? Set all the positional arguments as-is, then set the first one again, into the new positional arguments. I'm not sure why you want the first arg twice, but that's your call.

Either way, it's hardly unreadable.

> And of course this is usually the point where you ask yourself if you've chosen the correct language for the task, but that's another debate.

It isn't another debate though. It's all the same debate: do you know the language you're reading and/or writing? Do you know what it can do, what it can be pushed to do, and what it really shouldn't do?

Those are all related to the same basic point: if you think Shell is unreadable, I'd suggest it's because you don't know shell.

Remember, readable means that you can read and understand it. Not that it's written out in words that someone on page 1 of "How to Program for Dummies" can understand.

> Remember readable means that you can read and understand it

Yes you are right, my usage of "readable" was probably wrong there.

Maybe the point I was trying to make was less about the individual constructs and more about how the lack of "common" features makes the whole program less "understandable", or maybe less easy to get familiar with, simply due to the amount of code needed to achieve a specific task.

The same way assembly is considered less "readable" than C. Not because assembly is less readable on a line by line basis, it's even simpler, but because of the number of lines and operations needed to achieve a simple task.

Basically it's easier to understand 10 lines than a 1000.

It's hard to automate something with a GUI. It is also hard to make programs interoperate if they use a GUI.

It doesn't necessarily have to be, though: look at Plan 9. IIRC its GUIs respected UNIX's "everything is a file" philosophy, and thus were introspectable, modifiable, and scriptable from other programs.

Looking at modern desktop Linux, I wish it had retained more of this. Unfortunately, d-bus, GObject and Wayland aren't really file-based.

It doesn't have to be though, there's just a high correlation between hard-to-automate apps and non-command-line apps. You can e.g. make a GUI app where every button or keybinding is bound to a named function (GUI Emacs comes to mind), and where those named functions can be used in scripts. Another option is to have a scriptable backend; LibreOffice has a terminal interface that lets you e.g. convert between document formats from a shell script, and Astroid Mail is built on Notmuch as its backend which is scriptable.

Reaper has a billion actions you can assign to key combinations, and you can build your own toolbars from them. I think it's as close to a console application as I've seen a GUI get. It also has a scripting language you can use to make just about anything. GUI Emacs sounds like it probably influenced the Reaper people.

I think that we just don't have a standard for it. I know people use AutoHotkey to automate a lot of workflows involving GUI applications.

Recently there was a thread about Factorio (the game), which is all about automating. And once you have standard-ish inputs & outputs, you can achieve a lot of automations, in a different way.

It also highlights that the shell has limitations: it's hard to give multiple inputs, handle outputs, and create a complex pipeline. (I guess that's why we have things like Airflow.)

That has less to do with being shellable and more to do with having proper library support. The entire Microsoft Office suite is extremely well automatable.

Because of the capitalisation combined with the domain name, that title ("The Renaissance of the Shell") made me expect to read a piece of propaganda from the oil company. Considering that the term shell generally means any software whose purpose is to manipulate files and run other programs, it would be better to call it "The renaissance of the command-line shell".

If it had meant the oil company it would have omitted the definite article: "The Renaissance of Shell".

Or perhaps even included the full name: "The Renaissance of Royal Dutch Shell".

Thank you, my native language lacks articles.

Ah, in that case yours is a perfectly understandable interpretation. May I ask which language?


Thank you. Now another question: how would you make the same distinction in Russian? I mean an explanation of it not just the actual words, I had three weeks of Russian in high school nearly fifty years ago but none of it has stuck.

