I think a better example might be the search boxes that have been added to launch menus (Unity, Windows 8, etc...). People would probably be more likely to start using that as a text interface than a few chat commands. Google figured that out a long time ago.
Text generally has a higher signal-to-noise ratio, but I wouldn't agree that interfaces are evolving that way. Text is typically a power-user interface, but it could probably be more effective for simple things too if it were designed well. It's faster to interact with (see Fitts' Law), among other potential benefits.
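For reference, Fitts' Law says pointing time grows with the ratio of target distance to target size, which is the cost a keyboard interaction avoids. A rough sketch of the Shannon formulation, with made-up device-dependent constants a and b:

    import math

    def fitts_movement_time(distance, width, a=0.1, b=0.15):
        # Shannon formulation: MT = a + b * log2(D/W + 1)
        # a and b are illustrative constants, not measured values
        return a + b * math.log2(distance / width + 1)

    # A small, far-away button costs more time than a large, nearby one
    print(fitts_movement_time(distance=800, width=20))   # ~0.90
    print(fitts_movement_time(distance=100, width=100))  # ~0.25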
I think the Google example is interesting because if you watch people use Google, they formulate their questions as if they were asking a human, not a search engine or a command line. They don't search for keywords, they don't use the + and - operators, and they don't quote snippets of text that should match exactly. Instead they ask a question, starting with how/what/who/when/where, and sometimes even add a question mark at the end!
If I wanted to find out who is starring in a movie, for example, I would search for the movie title, click on IMDb and browse through the actor list. If a non-techie did the same thing, the query would be something like "who was the lead actor in X?".
Now translate this into replying to a comment in a bug tracker. Google has thousands of engineers working just on interpreting "stupid" queries; there is no way this type of AI can scale and work against some arbitrary API to a bug tracker maintained by 5 people.
Discoverability of available commands is also still an issue, and not one even Google has solved. In their case people only know about one command because there is only one: search. If you were to add commands such as "create task", "reply", "send email" and "tweet", how would that even work? Is the input first interpreted as a search query about how to send emails, and then it suggests opening your email client? While that's a nice way to start your email client, and exactly how the Windows start menu works, it doesn't help any further in the workflow of translating GUI tasks into command-line tasks because it only takes you past the first step.
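To make that concrete, here's a rough sketch of a launcher that checks a handful of known commands and falls back to search for everything else; the command names and behaviour are hypothetical, not any real launcher's API:

    # Hypothetical launcher dispatch; commands and actions are made up
    COMMANDS = {
        "create task": lambda arg: print(f"creating task: {arg}"),
        "reply":       lambda arg: print(f"replying: {arg}"),
        "send email":  lambda arg: print(f"composing email: {arg}"),
        "tweet":       lambda arg: print(f"tweeting: {arg}"),
    }

    def dispatch(query):
        for name, action in COMMANDS.items():
            if query.lower().startswith(name):
                return action(query[len(name):].strip())
        # Anything unrecognised degrades to a plain search,
        # which is exactly where the workflow stops helping.
        print(f"searching the web for: {query}")

    dispatch("send email to the maintainers")   # matches a known command
    dispatch("how do I send an email?")         # falls through to search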
I think people probably do tend to type queries into search providers like Google in whatever their native language is. A web text box might not be the best example for people trying to do discrete tasks, but at a higher level the queries are probably part of some goal like "get times for new Ridley Scott movie" or "check availability of X". I'm not sure I can see people typing that into a launch menu, but I'm guessing that's roughly what Microsoft wanted to see happen with Cortana.
Strictly using commands, discoverability probably is still an issue. Text doesn't necessarily need to be command-based, though. For example, if there are 10 (100, 1000, etc.) options on a screen, I suspect it's probably easier to select one using text. In some ways it could be similar to a touchscreen.
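As a rough sketch of what selecting by text could look like (the option names here are made up), a filter-as-you-type box narrowing a long list:

    # Hypothetical filter-as-you-type selection over many on-screen options
    options = [f"Option {i}" for i in range(1000)] + ["Crop image", "Resize image"]

    def filter_options(query, options):
        q = query.lower()
        return [o for o in options if q in o.lower()]

    print(filter_options("resiz", options))  # narrows 1000+ choices down to ['Resize image']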
It's not exclusive to Slack; in Twitch chat, for example, there are a bunch of text commands that people love to use. When people use emojis and other such things they usually use a text macro. If you've noticed with programs like Photoshop and Illustrator, it's recommended for novices to learn all of the keyboard shortcuts.
If anything this points to the future of UI being a combination of text and point and click.
It's not exclusive to Slack; in Twitch chat, for example, there are a bunch of text commands that people love to use.
People who use Twitch a lot love to use them. People who don't use Twitch much don't bother. This backs up the argument that power users like macros.
When people use emojis and other such things they usually use a text macro.
True, but that might be because PC keyboards don't have emoji. There's no other option. People certainly don't use a macro on mobile; they use an emoji keyboard because that's what they prefer.
If you've noticed with programs like Photoshop and Illustrator, it's recommended for novices to learn all of the keyboard shortcuts.
There's an assumption, when you're learning something with the end goal of understanding that application, that you're taking the first steps towards becoming an expert (aka a power user). If someone is only learning how to achieve a single end result, they don't start with the shortcuts. For example, someone who just wants to learn how to resize and crop images in Photoshop learns the two or three features they're going to use and then stops.
I don't know very many people outside of the software industry who are happy with a command line interface, and even fewer who're happy with macros on a command line.
Also, if Partyline is the future, then I don't like it. The two examples in the article are inconsistent:
/partyline create Signup endpoint is 500'ing label:bug
vs
/partyline create:task Write about the future of text-based interfaces
Why is the first one using a "create" verb with a "bug" label and the second using a "create:task" verb? Why not "create:bug" in the first one, or "label:task" in the second?
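To make the inconsistency concrete, here's a rough sketch of what a parser would have to do to accept both forms; the field names and logic are my guesses, not Partyline's actual implementation:

    # Hypothetical parser; Partyline's real syntax and API may differ
    def parse(command):
        verb, _, rest = command.partition(" ")
        item = {"labels": []}
        if ":" in verb:                       # "create:task ..." form
            action, _, kind = verb.partition(":")
            item["type"] = kind
        else:                                 # "create ... label:bug" form
            action = verb
        words = []
        for token in rest.split():
            if token.startswith("label:"):
                item["labels"].append(token[len("label:"):])
            else:
                words.append(token)
        item["action"] = action
        item["title"] = " ".join(words)
        return item

    print(parse("create Signup endpoint is 500'ing label:bug"))
    print(parse("create:task Write about the future of text-based interfaces"))

The same kind of information ends up as a label in one case and as part of the verb in the other, which is what makes the syntax feel arbitrary.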
True, but that might be because PC keyboards don't have emoji. There's no other option. People certainly don't use a macro on mobile; they use an emoji keyboard because that's what they prefer.
I actually set up the few emoji I use as macros, so I don't have to switch keyboards. Then again, I fall into the power-user category.
Sigh.