Lots of people are short-sighted, like children who would eat candy every day if their parents didn't tell them no. Current copyright law has allowed Disney to essentially buy up all of popular culture. This has not been a good thing for the world.
It's a shame that people who supposedly work "in the arts" can be so blind to the world.
People tried to standardize on XMPP back in the day, but capitalism figured out that standardization didn't fit its profit motive. These days XMPP is a somewhat dated, XML-heavy protocol, but Matrix is a newer alternative, and it supports bridging.
While I agree with you, there are certainly other ways to make money on an open protocol. Email is perhaps a good example: we are still on SMTP/IMAP, and there are lots of businesses built on custom clients and the like. (OK, maybe not the best example, haha, but hopefully you get my point.)
Email is a glorious relic from the truly distributed internet that could have been...
That is why it's so useful! It was designed simply to work, not to enslave or silo its users.
The opportunities came after the market was created and adoption was widespread, because it was just so useful.
The security business opportunities exploded once Microsoft got into the market and things like computer viruses spread via email due to their total negligence and enabling ;)
I can still remember nasty things like Lotus Notes or cc:Mail, but once email became widespread and the momentum was undeniable, they could not give that sh*t away -- and they did try.
The problem is that you cannot make as much money as you can by gatekeeping, which means billions of dollars of VC money go to the gatekeeping app that offers its experience for free with no ads, and spends hundreds of millions on ads, influencers, and partnerships to promote its offering and kill the open competition.
And once they’re entrenched enough that’s when they turn the screws on the customer.
Unfortunately our antitrust laws didn’t imagine a world where the marginal cost of serving a new customer was close to 0, so offering a product for free in order to kill competition doesn’t really trigger antitrust laws even though it’s the same kind of behavior.
I think the closest we came to something like this was Slack suing MSFT for bundling Teams, and that probably only stood a chance because of Microsoft’s history.
On what data do you base that conclusion? The fact that cheaters can still exist doesn't mean anti-cheat never works. Have you A/B tested a game with and without anti-cheat?
We should just build more CLI tools; that way the agentic AI can just run `yourtool --help` to learn how to use them. Instead of needing an MCP server to access, e.g., Jira, it should just call a CLI tool, `jira`. Better CLI tools for everything would help AI and humans alike.
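As a sketch of what such a `jira` CLI's surface could look like (the subcommands here are entirely hypothetical, not Atlassian's actual tooling), argparse already gives an agent the `--help` text for free:

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    # Hypothetical `jira` CLI: subcommands mirror the operations an agent
    # (or a human) would otherwise reach through an MCP server.
    parser = argparse.ArgumentParser(
        prog="jira", description="Minimal Jira CLI sketch (hypothetical)"
    )
    sub = parser.add_subparsers(dest="command", required=True)

    view = sub.add_parser("view", help="Show a single issue")
    view.add_argument("issue_key", help="Issue key, e.g. PROJ-123")

    comment = sub.add_parser("comment", help="Add a comment to an issue")
    comment.add_argument("issue_key")
    comment.add_argument("--body", required=True, help="Comment text")

    # A real entry point would dispatch on build_parser().parse_args().
    return parser
```

With this shape, `jira --help` and `jira view --help` both produce usable documentation without any extra work, which is exactly what an agent needs to discover the tool.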
This would be awesome, but great CLIs would already have been valuable before the age of LLMs, and yet most services didn't ship one. I think it is because services like Jira and others do not want to be too open. Ultimately, despite the current LLM/MCP craze, I don't think this will change: MCP tools will start getting locked down and nerfed somehow, the same way APIs were after the craze around those a decade-plus back.
I agree with your conclusion that this stuff will get locked down again over time.
I think there's also another major reason people don't like to ship desktop software: the support cost of dealing with outdated tools can be immense.
A ticket is raised: "Why is my <Product> broken?"
After several rounds of clarification, it's established they're using a six-year-old version that's hitting API endpoints that were first deprecated three years ago and have finally been removed...
It's incredibly expensive to support multiple versions of a product. On-prem / self-host means you have to support several, but at least with web products it's expected they'll phone home and nag to be updated, and that there'll be someone qualified to do that update.
When you add runnable executable tooling, it magnifies the issue of how old that tooling gets.
Even with a support policy of not supporting versions older than <X>, you'll waste a lot of customer support time on issues, only for it to emerge that the software is outdated.
If that took "several rounds of clarification", then the support they're paying for is worthless. Getting the version of the application should be among the first bits of information collected, possibly even required for opening the ticket.
You've never asked someone for a version and got back a version number for a completely different product?
Obviously it depends on your audience, and 3 rounds is exaggerating for the worst case, but in previous places I've worked I've seen customer support requests where the first question that needed to be asked wasn't, "What version are you using?", it's "Are you sure this is our product you're using?".
Actually getting version info out of that audience would have been at least an email explaining the exact steps, then possibly a follow up phone call to talk them through it and reassure them.
If your reference is JIRA tickets or you're selling software to software developers, then you're dealing with a heavily filtered stream. Ask your CS team for a look at the unfiltered incoming mail, it might be eye-opening if you've not done it before. You might be surprised just how much of their time is spent covering the absolute basics, often to people who have had the same support multiple times before.
A big problem with CLI tooling is that it starts off seeming like an easy problem to solve from a dev's perspective: "I'll just write a quick Go or Node app that consumes my web app's API."
Fast-forward 12-18 months: after several new features ship and several breaking API changes land, teams that ship CLIs start to realize it's actually a big undertaking to keep installed CLI software up to date with the API. It turns out there's a lot of auto-updating infrastructure that has to be managed, and even if the team gets that right, it can still be tricky managing which versions get deprecated.
I built Terminalwire (https://terminalwire.com) to solve this problem. It replaces JSON APIs with a smaller API that streams stdio (kind of like ssh), and other commands that control browsers, security, and file access to the client.
It’s so weird to me how each company wants to ship their own CLI and auto-update infrastructure around it. It’s analogous to companies wanting to ship their own browser to consume their own website and deal with all the auto update infrastructure around that. It’s madness.
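One small piece of that deprecation story can at least be sketched: a client-side minimum-version gate, where the server advertises the oldest CLI build it still supports (e.g. in a response header). All names here are hypothetical, and plain `major.minor.patch` version strings are assumed:

```python
def parse_version(version: str) -> tuple:
    # "1.4.2" -> (1, 4, 2); assumes plain semver with no pre-release tags.
    return tuple(int(part) for part in version.split("."))


def check_supported(client_version: str, minimum_supported: str) -> bool:
    # The server advertises minimum_supported so that outdated CLI builds
    # can refuse to run (and prompt an update) instead of hitting removed
    # API endpoints and failing in confusing ways.
    return parse_version(client_version) >= parse_version(minimum_supported)
```

Tuple comparison makes the ordering correct per component, so "1.10.0" sorts after "1.9.0", which naive string comparison gets wrong.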
I've had good luck with having Claude write little CLI tools that interact with Jira: "cases" prints out a list of my in-progress cases (including immediately printing a cached list of cases, then querying Jira and showing any stragglers), "other_changes" shows me tickets in this release that are marked with "Other changes" label, "new_release" creates a new release in Jira, our deployment database, and a script to run the release, etc...
I could imagine a subagent that builds a tool on demand when it's needed.
Claude is really good at building small tools like these.
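The "print the cached list immediately, then show stragglers" behavior of a tool like `cases` can be sketched in a few lines of Python (the cache path is made up, and `fetch_live` stands in for the real Jira query):

```python
import json
from pathlib import Path

CACHE = Path("cases_cache.json")  # hypothetical cache location


def show_cases(fetch_live):
    # 1. Print whatever we cached last time, so the output is instant.
    cached = json.loads(CACHE.read_text()) if CACHE.exists() else []
    for case in cached:
        print(case)

    # 2. Query Jira (fetch_live is a stand-in for the real API call),
    #    print only the stragglers, and refresh the cache for next time.
    live = fetch_live()
    for case in live:
        if case not in cached:
            print(case)
    CACHE.write_text(json.dumps(live))
    return live
```

The nice property for interactive use is that the slow network query happens after the user already has something on screen.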
Jira actually has both an MCP server and a CLI tool (called "acli"). I switched our Claude Code from the MCP to the CLI (with a skill), as it seems more efficient and quicker.
Nobody shipped this because previously almost nobody could use CLI tools. Now you can just ask an LLM to generate the commands, which makes things much more accessible.
CLI tools for online services like Jira basically amount to an open and documented API, and as you mention, the attitude toward those is unlikely to change anytime soon.
The good news is that an LLM will be really helpful in scraping your content, locating alternative service providers, or even creating your own solution so you can migrate away.
That's only useful if the agent is running in your terminal. As for the example given about updating a cell in Excel, I suppose that is the sort of tool you could use for something. SharePoint has an API for updating Excel files, but updating a single cell is actually quite time-consuming because of the API round trip -- multiple seconds. I recently had to rewrite something because it was doing individual API calls to update cells.
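The per-cell round trips can be avoided by collapsing edits into one rectangular range write. Below is a minimal sketch of the client-side batching only: the resulting `values` grid roughly matches the shape that range-update APIs such as Microsoft Graph's workbook endpoint accept, but the endpoint details, and the read-merge needed for untouched cells, are deliberately left out as assumptions:

```python
def batch_range_payload(cells: dict) -> tuple:
    """Collapse per-cell updates like {(row, col): value} into a single
    rectangular range write, so one API round trip replaces many.

    Cells inside the bounding rectangle that weren't specified come out
    as None; a real implementation would read-merge their current values
    first rather than overwrite them.
    """
    rows = [r for r, _ in cells]
    cols = [c for _, c in cells]
    r0, r1 = min(rows), max(rows)
    c0, c1 = min(cols), max(cols)
    values = [
        [cells.get((r, c)) for c in range(c0, c1 + 1)]
        for r in range(r0, r1 + 1)
    ]

    def col_name(c: int) -> str:
        # 0-based column index -> spreadsheet letters (0 -> A, 27 -> AB).
        name = ""
        c += 1
        while c:
            c, rem = divmod(c - 1, 26)
            name = chr(ord("A") + rem) + name
        return name

    # A1-style address of the bounding range, e.g. "B2:C4".
    address = f"{col_name(c0)}{r0 + 1}:{col_name(c1)}{r1 + 1}"
    return address, values
```

One PATCH of `values` against `address` then replaces what would otherwise be dozens of multi-second single-cell calls.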
I use the GitLab CLI (glab) extensively, because it is so much better than the (official) GitLab MCP. I just run `glab auth login` before launching Claude Code, then tell CC to use `glab` to communicate with the GitLab API.
When using the MCP, I have to do a whole OAuth browser-launch process and even then I am only limited to the 9-10 tools that they've shipped it with so far.
That's pretty much how I have been using coding agents.
I get them to build small cli tools with a --help option and place them in a `./tools` directory.
Then I can tell an agent to use the tools to accomplish whatever task I need done.
Usually when I mention a `tool --help` for the first time in a prompt I will put it in backticks with the --help argument.
The agents have a tendency to make the "Examples" section of the help message way too long by stuffing it with redundant examples, so it needs to be manually pruned from time to time if you use an agent for tool development.
`gh-install` is a fish script (using curl and jq); it was made by an agent.
    gh-install -h
    Usage: gh-install [-i] [-q] [-s] [-p PATH] [-n NAME] [-f FILE] [-e EXECUTABLE] <repo> [version]
      -i            - Show info about what would be installed (no install)
      -q            - Quiet mode (suppress all output except -i info line)
      -s            - Install to /usr/local/bin (system-wide)
      -p PATH       - Install to the specified directory (incompatible with -s)
      -n NAME       - Install with custom binary name
      -f FILE       - Select specific file from release assets and use as binary name (unless -n is specified)
      -e EXECUTABLE - Select specific executable from extracted archive (when archive contains multiple executables)
      repo          - GitHub repository in format owner/name
      version       - Optional version tag (defaults to latest)
    Examples:
      gh-install cli/cli v2.40.1
      gh-install cli/cli
      gh-install -i cli/cli                        (show info only)
      gh-install -i -q cli/cli                     (show info quietly)
      gh-install -s cli/cli                        (install system-wide)
      gh-install -p /opt/bin cli/cli               (install to /opt/bin)
      gh-install -n gh cli/cli                     (install as 'gh')
      gh-install -f zed-remote-server zed-industries/zed   (install server file)
      gh-install -e server some-org/multi-tool     (install 'server' executable from archive)
Prompt:
Use `gh-install -h` to install asdf, hadolint, ripgrep, fd, delta and bat.
If I need it to do something that uses multiple tools I might just tell it to look in `./tools` for the available tools, so the prompt would be something like this.
Do x using the tools found in `./tools` (they all have a `-h` option).
I also have several tools that are just JS scripts using Playwright (webpage as the API) to fetch data and return it in JSON format. Then I can tell the agent to use that tool plus jq to do the data processing.
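The same webpage-as-API pattern can be sketched with Playwright's Python bindings (the commenter uses JS; the URL, selector, and function names here are made up). The key convention is the output side: a single JSON document on stdout, so the agent can pipe the tool straight into jq:

```python
import json
import sys


def scrape_titles(url: str) -> list:
    # Hypothetical scraper using Playwright's sync API. Imported lazily so
    # the JSON-emitting half of this script works even without Playwright.
    from playwright.sync_api import sync_playwright
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(url)
        titles = page.locator("h2").all_inner_texts()
        browser.close()
    return titles


def emit(records) -> None:
    # One JSON document on stdout, nothing else, so the tool composes
    # cleanly with jq, e.g.: scrape.py https://example.com | jq '.[0]'
    json.dump(records, sys.stdout)
```

Keeping diagnostics on stderr and pure JSON on stdout is what makes these tools usable by both agents and shell pipelines.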
Yeah, I have a feeling we will instead start exposing some /help API that the AI will first call to see all possible operations and how to use them, in some sort of minified documentation format.
This is exactly what I do when given a PAT + API documentation: write my own tool. Sure, it'd be better if Atlassian did it, but I'm not holding my breath.
Or, why not let the LLM write the tool and give it to the agent? Taking it one step further, the tool could be completely ephemeral -- it could have a lifetime of exactly one chat conversation.
Git was created to replace BitKeeper. There used to be a whole industry of commercial version control software. I grabbed the following list from Wikipedia (but I imagine there were even more companies around):
- AccuRev SCM (2002)
- Azure DevOps Server (via TFVC) (2005)
- ClearCase (1992)
- CMVC (1994)
- Dimensions CM (1980s)
- DSEE (1984)
- Integrity (2001)
- Perforce Helix (1995)
- SCLM (1980s?)
- Software Change Manager (1970s)
- StarTeam (1995)
- Surround SCM (2002)
- Synergy (1990)
- Vault (2003)
- Visual SourceSafe (1994)
A lot of companies were also building their own internal version control software, either from scratch or as a bunch of loose scripts on top of other existing solutions, often turning version control, package management, and build scripts into one complex, messy solution. I worked for a company early in my career that had at least four different version control systems in use across different projects, and even more build systems (all a mix of commercial software and homegrown solutions).
These days almost everyone uses Git. Some companies use Mercurial or SVN.
One commercial actor that is still around is Perforce, which remains popular in the game industry, since managing large game assets isn't optimal in Git (though it is possible with Git LFS, git-annex, or similar solutions).
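For reference, routing assets through Git LFS comes down to `.gitattributes` patterns; these lines are what commands like `git lfs track "*.psd"` write, with the extensions here just examples of typical large game-asset formats:

```
# .gitattributes -- route large binary assets through Git LFS
*.psd filter=lfs diff=lfs merge=lfs -text
*.fbx filter=lfs diff=lfs merge=lfs -text
*.wav filter=lfs diff=lfs merge=lfs -text
```

Files matching these patterns are stored as small pointer files in the repository, with the actual content living on the LFS server.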
Android isn't "just Linux". It has a heavily modified kernel, often a closed-source bootloader, and the claim is completely untrue for userspace, where Android incorporates stuff from other OSs (BSDs, etc.). There are huge amounts of blobs.
Yes, there technically is a Linux kernel, but if that makes it "just Linux", then macOS is "just FreeBSD", because grep -V tells you so, because it has dtrace, because you run (ran?) Docker on what is effectively FreeBSD's bhyve, etc.
If you want to spin it even further, neither Safari nor Chrome nor any other WebKit browser is "just Konqueror" because they took the layout-engine code from KDE (KHTML).
And you can totally install Debian or even OpenBSD, etc., on a Steam Deck, and at least the advertising seems to indicate it won't be all that different for the VR headset.
The problem is that you're talking about the Linux desktop ecosystem, whereas the OP could be talking about the kernel. Both are "just Linux" (and the fact that we've not evolved our nomenclature to differentiate the two is surprising). Also, FWIW, the Android kernel is no longer heavily modified; most of the custom stuff has been upstreamed.
I'd just like to interject for a moment. What you're referring to as Linux, is in fact, GNU/Linux, or as I've recently taken to calling it, GNU plus Linux. Linux is not an operating system unto itself, but rather another free component of a fully functioning GNU system made useful by the GNU corelibs, shell utilities and vital system components comprising a full OS as defined by POSIX.
Many computer users run a modified version of the GNU system every day, without realizing it. Through a peculiar turn of events, the version of GNU which is widely used today is often called Linux, and many of its users are not aware that it is basically the GNU system, developed by the GNU Project.
There really is a Linux, and these people are using it, but it is just a part of the system they use. Linux is the kernel: the program in the system that allocates the machine's resources to the other programs that you run. The kernel is an essential part of an operating system, but useless by itself; it can only function in the context of a complete operating system. Linux is normally used in combination with the GNU operating system: the whole system is basically GNU with Linux added, or GNU/Linux. All the so-called Linux distributions are really distributions of GNU/Linux!
That doesn't in any way mean you can install an alternate OS. But I get your point that at least you can run Arch stuff. Isn't Arch's ARM support unofficial? (It's been ages since I tried.) You don't hear of people running it on RPis, for example.
Well, it doesn't say so in any docs or specs, but for what it's worth, Valve's hardware has always been open like that. You're free to install Windows on your Steam Deck, for example.
Valve sponsored Asahi Linux, which was a herculean exercise in running another OS on locked-down hardware. They've also sponsored Wine and FEX. It would be a sudden, steep, and unexpected departure for them to go from being leaders in cross-platform OS/hardware support to locking down their own hardware platform. It's just not in their nature. They know their nature is good, and they know we know it. That's called trust.
They're being a little vague about it, but this collaboration to improve Arch's build service/infrastructure is being done in part to facilitate support of multiple architectures.
IIRC it was in Tested's coverage that Valve said the hardware supports other OSes. It'd be out of character for Valve not to allow this.
If it's anything like the Deck, then the version of SteamOS on it won't be locked down in any way whatsoever. You can install Windows or any other distro you want on the Deck with zero issues (other than the regular ones you'd experience on any computer; nothing to do with Valve locking anything down).
The Steam Deck was not ARM. Unlike the Steam Machine page, the Steam Frame page does not insinuate you can put a custom OS on it. On top of custom drivers, which are not necessarily upstreamed, Qualcomm SoCs always require closed-source userspace daemons that are coupled to the kernel.
Valve has been working with Linaro to develop FOSS drivers for the Adreno 750. This is necessary given how heavily Valve leans on integrations with Mesa, whereas Qualcomm's drivers are designed for an Android environment.
I don't see why they wouldn't unlock the bootloader; it wouldn't be the first Qualcomm-based product to allow it, and in press interviews they have stressed, quite hard, that the Frame is still a PC.
Won't that introduce new security problems? Seems like a step back.