This is like comparing OpenAPI and strings (that may be JSON). That is: weird, and possibly even meaningless.
MCP is formally defined in the general sense (including transport protocols), CLI is not. I mean, only specific CLIs can be defined, but a general CLI is only `(String, List String, Map Int Stream) -> PID` with no finer semantics attached (save for what the command name may imply), and transport is “whatever you can bring to make streams and PIDs work”. One has to use `("cli-tool", ["--help"], {1: stdout})` (hoping that “--help” is recognized) to know more. Or use man/info (if the CLI ships a standardized documentation), or some other document.
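The generic CLI contract described above can be sketched in a few lines of Python. This is a hand-rolled illustration, not anything a specific tool ships; `cli-tool` from the comment is hypothetical, so `echo` stands in for it:

```python
import subprocess

def run_cli(cmd, args, capture_stdout=True):
    """The generic CLI 'API': (command, args, stream wiring) -> process result.

    Any semantics beyond the exit code and raw byte streams are whatever
    the tool's own docs (--help, man pages) happen to define.
    """
    proc = subprocess.run(
        [cmd, *args],
        stdout=subprocess.PIPE if capture_stdout else None,
        text=True,
    )
    return proc.returncode, proc.stdout

# Discovering what a tool does means asking it, and hoping the
# convention is honored (here with echo as a safe stand-in):
code, out = run_cli("echo", ["hello"])
```

The point the sketch makes: nothing in the type of `run_cli` tells you what the command means; all the semantics live in out-of-band documentation.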
But in the end they’re both just APIs. If sufficient semantics are provided, they both do the trick.
If immediate (first-prompt) context size is a concern, just throw in a RAG that can answer what tools (MCPs or CLIs or whatever) exist out there that could be useful for a given task, rather than pushing all the documentation (MCP or CLI docs) proactively. Or, well, fine tune so the model “knows” the right tools and how to use them “innately”.
Point is, what matters is not MCP or CLI but “to achieve X must use F [more details follow]”. MCP is just a way to write this in a structured way, CLIs don’t magically avoid this.
CLI tools are designed to provide complete documentation via `--help`. Given that LLMs are capable of fully understanding the output, how is the MCP standardization any better than the CLI `--help` standardization?
I would spend less time with theory and more time with practice to understand what people are getting at. MCP and CLI could, in theory, be the same. But in practice as it stands today, they are not.
> MCP is just a way to write this in a structured way,
Nope! You are not understanding or are actively ignoring the difference which has been explained by 20+ comments just here. It's not a controversial claim, it's a mutually agreed upon matter of fact by the relevant community of users.
The claim you're making right now is believed to be false, and if you know something everyone else doesn't, then you should create an example repo that shows the playwright CLI and playwright MCP add the same number of tokens to context and that both are equally configurable in this respect.
If you can get that right where so many others have failed, that would be a really big contribution. And if you can't, then you'll understand something first-hand that you weren't able to get while you were thinking about it theoretically.
> then you should create an example repo that shows the playwright CLI and playwright MCP add the same number of tokens to context and that both are equally configurable in this respect
That's just an implementation detail of how your agent harness decides to use MCP. CLI and MCP are on different abstraction layers. You can have your MCP available through a CLI if you so wish.
Please, please, please actually do this yourself or read any of the top comments. You are still missing the point, which you will understand if you actually do this and then look at the logs.
Fair enough, I’ll give it a try when I have time for it.
I recognize that MCP as typically used would eat a good chunk of context - shoving all those API specs is wasteful for sure. The solution to this, I believe, is either RAG or single-tool (Context7-like), where relevant APIs are only provided on demand from models’ intermediate requests.
Caveat is model may need training to use that efficiently (or even at all, esp. smaller models are either very shy or crazy with tool use), and I don’t want to spend time fine tuning it. Could be that’s where the reality may prove me wrong.
But a token is a token. There is not a lot of difference between Playwright (or any other tool) usage documentation wrapped in JSON with some logical separations, or provided as a long plain-text blob (ultimately most likely also wrapped in JSON). So if the model doesn’t know how to use some tool innately (it may, for Playwright), and if it needs all of the tool’s functionality, I’m sure a CLI wouldn’t fare any better than MCP. But if the model knows the tool or needs just a small bit of its capabilities, naive MCP is going to be a bad idea.
Just like a human. If all I need is some simple thingy, I probably don’t need a whole textbook upfront, just a select excerpt. As far as I understand MCP, supplying the full textbook in the system prompt is not MCP’s innate design fault; it’s merely the simplest implementation approach.
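The upfront-versus-on-demand tradeoff above can be made concrete with a toy harness. Everything here is invented for illustration: the tool names and docs are made up, and a naive whitespace split stands in for a real tokenizer:

```python
# Toy comparison: inject every tool's full docs into the first prompt,
# vs. hand the model a one-line index and load docs only on request.

TOOL_DOCS = {  # hypothetical tools with deliberately long usage docs
    "browser.navigate": "Navigate the browser to a URL. " * 40,
    "browser.click":    "Click an element matched by selector. " * 40,
    "fs.read":          "Read a file from disk and return text. " * 40,
}

def tokens(text):
    """Crude token count: whitespace split, a stand-in for a real tokenizer."""
    return len(text.split())

# Upfront (naive MCP-style): all docs land in context, needed or not.
upfront_cost = sum(tokens(doc) for doc in TOOL_DOCS.values())

# On demand (RAG / Context7-like): the first prompt carries only an
# index of names; a full doc is fetched when the model asks for it.
index_cost = tokens(" ".join(TOOL_DOCS))
one_lookup = tokens(TOOL_DOCS["fs.read"])

print("upfront:", upfront_cost, "on-demand:", index_cost + one_lookup)
```

With these made-up numbers the on-demand path costs a fraction of the upfront one whenever only a subset of tools is actually used, which is the "select excerpt, not the whole textbook" point.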
I'm rooting for you, to be clear! It sounds like your approach is more sophisticated than the average, and this is a pain point that is starting to get a lot of attention.
> Open-source models are only a couple of months behind closed models
Oh, come on, surely not just a couple of months.
Benchmarks may boast some fancy numbers, but I just tried to save some money by trying out Qwen3-Next 80B and Qwen3.5 35B-A3B (since I've recently got a machine that can run those at a tolerable speed) to generate some documentation from a messy legacy codebase. It was nowhere close, in either output quality or performance, to any of the current models that the SaaS LLM behemoth corps offer. Just an anecdote, of course, but that's all I have.
I believe the problem is not smart glasses per se, but spyware that runs on a lot (if not most) of such devices.
Shame the language makes people intrinsically hate the former by associating it with the latter without even questioning it. The idea of smart glasses is cool, the implementations are not.
Smart glasses are spyware. The ability to record without my knowledge or consent is what I take issue with. I don't particularly care if you self host.
> The ability to record without my knowledge or consent
All major brands have a clear indicator for when they're recording.
Someone could block that indicator out, but someone could also just go to Amazon.com and select one of hundreds of available pinhole cameras or not-smart camera glasses.
These aren't enabling an ability that hasn't been enabled for decades. If anything, seeing someone with main brand smart glasses makes it more obvious.
Existing alternatives also make me uncomfortable for the exact same reasons. I would prefer to avoid anyone who purchases a pinhole camera for public use, regardless of whether it came with an LED to indicate recording.
To their credit, smart glasses are an obvious signal for me to avoid. That doesn't make me appreciate them any more.
Every cellphone in every hand is a recording device, very often used in public. Where I am, you can look almost anywhere, at any time, and see someone on a phone call, taking a picture/video, posing, etc. What's the significant difference that I'm not seeing, especially since the smart glasses have an indicator and cellphones DO NOT?
No difference. If I see your recording in public, either via cell phone or smart glasses or shoulder mounted news rig, I do my best to steer clear. I don't like Alexa or Flock or whatever else either.
I do not agree that the existence of surveillance tech justifies the expansion of surveillance tech.
Not only that, but smart glasses have terrible recording time limits. A cheap $30 pinhole camera with an SD card will far surpass Meta glasses in recording capabilities.
Hidden cameras have been a thing for a long time now. Stick one in a pair of glasses and give it a super short battery life and people freak out...
Wearing a hidden camera and recording people is also very socially unacceptable. If someone knew you were wearing one, they would probably also “freak out”.
Every person holding a cellphone up in their hands could also be pointing a camera around at people, a camera with much higher fidelity, computing power, and one that can take much longer videos.
This is just panic about a new form factor. The same thing happened when cell phones came along, with the exact same talking points.
Totally agree, but that's not a justification. "We already do a thing you don't like so you won't mind if we do it lot more, right?"
The same talking points still apply to cell phones. I think people who record TikToks in public are similarly gross and I go out of my way to avoid them.
I watched a guy set up a cell phone to record his laps in a pool yesterday. He swam one lap right about a meter from the 15-year-old girl playing with her mom, then climbed out of the pool, shut off his phone, and walked away. The remainder of the pool was open. Should I have called him out? I couldn't decide, and therefore didn't. This is normal now.
Smart glasses (or any camera-equipped device) don’t have to record anything to provide utility.
If anything, the primary utility of smart glasses is the wearable display, not camera. YMMV, of course.
But even machine-vision-capable devices can do a lot of useful things without causing you any trouble, unless your issues are more of a religious concern than anything substantial.
I'm not entirely sure what your exact threat scenario is if someone records your image, especially given that you've said it doesn't matter to you whether it gets siphoned straight into some megacorp database, stays on a private home server, or gets processed on-device only.
But... aren't the already existing protections, which make it e.g. illegal to distribute your image or its derivatives, sufficient? If someone does you wrong, you can seek recourse. If everyone is respectful of each other (and we hate corporations instead of technologies), we enable a lot of legitimate uses, making the world better: more accessible, and easier to learn and understand.
Oh, it very much does matter to me what happens to recordings! I should have made that clearer. Self-hosting is infinitely preferable to sending that info to Facebook. This isn't enough to flip my opinion on the technology in general, but if I had a friend who wanted to self-host their own smart glasses, I would not mind. The keyword there is friend.
My issue is that I don't have the ability to audit every smart glasses user to find out what their tech stack is, so I'm looking at averages. If I saw that smart glasses were being used and promoted as assistive technology, I would likely form a different opinion. Unfortunately, that's not what I perceive. I am open to the possibility that smart glasses could end up a net positive for our society, but the history of similar technologies is not encouraging.
I will think more on your comments here. I find them quite insightful.
Edit: For a rather recent example of my threat model, I will repost part of my comment from elsewhere in the thread:
> I watched a guy set up a cell phone to record his laps in a pool yesterday. He swam one lap right about a meter from the 15-year-old girl playing with her mom, then climbed out of the pool, shut off his phone, and walked away. The remainder of the pool was open. Should I have called him out? I couldn't decide, and therefore didn't. This is normal now.
A challenge to my beliefs! What a great question! Upon reflection, I don't have anywhere near as much reaction to dashcams. There's certainly some dissonance there.
I think it's an issue of perceived benefit vs perceived risk. I see the utility in both technologies, but I assign significantly higher risk to smart glasses. I really struggle to imagine widespread abuse from dashcams.
It's a more fundamental issue than those legal oddities of the day. It's whether people have a right to remember, right to share their memories (there must be lots of nuances here), and whether others have a right to be forgotten or deny some or all of such sharing - and how all those play together.
I can't wait for the day brain-machine interfaces become more advanced and commonplace (so cyborgs become something way more advanced than just limb prosthetics), and hope the day comes fast enough that the true issue is forced before any decisions are made off ill-informed assumptions and the shuttle designs are left to depend on the width of a horse's ass.
I have a right to collect evidence in my own defense, and that evidence may not be abrogated by bystanders to the event who might attempt to prevent me from gathering that evidence.
It's the Universal Declaration of Human Rights, and it covers you whether you like it or not, thankfully. You might not like it, but that's going to change the moment you need to exercise that very right yourself.
How do you figure? There is no "right to record," nor is surveillance mentioned in the Declaration of Human Rights. In fact, it points out in Article 12 and 29 that rights and freedoms can and should be limited by law if they impinge on the rights and freedoms of others, such as those mentioned in Article 12:
> No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation. Everyone has the right to the protection of the law against such interference or attacks.
That doesn't seem as clear cut as you're implying.
Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.
Seeking and receiving information covers gathering facts, evidence, or observations from public events or spaces (e.g., documenting protests, government actions, police conduct, or everyday occurrences visible in public).
You might not like it, but its a key mechanism by which we, the people, keep despots and the police state in check.
I do like it, and agree it's an important mechanism, but it's not a blank check as it's in tension with the other articles. I do not read that as granting you the right to any and all information you might desire. For instance, I hope we can agree that allowing the public to film bathrooms or gynecology appointments crosses a line.
Oh, there are always going to need to be exceptions to the rights. Take the tacit contract one enters into, abrogating the right to record, when entering a privacy-respecting space that is marked as such and is not part of the public commons but rather belongs to a private entity whose intent was to create a private bathroom, in which people are definitely not to record each other's activities without the additional contract (i.e. consent) of all parties involved.
But it still has to be iterated in light of such exceptions, that the rights encoded in the UDHR are there to protect humanity, as a species, so that we can indeed form our own cultures freely as we see fit.
The Universal Declaration of Human Rights has as much teeth as The New Colossus does. It's a bunch of prose with absolutely no binding or enforcement mechanisms.
Following the notion that one needs force to get things done is rather a tempestuous path to take.
The rights are there for all of us, and indeed they are generally aligned with natural human phenomena, specifically for the purpose of allowing the weak and the strong to live as equals, universally.
Sure, you have the right as long as you have the gun. But you still have the rights once you lose the gun too, human.
Yeah, that is totally okay; it's why human rights are so important to protect. You wouldn't want to be in a situation where an authority doesn't allow you to avoid them like the plague, would you? It is, therefore, your right to record those authorities, so that they will go away, too.
Totally agree. To be clear, I'm not arguing for a ban on smartglasses. I'm simply explaining why they make me uncomfortable.
The ability to record authorities is something I fully support, but I still don't want to be in that video if I can help it.
On top of that, most smartglasses are not private. If authorities can access the feeds, then my neighbor with RayBans becomes an authority, and it makes it that much harder for me to avoid them like the plague. This similarly applies to Ring doorbells and Flock ALPRs.
Yes, and I am saying I am tired of those boring cop-out "analyses". Yes, I have a social science degree; it was full of those. Make solutions instead. Anyone can """analyze""".
> Are you okay with private and intimate conversations and moments (including of underage family members) being saved for replaying later?
Is this somehow fundamentally different from having memories?
Because I thought about it, and decided that personally I am - with one important condition, though. I am okay with it because my memories are not as great as I would like them to be, and they decline with stress and age. If a machine can supplement that in the same way my glasses supplement my vision, or my friend's hearing aid supplements his hearing - that'd be nice. That's why we have technology in the first place, to improve our lives, right?
But, as I said, there is an important condition. Today, what's in my head stays in there, and is only directly available to me. The machine-assisted memory aid must provide the same guarantees. If any information leaves the device without my direct instruction - that's a hard "no". If someone with physical access to the device can extract the information without a lot of effort - that's also a hard "no". If someone can too easily impersonate myself to the device and improperly gain access - that's another "no". Maybe there are a few more criteria, but I hope you got the overall idea.
If a product passes those criteria, then it - by design - cannot violate others' privacy - no more than I can do myself. And then - yeah - I want it, wish there'd be something like that.
>That's why we have technology in the first place, to improve our lives, right?
No, we have technology to show you more and more ads, sell you more and more useless crap, and push your opinions on Important Matters toward the state approved ones.
Of course indoor plumbing, farming, metallurgy and printing were great hits, but technology has had a bit of a dry spell lately.
If "An always-on AI that listens to your household" doesn't make you recoil in horror, you need to pause and rethink your life.
If you can't think of an always-on AI that listens but doesn't cause any horrors (even though it's improbable it would get to market in the world we live in), I urge you to exercise your imagination. Surely it's possible to think of an optimistic scenario?
Even more so, if you think technology is here to unconditionally screw us up no matter what. Honestly - when the world is so gloomy, seek something nice, even if a fantasy.
Not only is it improbable, it's a complete fantasy. It's not going to happen. And personally, I'm of the opinion that having AI be a constant presence in your life and relying on it to assist you with every minor detail or major decision is dystopian in the extreme, and that's not even factoring in the inevitable Facebook-esque monetisation.
>when the world is so gloomy, seek something nice, even if a fantasy
I don't need fantasy to do that. My something nice is being in nature. Walking in the forest. Looking at and listening to the ocean by a small campfire. An absence of stimulation. Letting your mind wander. In peace, away from technology. Which is a long winded way to say "touch grass", but - and I say this sincerely without any snark - try actually doing it. You realise the alleged gloom isn't even that bad. It's healing.
> I'm of the opinion that having AI be a constant presence in your life and relying on it to assist you with every minor detail or major decision is dystopian in the extreme
Could that be because you're putting some extra substance in what you call an "AI"? Giving it some properties that it doesn't necessarily have?
Because when I'm thinking about "AI" all I'm giving to it is "a machine doing math at scale that allows us to have meaningful relation with human concepts as expressed in a natural language". I don't put anything extra in it, which allows me to say "AI can do good things while avoiding bad things". Surely, a machine can be made to crunch numbers and put words together in a way that helps me rather than harms me.
Oh, and if anything - I don't want "AI" to save me thinking. It cannot do that for me anyway, in principle. I want it to help me do the things machines finally start to do acceptably well: remember and relate things together. This said, yeah, I guess I was generous with just a single requirement - now I can see that a personal "AI" also needs its classifications (interpretations) to match the individual user's expectations as closely as possible at all times.
> It's not going to happen.
I can wholeheartedly agree as far as "it is extremely unlikely to happen", but... to say "it is not going to happen", after the last five years of "that wasn't on my bingo card"? How can you be so sure? How do we know there won't be some more weird twists of history? Call me naive, but I'd rather imagine something nice happening for a change. And it's not beyond fathomable that something crashes and the resulting waves bring us toward a somewhat better world.
Touching grass is important, and it helps a lot, but as soon as you're back - nothing has gone anywhere in the meanwhile. The society with all its mess doesn't disappear while we stop looking. So seeking an optimistic possibility is also important, even if it may seem utterly unrealistic. I guess one just has to have something to believe in?
I can imagine a lot of ways we could be using the new tech advancements of the last decade or two in really great ways, but unfortunately I've seen things go in very bad directions almost every time, and I do not have faith that this trend will stop in the future.
I really hope that, before I get old and fragile, I will get my smart robotic house, with a (local!) AI assistant always listening to my wishes and then executing them.
The real horror, rather, is being old and forgotten in half-hearted care, like most old people are right now. AI and robots can bring empowerment. And it is up to us whether we let ad companies serve them to us from the cloud, or run local models in the basement.
When I look at Google, I see a company that is fully funded by ads, but provides me a number of highly useful services that haven't really degraded over 20 years. Yes, the number of search results that are ads grew over the years, but by and large, Google search and Gmail are tools that serve rather benevolently. And if you're about to disagree with this ask yourself if you're using Gmail, and why?
Then I look at Meta or X, and I see a cesspool of content that's driven families apart and created massive societal divides.
It makes me think that Ads aren't the root of the problem, though maybe a "necessary but not sufficient" component.
Google is almost cartoonishly evil these days. I think that's pretty much an established fact at this point.
I'm not using Gmail, and I don't understand why anyone would voluntarily. It was the worst email client I'd ever used, until I had to use Outlook at my new job.
The only Google products I use are YouTube, because that's where the content is. And Android, because IOS is garbage and Apple is only marginally less evil than Google.
I’ve recently begun using my personal domain as my primary email address, with it forwarding to gmail so I can “get out” easily if I ever had a reason. That said, I’ve found Gmail’s service great, their spam filtering highly effective, (although I haven’t surveyed the competition lately so it’s possible their huge advantage no longer exists) and their features pretty user-friendly (eg the one-click unsubscribe as well as a page to view all your subs in one place). I have never felt like they _abused_ the immense amount of data they have about me nor used it for “evil” purposes; only to profit on relevant ads that are at least clearly marked and unobtrusive. I don’t like that they have so much data on me, but I’ve felt like they’ve been transparent about it, so it’s been on me for making a decision eyes wide open. As opposed to Meta and the shady shit they’ve been caught doing...
That said, I’m open-minded and obviously thinking about this given moving to my own domain.
What’s the evil behavior you’ve experienced? I’m down to move off if I’m oblivious to something…
Yeah, the question is what the optimal feedback loop between producers and consumers is, and what the appropriate communication channels are that respect human rights and that we can all agree on.
I understand the rationale, but don’t you see how this idea contradicts autonomy of decisions for able-minded people? Such good intentions tend to be a pavement on roads to bad places.
I’d rather suggest to inform about all the potential benefits and drawbacks, but leave decisions with the individual.
Especially given that it’s not something irreversibly permanent.
Memories are usually private. People can make them public via a blog.
AI feels more like an organized sniffing tool here.
> If a product passes those criteria, then it - by design - cannot violate others' privacy
A product can most assuredly violate privacy. Just look how Facebook gathered offline data to interconnect people to reallife data points, without their consent - and without them knowing. That's why I call it Spybook.
Ever since the USA became hostile to Canadians and Europeans this has also become much easier to deal with anyway - no more data is to be given to US companies.
> AI feels more like an organized sniffing tool here.
"AI" on its own is an almost meaningless word, because all it tells is that there's something involving machine learning. This alone doesn't have any implied privacy properties, the devil is always in the untold details.
But, yeah, sure, given the current trends I don't think this device will be privacy-respecting, not to say truly private.
I’m not sure I understand the moral of the story. Would you share yours?
A crudest summary of my understanding is that it’s a tale of some dude with eidetic memory who - as a consequence of it - develops a conlang with a huge vocabulary but without abstract concepts.
It’s a stretch for sure, but all I could make of it is that it’s possibly a tale of how a person with an eidetic memory may find the sheer volume of available information so overwhelming that it may even hurt their information processing, like the formation of associative memories. Or something like that; I don’t think I know how it works.
If that’s the case, my idea of how machine-assisted memory is supposed to work is the opposite of that: it should provide limited but relevant information, with a lot of classifications and further references. Like an encyclopedia with an extra-fancy natural-language querying mechanism. Its whole point is to give awareness of anything the user wants to know, faster and more comprehensively than regular diaries, but focused on just what matters for an inquiry.
Funes, in my understanding, wouldn’t have an idea of a “key” but only “that front door key on a silver keychain” or “the smaller mailbox key with a deep scratch on the right side”. If I were querying external memory through a natural-language interface, it’d be doing the opposite of that, heavily relying on abstract ideas as classifiers. A machine that cannot connect “mail”, “key”, and “location” into a meaningful query would be useless. A computer “AI” assistant is not an eidetic memory (at least until we start to consider BMIs); it’s only a personal encyclopedia at one’s fingertips.
I think API specs are the wrong problem to solve. It’s usually pretty easy to reverse-engineer an API’s requests and responses from a frontend or network log. What’s hard, and what an OpenAPI spec (or any API spec, but machine-readable specs tend to suffer most) would typically be missing, is the documentation about all the concepts and flows for using the API in a meaningful manner.
They maintained census, but for government functions (like accounting and taxes), and actual identity communication almost never involved government.
The use of passports for anything except international travel is a very modern thing as well.
For most of history, the source of identity was the individual themselves (as it should be); that is, one told their name and origin and others accepted that, unless someone knew otherwise.
We've seen ~20 years of people trying to solve identity without the government. We've seen plenty of solutions that can provide stable identities over time, but we haven't really seen anything that provides meaningful sybil resistance. As computer systems become more and more "autonomous", sybil resistance is increasingly the most important feature of any identity system. Any identity system that doesn't solve that problem pushes it to the application layer, where it usually has UX impacts that have serious tradeoffs with adoption.
I understand this. I also understand that if history teaches us anything, it’s that any centralized governance (of any nature, not just traditional national and regional governments, but any centrally organized communities, like corporations) is to be constantly distrusted and kept in check, and even then it’s dangerous to let it take over social functions. That’s why I wrote “only as a last resort”, that is, unless and until someone thinks of something better. (And then switching over is another issue… that may need some pre-planning even before a better new solution exists.)
Or maybe someday we’ll have some interesting revelations about personal identity and sybil resistance won’t be necessary. But that’ll probably be only some centuries later.
To be clear, all we need from the government is to establish a person really exists and verify basic properties. We don't need more than that, so we can and should use all cryptography at our disposal (and invent more) to prevent any more information disclosure to both services and government.
I get that identity is a sort of last holdout for the tech libertarians of old. But after years working in KYC, what I saw was the accumulation of vast amounts of sensitive information held by private actors in a way that was completely democratically unaccountable and couldn't be corrected by the average citizen. It's time to bring identity out of the shadows and make it ours to control.
For establishing facts about person, the problem is, hostile governments are not unknown to revoke passports and cause all sorts of trouble. And if the government is benign that doesn’t mean it never turns hostile. We really don’t want to allow governments to disappear people, not physically, nor digitally.
I’m not a libertarian (was; realized why it doesn’t work in reality we have), but I still believe that no entity ever should be able to deny one’s identity, they can only refuse to attest it.
And the more serious problem is that nowadays we’re collectively so deep into that flawed paradigm of “identity providers”[1] that I’m afraid if a government-run system happens, it would still be built in the same paradigm and engrave that into the collective consciousness even further.
Private corporate-run identities are IMHO better for the foreseeable interim, until we know for sure how to do things right. Because I suspect that whatever we pick as fundamental ideas is going to stick and bless or curse us for a long while. Nation states have longer lifespans than Internet companies’ popularity, so as weird as that may sound, I’d prefer Gmail to, say, that Estonian X.509 scheme (no offense meant; and I’m only considering use outside of government services), despite the latter being better in the short term.
And - yes - I 100% agree that it’s past the time we should be using proper cryptography for attestation of all sorts, rather than sending passport photos and live selfies to increasingly more and more private companies. But that shouldn’t be general identity verification, it should be only for compliance, only when a law forces to obtain some information from some government-issued credentials. This part desperately needs moderation. But for the love of what’s still sane - unless we find ourselves with an unavoidable need and no other choice, let’s not use that for any other purposes, for now, please?
___
[1]: My view and understanding is that identity cannot be “provided” - those words simply don’t make sense together. Unless we’re talking about impersonation and skipping the “credentials” part for brevity, and then it’s not our identity but someone else’s (even if created specially for us). Of course, I could be wrong.
The neat thing is that if the government provides identity, you don’t have to use it for any system you build. But I’m curious how you would deal with spam and Sybils?
That’s not generally true, even if it may sound true in some specific location and time. Governments trying to mandate national authentication services is a very real thing.
As for your question: sadly, I don’t have a solution for either. I wish I did. I think ML-based approaches show good promise for spam detection, though? I haven’t looked under the hood recently, but purely anecdotally, almost every time I upgrade my mail system and the antispam has something new ML-based, I get a lot less junk. As for the Sybils… I don’t think they’re an issue per se - the ability to have alter egos is not a clear negative. And then it must depend on the exact context. Government elections are one thing; online content popularity measurement is entirely different. I’m not sure it’s meaningful to envision any universal solutions - they tend to have too many side effects, usually of an undesirable nature.
Good sir, cut that fellow some slack - they’re clearly venting some steam, and in doing so they’re not saying anything particularly harmful or wrong.
The part about disabling conscience feels like a huge stretch (I don’t see it there, at least not explicitly), given that the article is just a personal rant about task and goal management.
> I want freedom, money, affection, play, power, validation, fulfillment, etc.
> Of course I already have these things, but enough never seems enough.
> My brain came pre-installed with Human OS; loss aversion will squander CPU until I install security patches (e.g. Taoism, Zen, Stoicism).
> But I think I'm allergic to enlightenment. Meditation is difficult, quiet is boring, courage is scary, desire is addicting, etc.
This is just sociopathic. More, more, more. Turn off my loss aversion with Stoicism, etc.
Sociopathic how? I re-read the article a few times, as initially I couldn’t get much sense out of it. Yet all I see for sure is a personal rant about how a person is (was?) unhappy with their self-image and is reframing it differently to be at peace with themselves.
It’s one thing to be a shitty person to others and do some mental gymnastics to not feel bad about it. There are plenty of examples out there, but the author doesn’t strike me as such. I don’t see any of that here - unless maybe the author’s game or Mandarin skills are beyond atrocious, lol. Just kidding, of course.
It’s a whole other thing to be at peace with yourself about your own stuff. Not doing that is a potential way to become a sociopath, because if one constantly feels shitty about themselves, chances are they’ll start to voluntarily exclude themselves from society (to avoid feeling bad) and get out of touch with it.
And wanting good things is… normal, isn’t it? I would be rather concerned if someone didn’t want anything - anhedonia is not a good state to be in.
The only social thing I’ve seen there is the author’s admission that they want to impose imagination (whatever that means), but in my perception that’s just a random thought that wasn’t followed up on.
I have an impression that’s the only thing it actually does, right there in the last paragraph (but sure, it’s quite vaguely defined just by this single example).
It doesn’t really say much else, though - just a bunch of commonplace realizations that most ideas never get done, and then some jump to “metaprojects”, possibly to reframe the frustrations so they feel less stressful, but I don’t get that part.
Nothing changed since ’87. Machines still can’t be accountable and still shouldn’t make managerial decisions. Acceptance control is one of those decisions, and all the technical knowledge still matters to form a well-informed one. It may change, of course, but I have an impression that those who try otherwise seem to not fare well after the initial vibecoding honeymoon period. Of course, it varies from case to case - sometimes machines get things right, but long-term luck seems to eventually run out.
MCP is formally defined in the general sense (including transport protocols), CLI is not. I mean, only specific CLIs can be defined, but a general CLI is only `(String, List String, Map Int Stream) -> PID` with no finer semantics attached (save for what the command name may imply), and transport is “whatever you can bring to make streams and PIDs work”. One has to use `("cli-tool", ["--help"], {1: stdout})` (hoping that “--help” is recognized) to know more. Or use man/info (if the CLI ships a standardized documentation), or some other document.
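To make that signature concrete, here’s a minimal Python sketch of the general CLI “protocol” - spawn a command with arguments, wire up the streams, and get raw text back. Note that `cli-tool` is just a placeholder name, not a real command:

```python
import subprocess

def probe_cli(command: str, args: list[str]) -> str:
    """Spawn (command, args) with its output streams captured, return stdout.

    This is roughly everything the general CLI interface guarantees: an exit
    status and raw bytes on some streams. Any finer semantics has to be
    parsed out of free-form text like the --help output.
    """
    proc = subprocess.run([command, *args], capture_output=True, text=True)
    return proc.stdout

# e.g. probe_cli("cli-tool", ["--help"]) - hoping "--help" is recognized
```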
But in the end, they’re both just APIs. If sufficient semantics are provided, they both do the trick.
If immediate (first-prompt) context size is a concern, just throw in a RAG that can answer which tools (MCPs or CLIs or whatever) exist out there that could be useful for a given task, rather than pushing all the documentation (MCP or CLI docs) proactively. Or, well, fine-tune so the model “knows” the right tools and how to use them “innately”.
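As a toy illustration of that retrieval idea (the tool names and descriptions below are made up, and a real setup would use an embedding index over actual docs): rank the catalog by relevance to the task, and only the top matches’ documentation ever enters the prompt.

```python
def rank_tools(task: str, catalog: dict[str, str], top_n: int = 2) -> list[str]:
    """Rank tool names by crude keyword overlap between the task and each
    tool's one-line description. A stand-in for a real RAG retriever."""
    task_words = set(task.lower().split())

    def overlap(name: str) -> int:
        return len(task_words & set(catalog[name].lower().split()))

    return sorted(catalog, key=overlap, reverse=True)[:top_n]

# Hypothetical catalog - in practice this would be built from MCP tool
# definitions or CLI --help/man output.
catalog = {
    "browser": "navigate web pages, click elements, take screenshots",
    "sql": "query a relational database with select statements",
    "mailer": "send email messages to recipients",
}
```

So a task like “take a screenshot of a web page” would pull in only the `browser` tool’s docs, keeping the first prompt small regardless of how many tools exist.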
Point is, what matters is not MCP or CLI but “to achieve X must use F [more details follow]”. MCP is just a way to write this in a structured way, CLIs don’t magically avoid this.