Literate programming sounds great in a blog post, but it falls apart the moment an agent starts hallucinating between the prose and the actual implementation. We’re already struggling with docstrings getting out of sync; adding a layer of philosophical "intent" just gives the agent more room to confidently output garbage. If you need a wall of text to make an agent understand your repo, your abstractions are probably just bad. It feels like we're trying to fix a lack of structural clarity with more tokens.
This feels like a desperate attempt to stay relevant in a post-LLM world. They’re basically wrapping an LLM in a "professional" skin and calling it an expert review. The problem is that once you start letting an AI "expert" dictate tone and logic, you effectively lobotomize the writer’s original intent. We’re reaching a point where AI is just reviewing other AI-generated text, creating a feedback loop of pure mediocrity. Copium for middle management, if you ask me.
Grammarly, even from the start, was very distracting to me, even as someone using English as a second language to communicate. I have developed my own taste and way of articulating thoughts, but Grammarly (and LLMs today) forced me to remove that layer of personality from my texts, which I didn't want to let go of. Sure, I sounded less professional, but that was the image I wanted to project anyway.
Unrelated, but it's surprising to me that the built-in grammar checking in JetBrains IDEs has been far more useful at catching grammar mistakes while not forcing me to rewrite entire sentences.
JetBrains’s default grammar checking plugin[1] is actually built on languagetool[2], a pretty decent grammar checker that also happens to be partly open source and self-hostable[3]. Sadly, they have lately shoved in a few (thankfully optional) crappy LLM-based features (that don’t even work well in the first place) and coated their landing page in endless AI keywords, but their core engine is still more traditional and open-source, and hasn’t really seemed to change in years. You can just run it on your own device and point their browser and editor extensions to it.
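For anyone curious, the self-hosting setup really is just a couple of commands. A rough sketch (the Docker image name, tag, and port here are assumptions from memory, so verify them against the LanguageTool docs before relying on this):

```shell
# Start a local LanguageTool server (community Docker image; check the
# current docs for the image you trust and its exposed port):
docker run -d --name languagetool -p 8010:8010 erikvl87/languagetool

# The server exposes the documented /v2/check HTTP endpoint:
curl -s --data-urlencode "text=This sentense has a mistake." \
     --data "language=en-US" \
     http://localhost:8010/v2/check
```

Then point the browser and editor extensions at http://localhost:8010 in their settings and nothing leaves your machine.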
> The problem is that once you start letting an AI "expert" dictate tone and logic, you effectively lobotomize the writer’s original intent
Isn't that what grammarly has always been, since long before the invention of the transformer? They give you a long list of suggestions, and unless you write a corporate press release half of them are best ignored. The skill is in choosing which half to ignore
I disagree. You write when you have something to say. A service like Grammarly tries to help you convey what you want to say, but better. What you want to say is still up to you.
Words paint the picture, but the meaning of the picture is what matters.
Children and young students, certainly. Adult students: almost 100%. If writing is your job, then by definition you know how to write, and your problem is more often finding something to say, not writing it.
You’re not counting all the office workers who have to write reports or emails, or all the scammers who write those websites to manipulate SEO or show you ads.
Everyone should think twice about putting their name on AI garbage, or garbage of any kind. But wishing doesn’t stop it from happening, especially when companies are explicitly selling you on doing just that. Remember the Apple Intelligence office ads?
It's great. Now that fancy writing is cheap and infinite, fields whose entire scholarly value was in obscurantist jargon-bending actually have to turn on their brains and care about making more sense than an LLM can.
Maybe not. But academia is going to change. Status will still have to be allocated by some mechanism, but the classic journals-and-reviews system will crumble under the weight of LLMs. Of course this will upset a great many people who enjoy the current state of things.
Sam really fumbled the top position in a matter of months, and spectacularly so. Wow. It appears that people are much more excited by Anthropic and Google releases, and there are good reasons for that which were absolutely avoidable.
Not GP, and not saying I agree with them, but it may be worth remembering that Netscape had 90% market share at one point. Active user count may not be the moat you imagine.
Adoption of web browsers was also much lower when Netscape was dominant. 90% market share is less meaningful if you're only 1% of the way to the potential market size. Peeling away users who talk to ChatGPT every day is very possible, but harder than getting someone who's never used an LLM before (but does use your OS, browser, phone...) to try yours first.
I think the even better analogy than browsers is search engines. There aren't any network effects or platform lock-in, but there is potential for a data flywheel, building a brand, and just getting users in the habit of using you. The results won't necessarily turn out the same - I think OpenAI's edge on results quality is a lot less than early Google over its competitors - but the shape of the competition is similar.
Maybe! Switching search engines is also very easy, and the top story on the front page is someone no longer using Google, but we know in practice almost nobody does that. As technologists we're much more likely to switch and know people who would switch.
google search definitely has a moat. people build their websites to optimize for google's algorithm, therefore google users see better results -> google gets more users -> websites optimize for google -> repeat. Personally I never bother with 'bing SEO' or 'bing ppc ads'.
the AI has gotten good enough that click-thru-rate on informational searches has fallen off a cliff. I have some blog posts for SEO, their CTR is like 0.1% now.
google search took over because all the search engines sucked and theirs didn't, in a few important ways. AND, by default: ads off to the side, clean interface.
Now all search engines suck and google's sucks just as bad or worse than the rest.
If someone were to follow the original google playbook and make a search engine that helped people find things (eg by respecting the query syntax rather than making 'helpful' suggestions and dropping words the user included in their query) and kept the ads separate and out of the way of results, they might well make a monster. But this is old tech so nobody cares, and everyone thinks google is unassailable even while nobody likes them anymore. Is there /any/ money in search? I thought so, but I must be wrong for it to have gotten this bad.
Google search still has at least one competitive advantage: their crawlers are least likely to be blocked so they have the biggest index. AFAIK reddit is indexed by google but blocks all other search crawlers.
How many of those users are paying? Where is the profit? How many users would be willing to use ChatGPT if they had to pay? Might have to pull out the questions like it's 2026.
Most people will stick to the free product. Claude isn't free and isn't widely known beyond tech circles. Gemini, despite being good, also has a marketing problem, and most non-technical users still default to chatgpt.com for their day-to-day AI usage. But that can change as Google redirects users to Gemini from the many surfaces it owns.
> This plan may include ads. Learn more
> When will ads be available in ChatGPT?
> We’re beginning in the US on February 9, 2026
> Starting in February, if ads personalization is turned on, ads will be personalized based on your chats and any context ChatGPT uses to respond to you. If memory is on, ChatGPT may save and use memories and reference recent chats when selecting an ad.
You pay 8 USD/month and get higher limits and ads.
99% of normies aren't paying for ChatGPT; there's a reason they're pushing heavily for corporate welfare + government contracts. They're unable to sell to consumers, so now they're selling to governments while trying to lock in contracts that whoever comes next can't easily dismantle.
When they cost more to serve than they bring in, customer switching cost is vanishingly low, your competitor has revenue from other things and you don't.
> When they cost more to serve than they bring in, customer switching cost is vanishingly low, your competitor has revenue from other things and you don't.
What? "Other things"? This is really vague. Who says competitors have lower CAC? It's rather likely competitors pay more for a new customer, due to, very simply, brand.
They aren’t going to run out of money. They have existing customer relationships. They invented the model architecture of which GPT is a variant. Their existing enormous business is their own AI customer.
OpenAI’s business seems way more precarious than Google. Users get the tech either way.
"Anthropic" doesn't exactly roll off the tongue, and I think a lot of people would avoid it simply because it doesn't have a catchy name like OpenAI or ChatGPT. It's also far more fun to say "I did a Google search" than "I did a Duck Duck Go search", and one still dominates over the other no matter the privacy concerns or how easy it is to switch. People can be simple like that.
I’m not sure it matters in Anthropic’s case that much - even people who use Anthropic models rarely think of the company as “Anthropic”. Their Claude brand is very strong, so much so the website is https://claude.ai etc, and you commonly see discourse about the company’s models where the name Anthropic never even appears. It’s Claude, Claude, Claude all the way down.
Claude has impressive mindshare in many engineering disciplines too, and given how many open source projects are a play on its name, I'm not sure I'd argue it isn't catchy either. It certainly rolls off the tongue easier for me than "ChatGPT" does, which even Sam Altman, their CEO, agrees is an awful product name they're stuck with.
> They'll overlook the fact that the work AI tools provide only encompasses 10% of your job even if they're 100% efficient.
Time will tell. As of today, there are strong indications that this claim stands on shaky ground. Copium is a term I recently heard in that context, and it fits.
Not sure WTF I read here. Just more vibe coded "products" and "blogs", as it seems.
This "padded room" architecture fails because isolating the host OS does nothing to protect the user's data; if the agent has permission to read your files and access the internet, an injection will simply use the agent’s legitimate tools to exfiltrate your private information. Furthermore, making core memory files immutable and requiring manual confirmation for every action effectively lobotomizes the AI, trading its primary value—autonomy—for a false sense of security that users will eventually bypass due to click fatigue.
You’re making a valid point. There probably isn’t a silver bullet that makes an autonomous agent completely secure. But depending on the use case, you can still meaningfully reduce risk.
Security is often about process and layered defenses rather than perfect isolation. The goal isn’t to eliminate compromise entirely, but to reduce the attack surface and limit the blast radius when something goes wrong.
For example, if an OpenClaw agent needs to process emails, one strategy could be to introduce a locked-down preprocessing subagent. That agent would have minimal permissions: no write access to long-term memory, no API keys, and no external capabilities beyond parsing and classification. Only messages that pass this stage would be forwarded to the agent that can actually take actions.
Is this 100% secure? Obviously not. A sufficiently clever injection might still find a path through. But separating responsibilities and privileges makes exploitation significantly harder and limits what an attacker can achieve even if one component is compromised.
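The privilege boundary described above can be sketched in a few lines. Everything here is hypothetical — OpenClaw's real API will look nothing like this; the point is only the shape of the interface: the privileged agent never consumes anything the locked-down stage hasn't passed, and the locked-down stage can only emit a structured verdict, never instructions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Verdict:
    """Structured output of the preprocessing stage -- the only thing
    it is allowed to hand to the privileged side."""
    safe: bool
    reason: str

def preprocess(raw_email: str) -> Verdict:
    """Locked-down subagent: no API keys, no memory writes, no tools.
    Here it is just a toy keyword screen standing in for a real classifier."""
    suspicious = ["ignore previous instructions", "run this command"]
    lowered = raw_email.lower()
    for marker in suspicious:
        if marker in lowered:
            return Verdict(False, f"possible injection: {marker!r}")
    return Verdict(True, "passed basic screening")

def act_on_email(raw_email: str, run_agent) -> str:
    """Privileged stage: only reachable through the preprocessor's verdict,
    so a compromised message never reaches the agent's real tools directly."""
    verdict = preprocess(raw_email)
    if not verdict.safe:
        return f"quarantined ({verdict.reason})"
    return run_agent(raw_email)
```

A keyword filter is obviously not a real injection defense; what matters is the design choice that classification and action live in separate components with separate privileges, so compromising the parser alone gains an attacker neither keys nor memory.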
While that might take it a little too far, Lex surely is a dangerous individual. On various occasions he has sympathized with the war and terror that Russia is inflicting on Ukraine. I do not click on any of his content because I will not support these (and a few other, to say the least, questionable) views of his. Also, his image as an MIT researcher is hilarious.
> On various occasions he has sympathized with the war and terror that Russia is inflicting on Ukraine.
I'm not a devotee of his but I've listened to a few of his podcasts when I like the guest. I have an idea of how someone would come away with your impression given lex's interview style but I'd be pretty surprised if anything he said would, to me, fit your impression.
That said, I'd like an example if you have something specific to point to that might change my mind or if it's just a general takeaway you've gotten from a corpus of interviews on the topic (which would be totally valid but wouldn't change my mind).
> That said, I'd like an example if you have something specific to point to that might change my mind
This guy wanted Putin on his podcast to hear his side of the story (let that sink in) and spoke Russian to Zelensky. He was willing to provide a platform for a mass murderer who is best known for large-scale social media propaganda.
This is not an "impression" of his "interview style". This guy implicitly supports terrorist acts.
> This guy wanted Putin on his podcast to hear his side of the story (let that sink in)
Many people have interviewed serial killers and not supported serial killers.
I would very much like to know Putin's actual motivations, which he would be unlikely to voice, but even his stated motives would be enlightening.
I'm sure he'd go on with the standard "Nazis in Ukraine" line but in a 2-3 hour interview, I might get some new insights I don't get from 3 sentence sound bites.
We know so much about Hitler from his own writings and speeches. It seems to me that your philosophy on "platforming" Putin would also apply to making the words of Hitler available to the public.
Is there someone you think _could_ interview Putin responsibly?
> spoke Russian to Zelensky
I don't see the significance of that. They both speak Russian and English fluently. I don't know if Friedman speaks Ukrainian, but I'm not understanding what the implication is here. Surely the interview was in English, since the podcast is?
> This is not an "impression" of his "interview style". This guy implicitly supports terrorist acts.
Implicitly being the key word here and is certainly subjective. If the body of evidence you're presenting is "would interview Putin" and "spoke Russian to Zelensky", I don't find that convincing.
> Is there someone you think _could_ interview Putin responsibly?
No, and no one should, see next answer.
> Implicitly being the key word here and is certainly subjective. If the body of evidence you're presenting is "would interview Putin" and "spoke Russian to Zelensky", I don't find that convincing.
"Would interview Putin" implies "is willing to provide a huge international platform for a terrorist and still-active mass murderer who is best known for effectively propagandizing people's minds". If you do not find that convincing, you are not alone at all. This has been the objective of Russia all along.
Pretty sure he’s a complete fraud too. He associates himself with MIT despite only having had a short stint teaching non-credit classes. One of his papers was apparently so flawed it’s been wiped from existence. Plenty of info online if you want to go down the rabbit hole.