I was also in tech at that time, in fact I worked for Google during that period, and people definitely thought that the Internet had reached its peak. There were so many criticisms back then, not just about peak Internet but that all these companies were blowing money on unproven business models; they were unsustainable, unprofitable, it was all just hype.
You also had numerous telecommunications companies going bust in one of the largest sector collapses in modern financial history. The largest bankruptcy in history at that time was WorldCom, and Global Crossing's collapse was also among the largest ever... Lucent Technologies lost nearly all of its value, and telecom giant Nortel lost more than 90% of its own, eventually going bankrupt in 2009.
And then of course the Great Recession hit and tech companies took a massive blow: Microsoft, Google, Intel, Apple, and other tech giants lost roughly 50% of their stock value in a matter of months. You don't lose 50% of your value because people think you have a promising future.
It wouldn't be until the explosive rise of smartphones and near-zero interest rates that sentiment turned around and tech companies ballooned in value in what would end up being the longest bull run in U.S. history.
The article immediately starts off with such a glaring contradiction that it makes it very hard to correctly interpret the remainder of it.
You can't say that something can never be ethical/safe on the one hand, and then on the other hand say that being ethical/safe depends on context/intent. Those two statements contradict each other.
Either AI can be safe and ethical in the right context with the appropriate intent, which contradicts the title, or it can't be safe/ethical regardless of intent/context, in which case the title is correct but the reasoning is incorrect.
There is no consistent way to interpret the remainder of the article with such a glaring and obvious inconsistency.
I think they're arguing against Anthropic et al. claiming their models are "ethical" and "safe". The point is that a model can't be ethical or safe absolutely, in all circumstances, because even seemingly benign information can be used to cause harm; actually making an ethical and safe choice about whether to provide information requires knowing the user's intent.
When Anthropic et al. say that their AI is ethical and safe, they are saying so in absolute terms, same as the title. Just one instance of unethical or unsafe behavior is enough to prove that it's not ethical or safe.
No one would say a knife or a gun is safe, because we're all aware of the harm they can cause; they require care and diligence in use. The term "ethical" doesn't apply in this analogy because an inanimate object cannot act, but an LLM can.
The point is that safety depends on context and intent being known - with unknown context or intent, dangerous situations will appear _some_ of the time, thus the system as a whole can "never" be fully safe.
Yeah, I hate the title because it verges on clickbait: one assumes he's asserting that AI has a moral stance in the first place, versus AI being morally neutral and driven by its wielder.
I think without reading the final line, you might get the wrong impression.
> It doesn’t make those frameworks worthless. It makes them incomplete by design—and it means, again, that AI will never be entirely ethical or safe.
Lots of people in this thread are reading the headline and making the same comparisons that the author does - "Most people don’t provide their context. They never have—not to search engines, not to librarians, not to hardware store clerks."
The article isn't saying "AI will never be ethical and safe, and it is unique in that way," it is saying "and so it is similar to these other things." If anything, it is critiquing the claims made by corporate AI that they can successfully make AI both useful and totally safe.
This article is specifically about QBASIC, which was bundled with MS-DOS, and by extension Windows up until Windows Me. QuickBASIC is a separate stand-alone application that predates QBASIC. The two certainly shared a lot of similarities, but they were not part of the same product line.
Microsoft developed numerous variations of BASIC from Altair BASIC, MBASIC, GWBASIC, PDS BASIC, and of course the most well known of them all, Visual Basic.
QBASIC was the only one of these that was "free", in the sense that it came bundled as part of the operating system and was never sold as a stand-alone product.
There is a clear and sudden transition on this blog: prior to a certain date there are zero instances of the em-dash, and then suddenly it appears like crazy. Look at his archived posts from 2023, absolutely no em dashes... now look at every post from 2025 and almost every single one of them is littered with them.
The fact that the average person is seemingly incapable of detecting LLM text drives me insane. Every aspect of that article screams LLM. The tone, the punctuation, the sentence structure, the overall structure, it's so incredibly obvious. But the average person really is oblivious to it.
why?
Before the comments about LLMs I didn't notice this. Afterwards I compared the pre-LLM posts with the post-LLM ones, and it looks like AI was used to write/edit this article.
But... why should it matter? Why does my ignorance of this fact drive you insane?
The only ways that comprehending emotions wouldn't belong in its own category of intelligence would be if everyone were equally capable of deducing the emotional state of others, if performing such deduction were not something intellectual, or if such deduction were strictly a consequence of existing intellectual categories.
>The only ways that comprehending emotions wouldn't belong in its own category of intelligence would be if everyone were equally capable of deducing the emotional state of others
Not every skill gets a whole category of intelligence.
>if such deduction were strictly a consequence of existing intellectual categories
>But why does this matter? Is there a challenge judging intelligence across cultures?
I don't know for sure, but my own anecdotal experience is that yes, there most certainly are challenges when a person from one culture assesses the intelligence of someone else from another culture.
It would be nice to know whether this is supported by scientific evidence, or whether this is simply my own personal bias at play.
I just looked into this a bit because I thought he still had some kind of role at Microsoft even after leaving as CEO/chairman, but it turns out that in 2020 he left any and all positions at Microsoft as the board was investigating an inappropriate relationship he had with a Microsoft employee.
Before that he had a role as a technical advisor and sat on the board of directors.
I also found it interesting that Steve Ballmer owns considerably more of Microsoft than Bill Gates (4% for Steve Ballmer while Bill Gates owns less than 1%).
Without a significant amount of needed context that quote just sounds like some awkward rambling.
Also, almost every feature added to C++ brings a great deal of complexity: everything from modules, concepts, ranges, coroutines... I mean, it's been 6 years since these were standardized and all the main compilers still have major bugs and quality-of-implementation issues.
I can hardly think of any major feature added to the language that didn't introduce a great deal of footguns, unintended consequences, and significant compilation-performance issues... singling out contracts is unusual, to say the least.
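To give a concrete sense of it, here's a minimal C++20 concepts sketch (my own illustration, not from the article). The surface syntax reads cleanly enough; it's the machinery underneath (constraint subsumption, overload resolution) where the mainstream compilers have struggled:

    #include <concepts>
    #include <iostream>

    // A minimal C++20 concept: constrain a template to arithmetic types.
    template <typename T>
    concept Arithmetic = std::integral<T> || std::floating_point<T>;

    // Only types satisfying the constraint can instantiate this template.
    template <Arithmetic T>
    T twice(T x) { return x + x; }

    int main() {
        std::cout << twice(21) << "\n";  // prints 42
        std::cout << twice(1.5) << "\n"; // prints 3
        // twice("hi"); // rejected at compile time: constraint not satisfied
    }

Simple cases like this work fine everywhere; it's when constraints interact with overload sets, SFINAE, and older template code that the footguns show up.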
Because Disney's deal was specifically and exclusively related to Sora, which was OpenAI's bizarre attempt at a TikTok-like social networking site, but using AI-generated videos.
It was not a deal that allowed the use of Disney's characters for general purpose AI generated content using OpenAI tools.
Sora was "repurposed" as their AI slop social network. OpenAI is not getting out of the business of AI video in general, they're just realizing that an AI version of TikTok isn't the best use of their capital/resources.
> CEO Sam Altman announced the changes to staff on Tuesday, writing that the company would wind down products that use its video models. In addition to the consumer app, OpenAI is also discontinuing a version of Sora for developers and won’t support video functionality inside ChatGPT, either.