$ echo "write me a blog post from the perspective of a young phd in AI on how fears of AI and machine learning are vastly overblown. include fake but obscure and outrageous examples of when AI was able to save lives and downplay real examples of when AI got it wrong. contrast the promise of AI getting it right with software bugs resulting in fatalities. argue this will allow the average company to hire fewer workers but allow those workers to work longer hours with increased happiness. summarize with a specific dollar prediction of improvement productivity and gross domestic product" | sendmail oped@nytimes.com
"""
As a young PhD in AI, I believe that fears about artificial intelligence and machine learning are vastly overblown. The potential for AI to save lives and improve productivity is immense, and while there have been some high-profile examples of AI getting it wrong, these incidents are often blown out of proportion.
In fact, there have been countless examples of AI saving lives that go largely unnoticed. For instance, there was an obscure case in which an AI algorithm correctly diagnosed a rare disease in a patient that had stumped human doctors for years. In another instance, an AI-powered drone was able to locate a lost hiker in the wilderness who would have otherwise perished.
Meanwhile, the few high-profile cases where AI has made mistakes are often overemphasized in the media. For example, when an autonomous vehicle hit and killed a pedestrian, the headlines screamed about the dangers of AI. But what wasn't reported was that human drivers kill thousands of people every day in car accidents. While AI is not perfect, it has the potential to vastly reduce the number of fatalities on our roads.
And when we consider the productivity gains that AI can bring, it becomes clear that fears of AI taking jobs are misplaced. Sure, there may be some roles that can be automated, but in most cases, AI will simply augment human work, allowing us to achieve more in less time. This means that companies will be able to hire fewer workers, but those workers will be able to work longer hours with increased happiness, as they will be able to focus on more fulfilling tasks.
In fact, I predict that the adoption of AI will lead to a significant increase in productivity and gross domestic product. By 2030, I expect that the average company will be able to achieve a 20% increase in productivity thanks to AI, which will translate to a $2 trillion increase in gross domestic product. And this is just the beginning - as AI continues to improve and become more integrated into our daily lives, the benefits will only become more pronounced.
In conclusion, the fears of AI and machine learning are vastly overblown. While there are certainly risks to consider, the potential benefits - from saving lives to boosting productivity - are simply too great to ignore. It's time for us to embrace AI and all the benefits it can bring.
"""
Over the years I have worked in many universities. Many of the so-called critical thinkers I meet are anything but. Their writings echo pre-approved wisdoms and take a generally template-driven approach to all matters. I only need to read one of their papers to know with fair certainty what opinion they will hold on any given subject.
Thinking about it, this makes them sound very similar to AIs [wry smile emoji].
I have seen enough C-suite loud talkers come into the room and blabber about some bullshit as if they're wise, and the people around them seemingly actually buy into it! They don't see that this person is working off of a template. Furthermore, that gas bag is well aware that if this venture doesn't strike gold, they can go to the next with literally no backlash, and moreover use that failure as an accomplishment via the job title they held.
Luckily, it's funny to me now so it bothers me less when I see it.
It really doesn't. It's a language model. Literally every webpage on this topic mentions that.
This is why it will happily tell you that a director directed a movie before they were born, agree with you when you point that out, and then continue to insist that they did it. There is no ability to synthesize and apply new information to a line of reasoning, because the chain of thought is an illusion wrought by a statistical language model.
Its purpose is to be semantically correct, not to reason.
show me Reason that is not axiomatically embedded in language.
for anything else, how can you possibly know it's not operating on a (massively) complex statistical model? are you gonna ask it? if you do science on it to discover its complexity, why is the computationally shorter method to achieve the same results any worse or less authentic?
it's like saying the closed form solution of a summation isn't "really" a summation either
finally, you are letting the word "semantically" do a lot of work for you.
AI in general might yet. But language models are literally implementations of the Chinese room. So it's not exactly a surprise when it turns out they can't think - that's literally the point of the Chinese room example!
Mainstream media in the USA has become so unbearable that I welcome AI even if it is wrong a lot of the time. At least it doesn't have an always-war-first agenda.
It looks like Bing has fixed it. The screenshot I took was my own. Originally seen here https://fedi.cpluspatch.com/objects/6dc3bee4-1ff0-4cae-8994-... . I wish I could have posted the link to this directly but it only shows up properly from a federated instance. It can't be viewed at the link above.
Clearly though, this doesn't get rid of the problem entirely, and the question becomes: who is doing the curation? Is it Wikipedia-style crowdsourcing, for example? Is it a database curated by the NYTimes, or Howard Zinn, or Milton Friedman?
What is more interesting to me is why anyone is surprised that an algorithm based on choosing a statistically probable combination of words, trained with whatever random crap appears on the internet, happens to choose some combination that makes grammatical sense but doesn't match reality.
Because it was not trained with a bias in favor of "reality" or facts, but towards grammatical sense. Given that GPT is a statistical model, in theory it should be possible to bias it towards facts, but that means being able to tell bs from facts. That is a subject on its own.
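To make that concrete, here is a minimal sketch of the mechanism being described, assuming a toy five-word vocabulary and made-up scores (nothing below comes from a real model): generation is just sampling whichever next word is statistically probable, and no step ever checks the output against reality.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy vocabulary and made-up next-word scores (logits) for a prompt like
    # "The population of Mars is" -- illustrative numbers only, not a real model.
    vocab = ["zero", "2.5", "billion", "growing", "unknown"]
    logits = np.array([2.0, 1.5, 1.2, 0.8, 0.3])

    # Softmax turns the scores into a probability distribution over next words.
    probs = np.exp(logits) / np.exp(logits).sum()

    # Generation is repeated sampling from distributions like this one; it only
    # ever asks "is this likely?", never "is this true?".
    next_word = rng.choice(vocab, p=probs)
    print(dict(zip(vocab, probs.round(3))), "->", next_word)

Anything that would bias this towards facts has to come from outside the sampling step, which is exactly the hard part.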
Do you think it’s even possible? Imagine written facts as a thick thread and bs as long hairs growing from it (both written and hallucinated; there’s no limit to bs). We can ask something very well-known and likely(?) the answer will stay within that thread. But if you ask something non-trivial, i.e. start in the hairy area, what force could drive its “thinking” back into the thread? I believe that there’s no way to create it (in this model). Once far away from the main road, LLMs can wander bs space indefinitely. If my analogy makes sense at all, obviously.
It is big data all over again. Supposedly with so much data we can know everything. But there was no way of knowing if that data was any good. Garbage in garbage out.
The tech field is the sad story of execs, marketing types, and end users continually relearning the lessons that they refused to accept from the developers.
I wonder how this will develop, once people start replacing SEO with AI-Optimization, and are intentionally building websites to basically manipulate the AI results...
I just tried asking for the population of Neptune:
> "Neptune has a total of 32,465 people and of those residents there are 15,209 males and 17,256 females. The median age of the male population is 36.8 and the female population is 40.2. There are approximately 8,127 births each year and around 5,425 deaths." --Bing
Then it linked me to https://www.movingideas.org/neptune-nj/ But it still showed me, on the search results page, a picture of the planet Neptune to go with the response text.
Question: did anyone check whether the AI was prompted to generate something like this? I don't know anything about TSAM. Was it asked to "Write a fake piece about the population of Mars"? I agree with most of the concerns you all have raised about AI, but if we didn't even check whether the AI malfunctioned, I'd say all of the comments discussing how "AI will be the death of facts" are just as devoid of facts as the AI, lol. (No offense meant to those comment writers. I'm not sure whether they missed this, but if so, it's an easy mistake to make.) Honestly, I'd say most people are wrong from not fact-checking their information more often than the current generation of AI is wrong from the kinds of errors it is prone to on most non-specialized subjects.
Since I didn't see anyone link the original article that Bing pulled the information from, here it is: https://tsam.net/what-is-the-population-of-mars/. The editor responsible for this piece on TSAM pretty much only posts this kind of nonsense article (https://tsam.net/how-cold-is-ice/, https://tsam.net/crocodile-underwater/). It is funny that Bing grabbed one of his nonsense articles, but my guess is that the editor is purposefully prompting the AI to produce nonsense, which doesn't really say much about the factuality of AI.
I looked into the site. From what I could tell, and given the timeline, it looked like the person was using an older text-generation model to create this spam at the time. The one about crocodiles being without scales, slow, and ocean-dwelling but also somehow active is the kind of nonsensical, zero-context-preserving stuff GPT used to generate before the GPT-3 model was released.
Their old engine. I just posted this because it’s the first instance I’ve run into in the wild where an engine gave an answer based on likely AI-generated content. The content here gives the vibe of GPT-2-generated stuff.
It’s a topic of interest to me because there’s a tipping point coming where AI-generated content easily outweighs the volume of non-AI-generated stuff. And I’m curious about the implications of that coming change.
If search engines can't find a way to filter the bad information, all the automated scraping of knowledge will become perilously untrustworthy.
I suspect we may see domain trust become significantly more important, and that may have a chilling effect on legitimate websites gaining a foothold in search indexes. We already have this issue, mind you. Most of my searches for popular topics end up with generated or scraped content in them, unless there is a domain authority with major sway that takes up the majority of results.
Maybe we will see the return of curated web directories? Or maybe the web becomes a barren wasteland as everyone flees to social applications.
I think you're on to something here. The closest equivalent I can think of is academic publishing. There's tons and tons of total garbage among some good research, and pretty much the only way to find the good stuff is to look at what people at top institutions are doing or what the top journals are publishing.
Now imagine using ChatGPT as the source of truth, which will eventually happen if not already. If we were tired of conspiracy theories before, wait and see what ChatGPT is brewing in the hands of the "experts". How long before we see a new religion pop up from it?
"The population of Mars is zero, as there are no humans living on the planet yet. Mars is a large planet in the Solar system, and it has plenty of surface area for colonization, but there are many challenges and risks involved in sending humans to Mars and establishing a permanent settlement there." [...]
People were talking about how pre-AI internet content might one day be valued like low-background steel, as something that's no longer being produced in a pure enough form.
I think there's no shortage of interesting and useful information on the internet. There's just a new challenge of filtering out the new source of background noise.
You can still serve hypertext and not use AI. There's still a place for that sort of thing. There's free real estate serving people who don't wish to become stupid.
Meanwhile all the script kiddies on this forum: "ChatGPT go brrrrr"
I am growing more and more bitter every day at how many of us are rushing to build a deader Internet.
Every "Show HN: I have made ChatGPT even easier to use" is a stark reminder that either I am growing old and pessimistic, or for the first time in my career I am disgusted at the direction software engineering is going. Appalled by the lack of ethics and long term philosophical concern of what we are doing with AI.
I just see reckless software engineers happily playing with something they do not understand. Where is the AI equivalent of Oppenheimer's "I have become Death, destroyer of worlds," uttered because of his research on world-changing and world-destroying technology?
I am afraid I will have to separate computers from the idea of the Internet at large to keep enjoying this career: I love programming and computers, but the internet and social media are on the way to becoming a net negative to humanity in my eyes.
Usable AI sparks interest and therefore funding. If this leads to a large improvement in everyone's lives, this is a good thing, even if it generates noise in our public spaces.
You mean like we already most certainly have an entire industry pivoting towards?
They poisoned the well once; from whence doth you think that with money at stake they have suddenly developed enough of a soul to grant them the grace to abstain from doing so again?
I've been calling out the AI grift ever since ChatGPT launched. Which, I mean, ChatGPT and LLMs are really cool and it was fun to mess around with them, but who the hell thought that this should ever go into any sort of actual product other than language-comprehension products?
Whoever OK'd this crap at Microsoft to get that big promo really just highlights why I'm never using a Microsoft product, from Teams to Bing to Azure.
Today I asked ChatGPT for some legal citations to cases involving Title VII.
It gave me two cases. The first case was exactly the opposite of what it had claimed. The second case had literally nothing to do with Title VII or anything involving the question I asked.
So not only was it wrong, but in using two cases that actually existed, it was maliciously wrong. Not just misinformation but disinformation.
This and other errors lead me to require that ChatGPT become at least two orders of magnitude more accurate, and make a huge announcement about how it will no longer make up false citations.
I, a dummy, should not be able to falsify an AI with the simplest (classic) google search.
Then there were the bloggers who, without leaving their armchair, wrote opinion pieces based on whatever the journalists had written.
Now there is AI, which apes the non-experience of the bloggers.
Truly we are now living in a simulation of a simulation.