The Age of AI has begun (gatesnotes.com)
243 points by avonmach on March 21, 2023 | 220 comments



I guess one of my own issues with AI is seeing how the tools we already have have been twisted by greedy, ruthless people (many of whom are in this very venue).

I was one of the people that started off in tech with dreams of a digital Nirvana. I ignored the "negative naysayers," and sallied forth, with an open heart, and hope for the future.

I'm really quite disappointed in what we've done with these marvelous tools. As I've gotten older, wiser, and a lot more cynical, I can tell you that, if I had known, then, about human nature, what I know now, it would have been entirely predictable.

I think we'll be seeing AI-driven scammers calling and messaging us almost incessantly; I think we'll be seeing AI-generated political manipulation, like fake videos and audio of politicians and public figures; and I think we'll be seeing AI-driven promotional campaigns and almost undetectably subtle media manipulation, like corporations seeding news stories with their talking points. I also think that we'll be seeing AI-driven workplaces, where it will be hard to be simply human, and AI-driven surveillance by authoritarian regimes (and people right here will happily sell it to them).

I want to believe that we'll do good, but bad pays so well, and there's outright scorn for doing stuff for any reason other than pure, personal profit.


This was exactly what I was saying the other day. I am still a technophile. It is not tech that worries me but the rich, who will exploit and monopolize it to rule us more, and all the more unscrupulously. Maybe it is time to actually find ways to not provide more data to train AI? Sure, the world's knowledge until 2022 got AI this far, but that doesn't mean we cannot starve it going forward if we act collectively.


I’ve adopted a similarly pessimistic view, that even if I’m one of the ones who gets to keep my job, “it’s” all going to shit from here. I’m surprised and confused to say the least.


Same boat.

I think in the ways gen-x’ers talk about the “pre-internet” days spent offline, we’ll talk about the “pre-AI” days spent online. I can imagine a flood of content so gargantuan and vast that by 2030, finding new real words online written by real breathing people will be like finding a teacup in a junkyard. I’ll stress that this view is strictly my own and not my employer’s, but I personally wish this box had remained closed. The benefit we’re seeing now just isn’t worth what’s to come, I fear.


> we’ll talk about the “pre-AI” days spent online.

I liked the pre-social-media days we spent online, when making a web site took some non-trivial effort, so just having something up and hosted meant that you were either interested in the technology or felt that whatever you were publishing was important enough to warrant the effort.

When there were no "normies" online, or if there were any, they were confined to e-mail.

I'm sure if you go far back enough, you'll find someone centuries ago expressing the same thing. Oh how the printing press has cheapened writing. Back when people had to put quill on paper, they really had something to say . . .


Same with films. On analog, you really had to think about what story you wanted to tell. Mistakes were expensive. Nowadays cheap mistakes let you experiment more, but again... without thinking about the purpose of those experiments, nothing valuable is made.


Yeah, I've been a technophile all my life but this is probably the first development I don't see as a net positive. All the use cases I've seen so far lead to a shittier world, and we probably won't even get sci-fi AGI in return.


Personally I think there’s a distinction to be made between generative AI and that of, say, self-driving cars. I’d definitely consider the latter a net positive for safety.


Massive net negative for anyone who can't afford a new car once those become required. Without expansion of public transportation basically everywhere, self-driving cars are a money making method that will lock out millions of people, not a real solution.


Time-sharing of self-driving cars will be very inexpensive. The biggest cost now is drivers.

Think about how little time utilization most cars get.

That blurs the line between public and private, in a good way.

In tandem with self-driving, electric car hardware keeps getting cheaper. Projected lifetimes keep getting longer. So the economics are going to trend better for quite a while.


Agree that better public transport is the ideal solution. When do you see self-driving cars becoming required?


It's like how William Gibson moved away from cyberpunk, because all the shitty aspects of cyberpunk are in today's world with few, if any, of the cool bits.


> I think we'll be seeing AI-driven scammers calling and messaging us almost incessantly; I think we'll be seeing AI-generated political manipulation, like fake videos and audio of politicians and public figures; and I think we'll be seeing AI-driven promotional campaigns and almost undetectably subtle media manipulation, like corporations seeding news stories with their talking points. I also think that we'll be seeing AI-driven workplaces, where it will be hard to be simply human, and AI-driven surveillance by authoritarian regimes (and people right here will happily sell it to them).

I've been having similar concerns; however, I just wanted to note that there is not a damn thing you've listed in this paragraph that isn't already being done right now without AI. We already get scammed incessantly. Every political message we receive is meant for manipulation rather than actual understanding. Corporations have been seeding news stories since there have been news stories. We have corporate cultures that routinely dehumanize us, forcing us to be sedentary indoors when we ought to be foraging during the daylight if we actually lived according to our biology. We surveil each other, and have no need of any advanced technology to oppress ourselves by the millions with brutal effectiveness. If you are concerned now, then you must not have been paying much attention. If you have issues with these things, then AI is not what you need to be going after; this is how the world is incentivized to run.


Many of these scams are older than dirt.

I remember, in the 1980s, companies getting actual snail mail 419 scams, then, in the 1990s, faxes.

But AI will be giving the same sharks, new tools.

Soon, you will actually be hearing your granddaughter on the phone, screaming, as they rip out her fingernails. The "IRS agent" will no longer have a dodgy Indian accent. Those "I caught you masturbating" emails will now have actual deepfakes of your face. In fact, I wouldn't be surprised if a new racket appears, of extortion, using wholly-fabricated nude/embarrassing photos. I suspect that influencers will be targeted, as they put out so much material, and many have means.

I just know exactly how bad people can get; better than most, hereabouts. I have few illusions.

But we'll also be getting some good. This is the real deal. It isn't Claptrap's BECHOWafers.


If the scams become widespread people will react accordingly. Real kidnappers ripping fingernails for ransom will find that they get hung up on and now have to resort to other strategies to get past the spam and get payment. People will realize how easy it is to make a deepfake and just not care about photos of them in compromising situations since they could now always claim plausible deniability. Gullible people will fall victim of course, but it doesn't take much even today to have someone hand over their life to you with some social engineering. There's probably millions of bots that were doing just that with some canned responses and getting a good enough return to continue hosting the bot farm at least even before chatgpt. I doubt the internet scammer needs much sophistication to get a decent return.


It's a question of scale.

Back in the day, a three-letter agency could send a person to spy on you 24/7. They couldn't do that for everyone. Now, mass surveillance is possible.

Back in the day, you had to send a letter in the mail or have a person calling someone to run a scam operation, and that had a cost to it. Email started changing that, but remained impersonal, non-interactive. Generative ML changes that.

Back in the day, traveling across the US took months or even years. Now, you can fly across in a few hours, or drive across in a few days. You're behaving as if that didn't change things.


We are at a unique inflection point: one where manipulation is basically transparent if you can see all the information at once. This has never been true in the past (except for small communities where everyone knew everyone else’s business) and it basically annihilates the very thing that manipulation needs: shadow.

Personally, I’m looking forward to an AI that can immediately answer the questions “is this a scam?” and “what are the facts underpinning this?”.


We're just trapped in the present-day tech zeitgeist. Taking a step back and looking at things on a longer time scale, you'll see there have always been good people to keep things in balance, and the general state of things improves over time. There are great people working on tools to identify AI-generated content, and even regular people will eventually catch up to the impact generative AI brings and adapt accordingly, as with most other technologies/developments.

I'm optimistic we'll pull through.


Unfortunately, the last time we had a publishing revolution we had two world wars before we got the press properly regulated. We're now having two publishing revolutions within 30 years and are only in early-stage fascism.


Hey, don’t forget the Thirty Years' War after the invention of the printing press.


> you'll see there have always been good people to keep things in balance

Yes, but technology has enabled good people to be shut down much more effectively.


I wonder if humans would behave better - or at least, be more reined in - if we could only interact with people we are physically close to.


You have thousands of years of human history to look back on to figure out whether that is true or not. I'm going to go ahead and say that the answer is a pretty clear "no".


I'm not so sure about that. We have more information at our fingertips than ever before, and we can talk to people on the other side of the planet in real time.

However, all it takes is a couple minutes of browsing social media or playing an online game to see that humanity has some pretty poor ideas of what constitutes acceptable social behavior.

I don't think we've advanced socially all that much. I've read a fair number of history books (my degree is in history), and while our data on daily social behavior of the common person is woefully incomplete, I haven't seen much evidence that we're all that much more polite than our ancient Babylonian cousins.


That works well actually, until you get some other group that moves into the community, then they are blamed for all ills and targeted with violence that is only possible due to physical proximity.


I know what you mean, my life has unfolded in a similar way.

What I have realized is that until we evolve our hearts and minds, no amount of technology will solve our collective existential crisis.

With that said, I am still young (33), live in Maui, and am still hungry and foolish to try new and bold things. :)


The problem is that humanity is not just capable of the best and worst outcomes, but that we constantly act to realize both the best and worst we can achieve.


Ruthlessness and greed tend to be signature qualities of successful companies and their executives.


Don’t forget AI-driven relationships, à la what happened with Replika.


> In my lifetime, I’ve seen two demonstrations of technology that struck me as revolutionary.

> The first time was in 1980, when I was introduced to a graphical user interface

> The second big surprise came just last year. I’d been meeting with the team from OpenAI

I guess it's not surprising since Microsoft was late to the game with IE, but it's interesting he never mentions "the internet" in this list.


Maybe he doesn't mention it because he didn't think it was revolutionary when it first appeared.

From https://www.nytimes.com/1997/03/14/news/skeptics-cite-overlo... (https://archive.is/r4F8T)

> Even Bill Gates, the founder and chairman of Microsoft Corp. and widely regarded as the crown prince of the World Wide Web, was taken unawares by the Internet's grassroots acceptance.

> In his book, "The Road Ahead" (not available on-line), Mr. Gates admitted that he believed the technology for "killer applications" was inadequate to lure consumers to the Internet.

Of course, like any smart person at the time, he changed his views as the internet became more popular and eventually mainstream.


Uhm, crown prince of the WWW? Microsoft was actively working against the open web, and instead they were trying to push their crappier Microsoft Network (or whatever it was called) in the early 90s.


Gates favored an approach he (and others) referred to as the information superhighway vs the post 1980s modern Internet. He regarded the Internet as a precursor to what he called the information superhighway in the book.

It's not that he didn't see something equivalent to the Internet happening, he disagreed with exactly how it would look (who would own/control it is more likely what he had in mind given the context; I think he wanted a more closed off network that companies like Microsoft could thoroughly dominate).

https://en.wikipedia.org/wiki/The_Road_Ahead_(Gates_book)


He absolutely wanted a closed, proprietary network, and that’s also what he built: MSN.

But it didn’t work, so they recycled the name into the MSN we know today.

https://en.m.wikipedia.org/wiki/MSN_Dial-up


Which of course got commingled with “Live”, MSN the dialup, “MSNBC”, and even active desktop (IIRC)… probably way more I’m forgetting too.


I’d forgotten all that stuff. It’s weird, they usually can’t keep the name of their products the same for more than a couple of years at a time, but this one lasted decades for some reason.


The internet never had its "aha" moment like AI is now. Most people didn't know or care about it until decades into its existence.


For the longest time I would have said the internet was the most revolutionary moment in the last century.

But the more I dwell on it, I debate if the pocket touchscreen smartphone was really the moment.

Before the phone, I printed MapQuest directions onto paper. The end result was pretty similar to me using an old paper map. The internet was evolutionary in that regard. It replaced some newspaper, and letters/phone calls. Text messaging sort of existed tangentially parallel to the internet. Search engines were pretty cool. It was only when the internet became portable that it really infected every moment of life. You didn’t have to like, drive home, to go dial in. Even having a color touchscreen pda that ran doom, it was still a novelty, and not like an extension of my being like the iPhone is now. Momentum scroll. And realistically it was the iPhone 3GS or iPhone 4 that was the turning point. iPhone 4 introduced mobile copy and paste.


> iPhone 4 introduced mobile copy and paste.

To the iPhone. There were phones on the market that supported copy and paste by the time the first iPhone rolled around.

I would tend to agree with you about the rest; the internet didn't strike the average person as very useful, or at least as revolutionary, until the advent of the smartphone. Arguably the iPhone was the mainstream breakthrough there.

But this is what kills me, and you said it yourself: "It was only when the internet became portable that it really infected every moment of life."

That was a good thing when the titans of today's web didn't employ armies of psychologists to hijack your attention to sell ads, inadvertently destroying the fabric of society. I don't have to tell the HN crowd how algorithms tuned to increase engagement blindly steer us toward outrage and division.

I recently posted comments about how I went from the Apple ecosystem to as open source as feasible. Nowadays the FairPhone only does duty as a GPS (hardmounted to my car). I have to be able to receive phone calls so I still use a Nokia, but it lives at home as much as possible.

For me personally, the total invasion of every aspect of life by the internet, and more specifically the tech giants in Silicon Valley, has killed all the joy and excitement I used to feel at the bleeding edge of tech. The bright-eyed sci-fi future turned out to be just another avenue for greed.

It's just a laptop now, and a gaming PC for the (very) occasional game with friends. If my house burned down with those in it, I doubt I'd feel loss.


Yeah, I had copy/paste on a Windows CE device, but Apple's, per usual, was better.

And what I meant by it is I would otherwise give the revolution award to the 3GS, but really the 4 was the first “feature complete” iPhone, without visible pixels. It was the maturation and commoditization moment for a previously somewhat niche product.


The internet would work just fine without smartphones.

Smartphones would be pretty useless without internet.


Smartphones without the internet would be PDAs with amazing cameras. Still rather valuable, though with a somewhat lower set of price brackets for budget/medium/premium.


Agreed. Back when I was wringing the maximum utility out of every dollar, an iPod Touch alongside a feature phone provided a whole lot of usefulness for a few years, even with no mobile connection.


AI has been around for as long as the internet, or even longer, depending on how you see it (counting the start of the internet as the beginning of TCP/IP).


As someone who has done some research on AI in the last 25 years I find this current iteration amazing.

25 years ago, when I was just starting my Bachelor's degree, AI was in a sort of "hiatus". The "great" AI procedures were mostly rule-based systems. We read about MYCIN and similar expert systems and how they could do amazing stuff. We also read about AI generating new strategies for Backgammon. That was amazing.

But during the 90s/2000s, nothing spectacular happened. We had some advances in neural networks, which were just an academic curiosity given their processing requirements. (I remember implementing an NN that could recognize vowels, in C, at school.) We also had Genetic Algorithms, which were basically optimizers. We had some interesting implementations such as the Creatures game series and similar.

Then "Machine Learning" started gaining notoriety. Industry started using "Machine Learning" techniques to perform some types of forecasting. This came a bit later after "Data warehousing". I remember all the talk about hypercubes and that crap. And how Walmart would extract "knowledge" and place beer six-packs in the diapers aisle so that "dad" would impulse-buy beer when "mom" makes him go to buy diapers.

The ML time was an exciting time, and for a while the mainstream thought that ML was AI ... but there were lots of other things cooking.

Now suddenly we have enough power to make non-trivial Neural Networks... and suddenly we have Deep Learning and these LLMs with HUGE networks. And industry has only just started looking at it. These are exciting times.

The other side of the coin is that during all this time, the "mainstream" goalposts for AI have been pushed and pushed... what was AI in the 60s-70s (rule-based systems like MYCIN) is not AI nowadays (they are just pattern matching, if-elses!! they say). What was AI in the 80s-90s (Genetic Algorithms, Multi-Agent systems) is not AI nowadays... and what was AI in the 2000-2010s (Random Forests, SVMs, AdaBoost) is not AI nowadays (just statistics!). But overall, it has been a very interesting subject to study.


I studied and researched NLP at the beginning of ML but before Deep Learning, and as someone interested in Linguistics I found it pretty unimpressive. Before I started I had some high expectations about what computers could do with human languages (I did not yet have a background in CS). I remember Neural Networks being presented as an interesting approach that could solve some problems, but nothing impressive.

After graduating I did do some work on rule-based and ML translation engines, but I quickly got bored of it. I have been focused on application development since then. I didn't pay much attention to Deep Learning until 2022 came around and it surprised me. Now I am reviewing old notes and trying to catch up. 2023 is a very exciting year for computing and human language from an application side (I don't think LLMs give any insight into the way humans generate language).


2023 AI is basically magic.

LLMs and diffusion models are the first AI technology to keep the AI name. All previously developed AIs lost their AI naming rights - magic powers - soon after production deployment. These two are truly AI: they’re still magic, perhaps even more so than 3 months ago, and given the number of parameters in them, this will likely remain true for a while. Working with ChatGPT and Stable Diffusion, seeing GPT-4 write games and Midjourney generate presidents of the US as Warhammer 40k characters, really feels surreal.

This is the sufficiently advanced technology. Literally indistinguishable from magic.


> LLMs and diffusion models are the first AI technology to keep the AI name

"AI" is not a static name that can be given to something and it remains AI. What was AI today will not be AI tomorrow, it'll be too simple by then.

Just as the AI written in the 1950s is not even AI by today's standards, while it was groundbreaking at the time.

This moment will too come to pass, and people will look at 2023 AI as "AI that lost its naming rights".


I hope not… it’d mean the current generation is obsolete, and it’s already scary enough as is


Why not? That means we're making progress. If we're not calling today's technologies "outdated" in the future, it means we're stagnating.

Just like most programmers who get better year by year will look back at any code they used to write and say "How did I think that was good code?!"


I can see the failures of the current tech as well as its benefits. There's at least one more generation coming, and so it's not at all impossible to me that these two, Transformers and Diffusion, will cease to be called AI in the same way as those other things.

(Although myself I would still call most of them AI).


Love your quote. I was just reading Arthur C. Clarke's wiki page.

Mobiles are magic to my grandma...

I know how NNs work... I've built them. I am so excited at the thought of finding my equivalent of what mobiles are to my grandma. I'm 40 years old and wish so hard that I live to see the marvels of 2060.


> "But during the 90s/2000s, nothing spectacular happened."

I don't know, around 2005 there were guys making a cat door that could recognize pictures of the home cat.

https://www.metafilter.com/15802/

There was also someone playing chess with a spam filter.

https://dbacl.sourceforge.net/spam_chess-1.html

Indeed there were anti-viruses and spam filters.

In the mobile industry I remember a company selling noise-reduction systems which used RNNs.


Well, the 70's AI is out there optimizing the software you write; making sure you can use all the bandwidth on your network links; routing people over the maps they have in their pockets.

The 90's AI is out there making trucks and ships cheaper to run; reducing the amount of wood, steel, plastics, etc our industries use; helping design bridges or microprocessors; optimizing your code at runtime inside the CPU.

The 10's AIs aren't out there a lot yet. Maybe that's just for lack of time. But it's only the research that moves on; the value stays.


Deep learning is out there in real products a lot more than you think. Any system that’s processing images, video, audio, or natural language (better than what existed in 2010, which was almost universally very poor) is almost certainly using DL.


People might not call them AIs, but the big AIs of the 2010's are definitely recommendation algorithms and they have had a huge influence on society (facebook, youtube, twitter, tiktok, instagram, etc.).


We haven't had AI like this.


I don’t think this is true. I would agree that the aha moment was not for everyone at the same time.

But for most people that I saw accessing the internet for the first time, it clicked immediately and they were hooked.


> I would agree that the aha moment was not for everyone at the same time.

Yes, and the aha moment for (generative) AI is at the same time for everyone. That's what makes it a true aha moment.


It did. Netscape provided that aha moment - via the Web - in the mid to late 1990s. It shook the landscape at the time. If you lived through it, you know what I'm talking about. It was suddenly popping up everywhere, across all media, across the general population. It rocketed into the mainstream in the matter of a few years. Even if people didn't fully realize it was the Internet making it possible.


For a small group of enthusiasts, sure. Ask random people whether they know or have used Netscape Navigator and 95%+ of the responses will be "huh, what is that?" Now ask them about ChatGPT and you will see the difference.


No it wasn't a small group of enthusiasts. It went nation-wide in the 1990s (in the US, and most affluent nations), rapidly. One of the fastest product adoptions in world history.

In a middle of nowhere town that I grew up in, in Appalachia, it was common to use Netscape + dial-up access circa 1995-1999 (that included at home, at work and at school). Tens of millions of Americans rapidly adopted the Web due to Netscape making it easy to navigate and that all happened in about five years. The Web became a common story point of movies, TV, advertisements (@ everywhere campaigns, email, urls on movie screen ads, etc etc); it went from almost nowhere to everywhere in roughly 4-6 years.

A tiny fraction of the US and world populations have utilized ChatGPT. It would be equivalent to where the Web was at in terms of adoption in ~1994.


I honestly believe a lot more people have used Netscape Navigator than ChatGPT. If you include Internet Explorer (which quickly took the crown from Netscape), that number will be dramatically higher.


You could make the case that the internet wasn't a true revolution, but rather social media and smartphones were the revolutions.

The internet was just a technology that made it possible, like the semiconductor or batteries, but for most people the internet without social media is just the place you go to check your e-mail and maybe read the news.


I can agree with that. It’s hard to point to a single moment. It was more like a 20-year assimilation of everything. Google search was the biggest watershed I can think of. The iPhone was incredible, but also a giant step backwards as it had such a limited browser and connectivity. And for many years it was sort of a weird counterculture place where only geeks gathered.


The internet evolved, it wasn't just useful all at once (as in "a demonstration that struck me"). Early users struggled every day to grow its utility and find more uses, all the way from a glorified BBS to what it is today.


The first time someone demo'ed the web to me, my first thought was -- why would I use this when I have ftp, gopher, archie, etc... The web definitely didn't seem revolutionary to me. And for me my introduction to the internet was even more mundane -- it was just email and the ability to telnet so I could do my programming assignments.


The web felt like both a simultaneous step forward and backward from HyperCard.

I think the web would have evolved quite differently had there been a good PowerPoint-like GUI design tool for building sites, one that wasn’t focused on the backend.


And a lot of people here helped it to become what it is today. Cheers!


Besides the hamster dance there was no one moment when the internet showed us its potential. Rather it was gradual.


That's only because there was no Internet to spread through to everyone quickly.


> "Artificial intelligence is as revolutionary as mobile phones and the Internet."

This is subtitle of the article, just above the byline, so I suspect he's talking about how impressive the demonstration was to him.


From the article:

> I’m lucky to have been involved with the PC revolution and the Internet revolution. I’m just as excited about this moment.


Here I am thinking the GUI abstracted too much away, and future generations (mine included) are worse off for it.


> late to the game with IE

What do you mean? In 2004 MSIE had 95% market share. It was toppled by Chrome only in 2012.


They were also late to the GUI game…


Last paragraph:

> I’m lucky to have been involved with the PC revolution and the Internet revolution.


Considering everything, it sounds more like another big wedge hammered into the divide. This will just make the rich richer, because now they can even save on artists, writers and other "expenses", all of which they can just add to their bonus now. All while you of course will still pay full price for whatever they offer.

Admittedly I write this in frustration, from the perspective of someone who has friends who did not get contract extensions because of AI, stock photo sites that are just flooded with AI content, and managers clearly using gpt to write half their mails.


AI will take the "means of production" from knowledge workers: their brains.


Well ... for now we can take the bullshit argument beloved by management: "work cheap now, it's a learning/growth opportunity that will pay off later" ... and throw it back in their face, saying: since AI is going to take our jobs (yes, that argument you were just using to try and convince us we are worth less), we'll just have to get all the money we can earn right now ... so pay up, because there is no "later".


> This will just make the rich richer

Wealth isn't zero-sum. Your friends who got laid off had their jobs automated. Now what will happen is they will go to other parts of the economy and add value where they are more needed. Total economic output has just increased and the world economy is now slightly more productive (same number of people making more stuff). You can't protest against your friends losing their jobs in this instance and not do the same for every previous technological invention going back to the wheel, all of which put someone out of a job. Because what you'd be advocating for is a lot of suffering and misery, in the form of material poverty.

If you want to protest against a lack of social policies contributing to wealth inequality, then just do that.

> All while you of course will still pay full price for whatever they offer.

It's far cheaper.


Where are they needed now, do you think? This isn't a rhetorical question.


I wonder whether it ultimately leads, in civilized societies, to the notion of a "basic income".


I can see it working for ghostwriting. One could use ChatGPT to generate unlimited Hardy Boys novels. But I'm not sure it's going to fool readers looking for quality.


> I’ve been thinking a lot about how AI can reduce some of the world’s worst inequities.

But he fails to mention in the risks how AI can also be a cause of increase for some of the world's worst inequalities, due to failure to have representative datasets, or simply by creating an ever-increasing technological gap between classes (just 2 off the top of my head).


Job displacement: AI could automate low-skilled jobs, leading to job losses for low-income workers while increasing demand for high-skilled workers, widening the income gap.

Digital divide: Unequal access to advanced AI technologies could exacerbate existing inequalities between developed and developing nations, as well as between urban and rural populations.

Concentration of wealth: As AI technologies are developed and controlled by a few large companies, wealth may become increasingly concentrated, leading to greater economic inequality.

Education disparity: Access to high-quality education and training in AI-related fields may be limited to individuals with financial resources, further increasing income inequality.

Biased AI systems: AI algorithms that inadvertently reinforce existing social biases can disproportionately affect disadvantaged populations, perpetuating and worsening economic inequality.

Unequal access to benefits: The benefits of AI-driven productivity gains may not be equally distributed across society, leading to increased income disparities.

Monopolistic behavior: AI-enabled companies with significant market power may use their technology to stifle competition, which could result in fewer opportunities for small businesses and entrepreneurs, contributing to economic inequality.

Financial markets: AI-driven financial technologies may disproportionately benefit those with access to capital and financial knowledge, further widening the wealth gap.

Healthcare: AI-powered diagnostics and treatments may not be accessible to everyone, especially those in low-income communities or developing nations, resulting in health disparities that could exacerbate economic inequality.

Surveillance and privacy: AI-powered surveillance technologies may disproportionately impact vulnerable populations, limiting their economic opportunities and reinforcing existing inequalities.


ChatGPT comment


> The rise of AI will free people up to do things that software never will—teaching, caring for patients, and supporting the elderly, for example.

Every sci-fi movie shows those jobs being replaced by AI (or robots). Probably they will not achieve the same level of connection (mostly because they will lack a human body to make expressions, at least at the beginning).

Those jobs will be replaced not because there is a human that will be replaced, but because there is no human doing them. There are plenty of people in need of education, care, or simply company.


I don't understand his reasoning behind this point. Previous productivity advancements in history didn't free us, because technology seemingly has little to do with this. Let's say AI will do your job: will you be free from the responsibility of a daunting task, or free from the opportunity to afford your next rent payment? What's the difference? Why would we spend more time on X if X becomes cheaper? Is there even a single living person who got freedom through his job being automated, except automata owners?


I'm not holding my breath while waiting for an AI-based solution that can go to an elderly patient's home, give them medicine, clean their house and help them with other daily activities. This would require highly advanced robotics and the ability to maneuver in varying physical environments; so far we have nothing even close to that.

I believe AI might instead replace a lot of work done by journalists, software devs, managers and other office workers who mainly produce and manipulate text. Then those people may be re-educated to become care workers.


What is the profit motive to make a robot with that capability today without the AI to drive it? None. But as soon as you have that AI suddenly the profit motive changes.


I like Bill Gates - out of the major iconic tech personas, he is one of the most interesting to me. Yes, he kind of didn't see the internet or phones coming, so it's not like he is an oracle, but he is right on many issues too. I think he is right on this one as well.

The most intriguing prediction was the medical one. He envisions training "medical AIs" that can act as a source of knowledge in poorer countries. Honestly, it sounds like madness to tell someone to rely on an LLM for medical advice today, although I can kind of imagine it happening over time as we get better at taming them.


I would think an LLM trained on an accurate description of virtually every medical diagnosis ever made (along with a description of the treatment and outcome) would be capable of diagnosing almost any newly described condition better than any human doctor. What I'd expect is a good deal further off is AI-controlled surgical robots, but we'll likely see them used within our lifetimes.


You'd probably start by using the LLM as a filter, not for the final diagnosis. If the LLM thinks something is strange (say, 5% of the time) --> refer to human doctor to ensure it isn't a false positive.

If the problem is a shortage of doctors, this is one solution.
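
In code terms that filter is just a confidence gate. A hypothetical sketch; the scoring function and the threshold here are invented for illustration, not a real system:

    # Hypothetical triage gate: the LLM only filters, a human decides.
    REFER_THRESHOLD = 0.95  # tuned so only the strangest ~5% of cases exceed it

    def llm_anomaly_score(case_notes: str) -> float:
        # Invented placeholder for an LLM call that rates how unusual
        # the case looks, from 0.0 (routine) to 1.0 (strange).
        raise NotImplementedError("wire up your model here")

    def triage(case_notes: str) -> str:
        if llm_anomaly_score(case_notes) >= REFER_THRESHOLD:
            return "refer to human doctor"  # rule out a false positive
        return "standard pathway"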


Only if you can provide an accurate description of the condition someone is suffering from. Without it, an LLM would probably recommend some average treatment.

Basically, prompt engineering for medical conditions.


Absolutely, and clearly in many cases such an accurate description couldn't be supplied purely by the patient self-describing their symptoms. But for others it seems fairly plausible. I'd still expect to see qualified professionals to be required to actually approve the recommended treatment (including prescribing medications etc.).


> Yes he kind of didn't see internet or phones coming

Microsoft, led by Gates, was far ahead in pursuing smartphones. They picked the wrong approach (Gates was certain it'd use a stylus for interaction; Jobs was certain the finger was the ideal tool) and lost to the path Apple pursued.

They absolutely saw it, at least a decade before the iPhone was launched.


Yes, I loved the direction Microsoft was going with their phones and Windows Mobile.

Microsoft Continuum was insane: you would be able to plug your phone into a USB-hub-like dock with monitor and keyboard, and it would effectively run a near-desktop Windows experience[1]. In 2015.

Tiles were also so ahead of their time it was insane.

---

[1] https://www.youtube.com/watch?v=ZQ44vz5MJ4s


I just had my first assistance from ChatGPT today, courtesy of a "junior" (I would prefer "less-experienced") engineer. He wanted to click a web button as part of his work on improving our steam engine (ok, calibration system), so I worked out the URL for the button and the appropriate curl incantation to trigger it. Gave him the URL and suggested he twiddle some python to do it.

About an hour later he came back, told me he'd asked ChatGPT to write the code for him and yes, it did work.

Anyway, thing is it really made me think about how much of the grunt/support work can potentially be offloaded like this; in this case it really did save us both some time.
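
The glue code in question is tiny, for what it's worth. A minimal sketch of its shape, assuming Python with the requests library; the host, path, and payload below are placeholders, not our actual endpoint:

    # Minimal sketch: "clicking" a web button is just an HTTP request.
    import requests

    def click_button(base_url: str) -> str:
        resp = requests.post(base_url + "/trigger", data={"action": "run"})
        resp.raise_for_status()  # fail loudly if the button "press" didn't take
        return resp.text

    print(click_button("http://calibration-rig.local:8080"))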

On the other hand, when I've played with it before, it got stuff really badly wrong (ever tried ego-surfing with it?).


What is ego-surfing?


Searching the internet for information about yourself personally. It was very flattering what it said about me (and in fact was mostly wrong) so I tried it on some co-workers as well and they also found it flattering but inaccurate.


>I’ve been thinking a lot about how AI can reduce some of the world’s worst inequities.

huh, closeAI is certainly not helping at this moment in time.


I thought this was a pretty good post, but I'm still pretty shocked that people as intelligent as Gates don't spend more time on the potential downsides. To be clear, he does highlight some of the downsides, but his responses are just "handwavy" in my opinion.

What is most baffling to me is that the highlight of this post is that Gates talks about the potential for AI to reduce worldwide inequality, where in all honesty I see the exact opposite being more likely. Our current economic systems just fundamentally can't deal with ever larger swaths of the population being unable to compete with AI. And despite all the answers I've seen of "well, duh, that's because our current economic systems are so broken!", I have yet to see any sort of plausible answer that addresses how we get from our current version of capitalism to something else where all the spoils just don't go to the few people that control the strongest AI implementations (that doesn't involve massive amounts of strife and despair).

Perhaps technologists, even ones as smart as Gates, are just too siloed in their worldview of technology being an ever increasing force for good. I actually find this particularly surprising from Gates, given how he's seen first hand that large swaths of the world blame him for the pandemic, or for wanting to vaccinate us with "microchips", due to the power of propaganda through social media.


Gates is still a stakeholder in Microsoft. I think this piece is not independent of what Microsoft is doing right now.


Even if he had no financial stake in the company, it's clear (and understandable) that he feels a sense of ownership of what it does today and wants to see it continue to succeed. To the point that he's quite likely to be somewhat blind to its failings.


> To be clear, he does highlight some of the downsides, but his responses are just "handwavy" in my opinion.

“the government should fix that”

Narrator: the government didn’t fix it.


[dead]


Ah yes. The founder of one of the top tech firms in the world really just doesn't know much about tech. Thanks for your analysis. You must be swimming in the millions from your genius investments.


It looks like AI is having its self-driving hype moment. It works neatly on simple roads while you keep your hands on the steering wheel and watch the car. There are wowzers, but the whole Internet feels like ChatGPT is about to disrupt the very fabric of the Universe.

I wonder what the equivalent stories will be of careless people turning the autopilot on and going to sleep while the car happily drives them off the highway.


I think the types of errors ChatGPT makes are indeed very similar to the ones a Tesla autopilot makes (probably due to the nature of NNs).

There is one big difference though: there are TONS of tasks ChatGPT can do that don't require 100% accuracy to be useful and actually replace humans.


What are some of those tasks humans are doing today where they need information but it doesn't have to be accurate information? Brainstorming for sitcoms?


If you populate the prompt with some context it can become highly accurate (see the sketch after this list):

- billing/ordering/anything clerical: add the product catalog and customer details to the prompt

- customer support: add the entire support manual and customer history to the prompt, use devices like prompt tools to allow the model to pull in data from external systems

- sales/pre-sales: the "internet sales" team of car dealerships can be replaced with GPT

- medical billing/transcription: this one should be pretty obvious

- anything where customer communication/status is needed (half the job of a project manager)

- paralegal: prompt with details of document to be drawn up, which is then reviewed by a lawyer

- journalism/copywriting/editing: should be obvious

- admins/assistants

In some cases a role is completely replaced. In others, it allows for fewer people to support the same or higher output. People supervise the model and catch any errors rather than handle each task directly.
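
To make the customer-support item concrete, here is a minimal sketch of what "add the manual to the prompt" can look like; the section markers and the escalation rule are illustrative, not a real deployment:

    # Sketch of context-stuffing for the support case. The model never
    # sees anything but the manual, the history, and the question.
    def build_support_prompt(manual: str, history: str, question: str) -> str:
        return (
            "You are a support agent. Answer ONLY from the manual below.\n"
            "--- MANUAL ---\n" + manual + "\n"
            "--- CUSTOMER HISTORY ---\n" + history + "\n"
            "--- QUESTION ---\n" + question + "\n"
            "If the manual does not cover it, say so and escalate."
        )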


Anything where it's less time consuming to validate work than to do the work yourself, which is a lot. If it's not real-time like driving then this is great.


I'm scratching my head at an output where it would take less time to verify than to just do it yourself. Once you've fully verified, you've put in the work required to do it yourself, no?


It’s a lot easier to proofread something than to write it from scratch.

Also, from a computational complexity angle, there's actually a mathematical notion for this: the class NP describes problems whose solutions are cheap to verify but potentially hard to find. One example could be a solution to a maze. It's easy to look at the maze solution and verify it connects start to end without crossing walls, but much more time consuming to actually solve the maze, with all the wrong turns needed to produce a solution.
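
To make the verify/solve gap concrete, a toy sketch in Python (the maze encoding with '#' walls and 'S'/'E' endpoints is invented for illustration):

    # Verifying a claimed maze solution is one linear pass over the path;
    # finding one may require exploring many dead ends.
    def is_valid_solution(maze, path):
        # maze: list of equal-length strings; path: list of (row, col) cells.
        rows, cols = len(maze), len(maze[0])
        def open_cell(r, c):
            return 0 <= r < rows and 0 <= c < cols and maze[r][c] != '#'
        if not path or not all(open_cell(r, c) for r, c in path):
            return False
        if maze[path[0][0]][path[0][1]] != 'S' or maze[path[-1][0]][path[-1][1]] != 'E':
            return False
        # Every consecutive pair of cells must be orthogonally adjacent.
        return all(abs(r1 - r2) + abs(c1 - c2) == 1
                   for (r1, c1), (r2, c2) in zip(path, path[1:]))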


Anything low stakes like learning a new language or writing some code/creating some art for a landing page for a new idea.


We've seen things like it spitting out incorrect code that looks correct. I would think that would set you up for failure if you tried to use this unreliable, highly confident teacher to learn the fundamentals of anything. Maybe you can create some art, but I guess that depends on how well you prompt, so it's probably easier to iterate quickly with an artist you hired on Fiverr.


Don't forget that this thing is way cheaper than a human. Of course I'd prefer to hire a real language tutor or a real artist on Fiverr, but for many use-cases (e.g. my first 3 months learning a new language) I am happy for something 98% as good for 1% of the price. Not everything is a binary where the tutor/helper/artist needs to be absolutely perfect else it's completely useless. There are low stakes use cases where you don't need perfection and are happy to accept that trade-off for the lower cost. So the real comparison in many situations is between using ChatGPT as a language tutor, and not having a language tutor at all, and I know which one I'd prefer.


If you can't afford the tutor, then it's not really a tradeoff between ChatGPT and the tutor. It's a tradeoff between ChatGPT and some other way to spend your time learning a language. I suspect you'd probably advance faster through your first three months of a language spending the same chunks of time with a language-learning book from the library than with ChatGPT, which might very well teach you how to conjugate a verb in Latin instead of Spanish, or make some other error.


Any kind of non-critical content production: marketing, advertising, entertainment, low-level hotlines, documentation, etc.


I would think for all of that you want the information to be accurate even if it's low-level or a first line of defence. Imagine if you looked up the documentation for your tool and ChatGPT wrote it... Now it could be flat out wrong and you are spending hours wondering why nothing makes sense. Likewise with marketing or advertising; just watch the "Pepsi, Where's My Jet?" documentary to see what happens if you aren't dotting all your i's and crossing all your t's when it comes to marketing.


> AGI doesn’t exist yet

I wouldn't be so sure about that!

This "ground truth" model doing the rounds seems to be outperforming all other models in their respective papers!

;p


I think that “ground truth” typically satisfies GI, but not A.


Do you have a reference for that?


I believe this was a joke. Papers compare trained models to "ground truth" to determine accuracy. Ground truth refers simply to the correct answers, but, the way it's used grammatically, you can interpret it as another model called "ground truth" which has a 100% accuracy at all tasks.


what ground truth model? is there some paper i can read? can you post a link to it?


Or it's the fall before the next AI Winter.


Oh please.

This is such a trope at this point.

We asked ChatGPT a question about a system the other day. Its response was,

> That number is equivalent to [another number]. That number controls $thing, and that number is the recommended value for $thing according to $standard. It relates to $unrelated_thing.

It did the math wrong when converting the number from one unit to another. (From days to seconds.) While it got the gist of the number right in the first half of its response, and it even cites essentially the right source, the claim it says the citation backs … is not backed by the citation. It then goes on to equate it to something that it isn't related to.
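
(For reference, that conversion is a single multiplication: seconds = days × 86,400, so e.g. a hypothetical 14 days would be 1,209,600 seconds. That is the level of arithmetic it fumbled.)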

It's words, strung together: i.e., bullshit. The higher-level reasoning (like that the math needs to work out, or that the citation should actually contain the cited claim) is utterly absent, and will never be present in these models no matter how much data you train them on.

The response is at best wrong, and at worst, fools someone into believing that it knows what it is talking about.

But yeah, this is going to replace me.

The actual article is … pretty long-winded, and really lacks specific, concrete examples as to how AI will actually move a needle somewhere. Take this chunk,

> Computers haven’t had the effect on education that many of us in the industry have hoped. There have been some good developments, including educational games and online sources of information like Wikipedia, but they haven’t had a meaningful effect on any of the measures of students’ achievement.

(Wikipedia hasn't had a meaningful effect?! [citation needed] there. Wikipedia affected my own achievements, and it has been a while since I was in school. I can only think that such a high quality encyclopedia is of immense value to students, particularly if you're not lucky enough … old enough? … to have an actual encyclopedia — even the use of the word actual here exposes how much effect Wikipedia has had!)

> But I think in the next five to 10 years, AI-driven software will finally deliver on the promise of revolutionizing the way people teach and learn. It will know your interests and your learning style so it can tailor content that will keep you engaged. It will measure your understanding, notice when you’re losing interest, and understand what kind of motivation you respond to. It will give immediate feedback.

We've been saying this about ads for decades. The last ad I remember seeing was "The Mandela Effect: In Queen's 'We Are The Champtions (sic)' He Doesn't Say '…Of The World'" (the ad title is factually inaccurate; it's no more related to my interests than any of the many Taboola and Taboola-like junk ads at the bottom of half the Internet now. I have no idea where that ad Ponzi scheme even ends.)

> There are many ways that AIs can assist teachers and administrators, including assessing a student’s understanding of a subject and giving advice on career planning. Teachers are already using tools like ChatGPT to provide comments on their students’ writing assignments.

And we're 3 paras in with very little meat here: the only example is "comments on student papers". Given the inability to reason … no, I don't think AI will be evaluating papers anytime soon. (The whole passage … if not the article … feels like a human vaguely touched up the output from GPT.)

And the entire idea of trying to help poor unfortunate souls in desperate places with AI feels very much like God vomit from a Cory Doctorow tale.


The part on education was the most cringe for me.

As a general rule: when rich big egos try to improve education, it ends poorly. Consider the Gates Foundation itself trying to MBAify teaching and then being astounded when it (a) didn't improve student outcomes and (b) drove teachers away (not surprising: if you have to put up with MBA levels of brain rot, you might as well get paid better than teaching can ever sustainably pay) [1].

[1] https://www.businessinsider.com/bill-melinda-gates-foundatio...


I'm not sure anyone is claiming that ChatGPT is going to steal their job. I think most people are just extrapolating the ridiculously steep rate of progression and pondering what the future holds.

Sure, maybe we're about to hit a wall. But I'm not seeing any evidence of that as it stands.

Also, it's not clear to me why it's impossible for an LLM to learn symbolic manipulation?

https://arxiv.org/abs/1912.01412


I got very let down by GPT a couple of days ago after I asked it to give me a couple of options for Korean food in a specific city in Mexico (Torreon, Coahuila), including their addresses. The thing, very sure of itself, named 3 restaurants and their corresponding addresses in that city. But they were completely wrong; the name of one restaurant I think is from California, and the address is an address for some public office in Torreon, Coahuila.

I was very excited by GPT when it came out, but this showed me what an absolutely confident bullshitter it is. It can't say "I don't know".


You are wrong. Try GPT-4. GPT-3 showed the glimmer on certain questions. GPT-4 is smart, maybe not quite as smart as me, but I fully expect GPT-5 to be as smart as me or smarter.


Don't ask language models to solve problems that involve math. That's not what they're useful for, and as you've seen here they'll get it laughably wrong.


Two years ago it would have been considered “too high level” for an AI to produce art. “It’s just a sophisticated collage” is what some comments would say about DALL-E. Now we are realising that a human’s ability to create art isn’t actually all that special and can be reduced to a model.

A year ago it would have been considered too high level for an AI to provide insightful feedback on the themes contained in the draft of a script, but here we are again.

Every comment here is words strung together, and we are going to realise that we were the ones bullshitting all along. AI will expose that our high-level reasoning (and who knows what else) is not particularly special.


You're wrong. I wonder if you've tried GPT-4 yet.


Let me be facetious here.

If GPT-4 was truly all that, it would be classified and at work in Ukraine (or somewhere) right now. Not running Akinator-tier promo marketing gigs.


It's not about GPT-3 or GPT-4 per se. It's about the rate of progress and the development of Theory of Mind.

GPT-4 plays ~1400-level chess having never been taught the rules of the game explicitly (except for potentially reading them on one of its ingested webpages)


> except for potentially reading them on one of its ingested webpages

Yeah, that's a pretty huge "except".

Chess literature is just literally hundreds and hundreds of transcripts of chess games. You just need to memorize them.


I would actually assume there is something more powerful than ChatGPT-4 at work helping the US military right now. Likewise for Russia. There's even a possibility that's the reason the conflict is dragging on so long.


You only think the conflict is dragging on for too long a time because of the media hype that one of the sides is going to win... any day now!

The fact is that most wars tend to drag on for years.


I've never got any impression either side is about to win at all. And yes, there are plenty of wars historically (and even recently, e.g. the Syrian civil war) that have dragged on for many many years. But Putin would never have launched the attack against Ukraine if he thought things would be where they are now. How far off are we from a military-trained AI that could make that sort of prediction accurately?


Unlike Bill Gates, I believe that this AI is the most revolutionary technology in human history, but at the same time, I am also concerned about its impact. The potential for AI to automate a significant number of jobs in a short period of time is alarming and unprecedented. It is ironic that the jobs we once believed to be the most difficult for AI to replace, such as those requiring high levels of human intelligence and creativity, are now at risk of being taken over.


"will seem as distant as the days when using a computer meant typing at a C:> prompt". Isn't it "C:\>"? Did bill really write this post? :-)


I like to think it's more that the age of the Genuine Personal Digital Assistant has begun. The LLMs are great at assisting you to do your job, but they won't take over yet.


Her is a forward-looking movie in a lot of ways.

Kids born today will grow up with their phones/device being their best friend and pet, like a Pokémon. You will be incredibly socially shunned if you don’t have one, and corporations will have not only data but an emotional hook into you for life


I'm not sure that's true. Even among teenagers right now, there's widespread recognition of the downsides of social media/always connectedness/etc, and if there's one thing we can rely on kids to do, it's to poke and prod the "sacred cows" of their parents generation to find out if they're any good or not.

The "always connected nature of cell phones" is pretty clearly not any good, and you can safely assume that most teenagers right now have watched at least one parent, grandparent, or relative get turned into something disgusting by being always connected - that one relative who's always on TwitFace railing about how evil Muskerberg is, or whatever the outrage of the day is. I've got a few. They're certain that Zuckerberg is a traitor, will be arrested Any Day Now, etc... and can't comprehend that by posting all about this on Facebook, all they're doing is helping out his yacht fund.

It's been an utterly toxic last decade of consumer technology, from human-first perspective, and I'd wager that those being born today will have the long view back over that and see how awful it's really been.

Remember: Last year, vinyl outsold CDs in "units moved" for the first time since the mid-1980s, and is showing no signs of slowing down any time soon. Instax cameras ("Polaroid, but newer") are quite popular. There's a rejection of modern digital/consumer tech going on.


Talking to ChatGPT is like talking to a marketing ad copy robot. (Not surprising, because that's basically what it is.) I couldn't imagine something more square and unhip for kids-these-days.

It can't even parrot this year's memes!


Sure, because it's tuned to be a corporate drone. You can easily tune it to respond using whatever tone you want. If it dropped the f-bomb and misspelled things, it would probably pass the Turing test with most people.


> You can easily tune it

You literally can’t [0], though you can prompt it.

[0] Unless you are an OpenAI insider or someone else with privileged access to the model that isn’t provided to the public.


<sigh> I mean that ChatGPT sounding like a "marketing ad copy robot" is not an inherent limitation of ChatGPT; it's merely the finishing stage of its training process. If OpenAI wanted to, they could have a hundred ChatGPT flavors.


It's true that OpenAI can change it radically by tuning, but to a very significant extent users can change it by prompting. Prompting is the more actionable route for people building apps on top of the chat models, since they lack the ability to do tuning (OpenAI makes fine-tuning available for the base GPT-3.5-class models but not for the chat models, including GPT-4, which is currently available, even via the API, only as a chat model).
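
For anyone who wants the prompting route, here's a minimal sketch (assuming the openai Python package as shipped in early 2023 and an API key in the OPENAI_API_KEY environment variable; the persona text is made up):

    # Steering tone by prompting, not tuning: the system message sets the
    # voice, and no model weights are changed. Assumes openai~=0.27.
    import openai

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Answer like a bored teenager: slang, lowercase, "
                        "no marketing ad copy."},
            {"role": "user", "content": "Why is the sky blue?"},
        ],
    )
    print(response["choices"][0]["message"]["content"])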


Good. I can't wait to boot up Megaman.exe and jack into my oven!


ChatGPT is only the second version, and it can already take over every single financial job, every single copywriting job, entry-level programming, and many more.

Imagine what it will do in 5 years, many versions later. Intellect-based jobs as we know them are going to die.


I was literally just trying to refresh my memory on some linear algebra yesterday and I kept getting stuck in infinite loops of "Oops, looks like we did something wrong, please check the work and try again."

I know it's a trope to insert, "but I did x thing and it failed miserably," but I do think people actually using it on a daily basis have a much more realistic view of its capabilities.


The comments here are particularly disappointing. Relitigating old Microsoft debates (all the way down to adolescent insults — instead of Micro$oft we have (c)opywrong), blaming Microsoft for global inequality (!), multiple references to AI being “bullshit” because a model didn’t perform up to snuff on a particular task on a particular try.

HN used to (this is a new account, but I am not new to HN) embrace technology, optimism, and the intersection of the two. Now it’s a race to see who can demonstrate their bona fides by sneering enough or affecting enough disillusionment. It’s really disappointing! And yes, I realize I’m not being the change I want to see, but it is so disheartening to see so many grumpy people racing to tear others down.


My primary concern with recent advances in AI is the imbalance it will create. Most optimistic views on AI come from people already employed by the major players (OpenAI, Microsoft, Nvidia, etc.), and one of their main arguments is that while they acknowledge AI will displace numerous jobs, it will also generate superior ones. However, the main problem arises when these "better" jobs replace the lost ones but are performed by fewer individuals. One developer will be capable of accomplishing the work of at least three, and the same applies to lawyers, analysts, and so on.


> I picked AP Bio because the test is more than a simple regurgitation of scientific facts—it asks you to think critically about biology.

Of course, the LLM is just regurgitating the facts from the textbooks as well as thousands or millions of test samples.

It's certainly possible to formulate a biology question (by carefully choosing heterodox wording and structure) that a decent student would be able to answer easily but an LLM would be helpless at.

The Age of Bullshit continues unabated.


Yeah, I think that we need to probe how much these models are merely memorising their inputs rather than learning a more general, compressed (sort of) representation.

Honestly though, ten years ago NLP was using Bag of Words and Markov Chains; it's pretty phenomenal how far it's come.

To be fair, I mostly agree with your main point, but it's still remarkably impressive progress in a relatively short period of time.
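
For contrast, here's roughly what that ten-years-ago baseline looked like: a toy word-level Markov chain that samples the next word from observed bigram pairs (illustrative only; the "corpus" is made up):

    # Toy word-level Markov chain, the kind of baseline NLP leaned on a
    # decade ago.
    import random
    from collections import defaultdict

    corpus = ("the age of ai has begun . the age of the internet has "
              "passed . ai has begun to write text .").split()

    chain = defaultdict(list)
    for prev, cur in zip(corpus, corpus[1:]):
        chain[prev].append(cur)

    random.seed(0)
    word, output = "the", ["the"]
    for _ in range(10):
        word = random.choice(chain[word])  # sample from bigram successors
        output.append(word)
    print(" ".join(output))  # locally plausible, globally meaningless text

No compressed representation, no generalization: it can only re-emit word pairs it has literally seen, which is the extreme end of the memorisation spectrum described above.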


It is impressive and useful technology, but Gates is being misleading by implying that the LLM is doing "critical thinking", when it is doing the opposite -- shallow regurgitation of "information" (i.e. tokens).


Experiments have demonstrated that GPT-4 does have a basic theory of mind, so it does not simply regurgitate.


No, experiments have demonstrated that GPT-4 can respond as if it had theory of mind. Basically all measures of psychological constructs need to be rethought given the development of LLMs.


> Of course, the LLM is just regurgitating the facts from the textbooks as well as thousands or millions of test samples.

Indeed. For some of us in the world of business, this is actually a very compelling "just".

Most use cases are indeed still bullshit, but I think we are getting to a point where maybe there exists at least one that isn't, and many of us don't want to be left holding the bag when the threshold hits 50%.


> Of course, the LLM is just regurgitating the facts from the textbooks as well as thousands or millions of test samples.

Sure, it is regurgitating facts. What's impressive is that it knows which facts to regurgitate.


It also makes me question how different this is from what we (humans) do. We learn a bunch of information from a variety of sources and experiences and synthesize that into answers. Sometimes we are correct and sometimes we are not. I suppose the difference is that we can extrapolate from our synthesized information vs simply formulating answers, but what the best LLMs do doesn't seem radically different than the average person.


The difference is that we are reading and comprehending information, while the model is making a prediction of what word is most likely to come next based on a huge dataset of words. As such, there have been many documented cases of the model seeming correct at first glance, maybe even second glance, but being totally wrong in a way that a human wouldn't be. Like, ChatGPT will give you a summary and a title and an author and a year and be entirely confident that this is the truth, whereas the title was written by a different author, perhaps in a related field, or was made up entirely. A human would not behave like this at all. If I asked you for some info you hardly knew, you wouldn't word-vomit a paragraph like this; you'd reply with a confidence score, "I don't know," or "I know about this but not that," perhaps. It seems that if there is any money or time on the line, you'd rather use the human.


UBI, UBI, UBI! Then who's going to clean the toilet? Yes, that's exactly what needs to happen right now. Less supply of workforce, high demand for automation. Less carbon footprint, long term. Whoever has better AI will win any major future global conflict; future wars will be fought by drones and robot spiders. I feel like every piece of the puzzle is coming together. Look around. Everyone's getting old. No babies being born. Yes, we are at THAT point.


The transition would probably not be very smooth or peaceful.


> Philanthropy is my full-time job these days, and I’ve been thinking a lot about how—in addition to helping people be more productive—AI can reduce some of the world’s worst inequities. Globally, the worst inequity is in health: 5 million children under the age of 5 die every year.

One great first step OpenAI can take towards this is by actually making their models open. As long as the models are intentionally kept back and intentionally profited from by already wealthy private citizens, inequality will grow, not shrink.

Not to mention that people in poor countries (which Gates cites as the "hotspots" for inequality in health) won't be able to use some HTTP API based in US/SF. Never mind that latency will kill requests; some of these places don't even have internet connectivity. How is OpenAI supposed to help those communities if the "inequality fighter GPT-4" is gate-kept behind a centralized HTTP API?


Honestly, I think it's borderline criminal that Gates says stuff like this: "AI can reduce some of the world’s worst inequities. Globally, the worst inequity is in health: 5 million children under the age of 5 die every year." How is this not false advertising at best, and outright lying at worst, until AI is actually used to do something about those 5 million children dying? As far as I am aware, the issue with famine and starvation in the global south is not that we have been unable to model these problems with our current tooling (which probably already involves ML) and now need to ask a chatbot what to do. It's been one of political will, considering how much wealth is present in this world.


Before this happens, AI will be utilized to better navigate billing at hospitals to maximize profits.


Let’s play a game of “conservative impacts of AI on the labor force”

- the Bureau of Labor Statistics thinks we should have 15% more software engineers in 10 years. What if we had 10% fewer?

- 800k project managers - how about 650k in 10 years?

- nearly 4 million executive assistants and admins - let's take 1M off that total

- millions combined (still) in clerical roles across industries, including financial, medical, warehouse, etc. - let's take 1-2M off that total

- journalists, writers, editors, graphic designers, interior designers, illustrators, …

… and the list goes on. These are intentionally conservative numbers (no 50% cuts or jobs going away entirely), but they still add up to millions; see the sum sketched below.
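
Taking the list's own numbers at face value (the ~1.5M software-engineer base below is my placeholder assumption, not a figure from the list), the arithmetic:

    # Back-of-envelope sum of the cuts sketched above. The software-engineer
    # base (~1.5M) is an assumed placeholder; the rest are the list's figures.
    cuts = {
        "software engineers (10% of an assumed ~1.5M base)": 0.10 * 1_500_000,
        "project managers (800k -> 650k)": 800_000 - 650_000,
        "executive assistants/admins (4M -> 3M)": 1_000_000,
        "clerical roles (midpoint of the 1-2M cut)": 1_500_000,
    }
    print(f"{sum(cuts.values()):,.0f} jobs")  # 2,800,000 -- millions, as claimed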


In what sense is any of that “conservative”?

(I mean, my intuition would be, for example, that it means more software engineers, and disproportionately even more PMs -- though probably often with a different title -- as it expands the market for software solutions and reduces the engineering-effort-to-product-decision ratio. But that's not conservative, any more than your catastrophizing is.)


Okay let’s take the software industry out of it. Do you think millions of non-software jobs will be affected?



> [...] reduce some of the world’s worst inequities.

Is it fair to say that '90s and early-2000s Microsoft was a great proponent and enabler of such inequities?


Microsoft was the main driving force behind bringing down the price of computers and software to the point where they could be in every home and in every classroom around the world. They have done more to educate people in developing countries and offer them a path out of poverty than maybe any other company in history.


At one point, I remember a Windows Vista license being a significant part of the price of a new computer. At the same time, Microsoft made huge efforts to hurt Linux's image on the desktop, which could have really brought "down the price of computers and software".

One may see them as a good "driving force behind bringing down the price of computers and software" for a very limited time. I don't know if the result of '90s and early-2000s Microsoft was a net positive for the field.

I don't think "They have done more to educate people in developing countries [..] than maybe any other company in history", considering how they also made many governments, institutions, and people dependent on their software. This eventually cost those same entities a lot more than it would have under the healthy competition Microsoft always fought against.


What hurt Linux was that it was – and still is – completely unusable for the average non-technical person. Microsoft wasn't the one holding back adoption.


Been living in Mali for some time, worked in a few African countries.

Done charity work there.

Created an IT company there.

You have it totally backward.

IT is NOT helping the poorest part of Africa. Quite the opposite.

But it does help us create a lot of BS virtue signaling about all the good we do in Africa.

The explanation would take a lot of text, unfortunately and I'm not in the mood for the rant again. But it comes from experience, not from the idealized narrative you have been sold.


The main reason Microsoft software was ever used in developing countries was because it was easy enough to pirate -- not something the company did "on purpose", that I'm aware of.


This! And I have to back you on that: “And as long as they’re going to steal it, we want them to steal ours. They’ll get sort of addicted, and then we’ll somehow figure out how to collect sometime in the next decade.” [1]

[1] https://www.latimes.com/archives/la-xpm-2006-apr-09-fi-micro...


Oh, I had never heard it was an explicit strategy; I thought it sort of happened by accident. Good to know.


We could have just used Linux, which was free, instead of pirating Windows, but we didn't.

People in developing countries chose to pirate and use Windows because it ran most of the important apps of the era, namely video games, which were also easy to pirate, and because it ran on any custom built PC from the parts each user could afford to scrap together. It was the perfect storm.


Is it your theory that capitalism makes people poorer?


If the money supply is fixed, and one person is rich, where does that money come from? By it not going to someone else.


It’s interesting how the first benefit he lists for AI is productivity. It was a similar message in one of OpenAI’s recent blog posts.


> AI can reduce some of the world’s worst inequities.

To reduce inequity abolish (c)opywrong laws.

BG grew rich via (c)opywrong laws.

That is the dilemma. Does he see it and just not connect the dots? Or can he connect the dots but doesn't really care? I am not sure.


What are "(c)opywrong" laws?

I'm not being facetious. What specifically are you drawing objection to?


Laws that make it illegal to use one's own property to share information.

A correction of the copyright misnomer.


Got it, so you have a general aversion to copyright, the concept of intellectual property, and to an extent, the funding of knowledge work I'm assuming?

You didn't carve out any exceptions, hence my assumption.


I have no aversion to `the funding of knowledge work`. (c)opywrong law is actually a huge tax on doing knowledge work for everyone except the 1%.


Can you help me understand how "[copyright] law is actually a huge tax on doing knowledge work for everyone except the 1%?"


Revenue gained from (c)opywrong law, minus the sum of [all fees], [royalties], [license overhead], and [additional red tape someone has to jump through to use information], minus the sum of the same for all government agencies that one pays taxes to.


> Revenue gained from [copyright] law, minus the sum of [all fees], [royalties], [license overhead], and [additional red tape someone has to jump through to use information], minus the sum of the same for all government agencies that one pays taxes to.

I'm not sure this answers the question. Whose revenue is this, and which of the elements you described is "a tax"?


He didn't grow rich via copywrong laws. He grew rich by creating value and selling it. Their business model was selling software. Copyright laws kept people from copying the software freely without paying for it.


Joshua Ward didn't grow rich via slavery. He grew rich by creating cotton and selling it. Their business model was selling cotton. Slavery laws kept people from breathing freely without paying for it.


I think Gates is right that AI is the next era (obvious) and wrong about its positive improvements. More convenient productivity has brought about more garbage output and done nothing to relieve modern anxiety. If anything, AI is going to cause us to waste time on even more stupendously meaningless things because it's doing all our thinking. And anyway, it can waste an infinite amount of our time by having us read or watch its infinite output.


A few paragraphs in, we get this:

> The rise of AI will free people up to do things that software never will—teaching, caring for patients, and supporting the elderly, for example.

First, he's already shown how teaching jobs can be easily eliminated by AI, just a few paragraphs above. And 'caring for patients' and 'supporting the elderly' sound sort of OK, until you realize that these lines of work are all sharply constrained by available funding. Further competition will drive wages down further. Not to mention that an out-of-work software engineer would probably make a terrible nurse, even if working under the watchful eye of HospitalAI or whatever.

Moreover, these are all already underfunded pink-collar jobs -- which means you're going to have a lot of men that, for reasons of identity or machismo or inclination or what-have-you, simply won't want to do them. We already have a crisis in college and career performance among men, especially young men. Telling them that they will be helping dress their grandparents for $16/h, with zero opportunity to advance, is probably not going to go down great.

Statistically, this is precisely the demographic most inclined to cause political unrest. Further declines in opportunity will therefore probably lead to a rise in, for example, far-right militias, QAnon-like phenomena, and attempted coups.

[aside: Jan 6 will probably be seen as a 'Columbine moment' -- a singular event that spawned countless imitations, to which we will become so inured that they will become part of the fabric of the evening news. (Perhaps The Onion will write an article for coups that it recycles after every coup.)]

So, once again, we have an AI-celebratory article that sounds great until you press it on the question of: what will we be doing with our time, and how will any of us make a living doing it?

Billg has no coherent idea.

Socioeconomically, we're the proverbial racetrack greyhound that caught the mechanical rabbit. The game is over, the greyhound is confused, and everybody is checking their losses, wondering if there will be another race any time soon.

EDIT: light revision for clarity (broke out the paragraph beginning 'Jan 6' as an aside, removed the word 'ever' in front of 'any time soon', and cut redundant/reiterative first paragraph) at 13:32PDT Mar 21


AI, when it becomes powerful enough, will be used as a tool by world leaders, the same way the atom bomb was used as a weapon to end WW2 by beating the adversary into the ground. This is the same story repeated throughout written history; why should it change now just because Mr. Gates and some other philanthropists are optimists about human nature?


For only $29.99 a month, you too can get yourself out of abject poverty. Sign up today!


I’m more inclined to call it the Age of Disinformation


That's indeed what I'd call it too.

AI was already a thing with robotics / games.

What's being promoted these days are AI-assisted generative tools; I personally call it "AI-assisted copy pasta".


They already had AI-assisted autocomplete in Gmail (Smart Compose). I wonder how many managers have operated by just hitting tab, tab, tab, send, without writing any English at all, these last few years?


It's funny because I was searching on Bing Chat earlier today and came across the Linux shell construct <().

I enquired about it, and it then referenced an article.

At the bottom of the article was something along the lines of "This content was generated with ChatGPT.".

It made me think: if people stop thinking for themselves and start making articles which contain slight errors, those errors will become pervasive, and then these AIs could enter a loop where they generate confident-sounding incorrect information.

It'll then make sense to weight information sources - but then who is in charge of deciding what weights to give what sources?
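
As a toy sketch of what "weighting sources" could look like (the claims, sources, and weights below are entirely made up, which is exactly the problem: someone has to assign them):

    # Toy source-weighting scheme: a claim is accepted or rejected by the
    # total trust weight behind each verdict. All weights here are made up.
    from collections import defaultdict

    votes = [
        # (verdict on "<() is a linux command", source, trust weight)
        ("wrong: it's bash process substitution", "human-written manual", 0.9),
        ("correct",                               "AI-generated article", 0.2),
        ("correct",                               "AI-generated blog",    0.2),
    ]

    tally = defaultdict(float)
    for verdict, source, weight in votes:
        tally[verdict] += weight

    print(max(tally, key=tally.get))  # the manual wins... if we trust it more

Note that if enough AI-generated sources repeat the same error, any scheme that derives weights from popularity flips to the wrong answer, which is precisely the loop described above.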

I guess in the end where we get our "facts" will be decided by our "Benevolent AI Dictator Overlords".


Can you elaborate why you think this is the most notable feature as opposed to others?


OpenAI is the continuation of the liberal, neo-Democrat agenda. I thought we were in the age of disinformation with the media, social networks, and censorship these past few years, and thought with slight relief that Elon Musk buying Twitter would be the end of an era and that mankind would be brought back to a safe space where debates are seen as enriching and divergence of opinion as a strength. But boy, was I wrong: it was just the last sign of an old world trying to fight against a demon of smoke, revealing to all of us a new woke world where AIs are agents of disinformation that will shape mankind through the woke agenda of the 1% for the good of no one but them.

Truly evil. And all the people praising what ChatGPT can do will be the first ones to tell us that they warned us back then.


Why do you think this is coming from "GatesNotes"?

This guy needs to GTFO of humanity at this point.

ANYTHING this guy says about AI is going to be because he is going to fund an AI-Malevolence-Protection-Racket, then sell some pre-planned AI-cyber-storm solution that every computer on the planet will be mandated to install.


Given all the world's pain and suffering that has been caused by the GUI, I fear AI for the first time...


My controversial opinion is that the GUI was a leap back to the Stone Age. Teach a person to fish? Nah, let's teach them nothing about the rod, the line, the water, the hook, or even the fish, and sell them fish sticks since they can barely handle a microwave. Abstraction is distraction.


> Risks and problems with AI

One of the biggest risks right now is viruses created by transformers trained on known viruses and anti-virus binaries and databases.

Humans created relatively simple viruses which took down hospitals, etc. A transformer could make new "super" viruses and keep fighting anti-virus software and firewalls in real time with high-quality custom attacks.


Anti-virus software has been trivially bypassed for decades without the need for AI.


Is it a breakthrough in AI or affordable hardware?


> it raises hard questions about the workforce, the legal system, privacy, bias, and more

Privacy is the least talked about aspect of our current AI systems from what I've gathered. Perhaps we're sleepwalking into yet another surveillance capitalism nightmare and handing over data blindly to these systems, where they learn our preferences, private thoughts, habits, etc (And store all that forever).

Anyone reluctant to 'feed the beast' as it were, and starve these AI systems of personal data that could be potentially weaponized at a future date for nefarious purposes?


Title did remind me of this: https://youtu.be/R1x4JkZTfPs


AI is little better today than in the early 1990s; mostly it's a barn-broadside-hitting dung fest.


Yeah all this stuff is just advanced chatbots, stuff we've had for a long time


Bill also said most meetings would be happening in the metaverse in two years


Most meetings likely do happen over Zoom/WebEx/Jitsi/FaceTime at this point, so that would make him correct (minus the VR part, but I think the sentiment was "virtual meetings").


What would Daddy Marx say?


He would say that we are witnessing a collapse wavefront moving through the labour market, yielding a concentration of capital that has never before been seen, and that will almost certainly foment a revolution.

He'd be right, too. But with an eye to Jan 6, I fear it won't be the revolution he wanted.


It's the other way around: the tests and materials which we believed made humans distinctive from AI (our "models" of human measurement, standardized tests) seem to fail when examined seriously. They are purported to prevent computer cheating only because we enforce conditions that allow us to claim the results are accurate.

Clearly that is not the case, and the tests or measurements themselves lack much or all of their value.

What most people believe makes someone’s opinion valuable (economic production, social wanking, allowing and enforcing inequality) is ironically what prevents most of these people from being in a room with actually intelligent people.

Take the case of Grigori Perelman: a mathematical genius who proved the Poincaré conjecture and who now contributes nothing but his personal disgust at the state of the world and at how willing people are to cheat and steal.

Bill Gates has made a lot of money, but he has never produced any deep insights or valuable contributions. He can make more money, but history will bury the man because he is nothing more than someone who became an exalted dung beetle of wealth. Mindless, brainless, and stacking that shit.

I find this article in the same vein: mostly some shitty business crap that will extol its fecal values to the investors and businessmen who are now immune to their own stink.



