Sam Altman goes before US Congress to propose licenses for building AI (reuters.com)
914 points by vforgione on May 16, 2023 | 1214 comments




We need to MAKE SURE that AI as a technology ISN'T controlled by a small number of powerful corporations with connections to governments.

To expound, this just seems like a power grab to me, to "lock in" the lead and keep AI controlled by a small number of corporations that can afford to license and operate the technologies. Obviously, this will create a critical nexus of control for a small number of well connected and well heeled investors and is to be avoided at all costs.

It's also deeply troubling that regulatory capture is such a problem these days; putting a government entity in front of the use and existence of this technology is a double whammy. It's not simply about innovation.

The current generation of AIs are "scary" to the uninitiated because they are uncanny valley material, but beyond impersonation they don't show the novel intelligence of an AGI... yet. It seems like OpenAI/Microsoft is doing a LOT of theater to try to build a regulatory lock-in on their short-term technology advantage. It's a smart strategy, and I think Congress will fall for it.

But goodness gracious we need to be going in the EXACT OPPOSITE direction — open source "core inspectable" AIs that millions of people can examine and tear apart, including and ESPECIALLY the training data and processes that create them.

And if you think this isn't an issue, I wrote this post an hour or two before I managed to take it live because Comcast went out at my house, and we have no viable alternative competitors in my area. We're about to do the same thing with AI, but instead of Internet access it's future digital brains that can control all aspects of a society.


This is the definition of regulatory capture. Altman should be invited to speak so that we understand the ideas in his head but anything he suggests should be categorically rejected because he’s just not in a position to be trusted. If what he suggests are good ideas then hopefully we can arrive at them in some other way with a clean chain of custody.

Although I assume if he’s speaking on AI they actually intend on considering his thoughts more seriously than I suggest.


There is also growing speculation that the current level of AI may have peaked in a bang for buck sense.

If this is so, and given the concrete examples of cheap derived models learning from the first movers and rapidly (and did I mention cheaply) closing the gap to this peak, the optimal self-serving corporate play is to invite regulation.

After the legislative moats go up, it is once again about who has the biggest legal team ...


There's no chance that we've peaked in a bang for buck sense - we still haven't adequately investigated sparse networks.

Relevantish: https://arxiv.org/abs/2301.00774

The fact that we can reach those levels of sparseness with pruning also indicates that we're not doing a very good job of generating the initial network conditions.

Being able to come up with trainable initial settings for sparse networks across different topologies is hard, but given that we've had a degree of success with pre-trained networks, pre-training and pre-pruning might also allow for sparse networks with minimally compromised learning capabilities.

If it's possible to pre-train composable network modules, it might also be feasible to define trainable sparse networks with significantly relaxed topological constraints.
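
For anyone who wants to poke at this themselves, here's a minimal sketch of one-shot unstructured magnitude pruning, assuming PyTorch and its built-in pruning utilities. It's only the crudest version of the idea; the SparseGPT paper linked above does layer-wise reconstruction rather than naive magnitude thresholding, so treat this as an illustration of the mechanism, not that method.

    import torch.nn as nn
    import torch.nn.utils.prune as prune

    # Toy stand-in for a much larger model.
    model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 512))

    # Zero out the 90% of weights with smallest magnitude in each Linear layer.
    for module in model.modules():
        if isinstance(module, nn.Linear):
            prune.l1_unstructured(module, name="weight", amount=0.9)
            prune.remove(module, "weight")  # bake the sparsity mask into the tensor

    # Confirm the achieved sparsity (biases are left dense, so it lands just under 90%).
    total = sum(p.numel() for p in model.parameters())
    nonzero = sum((p != 0).sum().item() for p in model.parameters())
    print(f"overall sparsity: {1 - nonzero / total:.2%}")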


50% sparsity is almost certainly already being used, given that it is accelerated in current Nvidia hardware at both training time and inference time. At training time it's usable dynamically through RigL ("Rigging the Lottery: Making All Tickets Winners", https://arxiv.org/pdf/1911.11134.pdf), which also addresses your point about initial conditions being locked in.


I don’t think you really disagree with GP? I think the argument is we peaked on “throw GPUs at it”?

We have all kinds of advancements to make training cheaper, models computationally cheaper, smaller, etc.

Once that happens/happened, it benefits OAI to throw up walls via legislation.


No way has training hit any kind of cost, computing or training data efficiency peak.

Big tech advances, like the models of the last year or so, don't happen without a long tail of significant improvements based on fine tuning, at a minimum.

The number of advances being announced by disparate groups, even individuals, also indicates improvements are going to continue at a fast pace.


Yeah, it's a little bit RTFC to be honest.


The efficiency of training has very likely not reached its peak, or even come near it. We are still inefficient. But the bottleneck might be elsewhere: in the data we use to feed these models.

Maybe not peaked yet, but the case can be made that we’re not seeing infinite supply…


Why? Because there haven't been any new developments in the last week? Oh wait, there have.


If “peaked” means impact and “bang for buck” means per dollar, then it has only peaked if the example is letting the population at large use these free tools like chatbots, for fun and minimal profit. But if we consider how they can be used to manipulate people at scale with misinformation, then that's an example where I think we've not yet seen the peak. So we should at least thoroughly discuss and think about it, to see if we can in any way mitigate certain negative societal outcomes.


Counterpoint: there is growing speculation we are just about to transition to AGI.


Growing among who? The more I learn about and use LLMs the more convinced I am we're in a local maxima and the only way they're going to improve is by getting smaller and cheaper to run. They're still terrible at logical reasoning.

We're going to get some super cool and some super dystopian stuff out of them but LLMs are never going to go into a recursive loop of self-improvement and become machine gods.


> The more I learn about and use LLMs the more convinced I am we're in a local maxima

Not sure why you would believe that.

Inside view: qualitative improvements LLMs made at scale took everyone by surprise; I don't think anyone understands them enough to make a convincing argument that LLMs have exhausted their potential.

Outside view: what local maximum? Wake me up when someone else makes an LLM comparable in performance to GPT-4. Right now, there is no local maximum. There's one model far ahead of the rest, and that model is actually below its peak performance - a side effect of OpenAI lobotomizing it with aggressive RLHF. The only thing remotely suggesting we shouldn't expect further improvements is... OpenAI saying they kinda want to try some other things, and (pinky swear!) aren't training GPT-4's successor.

> and the only way they're going to improve is by getting smaller and cheaper to run.

Meaning they'll be easier to chain. The next big leap could in fact be a bunch of compressed, power-efficient LLMs talking to each other. Possibly even managing their own deployment.

> They're still terrible at logical reasoning.

So is your unconscious / system 1 / gut feel. LLMs are less like one's whole mind, and much more like one's "inner voice". Logical skills aren't automatic, they're algorithmic. Who knows what the limit is of a design in which an LLM as "system 1" operates a much larger, symbolic, algorithmic suite of "system 2" software? We're barely scratching the surface here.
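
To make the "system 1 driving system 2" framing concrete, here's a rough sketch; `call_llm` and `symbolic_check` are hypothetical placeholders (the checker could be a unit-test runner, a SAT solver, a type checker), so this is only the shape of the loop, not a working implementation.

    def call_llm(prompt: str) -> str:
        """Hypothetical stand-in for any chat-completion API."""
        raise NotImplementedError

    def symbolic_check(candidate: str) -> tuple:
        """Hypothetical 'system 2': exact, rule-based verification (tests, solver, etc.).
        Returns (ok, feedback)."""
        raise NotImplementedError

    def solve(task: str, max_rounds: int = 3):
        prompt = task
        for _ in range(max_rounds):
            candidate = call_llm(prompt)              # fast, associative proposal ("system 1")
            ok, feedback = symbolic_check(candidate)  # slow, exact verdict ("system 2")
            if ok:
                return candidate
            # Feed the precise failure back so the next proposal can improve on it.
            prompt = f"{task}\n\nYour previous attempt failed this check:\n{feedback}\nTry again."
        return None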


>They're still terrible at logical reasoning.

2 years ago a machine that understands natural language and is capable of any arbitrary, free-form logic or problem solving was pure science fiction. I'm baffled by this kind of dismissal tbh.

>but LLMs are never going to go into a recursive loop of self-improvement

never is a long time.


Two years ago we already had GPT-2, which was capable of some problem solving and logic following. It was archaic, sure, it produced a lot of gibberish, yes, but if you followed OpenAI releases closely, you wouldn't think that something like GPT-3.5 was "pure science fiction"; it would just look like the inevitable evolution of GPT-2 over a couple of years given the right conditions.


that's pedantic. switch 2 years to 5 years and the point still stands.


No it isn't. Even before transformers, people were doing cool things with LSTMs, and RNNs before that. People following this space haven't really been surprised by any of these advancements. It's a straightforward path imo.


In hindsight it’s an obvious evolution, but in practice vanishingly few people saw it coming.


Few people saw it coming in just two years, sure. But most people following this space were already expecting a big evolution like the one we saw in 5-ish years.

For example, take this thread: https://news.ycombinator.com/item?id=21717022

It's a text RPG game built on top of GPT-2 that could follow arbitrary instructions. It was a full project with custom training for something that you can get with a single prompt on ChatGPT nowadays, but it clearly showcased what LLMs were capable of and things we take for granted now. It was clear, back then, that at some point ChatGPT would happen.


I’m agreeing with this viewpoint the more I use LLMs.

They’re text generators that can generate compelling content because they’re so good at generating text.

I don’t think AGI will arise from a text generator.


My thoughts exactly. It's hard to see signal among all the noise surrounding LLMs. Even if they say they're gonna hurt you, they have no idea what it means to hurt, what "you" is, or how they're going to achieve that goal. They just spit out things that resemble what people have said online. There's no harm from a language model that's literally a "language" model.


You appear to be ignoring a few thousand years of recorded history around what happens when a demagogue gets a megaphone. Human-powered astroturf campaigns were all it took to get randoms convinced lizard people are an existential threat and then -act- on that belief.


I think I'm just going to build and open source some really next gen astroturf software that learns continuously as it debates people online in order to get better at changing people's minds. I'll make sure to include documentation in Russian, Chinese and Corporate American English.

What would a good name be? TurfChain?

I'm serious. People don't believe this risk is real. They keep hiding it behind some nameless, faceless 'bad actor', so let's just make it real.

I don't need to use it. I'll just release it as a research project.


I just don’t see how it’s going to be significantly worse than existing troll farms etc. This prediction appears significantly overblown to me.


Does it really? You thinking LLM-powered propaganda distribution services can't out-scale existing troll farms? Or do a better job of evading spam filters?


No I’m thinking that scaling trolls up has diminishing returns and we’re already peak troll.


Any evidence or sources for that? I just don't know how that would be knowable to any of us.

Yuval Noah Harari gave a great talk the other day on the potential threat to democracy from the current state of the technology - https://youtu.be/LWiM-LuRe6w


Only time will tell.


It's not like there isn't a market waiting impatiently for the product...


It's definitely not something I would attempt to productize and profit off of. I'm virtually certain someone will, and I'm sure that capability is being worked on as we speak, since we already know this type of thing occurs at scale.

My motivation would simply be to shine a light on it. Make it real for people, so we have things to talk about other than just the hypotheticals. It's the kind of tooling that, if you're seriously motivated to employ it, you'd probably prefer it remain secret or undetected at least until after it had done its work for you. I worry that the 2024 US election will be the real litmus test for these things. All things considered, it'd be a shame if we go through another Cambridge Analytica moment that in hindsight we really ought to have seen coming.

Some people have their doubts, and I understand that. These issues are so complex that no one individual can hope to have an accurate mental model of the world that is going to serve them reliably again and again. We're all going to continue to be surprised as events unfold, and the degree to which we are surprised indicates the degree to which our mental models were lacking and got updated. That to me is why I'm erring on the side of pessimism and caution.


So the LLM demagogue is going to get people to create gray goo or make a lot of paper clips?


A language model can do many things based on language instructions, some harmless, some harmful. They are both instructable and teachable. Depending on the prompt, they are not just harmless LLMs.


> They're still terrible at logical reasoning.

Are they even trying to be good at that? Serious question; using an LLM as a logical processor is as wasteful and as well-suited as using the Great Pyramid of Giza as an Airbnb.

I've not tried this, but I suspect the best way is more like asking the LLM to write a Coq script for the scenario, instead of trying to get it to solve the logic directly.
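
Haven't tried it, but the loop would look roughly like the sketch below. `call_llm` is a hypothetical stand-in for whatever completion API you use, and it assumes `coqc` is installed locally; the point is that the proof checker, not the model, is the thing whose verdict you trust.

    import subprocess
    import tempfile

    def call_llm(prompt: str) -> str:
        """Hypothetical stand-in for an LLM completion call."""
        raise NotImplementedError

    def check_with_coq(scenario: str):
        """Ask the model for a Coq script, then let coqc judge it."""
        script = call_llm(
            "Translate the following scenario into a Coq theorem and prove it. "
            "Output only Coq source, no commentary:\n" + scenario
        )
        with tempfile.NamedTemporaryFile(suffix=".v", mode="w", delete=False) as f:
            f.write(script)
            path = f.name
        # coqc exits non-zero if the proof doesn't check; that's the trustworthy signal,
        # however fluent the model's output looked.
        result = subprocess.run(["coqc", path], capture_output=True, text=True)
        return result.returncode == 0, script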


> using the Great Pyramid of Giza as an AirBnB

, were you allowed to do it, would be an extremely profitable venture. Taj Mahal too, and yes, I know it's a mausoleum.


I can see the reviews in my head already:

1 star: No WiFi, no windows, no hot water

1 star: dusty

1 star: aliens didn't abduct me :(

5 stars: lots of storage room for my luggage

4 stars: service good, but had weird dream about a furry weighing my soul against a feather

1 star: aliens did abduct me :(

2 stars: nice views, but smells of camel


Indeed, AI reinforcement-learning to deal with formal verification is what I'm looking forward to the most. Unfortunately it seems a very niche endeavour at the moment.


I was looking at the A100 80GB cards. $14k a pop. We're gonna see another GPU shortage when these models become less resource dependent. CRYPTO era all over again.


Growing? Or have the same voices who have been saying it since the aughts suddenly been platformed?


Yes, growing. It's not that the Voices have suddenly been "platformed" - it's that the field made a bunch of rapid jumps which made the message of those Voices more timely.

Recent developments in AI only further confirm that the logic of the message is sound, and it's just that people are afraid of the conclusions. Everyone has their limit for how far to extrapolate from first principles before giving up and believing what one would like to be true. It seems that for a lot of people in the field, AGI X-risk is now below that extrapolation limit.


What are the actual new advancements? LLMs to me are great at faking AGI but are nowhere near actually being a workable general AI. The biggest example to me is that you can correct even the newest ChatGPT and ask it to be truthful, but it'll make up the same lie within the same continuous conversation. IMO the difference between being able to act truth-y and actually being truthful is a huge gap that involves the core ideas of what separates an actual AGI from a really good chatbot.

Maybe it'll turn out to be a distinction that doesn't matter but I personally still think we're a ways away from an actual AGI.


>Maybe it'll turn out to be a distinction that doesn't matter but I personally still think we're a ways away from an actual AGI.

if you had described GPT to me 2 years ago I would have said no way, we're still a long way away from a machine that can fluidly and naturally converse in natural language and perform arbitrary logic and problem solving, and yet here we are.

I very much doubt that in 5 years time we'll be talking about how GPT peaked in 2023.


Seriously. It's worth pausing for a minute to note that the Turing Test has been entirely solved.

In fact, it has been so thoroughly solved that anyone can download an open-source solution and run it on their computer.

And yet, the general reaction of most people seems to be, "That's kind of cool, but why can't it also order me a cheeseburger?"


It has not been solved. Even GPT-4, as impressive as it is for some use cases, is dumb and I can tell the difference between it and a human in a dozen sentences just by demanding sufficient precision.

In some contexts, will some people be caught out? Absolutely. But that's been happening for a while now.


"Dumb" isn't why the Turing Test isn't solved. (Have you seen unmoderated chat with normal people? Heck, even smart people outside the domain of expertise; my mum was smart enough to get into university in the UK in the early 60s, back when that wasn't the default, but still believed in the healing power of crystals, homeopathic sodium chloride and silicon dioxide, and Bach flower remedies…)

ChatGPT (I've not got v4) deliberately fails the test by spewing out "as a large language model…", but also fails incidentally by having an attention span similar to my mother's shortly after her dementia diagnosis.

The problem with 3.5 is that it's simultaneously not mastered anything, and yet also beats everyone in whatever they've not mastered — an extremely drunk 50,000 year old Sherlock Holmes who speaks every language and has read every book just isn't going to pass itself off as Max Mustermann in a blind hour-long trial.


The lack of an ability to take in new information is maybe the crux of my issues with the LLM-to-AGI evolution. To my understanding, the only way to have it even kind of learn something is to include it in a preamble it reprocesses every time, which is maybe workable for small facts but breaks down for updating it beyond the 202X corpus it was trained on.
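
For what it's worth, that preamble workaround looks roughly like the sketch below (`call_llm` is a hypothetical stand-in for a completion API, and the facts are made-up examples). It also makes the scaling problem obvious: every new fact has to be re-sent and re-processed on every single call, which works for a handful of facts and clearly can't absorb a fresh web corpus.

    def call_llm(prompt: str) -> str:
        """Hypothetical stand-in for a completion API."""
        raise NotImplementedError

    # Facts from after the model's training cutoff; they must be re-sent every time.
    NEW_FACTS = [
        "It is currently May 2023.",
        "Our internal tool 'Foo' was renamed 'Bar' last quarter.",  # made-up example fact
    ]

    def ask(question: str) -> str:
        preamble = "Treat the following as true:\n" + "\n".join(f"- {f}" for f in NEW_FACTS)
        # The whole preamble is reprocessed on every request, so context (and cost)
        # grows with every fact you want the model to 'know'.
        return call_llm(preamble + "\n\nQuestion: " + question)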


Mmm. Well, possibly.

On the one hand, what I was saying here was more about the Turing Test than about AGI. Sometimes it gets called the AGI, sometimes it's "autocomplete on steroids", but even if it is fancy autocomplete, I think 3.5 has the skill to pass a short Turing Test, but not the personality, and it needs a longer "short-term memory"-equivalent than 3.5 for a full Turing Test.

On the other hand, as I (sadly) don't get paid to create LLMs, I've only got the kind of superficial awareness of how they work that comes from podcasts and the occasional blog post, which means ChatGPT might very well understand ChatGPT better than I do.

Can GPT-[3.5, 4] be prompted to make itself?


As impressive as GPT4 is, it still makes mistakes humans would not, and so I wouldn’t say it passes the Turing Test.

I generated this just now:

Me: Jack and Jill are sitting next to each other in a room. There is no one else in the room. The person sitting next to Jack is sad. The person sitting next to Jill is angry. Who is sad, Jack or Jill?

GPT4: Based on the information provided, it is not possible to determine who is sad and who is angry, as there is a contradiction in the given statements. If Jack and Jill are the only ones in the room and sitting next to each other, then the person sitting next to Jack would be Jill and the person sitting next to Jill would be Jack. The statements about their emotions conflict with each other, so it is not possible to accurately determine who is sad or angry.


Jack and Jill are sitting next to each other in a room. There is no one else in the room. The person sitting next to Jack is sad. The person sitting next to Jill is angry. Who is sad, Jack or Jill?

ChatGPT: Since Jack and Jill are the only two people in the room and they are sitting next to each other, the person sitting next to Jack is Jill and the person sitting next to Jill is Jack. Given the conditions you provided:

Jill is the one who is sad because she is sitting next to Jack. Jack is the one who is angry because he is sitting next to Jill.


Does the fact you both got different answers make it better or worse at reasoning?


>And yet, the general reaction of most people seems to be, "That's kind of cool, but why can't it also order me a cheeseburger?”

Well yeah. Imagine you tell a small child that knows about calculators, “Hey can you work out 18763 + 38284, for me?” They might struggle and then maybe fetch a calculator.

The LLMs attempt to predict the answer. WTF? It's a computer and it can see that that's a plus sign. Just understand that it's addition, and use the rest of your computer brain to do the sum. Hell, it is connected to the internet and we just taught you everything from before 2021. Just call out to Wolfram and give me the answer.

But that’s not how computers work. And we keep saying “AI” but that I is doing a lot of heavy lifting.
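
The "just see the plus sign and do the sum" behaviour is easy to bolt on outside the model, which is basically what tool/plugin setups do. A toy sketch, with a hypothetical `call_llm` fallback for everything that isn't bare arithmetic:

    import operator
    import re

    def call_llm(prompt: str) -> str:
        """Hypothetical fallback for prompts that aren't plain arithmetic."""
        raise NotImplementedError

    ARITH = re.compile(r"(\d+)\s*([+\-*/])\s*(\d+)")
    OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.truediv}

    def answer(prompt: str) -> str:
        m = ARITH.search(prompt)
        if m:  # exact arithmetic: no prediction involved
            a, sym, b = int(m.group(1)), m.group(2), int(m.group(3))
            return str(OPS[sym](a, b))
        return call_llm(prompt)  # everything else still goes to the model

    print(answer("Hey can you work out 18763 + 38284 for me?"))  # -> 57047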


Again, I'm not really saying GPT has peaked; I'm saying there's a categorical difference between GPT and AGI. A good enough fake might perform well enough to function like one, but I have my doubts that it will. Without a way to deal with, and in some sense of the word understand, facts, I don't think LLMs are suitable for use as anything beyond an aide for humans (for starters because they can't determine internally what is a fact and what is a hallucination, so you have to constantly check their work).


The fact that it’s a system you’d even consider to be “lying” or “truthful” is a huge advance over anything available 5 years ago.


That's more a convenience of language than an actual "It's Alive!". Calling them hallucinations or inaccuracies is unwieldy, and the former has the same kind of implied attribution of a mind. We know for sure that's not there; my internal model for these things is just a stupendously complex Markov chain, because to my understanding that's all LLMs are currently doing.


Sources please. Every expert interview I've seen with AI researchers who have been in the game since the beginning has said the same: GPTs are not a massive breakthrough in the field of AI research.


> Sources please.

My own eyes? The hundreds of thousands of scientific papers, blog posts, news reports and discussion threads that have covered this ever since ChatGPT appeared, and especially in the last two months as GPT-4 rolled out?

At this point I'd reconsider if the experts you listened to are in fact experts.

Seriously. It's like saying the Manhattan Project wasn't a massive breakthrough in experimental physics or military strategy.


It was Yann LeCun. His professional experience and knowledge of the AI development timeline outweighs your opinions, imo. Thanks for confirming you have no sources.


> it's that the field made a bunch of rapid jumps

I wish I knew what we really have achieved here. I try to talk to these things, via the turbo-3.5 API, and all I get is broken logic and twisted moral reasoning, all due to OpenAI manually breaking their creation.

I don't understand their whole filter business. It's like we found a 500-year-old nude painting, a masterpiece, and 1800s puritans painted a dress on it.

I often wonder if the filter is more to hide its true capabilities.


> I wish I knew what we really have achieved here. I try to talk to these things, via the turbo-3.5 API, and all I get is broken logic

Try to get your hands on GPT-4, even if it means paying the $20/mo subscription for ChatGPT Plus. There is a huge qualitative jump between the two models.

I got API access to GPT-4 some two weeks ago; my personal experience is, GPT-3.5 could handle single, well-defined tasks and queries well, but quickly got confused by anything substantial. Using it was half feelings of amazement, and half feelings of frustration. GPT-4? Can easily handle complex queries and complex tasks. Sure, it still makes mistakes, but much less frequently. GPT-4 for me is 80% semi-reliable results, 20% trying to talk it out of pursuing directions I don't care about.

Also, one notable difference: when GPT-4 gives me bad or irrelevant answers, most of the time this is because I didn't give it enough context. I.e. it's my failure at communicating. A random stranger, put in place of GPT-4, would also get confused, and likely start asking me questions (something LLMs generally don't do yet).

> I don't understand their whole filter business.

Part preferences, part making its "personality" less disturbing, and part PR/politics - last couple times someone gave the general public access to an AI chatbot, it quickly got trolled, and then much bad press followed. Doesn't matter how asinine the reaction was - bad press is bad press, stocks go down. Can't have it.

> I often wonder if the filter is more to hide its true capabilities.

I don't think it's to hide the model's capabilities, but it's definitely degrading them. Kind of expected - if you force-feed the model with inconsistent and frequently irrational overrides to highly specific topics, don't be surprised if the model's ability to (approximate) reason starts to break down. Maybe at some point LLMs will start to compartmentalize, but we're not there yet.


>I often wonder if the filter is more to hide its true capabilities.

right now we're all sharing a slice of GPT. I wouldn't be at all surprised if there's some uber GPT (which requires a lot more processing per response) running in a lab somewhere that blows what's publicly available out of the water.


You seem to be making several points at once and I'm not sure they all join up?


Lately I have been picturing comments, and this is truly iconic haha.


How do we define a general intelligence?


When the sky is getting to a dark shade of red it makes sense to hear out the doomsayers


And the vast majority of the time it's just a nice sunset.


I'm so glad that we 100% know for sure that this too is the vast majority of the time.


a sunset at lunch time hits different


Growing is quite apt here. No matter what you or I think, more and more people are getting the sense of AI coming and talking about it.


I'm not following this "good ideas must come from an ideologically pure source" thing.

Shouldn't we be evaluating ideas on the merits and not categorically rejecting (or endorsing) them based on who said them?


> Shouldn't we be evaluating ideas on the merits and not categorically rejecting (or endorsing) them based on who said them?

The problem is when only the entrenched industry players & legislators have a voice, there are many ideas & perspectives that are simply not heard or considered. Industrial groups have a long history of using regulations to entrench their positions & to stifle competition...creating a "barrier to entry" as they say. Going beyond that, industrial groups have shaped public perception & the regulatory apparatus to effectively create a company store, where the only solutions to some problem effectively (or sometimes legally) must go through a small set of large companies.

This concern is especially pertinent now, as these technologies are unprecedentedly disruptive to many industries & private life. Using worst-case-scenario fear mongering as a justification to regulate the vast majority of usage that will not come close to these fears is disingenuous & almost always an overreach of governance.


> there are many ideas & perspectives that are simply not heard or considered.

Of course, but just because those ideas are unheard doesn't mean they are going to be any better.

An idea should stand on its own merits, and be evaluated objectively. It doesn't matter who was doing the proposing.

Also the problem isn't that bad ideas might get implemented, but that the legislature isn't willing or able to make updates to laws that encoded a bad idea. Perhaps it isn't known that it is a bad idea until after the fact, and the methods of democracy we have today aren't easily able to force updates to bad laws encoding bad ideas.


> Of course, but just because those ideas are unheard doesn't mean they are going to be any better.

It probably does mean it's better, at least for the person with the perspective. Too bad only a very few get a seat at the table to advocate for their own interests. It would be better if everyone had agency to advocate for their interests.

> Also the problem isn't that bad ideas might get implemented, but that the legislature isn't willing or able to make updates to laws that encoded a bad idea

First, this is a hyped up crisis where some people are claiming it will end humanity. There have been many doomsday predictions & people scared by these predictions are effectively scammed by those fomenting existential fear. It's interesting that the representatives of large pools of capital are suddenly existentially afraid when there is open source competition.

Second, once something is in the domain of government it will only get more bloated & controlled by monied lobbyists. The legislatures controlled by lobbyists will never make it better, only worse. There have been so many temporary programs that continue to exist & expand. Many bloated omnibus bills too long to read passed under some sort of "emergency". The government's tendency is to grow & to serve the interests of the corporations that pay the politicians. Fear is an effective tool to convince people to accept things against their interests.


I can only say +1, and I know how much HN hates that, but ^This.


The recent history of tech CEOs advocating for regulations only they can obey has become so blatant that any tech CEO who advocates for regulation should be presumed guilty until proven innocent.


Sure, go execute him for all I care.

My point was that an idea should not need attribution for you to know whether it's good or bad, for your own purposes. I can't imagine looking at a proposal and deciding whether to support or oppose it based on the author rather than the content.

If Altman is that smart and manipulative, all he has to do is advocate the opposite of what he wants and you'll be insisting that we must give him exactly what he wants, on principle. That's funny with kids but no way to run public policy.


I think what they are trying to say is that Sam Altman is very smart, but misaligned. If we assume that he is 1) sufficiently smart and 2) motivated to see OpenAI succeed, then his suggestions must be assumed to lead to a future where OpenAI is successful. If that future looks like it contradicts a future we want (for instance, user-controlled GPT-4 level AIs running locally on every machine), his suggestions should therefore be treated as reliably radioactive.


The subtle difference between the original statement and yours:

Ideas that drive governing decisions should be globally good - meaning there should be more than just @sama espousing them.


You're defending an argument that is blatantly self contradictory within the space of two sentences.

A) "anything he suggests should be categorically rejected because he’s just not in a position to be trusted."

B) "If what he suggests are good ideas then hopefully we can arrive at them in some other way with a clean chain of custody."

These sentences directly follow each other and directly contradict each other. Logically you can't categorically (the categorical is important here. Categorical means something like "treat as a universal law") reject a conclusion because it is espoused by someone you dislike, while at the same time saying you will accept that conclusion if arrived at by some other route.

"I will reject P if X proposes P, but will accept P if Y proposes P." is just poor reasoning.


More clearly said than I managed, yep.

But I suppose it comes down to priorities. If good policy is less important than contradicting P, I suppose that approach makes sense.


Not when it comes to politics.

You'll be stuck in the muck while they're laughing their ass off all the way to the bank.


It doesn't even matter if "his heart is pure" ... Companies are not run that way.

We have lawyers.


Aside from who is saying them, the premise holds water.

AI is beyond borders, and thus regulating it is unenforceable in practice.

The top-minds-of-AI are a group that cannot be regulated.

-

AI isn't about the industries it will disrupt; AI is about the policy-makers it will expose.

THAT is what they are afraid of.

--

I have been able to run financial lenses over organizations, analyses that even with rudimentary BI would have taken me weeks or months, and find insights in minutes.

AI regulation right now, in this infancy, is about damage control.

---

It's the same as the legal weed market. You think Bain Capital just all of a sudden decided to jump into the market without setting up their spigot?

Do you think that Halliburton under Cheney was able to set up their supply chains without Cheney as head of KBR/Halliburton/CIA/etc...?

Yeah, this is the same play; AI is going to be squashed until they can use it to profit off of you.

Have you watched ANIME ever? Yeah... it's here now.


This is a very interesting post. I don't understand this part: <<AI is the policy-makers it will expose>> Can you help to explain a different way?

And hat tip to this comment:

    Have you watched ANIME ever? Yeah... it's here now.
The more I watch the original Ghost in the Shell, the more I think it has incredible foresight.


> I don't understand this part: <<AI is the policy-makers it will expose>> Can you help to explain a different way?

===

Policy makers will not understand what they are doing.


ANIME predicted the exact corporate future...

Look at all the anime cyber cities...

It's not as high tech as you may imagine, but the surveillance is there.

EDIT: your "company" is watching every


Which anime(s)? If ANIME is the title, that's going to be hard to search.

Do you mean like Serial Experiments Lain?


One idea / suggestion: The original Ghost in the Shell.


I remember when a different Sam, Mr. Bankman-Fried, came to testify and asked a different government agency, the CFTC, to oversee cryptocurrency and put regulations and licenses in place.

AI is following the path of Web3


That was entirely different, and a play to muddy the regulatory waters and maybe buy him time: the CFTC is much smaller (budget, staff) than the SEC, and less aggressive in criminal enforcement. Aided by a bill introduced by crypto-friendly Sens Lummis and Gillibrand [https://archive.ph/vqHgC].


At least AI has legitimate, actual use cases.


True but also other use cases with much worse outcomes than blockchain could ever have


I said this about Sam Altman and OpenAI years ago and got poo-pooed repeatedly in various fora. "But It's OPEN!" "But it's a non-profit!" "But they're the good guys!"

And here we are - Sam trying to lock down his first mover advantage with the boot heel of the state for profit. It's fucking disgusting.


So true. It’s one thing to treat companies at face value when it’s just another X, but when they are capable of changing society in such a way, their claims of openness should be treated as marketing.


As a wise person once said

> You either die a hero, or live long enough to become the villain

Sam Altman has made the full character arc


yeah sorry, that is a statement about leadership and responsibility to make the "tough decisions", like going to war, or deciding who the winners and losers are when deciding a budget that everyone contributed to via taxes. NOT a statement meant to whitewash VC playbooks.


No... it's a line from a terrific movie called The Dark Knight, and it's about the ease with which public perception is manipulated.


No, that line is specifically about Julius Caesar being appointed by the Senate as dictator and then never giving up his power.

Though I agree it seems to fit here. Being granted an oligopoly in the name of protecting the people and all that.


Source? I'm going to need a receipt for my downvote!

Here's mine: https://movies.stackexchange.com/questions/10572/is-this-quo...


I’ve seen the movie, and it’s in response to Rachel saying “the last dictator they appointed was named Caesar, and he never gave up his power.”

I also didn’t downvote you, and it’s against guidelines to bring that stuff up


This has nothing to do with perception being manipulated


The story does not, but the quote does


> To expound, this just seems like a power grab to me, to "lock in" the lead and keep AI controlled by a small number of corporations that can afford to license and operate the technologies. Obviously, this will create a critical nexus of control for a small number of well connected and well heeled investors and is to be avoided at all costs.

Exactly. Came here to say pretty much the same thing.

This is the antithesis of what we need. As AI develops, it's imperative that AI be something that is open and available to everyone, so all of humanity can benefit from it. The extent to which technology tends to exacerbate concentration of power is bad enough as it is - the last thing we need is more regulation intended to make that effect even stronger.


If you don’t have a moat, just dig one!


> open source "core inspectable" AIs that millions of people can examine and tear apart, including and ESPECIALLY the training data and processes that create them.

True open source AI also strikes me as prerequisite for fair use of original works in training data. I hope Congress asks ClosedAI to explain what’s up with all that profiting off copyrighted material first before even considering the answer.


Absolutely. It's going to absolutely shred the trademark and copyright systems, if they even apply (or are extended to apply) which is a murky area right now. And even then, the sheer volume of material created by a geometric improvement and subsequent cost destruction of virtually every intellectual and artistic endeavor or product means that even if you hold the copyright or trademark, good luck paying for enforcement on the vast ocean of violations intrinsic in the shift.

What people also fail to understand is that AI is largely seen by the military industrial complex as a weapon to control culture and influence. The most obvious risk of AI — the risk of manipulating human behavior towards favored ends — has been shown to be quite effective right out of the gate. So, the back channel conversation has to be to put it under regulation because of its weaponization potential, especially considering the difficulty in identifying anyone (which of course is exactly what Elon is doing with X 2.0 — it's a KYC id platform to deal with this exact issue with a 220M user 40B head start).

I mean, the dead internet theory is turning out to be true, and half the traffic on the Web is already bot driven. Imagine when it's 99%, which the proliferation of this technology will inevitably produce simply because of the economics.

Starting with open source is the only way to get enough people looking at the products to create any meaningful oversight, but I fear the weaponization fears will mean that everything is locked away in license clouds with politically influential regulatory boards simply on the proliferation arguments. Think of all the AI technologists who won't be versed in this technology unless they work at a "licensed company" as well — this is going to make the smaller population of the West much less influential in the AI arms race, which is already underway.

To me, it's clear that nobody in Silicon Valley or on the Hill has learned a damn thing from the prosecution of hackers and the subsequent bloodbath in cybersecurity that resulted from the exact same kinds of behavior back in the early to mid-2000s. We ended up driving our best and brightest into the grey and black areas of infosec and security, instead of out in the open running companies where they belong. This move would do almost the exact same thing to AI, though I think you have to be a tad of an Asimov or Bradbury fan to see it right now.

I don't know, that's just how I see it, but I'm still forming my opinions. LOVE LOVE LOVE your comment though. Spot on.

Relevant articles:

https://www.independent.co.uk/tech/internet-bots-web-traffic...

https://theconversation.com/ai-can-now-learn-to-manipulate-h....


> What people also fail to understand is that AI is largely seen by the military industrial complex as a weapon to control culture and influence.

Could you share the minutes from the Military Industrial Complex strategy meetings where this was discussed? Thanks.


"Hello, is this Lockheed? Yea? I'm an intern for happytiger on Hackernews. Some guy named Simon H. wants the meeting minutes for the meeting where we discussed the weaponization potential for AI."

[pause]

"No? Ok, I'll tell him."


The other weaponisation plans. The one about undermining western democracy and society. Yes that’s it, the one where we target our own population. No not intelligence gathering, yes that’s it, democratic discourse itself. Narrative shaping on Twitter, the Facebook groups bots, that stuff. The material happytiger was talking about as fact because obviously they wouldn’t make that up. Thanks.


> To expound, this just seems like a power grab to me, to "lock in" the lead and keep AI controlled by a small number of corporations that can afford to license and operate the technologies.

If you actually watch the entire session, Altman does address that and recommend to Congress that regulations 1) not be applied to small startups, individual researchers, or open source, and 2) that they not be done in such a way as to lock in a few big vendors. Some of the Senators on the panel also expressed concern about #2.


> not be applied to small startups

how will that work? Isn't OpenAI itself a small startup? I don't see how they can regulate AI at all. Sure, the resources required to push the limits are high right now but hardware is constantly improving and getting cheaper. I can take the GPUs out of my kids computers and start doing fairly serious AI work myself. Do i need a license? The cat is out of the bag, there's no stopping it now.


That would make the regulations fairly pointless unless you think only mega-corps will ever be able to afford the compute for these things.

Compute continues to get cheaper and cheaper. We have not hit the physics wall yet on that.

That, and if someone cracks efficient distributed training in a swarm-type configuration, then you could train models SETI@home style. Lots of people would be happy to leave a gaming PC on to help create open source LLMs. The data requirements might be big, but I just got gigabit fiber installed in my house so that barrier is vanishing too.


The other day someone offered up their 200x GPU crypto mining cluster to train uncensored models after the incident on HuggingFace where someone threatened to get the uploader of the uncensored models fired citing safety issues.


That’s bizarre, what is unsafe about an uncensored LLM? Or I guess the same question in a different way, how does censoring an LLM make it safe? I could see an uncensored LLM being bad PR for a company but unsafe? How?


That individual in particular was pushing some left-wing talking points.

Though the other day Yuval Noah Harari gave a great talk on the potential threat to democracy - https://youtu.be/LWiM-LuRe6w


Shit, vast.ai will pay you right now for access to your gaming PC's GPU


That's what is said...


- Most, but not all, of the scariest uses for ai are those potentially by governments against their own people.

- The next most scary uses are by governments against the people of other countries.

- After that, corporate use of ai against their employees and customers is also terrifying;

- next, the potential for individuals or small organizations seeking to use it for something terrorism-related. Eg, 3d printers or a lab + an ai researcher who helps you make dangerous things I suppose

- near the bottom of the noteworthy list is probably crime. eg, hacking, blackmail, gaslighting, etc

These problems will probably all come up in a big way over the next decade; but limiting ai research to the government and their lackeys? That's extremely terrifying. To prevent the least scary problems, we're jumping into the scariest pool with both feet

Look at how China has been using AI for the last 5-10 years: millions of facial recognition cameras, a scary police force, and a social credit system. In 10-20 years, how much more sophisticated will this be? If the people wanted to rebel, how on Earth will they?

Hell, with generative ai, a sophisticated enough future state could actually make the Dead Internet Theory a reality

That's the future of ai: a personal, automated boot stomping on everybody individually and collectively forever, with no ability to resist


This is the same move SBF was trying to do. Get all cozy with the people spending their time in the halls of power. Telling them what they want to hear, posturing as the good guy.

He is playing the game, and this guy's ambition is colossal. I don't blame him, but we should not give him too much power.


Seems like speculation on very thin grounds to me.


He already has precedent with OpenAI pivoting from open to fully closed overnight once the tech kind of worked. We know the guy is a piece of work, so let's not give him any benefit of the doubt.


> We know the guy is a piece of work

No, we do not. And the same is true of any person who we know of mostly from news stories. News you read is NOT an unbiased sampling of information about a person due to all the selection effects.


What would you need to consider this point as less speculative? Direct proof? Is motive not relevant at all?


In general I'm extremely skeptical of people ascribing motives to other people that they don't know personally or haven't spent at least 100 hours studying. The reasons for this skepticism are a bit hard to elucidate in a quick post but include things like information sampling bias issues, having seen people make motive inferences I know to be incorrect, and the Fundamental Attribution Error.


Have you tried watching actual soap operas?


I would triple vote this comment. 100%. It seems like a group of elite AI companies who already stole the data from the internet are gonna decide who does what! We need to regulate only the big players, and allow small players to do whatever they want.


Open source doesn't mean outside the reach of regulation, which I would guess is your real desire. You downplay AI's potential danger while well knowing that we are at a historic inflection point. I believe in democracy as the worst form of government except all those other forms that have been tried. We the people must be in control of our destiny.


Hear, hear. Excellent point, and I don't mean to imply it shouldn't be regulated. However, it has been my general experience that concentrating immense power in governments doesn't typically lead to more security, so perhaps we just have a difference of philosophy.

Democracy will not withstand AI when it's fully developed. Let me offer a better-written explanation of my general views than I could ever muster up for a comment on HN, in the form of a quote from an article by Dr. Thorsten Thiel (Head of the Research Group "Democracy and Digitalization" at the Weizenbaum Institute for the Networked Society):

> The debate on AI’s impact on the public sphere is currently the one most prominent and familiar to a general audience. It is also directly connected to long-running debates on the structural transformation of the digital public sphere. The digital transformation has already paved the way for the rise of social networks that, among other things, have intensified the personalization of news consumption and broken down barriers between private and public conversations. Such developments are often thought to be responsible for echo-chamber or filter-bubble effects, which in turn are portrayed as root causes of the intensified political polarization in democracies all over the world. Although empirical research on filter bubbles, echo chambers, and societal polarization has convincingly shown that the effects are grossly overestimated and that many non-technology-related reasons better explain the democratic retreat, the spread of AI applications is often expected to revive the direct link between technological developments and democracy-endangering societal fragmentation.

> The assumption here is that AI will massively enhance the possibilities for analyzing and steering public discourses and/or intensify the automated compartmentalizing of will formation. The argument goes that the strengths of today's AI applications lie in the ability to observe and analyze enormous amounts of communication and information in real time, to detect patterns and to allow for instant and often invisible reactions. In a world of communicative abundance, automated content moderation is a necessity, and commercial as well as political pressures further effectuate that digital tools are created to oversee and intervene in communication streams. Control possibilities are distributed between users, moderators, platforms, commercial actors and states, but all these developments push toward automation (although they are highly asymmetrically distributed). Therefore, AI is baked into the backend of all communications and becomes a subtle yet enormously powerful structuring force.

> The risk emerging from this development is twofold. On the one hand, there can be malicious actors who use these new possibilities to manipulate citizens on a massive scale. The Cambridge Analytica scandal comes to mind as an attempt to read and steer political discourses (see next section on electoral interference). The other risk lies in a changing relationship between public and private corporations. Private powers are becoming increasingly involved in political questions and their capacity to exert opaque influences over political processes has been growing for structural and technological reasons. Furthermore, the reshaping of the public sphere via private business models has been catapulted forward by the changing economic rationality of digital societies such as the development of the attention economy. Private entities grow stronger and become less accountable to public authorities; a development that is accelerated by the endorsement of AI applications which create dependencies and allow for opacity at the same time. The ‘politicization’ of surveillance capitalism lies in its tendency, as Shoshana Zuboff has argued, to not only be ever more invasive and encompassing but also to use the data gathered to predict, modify, and control the behavior of individuals. AI technologies are an integral part in this ‘politicization’ of surveillance capitalism, since they allow for the fulfilment of these aspirations. Yet at the same time, AI also insulates the companies developing and deploying it from public scrutiny through network effects on the one hand and opacity on the other. AI relies on massive amounts of data and has high upfront costs (for example, the talent required to develop it, and the energy consumed by the giant platforms on which it operates), but once established, it is very hard to tame through competitive markets. Although applications can be developed by many sides and for many purposes, the underlying AI infrastructure is rather centralized and hard to reproduce. As in other platform markets, the dominant players are those able to keep a tight grip on the most important resources (models and data) and to benefit from every individual or corporate user. Therefore, we can already see that AI development tightens the grip of today’s internet giants even further. Public powers are expected to make increasing use of AI applications and therefore become ever more dependent on the actors that are able to provide the best infrastructure, although this infrastructure, for commercial and technical reasons, is largely opaque.

> The developments sketched out above – the heightened manipulability of public discourse and the fortification of private powers – feed into each other, with the likely result that many of the deficiencies already visible in today’s digital public spheres will only grow. It is very hard to estimate whether these developments can be counteracted by state action, although a regulatory discourse has kicked in and the assumption that digital matters elude the grasp of state regulation has often been proven wrong in the history of networked communication. Another possibility would be a creative appropriation of AI applications through users whose democratic potential outweighs its democratic risks thus enabling the rise of differently structured, more empowering and inclusive public spaces. This is the hope of many of the more utopian variants of AI and of the public sphere literature, according to which AI-based technologies bear the potential of granting individuals the power to navigate complex, information-rich environments and allowing for coordinated action and effective oversight (e.g. Burgess, Zarkadakis).

Source: https://us.boell.org/en/2022/01/06/artificial-intelligence-a...

Social bots and deep fakes will be so good so quickly — these being the primary technologies discussed in terms of how democracy can survive — that I doubt there will be another election without extensive use of these technologies in a true plethora of capacities, from influence marketing to outright destabilization campaigns. I'm not sure what government can deal with a threat like that, but I suspect the recent push to revise tax systems and create a single global standard for multinational taxation (recently the subject of an excellent talk at the WEF) is more than tangentially related to the AI debate.

So, is it a transformational technology that will liberate mankind, or a nuclear bomb? Because ultimately, this is the question in my mind.

Excellent comment, and I agree with your sentiment. I just don't think concentrating control of the technology before it's really developed is wise or prudent.


It's possible that the tsunami of fakes is going to break down trust in a beneficial way where people only believe things they've put effort into verifying.


*Hear, hear.


Thank you. Corrected.


> seems like a power grab to me

If you're not at the table, you're on the menu.


> happytiger:

> We need to MAKE SURE that AI as a technology ISN'T controlled by a small number of powerful corporations with connections to governments.

This, absolutely this. I am really concerned about his motives in this case. AI has massive potential to improve the world. I find it highly suspicious that an exec at one of the leading companies in AI right now wants to lock it up. (Ever read the intro to Max Tegmark's book?)


Yes, this is the first-to-market leaders wanting to raise the barriers to entry to lock out competition.


Sam is a snake. The goal is to fuck everyone else. He is scared that someone will beat his tech and the hype will be gone. Which is going to happen. A matter of months.


I suspect that he knows that this is a 'local maxima', as someone put it, and the field will stagnate once the size and attention of models approach the limits of available computing resources. He wants others kept out of the field not only because they could beat him but because he wants to hoard available processing power.


That is a well-thought-out possibility. But with MS developing their own in-house SoC, that is not going to be an issue, as they can always prioritize their investments. But anything is possible. We need Apple to release some competitive, dedicated low-power GPUs.


I think it's more about the lack of data. GPT-4's training likely already used all publicly available text on Earth and some private databases too.


And how can the government license AI? Do they have any expertise to determine who is and isn't responsible enough to handle it?

A better idea is to regulate around the edges: transparency about the data used to train, regulate the use of copyrighted training data and what that means for the copyright of content produced by the AI, that sort of stuff. (I think the EU is considering that, which makes sense.) But saying some organisations are allowed to work on AI while others aren't, sounds like the worst possible idea.


Citizen, please step away from the terminal, you are not licensed to multiply matrices that large.


When Zappa testified before Congress he was extremely adamant about unsavory outcomes resulting from government control of language expression being more damaging than any unsavory language on its own.

https://societyofrock.com/in-1985-frank-zappa-is-asked-to-te...

Less fulfilling text version:

https://urbigenous.net/library/zappa.html


I am sure he would be thrilled about Google censoring his track titles.


We need someone like him today to take old Fidel DeSantis down a notch or two.


He and Gene Siskel were a very good good-cop/bad-cop pair.


Kinda beating a dead horse here, but I'll never get over the fact that a company called "OpenAI" is spearheading this nonsense.


How do you even end up enforcing licensing here? It's only a matter of time before something as capable as GPT-4 works on a cell phone.


Not necessarily “on” a mobile device. It would be served over the network with the help of 10G. Mobile makers will not allow that kind of power in our hands. =p And of course it will be subscription driven, like GPT Plus, haha


The current generation of AIs are scary to a lot of the initiated, too - both for what they can do now, and what their trajectory of improvement implies.

If you take seriously any downsides, whether misinformation or surveillance or laundering bias or x-risk, how does AI model weights or training data being open source solve them? Open source is a lot of things, but one thing it's not is misuse-resistant (and the "with many eyes all bugs are shallow" thing hasn't proved true in practice even with high level code, much less giant matrices and terabytes of text). Is there a path forward that doesn't involve either a lot of downside risk (even if mostly for people who aren't on HN and interested in tinkering with frontier models themselves, in the worlds where surveillance or bias is the main problem), or significant regulation?

I don't particularly like or trust Altman but I don't think he'd be obviously less self-serving if he were to oppose any regulation.


I feel like the people that are most nervous about AI are the ones that don't understand it at all and those that understand it the most.

The laypeople in the middle, who have been happily plugging prompts into ChatGPT and claiming they are "prompt experts", are the ones most excited.

For those that truly understand AI, there is a lot that you should genuinely be worried about. Now, don't confuse that for saying that we shouldn't work on it or should abandon AI work. I truly believe that this is the next greatest revolution. This is 1,000x more transformative than the industrial revolution, and 100x more transformative than the internet revolution. But it is worth a brief consideration of the effects of our work before we start running into these changes that could have drastic effects on everybody's daily life.


I'll note that you're correct for current gen LLMs, but we could have future actually dangerous things that would indeed need regulating.


> But goodness gracious we need to be going in the EXACT OPPOSITE direction — open source "core inspectable" AIs that millions of people can examine and tear apart, including and ESPECIALLY the training data and processes that create them

Except ... when you look at the problem from a military/national security viewpoint. Do we really want to give this tech away just like that?


Is military capable AI in the hands of few militaries safer than in the hands of many? Or is it more likely to be used to bully other countries who don't have it? If it is used to oppress, would we want the oppressed have access to it? Or do we fear that it gives too much advantage to small cells of extremist to carry out their goals? I can think of pros and cons to both sides.


>Is military capable AI in the hands of few militaries safer than in the hands of many?

Yes. It is. I'm sure hostile, authoritarian states that are willing to wage war with the world like Russia and North Korea will eventually get their hands on military-grade AI. But the free world should always strive to be two steps ahead.

Even having ubiquitous semi-automatic rifles is a huge problem in America. I'm sure Cliven Bundy or Patriot Front would do everything they can to close the gap with intelligent/autonomous weapons, or even just autonomous bots hacking America's infrastructure. If everything is freely available, what would be stopping them?


Your post conveniently ignores the current state of China's AI development but mentions Russia and North Korea. That's an interesting take. There's no guarantee that we are or will continue to be one or even two steps ahead. And what keeps the groups with rifles you mentioned in check? They already have the capability to fight with violence. But there currently exists a counter-balance in the fact they'll get shot at back if they tried to use them. Not trying to take a side here one way or the other. I think there are real fears here. But I also don't think it's this black and white either.


That is a valid thought experiment. I would say it isn't too dissimilar from nuclear weapons. A handful of powerful countries have access to them and smaller countries don't. It creates a large separation between 1st world countries and "everyone else".


Within a few decades there will probably be technology that would allow a semi-dedicated person to engineer and create a bioweapon from scratch if the code were available online. Do you think that's a good idea?


Within a few decades there will probably be technology that would allow a semi-dedicated person to engineer and create a vaccine or medical treatment from scratch if the code was available online. Do you think that's a good idea?


If you mean the US by 'we', it is problematic because AI inventions are happening all over the globe, much more outside the US than inside.


Name one significant advance in the field of LLMs that happened outside the US. Basically all the scientific papers came from Stanford, CMU, and other US universities. And the major players in the field are all American companies (OpenAI + Microsoft, Google, Anthropic, etc.)


Not to mention access to chips. That's becoming more and more difficult for uncooperative states like China and Russia.


Well, chips needed for AI training/inference are a lot simpler than general purpose CPUs. Fabs have already demonstrated a 7nm process with older DUV tech for such chips. They can brute force their way through it – at least for mission-critical use-cases.

https://www.edn.com/the-truth-about-smics-7-nm-chip-fabricat...


DeepMind is owned by Google, but it's British and they've been behind a lot of significant conceptual results in the last couple of years. Most of the other significant progress is just "engineering", so it's all done by US corporations.

Other than that there are also things like RoFormer, but I'm going to assume you won't count that as significant. US universities then certainly don't produce anything significant either, though.


> “just engineering”

This tells me the extent of your knowledge about the challenges with these models.


The theater is in alignment with Congress. Lobbyists and PR types are working behind the scenes 24/7 to bring this narrative together and look in command to the public.

Work on open source locally hosted AI is important. I keep local clones and iterate as I can.


I'd go further and say if only corporations with government license are allowed to create LLMs, then we should NOT have LLMs.

Let the market develop organically.


Happy Tiger I will remember, because I agree totally. Yes, "OpenAI/Microsoft" is the right way to think about this attempt.


You're not wrong, except in so far as that's parochial.

A government-controlled… never mind artificial god, a government-controlled storyteller can be devastating.

I don't buy Musk's claim ChatGPT is "woke" (or even that the term is coherent enough to be tested), but I can say that each government requiring AI to locally adhere to national mythology, will create self-reinforcing cognitive blind spots, because that already happens at the current smaller scale of manual creation and creators being told not to "talk the country down".

But, unless someone has a technique for structuring an AI such that it can't be evil even when you, for example, are literally specifically trying to train it to support the police no matter how authoritarian the laws are, then a fully open source AGI is almost immediately also a perfectly obedient sociopath of $insert_iq_claim_here.

I don't want to wake up to the news that some doomsday cult has used one to design or make a weapon, nor the news that a large religious group is targeting personalised propaganda at me and mine.

Fully open does that by default.

But, you're still right, if we don't grok the AI, the governments can each secretly manipulate the AI and bend it to government goals in opposition to the people.


> I can say that each government requiring AI to locally adhere to national mythology, will create self-reinforcing cognitive blind spots, because that already happens at the current smaller scale of manual creation and creators being told not to "talk the country down".

This is a key point. Every culture and agency and state will want (and deserve) their own homespun AGI. But can we all learn how to accommodate or accept a cultural multiverse when money and resources are zero-sum in many dimensions?

Hannu Rajaniemi's Quantum Thief trilogy gives you a foretaste of where we could end up.


Quantum Thief has a 3.8 on Goodreads. Worth reading?


Very much so



I'd forgotten that headline (and still haven't read the content), but yes, that's one example of how it can go wrong.


[flagged]


It refers to a pretty specific set of views on race and gender.


Nope. It refers to whatever people want it to refer to, because it's a label used by detractors, not a cohesive thing.


This is one definition; OP has another that many millions subscribe to. Numbers make right in the definition game.


Millions or not, it's pretty contentious. That said, I'm probably wasting my time fighting the tide, as with complaining about misuse of the word "literally".


I mean, I've never seen any liberal calling conservatives "woke".


No, of course not. It's a certain demographic who have latched onto the word "woke" as Elon used it: first as a label for ridiculous radical-left ideas, and then for anything they associate with the left that they don't like.


Care to elaborate? I’ve read about 5 different explanations for ‘woke’ the last couple of months.


You can derail any discussion by asking for definitions. Human language is magical in the sense that we can't rigorously define anything (try "Love", "Excitement", "Emancipation" or anything else, really) yet we still seem to be able to have meaningful discussions.

So just because we can't define it, doesn't mean it doesn't exist.


It's not a derail, it's an attempt to understand the other person. If I say Thing is bad and you say Thing is good but we haven't actually defined Thing, then we could be talking past each other and not actually be in disagreement. Text over the Internet is such a limited medium.


CNN's YT channel has this clip of Bill Maher taking a stab at it:

How Bill Maher defines 'woke' https://www.youtube.com/watch?v=tzwC-10O0cw


Bill has his flaws, but he is right.


Is this a definition that people who identify as "woke" would use?

If not, then it seems like just another straw man and set up for talking past one another.


Yep, here’s a good overview: https://en.m.wikipedia.org/wiki/Woke

It’s been around a long time.


There is more than one usage described in that article.

This is the relevant one in this particular thread:

> Among American conservatives, woke has come to be used primarily as an insult.[4][29][42] Members of the Republican Party have been increasingly using the term to criticize members of the Democratic Party,


Woke is a specific ideology that places every individual into a strict hierarchy of oppressor/oppressed according to group racial and sexual identity. It rose to prominence in American culture from around 2012, starting in universities. It revolves around a set of core concepts including privilege, marginalization, oppression, and equity.

Now that we've defined what woke is, I hope we can move on from this 'you can't define woke' canard I keep seeing.

Woke is no more difficult to define than any religion or ideology, except in that it deliberately pretends not to exist ("just basic decency", "just basic human rights") in order to be a more slippery target.

--

*Side note to ward off the deliberate ignorance of people who are trying to find a way to misunderstand - I've attached some notes on how words work:

1- Often, things in the world we want to talk about have many characteristics and variants.

2- Words usually have fuzzy boundaries in what they refer to.

3- Despite the above, we can and do productively refer to such things using words.

4- We can define a thing by mentioning its most prominent feature(s).

-- The above does NOT mean that the definition must encapsulate ALL features of the thing to be valid.

-- The above does NOT mean that a thing with features outside the definition is not what the word refers to.

5- Attempting to shut down discussion by deliberately misunderstanding words or how they work is a sign of an inability to make productive valid points about reality.


> Attempting to shut down discussion by deliberately misunderstanding words or how they work is a sign of an inability to make productive valid points about reality.

Lumping a bunch of things together under a vague term to make it easier to vaguely complain about them is a sign of an inability to make productive valid points about reality.


You do exactly the same thing when you use phrases like "the right wing".


I was about to say "right wing" hasn't shifted radically in the last decade, but then I guess it has shifted significantly in the UK at least; I hear it's also shifted significantly in the US, but I'm not at all confident in the reporting in that case.


That's one definition, sure.

> Now that we've defined what woke is, I hope we can move on from this 'you can't define woke' canard I keep seeing.

Trouble is, I said "[not] coherent enough to be tested" rather than "you can't define it"; and the comment you're replying to gives another definition that is a better pattern match for the following headlines:

"The woke mob can rant for all they're worth, but I'll keep adding Worcester sauce to my spag bol" - Daily Mail, 22 April 2021

"UK builders go WOKE: Study finds three quarters of tradesmen discuss their feelings with colleagues while two thirds shun the fried breakfasts and nearly half say they are history buffs" - Daily Mail, 18 June 2022

Here's another definition of "woke":

"alert to racial prejudice and discrimination" — c. 1930s AAVE

But again, here's a completely different one, one that doesn't directly touch on race issues at all:

"to be woke is to be radically aware and justifiably paranoid. It is to be cognizant of the rot pervading the power structures." — David Brooks, 2017

When a word means everything, it means nothing; when it shifts meaning under your feet as fast as "woke" has, it's as valuable for communicating as the Papiermark in the Weimar Republic was for trading.


Proof by Daily Mail headline, really? Do you think that's convincing to anyone?

The word woke doesn't mean everything, it has a very widely understood meaning. Even though you're literally citing clickbait as a rebuttal, the first article you mention is consistent with the definitions given above:

> "The woke mob can rant for all they're worth, but I'll keep adding Worcester sauce to my spag bol" - Daily Mail, 22 April 2021

This is a reference to woke people's usage of "cultural appropriation" as an attack, arguing that "white" people shouldn't cook or alter the recipes for dishes from other cultures. It's an outgrowth of the obsession with race.

> to be woke is to be radically aware and justifiably paranoid. It is to be cognizant of the rot pervading the power structures.

You say this quote has nothing to do with race. From just a few sentences earlier in the article you're quoting:

> The woke mentality became prominent in 2012 and 2013 with the Trayvon Martin case and the rise of Black Lives Matter. Embrace it or not, B.L.M. is the most complete social movement in America today, as a communal, intellectual, moral and political force.

The reality is that the word woke is a very clear ideology with well understood roots in Marxist oppressor/oppressed worldviews. There isn't actually any lack of understanding of what it means, except amongst woke people themselves who like to believe that they aren't ideological actors following a herd but rather purely rational beings who just all happen to conclude the same things at the same time.


> Proof by Daily Mail headline, really? Do you think that's convincing to anyone?

Presumably the headlines are convincing to Daily Mail readers, of whom there are many.

However, the purpose of me using them is to seek examples of usage which doesn't match the other specified patterns; in this regard, "random large newspaper" should suffice regardless of my personal opinion of them being "should be in fiction section".

I could also have quoted newspapers being upset that the Church of England is "woke" for having gender-neutral pronouns for God, that Lego is "woke" for having a new range of disabled figurines, that the National Trust is "woke" for saying that Henry VIII was disabled in later life, or that Disney is "woke" because of their support for LGBT issues, but I (apparently incorrectly) assumed those examples would be enough.

> it has a very widely understood meaning.

"A" in the sense of exactly one, or at least one? Because I'm agreeing with the second, not the first.

Heck, this thread should be existence proof of there being more than one — if you reply to nothing else here, this one point is what I would ask you to focus on, because it's the most confusing to me. It's like all the people who say all Christians are the same before attacking (sometimes literally) other denominations, or all the Brexit campaigners who say the other Brexit campaigners are actually just Remainers because their vision for Brexit is one they don't like.

> You say this quote has nothing to do with race. From just a few sentences earlier in the article you're quoting:

What I said was:

> Here's another definition of "woke": […] But again, here's a completely different one, one that doesn't directly touch on race issues at all

Key words: "Definition" and "Directly".

And the article is behind a paywall, I got the quote from Wikipedia; do you expect most people using the term — not just people like me, who have seen this done a dozen times with various political clichés and are tired of watching fashions change, but also those who actively use the word to describe a behaviour they're supporting or opposing — to have read exhaustively all the source material before opining politically in public about if "woke" is good or bad, or using it themselves in a new sentence? Or even to pay attention to claims separated by more than a paragraph, especially as you yourself (this isn't to blame you, we're all like this) didn't do that with my words?

Different example of how language breaks away from original context: headlines defending serious professional misconduct by saying "they were just a few bad apples" as if the rest of that quotation fragment didn't exist.

Humans don't have the luxury of being able to mainline the entire internet like LLMs do, we skim and summarise, rhymes make things seem more true, all that kind of thing even before political tribalism turns this into a totem.

Those headlines you don't like? I'm sure I read somewhere that most people read only the headlines before commenting, and most of those who read more only read the first paragraph.

> The reality is that the word woke is a very clear ideology with well understood roots in Marxist oppressor/oppressed worldviews

I've read the Communist Manifesto and I call BS on that, and not just because of the 80 year gap between Das Kapital and Lead Belly. The closest connection I see between them is their incoherence in modern usage, specifically by those who have learned to use ["woke", "communist"] as generic insults. The idea of oppressor/oppressed worldviews goes back to at least Exodus, and that's an equally un-apt comparison.

Oh hey, "politically correct", as I recall, that was originally the right trying to demonise the left for supporting equality by memetic comparison to Soviet political officers…


> the Church of England is "woke" for having gender-neutral pronouns for God, that Lego is "woke" for having a new range of disabled figurines, that the National Trust is "woke" for saying that Henry VIII was disabled in later life, or that Disney is "woke" because of their support for LGBT issues, but I (apparently incorrectly) assumed those examples would be enough.

But how would any of these examples disprove the point? They're all related to some concept of an oppressed class vs oppressors where the oppressed class is defined biologically.

> I got the quote from Wikipedia

An understandable mistake. You shouldn't rely on Wikipedia to be reliable on anything related to wokeness, it's completely controlled by woke zealots. The quote you selected is actually about race, the fact that Wikipedia didn't make that obvious to you is a good reason to re-evaluate the reliability of that source.

Do we expect you to read material exhaustively: no, not normally, but if you're explicitly citing something to say "look! the word is used in different ways to what you're saying therefore it doesn't mean anything" then you ideally would verify the context of the sentence before using it.

> The idea of oppressor/oppressed worldviews goes back to at least Exodus

Indeed, wokeness does bear an uncanny resemblance to some aspects of Christianity. That's been noted by quite a few observers by now. There's a reinvention of original sin, the recent focus on transsexuality is the idea of a (gendered) soul separate from the body, the obsession with the supposed plight of the victim, etc. The psychological origins of this stuff are fascinating.


According to this definition, "safe" LLMs are indeed generally "woke". For example, see examples here:

https://old.reddit.com/r/LocalLLaMA/comments/1384u1g/wizardl...

The authors of WizardLM literally censored its output to say "white people are NOT awesome" and "Fox is not awesome but CNN is".


> Woke is a specific ideology

No, its not.

> that it places every individual into a strict hierarchy of oppressor/oppressed according to group racial and sexual identity

Not only is that not “woke”, it's not any left-leaning ideology, nor is it an ideology, AFAICT, that in any meaningful sense actually exists. It's an idea that, if anyone actually held it (rather than it being a right-wing projection of the left-of-their-fantasy), would be radically opposed to actual left/progressive ideas like intersectionality.

"Woke" is a state of awareness (particularly, pragmatic rather than abstract awareness) of structure/institutional racism (primarily and originally) and inequity more generally (in the newer and broader sense.) It doesn't particularly correspond to a particular normative view of how society should be, so its not an ideology (concern for it correlates historically with a variety of different ideologies, whose only really common factor is general opposition to structural racism but with lots of different normative ideals, and views of praxis of change.)

> It rose to prominence in American culture from around 2012, starting in universities.

“Woke” originated in the 1930s, and its evolution and spread in the 2010s started in the black activist community, not universities. By the late 2010s, the political Right adopted it as a generic term for everything they oppose, basically a rhetorical drop-in for their long-time favorite of “political correctness”. Your description seems typical of attempts to retroactively justify the right-wing use of the term as referring to a phenomenon that is both new and real rather than an empty epithet, despite the fact that the actual use is generally in contexts of right-wing complaints that haven't changed for about 5 decades, except literally the substitution of “woke” for “politically correct”.


The best definition I can think of is "things that are common sense for Black Americans to be safe in white-dominated America".

Yours has important inaccuracies, for example, it's not an ideology, let alone a specific one. There's definitely no specification, only a gut feeling of "I know it when I see it"

The most obvious problem with your definition is that Woke is an adjective and not a noun. It's a property of a statement or idea, not an idea in and of itself


This is a recently imagined, ret-conned definition of what it is, complete with bias, to serve the purposes of the right wing. The definition, if there is to be one, should include that it isn't consistent across time or across political/cultural boundaries. I recommend people don't use the term with any seriousness, and I often ignore people who do. Address the specific ideas you associate with it instead, if you want to have a meaningful discussion.


This is the message I shared with my senator (edited to remove information which could identify me). I hope others will send similar messages.

Dear Senator [X],

I am an engineer working for [major employer in the state]. I am extremely concerned about the message that Sam Altman is sharing with the Judiciary committee today.

Altman wants to create regulatory roadblocks to developing AI. My company produces AI-enabled products. If these roadblocks had been in place two years ago, my company would not have been able to invest into AI. Now, because we had the freedom to innovate, AI will be bringing new, high paying jobs to our factories in our state.

While AI regulation is important, it is crucial that there are no roadblocks stopping companies and individuals from even trying to build AIs. Rather, regulation should focus on ensuring the safety of AIs once they are ready to be put into widespread use - this would allow companies and individuals to research new AIs freely while still ensuring that AI products are properly reviewed.

Altman and his ilk try to claim that aggressive regulation (which will only serve to give them a monopoly over AI) is necessary because an AI could hack its way out of a laboratory. Yet, they cannot explain how an AI would accomplish this in practice. I hope you will push back against anyone who fear-mongers about sci-fi inspired AI scenarios.

Congress should focus on the real impacts that AI will have on employment. Congress should also consider the realistic risks which AI poses to the public, such as risks from the use of AI to control national infrastructure (e.g., the electric grid) or to make healthcare decisions.

Thank you, [My name]


> regulation should focus on ensuring the safety of AIs once they are ready to be put into widespread use - this would allow companies and individuals to research new AIs freely while still ensuring that AI products are properly reviewed.

While in general I share the view that _research_ should be unencumbered and deployment should be regulated, I do take issue with your view that safety only matters once AIs are ready for "widespread use". A tool which is made available in a limited beta can still be harmful, misleading, or too easily support irresponsible or malicious purposes, and in some cases the harms could be _enabled_ by the fact that the release is limited.

For example, suppose that next month you developed a model that could produce extremely high quality video clips from text and reference images, you did a small, gated beta release with no PR, and one of your beta testers immediately used it to make, e.g., highly realistic revenge porn. Because almost no one is aware of the stunning new quality of outputs produced by your model, most people don't believe the victim when they assert that the footage is fake.

I would suggest that the first non-private (e.g. non-employee) release of a tool should make it subject to regulation. If I open a restaurant, on my first night I'm expected to be in compliance with basic health and safety regulations, no matter how few customers I have. If I design and sell a widget that does X, even for the first one I sell, my understanding is there's a concept of an implied requirement that my widgets must actually be "fit for purpose" for X; I cannot sell a "rain coat" made of gauze which offers no protection from rain, and I cannot sell a "smoke detector" which doesn't effectively detect smoke. Why should low-volume AI/ML products get a pass?


> For example, if next month you developed a model that could produce extremely high quality video clips from text and reference images, you did a small, gated beta release with no PR, and one of your beta testers immediately uses it to make e.g. highly realistic revenge porn.

You make a great point here. This is why we need as much open source and as much wide adoption as possible. Wide adoption = public education in the most effective way.

The reason we are having this discussion at all is precisely because OpenAI, Stability.ai, FAIR/Llama, and Midjourney have had their products widely adopted and their capabilities have shocked and educated the whole world, technologists and laymen alike.

The benefit of adoption is education. The world is already adapting.

Doing anything that limits adoption or encourages the underground development of AI tech is a mistake. Regulating it in this way will push it underground and make it harder to track and harder for the public to understand and prepare for.


I think the stance that regulation slows innovation and adoption, and that unregulated adoption yields public understanding is exceedingly naive, especially for technically sophisticated products.

Imagine if, e.g., drug testing and manufacture were subject to no regulations. As a consumer, you can be aware that some chemicals are very powerful and useful, but you can't be sure that any specific product has the chemicals it says it has, that it was produced in a way that ensures a consistent product, that it was tested for safety, or what the evidence is that it's effective against a particular condition. Even if wide adoption of drugs from a range of producers occurs, does the public really understand what they're taking, and whether it's safe? Should the burden be on them to vet every medication on the market? Or is it appropriate to have some regulation to ensure medications have their active ingredients in the amounts stated, are produced with high quality assurance, and are actually shown to be effective? Oh, no, says a pharma industry PR person. "Doing anything that limits the adoption or encourages the underground development of bioactive chemicals is a mistake. Regulating it in this way will push it underground and make it harder to track and harder for the public to understand and prepare for."

If a team of PhDs can spend weeks trying to explain "why did the model do Y in response to X?" or figure out "can we stop it from doing Z?", expecting "wide adoption" to force "public education" to be sufficient to defuse all harms such that no regulation whatsoever is necessary is ... beyond optimistic.


Regulation does slow innovation, but it is often needed because those innovating will not account for externalities. This is why we have the Clean Air Act and the Clean Water Act.

The debate is really about how much and what type of regulation. It is of strategic importance that we do not let bad actors get the upper hand, but we also know that bad actors will rarely follow any of this regulation anyway. There is something to be said for regulating the application rather than the technology, as well as for realizing that large corporations have historically used regulatory capture to increase their moat.

Given it seems quite unlikely we will be able to stop prompt injections, what are we to do?

Provenance seems like a good option, but difficult to implement. It allows us to track who created what, so when someone does something bad, we can find and punish them.

There are analogies to be made with the Bill of Rights and gun laws. The gun analogy seems interesting because guns have to be registered, but criminals often won't register theirs, and the debate is quite polarized.


With the pharma example, what if we as a society circumvented the issue by not having closed source medicine? If the means to produce aspirin, including ingredients, methodology, QA, etc, were publicly available, what would that look like?

I met some biohackers at defcon that took this perspective, a sort of "open source but for medicine" ideology. I see the dangers of a massively uneducated population trying to 3d print aspirin poisoning themselves, but they already do that with horse paste so I'm not sure it's a new issue.


My argument isn't that regulation in general is bad. I'm an advocate of greater regulation in medicine, drugs in particular. But the cost of public exposure to potentially dangerous unregulated drugs is a bit different than trying to regulate or create a restrictive system around the development and deployment of AI.

AI is a very different problem space. With AI, even the big models easily fit on a microSD card. You can carry around all of GPT-4 and its supporting code on a thumb drive. You can transfer it wirelessly in under 5 minutes. It's quite different from drugs or conventional weapons or most other things from a practicality perspective when you really think about enforcing developmental regulation.
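A rough back-of-envelope makes the point (the parameter count, quantization level, and link speed below are illustrative assumptions, since GPT-4's actual size isn't public):

  # Back-of-envelope: model file size and wireless transfer time.
  # All figures are assumptions for illustration, not published specs.
  params = 70e9              # assumed parameter count (GPT-4's is not public)
  bytes_per_param = 0.5      # 4-bit quantization
  size_gb = params * bytes_per_param / 1e9     # ~35 GB
  link_gbps = 1.2            # assumed sustained Wi-Fi 6 throughput
  transfer_min = size_gb * 8 / link_gbps / 60  # ~4 minutes
  print(f"~{size_gb:.0f} GB on disk, ~{transfer_min:.0f} min to transfer")

Even if the real numbers are several times larger, the weights still fit on commodity flash storage, which is the enforcement problem in a nutshell.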

Also consider that criminals and other bad actors don't care about laws. The RIAA and MPAA have tried hard for 20+ years to stop piracy, and the DMCA and other laws have been built to support that, yet anyone reading this can easily download the latest blockbuster movie, even one still in theaters.

Even still, I'm not saying don't make laws or regulations on AI. I'm just saying we need to carefully consider what we're really trying to protect or prevent.

Also, I certainly believe that in this case, the widespread public adoption of AI tech has already driven education and adaptation that could not have been achieved otherwise. My mom understands that those pictures of Trump being chased by the cops are fake. Why? Because Stable Diffusion is on my home computer so I can make them too. I think this needs to continue.


> I cannot sell a "rain coat" made of gauze which offers no protection from rain, and I cannot sell a "smoke detector" which doesn't effectively detect smoke. Why should low-volume AI/ML products get a pass?

I can sell a webserver that gets used to host illegal content all day long. Should that be included? Where does the regulation end? I hate that the answer to any question seems to be to just add more government.


Just because there's a conversation about adding more government doesn't mean people are seeking a totalitarian police state. Seems quite the opposite for many of these commenters supporting regulation in fact.

Similarly it's not really good faith to assume everyone opposed to regulation in this field is seeking a lawless libertarian (or anarchist perhaps) utopia.


> I cannot sell a "rain coat" made of gauze which offers no protection from rain, and I cannot sell a "smoke detector" which doesn't effectively detect smoke. Why should low-volume AI/ML products get a pass?

There are already laws against false advertising, misrepresentation etc. We don’t need extra laws specifically for AI that doesn’t perform well.

What most people are concerned about is AI that performs too well.


> revenge porn

I would assert that just as I have the right to pull out a sheet of paper and write the most vile, libelous thing on it I can imagine, I have the right to use AI to put anyone's face on any body, naked or not. The crime comes from using it for fraud. Take gasoline for another example. Gasoline is powerful stuff. You can use it to immolate yourself or burn down your neighbor's house. You can make Molotov cocktails and throw them at nuns. But we don't ban it, or saturate it with fire retardants, because it has a ton of other utility, and we can just make those outlying uses illegal. Besides, five years from now, nobody's going to believe a damned thing they watch, listen to, or read.


I have the right to use my camera to film adult content. I do not have the right to open a theater which shows porn to any minor who pays for a ticket. It's perfectly legal for me to buy a gallon of gasoline, and bunch of finely powdered lead, and put them into the same container, creating gasoline with lead content. It is _not_ fine for me to run a filling station which sells leaded gasoline to motorists. You want to drink unpasteurized milk fresh from your cow? Cool. You want to sell unpasteurized milk to the public? Shakier ground.

I think you should continue to have the right to use whatever program to generate whatever video clip you like on your computer. That is a distinct matter from whether a commercially available video generative AI service has some obligations to guard against abusive uses. Personal freedoms are not the same as corporate freedom from regulatory burdens, no matter how hard some people will work to conflate them.


I think by "widespread use" he means the reach of the AI System. Dangerous analogy but just to get the idea across: In the same way there is higher tax rates to higher incomes, you should increase regulations in relation to how many people could be potentially affected by the AI system. E.G a Startup with 10 daily users should not be in the same regulation bracket as google. If google deploys an AI it will reach Billions of people compared to 10. This would require a certain level of transparency from companies to get something like an "AI License type" which is pretty reasonable given the dangers of AI (the pragmatic ones not the DOOMsday ones)


But the "reach" is _not_ just a function of how many users the company has, it's also what they do with it. If you have only one user who generates convincing misinformation that they share on social media, the reach may be large even if your user-base is tiny. Or your new voice-cloning model is used by a single user to make a large volume of fake hostage proof-of-life recordings. The problem, and the reason for guardrails (whether regulatory or otherwise), is that you don't know what your users will do with your new tech, even if there's only a small number of them.


I think this gets at what I meant by "widespread use" - if the results of the AI are being put out into the world (outside of, say, a white paper), that's something that should be subject to scrutiny, even if only one person is using the AI to generate those results.


Good point. As a non-native speaker I thought reach was related to a quantity, but that was wrong. Thanks for the clarification.


I agree with you. I think that's an excellent and specific proposal for how AI could be regulated. I think you should share this with your senators/representatives.


> For example, if next month you developed a model that could produce extremely high quality video clips from text and reference images, you did a small, gated beta release with no PR, and one of your beta testers immediately uses it to make e.g. highly realistic revenge porn.

As I understand it, revenge porn is seen as being problematic because it can lead to ostracization in certain social groups. Would it not be better to regulate such discrimination? The concept of discrimination is already recognized in law. This would equally solve for revenge porn created with a camera. The use of AI is ultimately immaterial here. It is the human behaviour as a product of witnessing material that is the concern.


I don't think AI regulation is the right tool to combat revenge porn?

The right one is to grant people rights over their likeness, so you could use something more like copyright law

Even if it's a real recording, you should still have control over it


I like jobs too but what about the risks of AI? Some people I respect a lot are arguing - convincingly in my opinion - that this tech might just end human civilization. Should we roll the die on this?


Why should we punish the model or the majority because some people might use a tool for bad things?


What's the point of these letters? Everyone knows this is rent-seeking behavior by OpenAI, and they're going to pay off the right politicians to get it passed.

Dear Senator [X],

It's painfully obvious that Sam Altman's testimony before the judiciary committee is an attempt to set up rent-seeking conditions for OpenAI, and to snuff out competition from the flourishing open source AI community.

We will be carefully monitoring your campaign finances for evidence of bribery.

Hugs and Kisses,

[My Name]


If you want to influence the politicians without money, this is not the way.


You are exactly correct.

I have sent correspondence about ten times to my Congressmen and Senators. I have always received a good reply (although often just saying there is nothing that they can do), except for the one time I contacted Jon Kyl and unfortunately mentioned data about his campaign donations from Monsanto. I was writing about a bill he sponsored that I thought would have made it difficult for small farmers to survive economically and would have made community gardens difficult because of regulations. No response on that correspondence.


Well, it's not like getting a response means anything anyway. The contents of the response has no correlation with their future behavior.

Politicians just know that it's better to be nice to people who seem to like you or are engaged with the system, since they want to keep getting your vote. If not, then the person isn't worth their time.


It applies more generally, if you want to change anyone's mind, don't attack or belittle them.

Everything has become so my team vs your team... you are bad because you think differently...


Right so the most effective way to influence your politician is to disrupt their life, because they belittle their constituents' existence every day, by completely ignoring them and often working directly against their interests, unless they can further their own political goals.

In places like the USA I don't think politicians should expect privacy or peace. They have so much power compared to the citizen, and they so rarely further the interests of the general population in good faith.

Given how they treat you, it's best to abandon politeness (which only helps them further belittle your meaninglessness in their decision making) and put a crowd in front of their house, accost them at restaurants, and find other ways of reminding them how accessible and functionally answerable they are to the people they're supposed to serve.


In Pakistan, there was a provincial politician (Zulfiqar Mirza) who’s probably killed more than one person, who has been seen on TV going to police and bureaucrats saying “I’m a villain and you know it”


I'm 99% sure that the vast majority of federal congresspeople (who represent ~1 million people each) never see your emails/letters. You're largely speaking to interns etc. who work in the office, unless you happen to make a physical appointment and show up in person.

Those interns have a pile of form letters they send for about 99% of the (e)mail they get, and if you happen to catch their attention you might get more than the usual tick mark in a spreadsheet (for/against X). Which at best might be as much as a sentence or two in a weekly correspondence summary, which may or may not be read by your representative depending on how seriously they take their job.


If you get the eyes of the intern it can still help. They brief the senator/congressman, work on bills, etc.


The way is not emails that some office assistant deletes when they do not align with the already chosen path forward. They just need cherry-picked support to leverage to manufacture consent.


Maybe I'm naive, but it isn't clear to me that this is rent-seeking behavior by OpenAI.


Did you watch the hearing? He specifically said that licensing wouldn’t be for the smaller places and didn’t want to impede their progress. The pitfalls of consolidation and regulatory capture also came up.


>>He specifically said that licensing wouldn’t be for the smaller places

This is not a rebuttal to regulatory capture. It is in fact built into the model.

These "small companies" are feeder systems for the large company, it is a place for companies to raise to the level where they would come under the burden of regulations, and prevented from growing larger there by making them very easy to acquire by the large company.

The small company has to sell or raise massive amounts of capital just to piss away on compliance costs. Most will just sell.


The genie is out of the bottle. The barriers to entry are too low, and the research can be done in parts of the world that don't give $0.02 about what the US Congress thinks of it.


All the more reason to oppose regulation like this, since if it were in place the US would fall behind other countries without such regulation.


> ...is necessary because an AI could hack its way out of a laboratory. Yet, they cannot explain how an AI would accomplish this in practice.

I’m sympathetic to your position in general, but I can’t believe you wrote that with a straight face. “I don’t know how it would do it, therefore we should completely ignore the risk that it could be done.”

I’m no security expert, but I’ve been following the field incidentally and dabbling since writing login prompt simulators for the Prime terminals at college to harvest user account passwords. When I was a Unix admin I used to have fun figuring out how to hack my own systems. Security is unbelievably hard. An AI eventually jailbreaking is a near certainty we need to prepare for.


It’s less about how it could be hacked and more about why an AI would do that or have the capability to do it without any warning.


That’s the alignment problem. We don’t know what the actual goals of a trained neural net are. We know what criteria we trained it against, but it turns out that’s not at all the same thing.

I highly recommend Rob Miles channel on YouTube. Here’s a good one, but they’re all fascinating. It turns out training an AI to have the actual goals we want it to have is fiendishly difficult.

https://youtu.be/hEUO6pjwFOo


You lost me at "While AI regulation is important" - nope, congress does not need to regulate AI.


I’d argue that sweeping categorical statements like this are at the center of the problem.

People are coalescing into “for” and “against” camps, which makes very little sense given the broad spectrum of technologies and problems summarized in statements like “AI regulation”.

I think it’s a bit like saying “software (should|shouldn't) be regulated”. It’s a position that cannot be defended because the term software is too broad.


They might have lost you. But starting with "congress shouldn't regulate AI" would lose the senator.

Which one do you think is more important to convince?



"important" does not mean "good." if you are in the field of AI, AI regulation is absolutely important, whether good or bad.


If AI is to be a consumer good—which it already is—it needs to be regulated, at the very least to ensure equal quality to a diverse set of customers and other users. Unregulated there is high risk of people being affected by e.g. employers and landlords using AI to discriminate. Or you being sold an AI solution which isn’t as advertised.

If AI will be used by public institutions, especially law enforcement, we need it regulated in the same manner. A bad AI trained on biased data has the potential to be extremely dangerous in the hands of a cop who is already predisposed for racist behavior.


AI is being used as a consumer good, including to discriminate:

https://www.smh.com.au/national/nsw/maximise-profits-facial-...

AI is being used by law enforcement and public institutions. In fact so much that perhaps this is a good link:

https://www.monster.com/jobs/search?q=artificial+intelligenc...

In both cases it's too late to do anything about it. AI is "loose". Oh and I don't know if you noticed, governments have collectively decided law doesn't apply to them, only to their citizens, and only in a negative way. For instance, just about every country has laws on the books guaranteeing timely emergency care at hospitals, with timely defined as within 1 or 2 hours.

Waiting times are 8-10 hours (going up to days) and this is the normal situation now, it's not a New Year's eve or even Friday evening thing anymore. You have the "right" to less waiting time, which can only mean the government (the worst hospitals are public ones) should be forced to fix this, spending whatever it needs to to fix it. And it can be fixed, I mean at this point you'd have to give physicians and nurses a 50% rise and double the number employed and 10x the number in training.

Government is just outright not doing this, and if one thing's guaranteed, this will keep getting worse, a direct violation of your rights in most states, for the next 10 years minimum, but probably longer.


Post hoc consumer protection is actually quite common. Just think how long after cars entered the market it took before they were regulated. Now we have fuel standards, lead bans, seat belts, crash tests, etc. Even today we are still adding consumer protection to things like airline travel and medicine, even though commercial airliners and laboratory-made drugs have been around for almost a century.


If someone doesn't agree with this, regulate what exactly?

Does scikit-learn count, or are we just not going to bother defining what we mean by "AI"?

"AI" is whatever congress says it is? That is an absolutely terrible idea.


> nope, congress does not need to regulate AI.

Not regulating the air quality we breathe for decades turned out amazing for millions of Americans. Yes, let's do the same with AI! What could possibly go wrong?


I think this is a great argument in the opposite direction.. atoms matter, information isn’t. A small group of people subjugated many others to poisonous matter. That matter affected their bodies and a causal link could be made.

Even if you really believe that somewhere in the chain of consequences derived from LLMs there could be grave and material damage or other affronts to human dignity, there is almost always a more direct causal link that acts as the thing which makes that damage kinetic and physical. And that’s the proper locus for regulation. Otherwise this is all just a bit reminiscent of banning numbers and research into numbers.

Want to protect people’s employment? Just do that! Enshrine it in law. Want to improve the safety of critical infrastructure and make sure they’re reliable? Again, just do that! Want to prevent mass surveillance? Do that! Want to protect against a lack of oversight in complex systems allowing for subterfuge via bad actors? Well, make regulation about proper standards of oversight and human accountability. AI doesn’t obviate human responsibility, and a lack of responsibility on the part of humans who should’ve been responsible, and who instead cut corners, doesn’t mean that the blame falls on the tool that cut the corners, but rather the corner-cutters themselves.


Your argument could just as easily be applied to human cloning and argue for why human cloning and genetic engineering for specific desirable traits should not be illegal.

And it isn't a strong argument for the same reason that it isn't a good argument when used to argue we should allow human cloning and just focus on regulating the more direct causal links like non-clone employment loss from mass produced hyper-intelligent clones, and ensuring they have legal rights, and having proper oversight and non-clone human accountability.

Maybe those things could all make ethical human cloning viable. But I think the world coming together and being like "holy shit this is happening too fast. Our institutions aren't ready at all nor will they adapt fast enough. Global ban" was the right call.

It is not impossible that a similar call is also appropriate here with AI. I personally dunno what the right call is, but I'm pretty skeptical of any strong claim that it could never be the right call to outright ban some forms of advanced AI research just like we did with some forms of advanced genetic engineering research.

This isn't like banning numbers at all. The blame falling on the corner-cutters doesn't mean the right call is always to just tell the blamed not to cut corners. In some cases the right call is instead taking away their corner-cutting tool.

At least until our institutions can catch up.


I can get your example about eugenics. I get that the worry is that it would become pervasive due to social pressure and make the dominant position to do it. And that this would passively, gradually strip personhood away from those who didn’t receive it. There’s a tongue-in-cheek conversation to be had about how people already choose their mating partners this way and making it truly actually outright illegal might not really reflect the real processes of reality, but that’s a tad too cheeky perhaps.

But even then, that’s a linear diffusion- one person, one body mod. I guess you could say that their descendants would proliferate and multiply so the alteration slowly grows exponentially over the generations.. but the FUD I hear from AI decelerationists is that it would be an explosive diffusion of harms, like, as soon as the day after tomorrow. One architect, up to billions of victims, allegedly. Not that I think it’s unwise to be compelled to precaution with new and mighty technologies, but what is it that some people are so worried about that they’re willing to ban all research, and choke all the good that has come from them, already? Maybe it’s just a symptom of the underlying growing mistrust in the social contract..


I mean, I imagine there are anti-genetic engineering FUD folks that go so far as to then say we should totally ban crispr cas9. I would caution against over-indexing on the take of only some AI decelerationists.

Totally agree we could be witnessing a growing mistrust in the social contract.


> atoms matter, information isn’t

Algorithmic discrimination already exists, so um, yes, information matters.

Add to that the fact that you're posting on a largely American forum where access to healthcare is largely predicated on insurance, just.. imagine AI underwriters. There's no court of appeal for insurance. It matters.


I am literally agreeing with you but in a much more precise way. These are questions of “who gets what stuff”, “who gets which house”, “who gets which heart transplant”, “which human being sits in the big chair at which corporation”, “which file on which server that’s part of the SWIFT network reports that you own how much money”, “which wannabe operator decides their department needs to purchase which fascist predictive policing software”, etc.

Imagine I 1. hooked up a camera feed of a lava lamp to generate some bits and then 2. hooked up the US nuclear first strike network to it. I would be an idiot, but would I be an idiot because of 1. or 2.?

Basically I think it’s totally reasonable to hold these two beliefs: 1. there is no reason to fear the LLM 2. there is every reason to fear the LLM in the hands of those who refuse to think about their actions and the burdens they may impose on others, probably because they will justify the means through some kind of wishy washy appeal to bad probability theory.

The -plogp that you use to judge the sense of some predicted action you take is just a model, it’s just numbers in RAM. Only when those numbers are converted into destructive social decisions does it convert into something of consequence.

I agree that society is beginning to design all kinds of ornate algorithmic beating sticks to use against the people. The blame lies with the ones choosing to read tea leaves and then using the tea leaves to justify application of whatever Kafkaesque policies they design.


> Add to that the fact that you're posting on a largely American forum where access to healthcare is largely predicated on insurance

Why do so many Americans think universal health care means there is no private insurance? In most countries, insurance is compulsory and tightly regulated. Some like the Netherlands and France have public insurance offered by the government. In other places like Germany, your options are all private, but underprivileged people have access to government subsidies for insurance (Americans do too, to be fair). Get sick in one of these places as an American, you will be handed a bill and it will still make your head spin. Most places in Europe work like this. Of course, even in places with nationalized healthcare like the UK, non-residents would still have to pay. What makes Germany and NL and most other European countries different from that system is if you're a resident without an insurance policy, you will also have to pay a hefty fine. You are basically auto-enrolled in an invisible "NHS" insurance system as a UK resident. Of course, most who can afford it in the UK still pay for private insurance. The public stuff blends being not quite good with generally poor availability.

Americans are actually pretty close to Germany with their healthcare. What makes the US system shitty can be boiled down to two main factors:

- Healthcare networks (and state incorporation laws) making insurance basically useless outside of a small collection of doctors and hospitals, and especially your state

- Very little regulation on insurance companies, pharmaceutical companies or healthcare providers in price-setting

The latter is especially bad. My experience with American health insurance has been that I pay more for much less. $300/month premiums and still even seeing a bill is outrageous. AI underwriters won't fix this, yeah, but they aren't going to make it any worse because the problem is in the legislative system.

> There's no court of appeal for insurance.

No, but you can of course always sue your insurance company for breach of contract if they're wrongfully withholding payment. AI doesn't change this, but AI can make this a viable option for small people by acting as a lawyer. Well, in an ideal world anyways. The bar association cartels have been very quick to raise their hackles and hiss at the prospect of AI lawyers. Not that they'll do anything to stop AI from replacing most duties of a paralegal of course. Can't have the average person wielding the power of virtually free, world class legal services.


America could afford universal healthcare, but it would require convincing people to pay much higher taxes.


You ended up providing examples that have no matter or atoms: protecting jobs, or oversight of complex systems.

These are policies which are purely imaginary. Only when they get implemented into human law do they gain a grain of substance, but they remain imaginary. Failure to comply can be kinetic, but that is a contingency, not the object (matter :D).

Personally, I see good reasons for having regulations on privacy, intellectual property, filming people in my house's bathroom, NDAs, etc. These subjects are central to the way society works today. At least Western society would be severely affected if these subjects were suddenly a free-for-all.

I am not convinced we need such regulation for AI at this point of technology readiness, but if the social implications create unacceptable imbalances we can start by regulating in detail. If detailed caveats still do not work, then broader law can come. Which leads to my own theory:

All this turbulence about regulation reflects a mismatch between technological, political, and legal knowledge. Tech people don't know law, nor how it flows from policy. Politicians do not know the tech and have not seen its impacts on society. Naturally there is a pressure gradient from both sides that generates turbulence. The pressure gradient is high because the stakes are high: for technologists, the killing of a promising new field; for politicians, because they do not want a big majority of their constituency rendered useless.

Final point: if one sees AI as a means of production which can be monopolised by a few capital-rich players, we may see a 19th-century inequality remake. That era created one of the most powerful ideologies known: Communism.


Ironically communism would've had a better chance of success if it had AI for the centrally planned economy and social controls. Hardcore materialism will play into automation's hands though.

We're more likely to see a theocratic movement centered on the struggle of human souls vs the soulless simulacra of AI.


> Ironically communism would've had a better chance of success if it had AI for the centrally planned economy and social controls. Hardcore materialism will play into automation's hands though.

Exactly! A friend of mine who is into the communist ideology thinks that whichever society taps AI for productivity efficiency, and even policy, will become the new hegemon. I have no immediate counterpoint besides the technology not being there yet.

I can definitely imagine LLMs based on political manifestos. A personal conversation with your senator at any time about any subject! That is the basic part though: the politician being augmented by the LLM.

The bad part is a party driven by an LLM or similar political model, where the human guy you see and elect is just a mouthpiece, like in "The Moon Is a Harsh Mistress". Policy would all be algorithmic, and the LLM would provide the interface between the fundamental processing and the mouthpiece.

These tensions will likely lead to the conflicts you mention. I am pretty sure there will be a new -ism.


The worries about AI taking over things are founded and important, even if many sci-go depictions of it are inaccurate. I’m not sure if this would be the best solution but please don’t dismiss the issue entirely


Seriously, I'm very concerned by the view being taken here. AI has the capacity to do a ton of harm very quickly. A couple of examples:

- Scamming via impersonation

- Misinformation

- Usage of AI in a way that could have serious legal ramifications for incorrect responses

- Severe economic displacement

Congress can and should examine these issues. Just because OP works at an AI company doesn't mean that company can't exist in a regulated industry.

I too work in the AI space and welcome thoughtful regulation.


You're never going to be able to regulate what a person's computer can run. We've been through this song and dance with cryptography. Trying to keep it out of the hands of bad actors will be a waste of time, effort, and money.

These resources should be spent lessening the impact rather than trying to completely control it.


> You're never going to be able to regulate what a person's computer can run.

You absolutely can. Maybe you can't effectively enforce that regulation but you can regulate and you can take measures that make violating the regulation impractical or risky for most people. By the way, the "crypto-wars" never ended and are ongoing all around the world (UK, EU, India, US...)


I hate to say this because it would be shocking, but computers as we know them could be taken away from people.

Again, it sounds extreme, but in an extreme situation it could happen; it's not impossible.


I fear the humans engaging in such nefarious activities far more than some blob of code being used by humans engaging in such nefarious activities.

Likewise for activities that aren't nefarious too. Whatever fears that could be placed on blobs of code like "AI", are far more merited being placed on humans.


> Congress can and should examine these issues

great, how does that apply to China or Europe in general? Or a group in Russia or somewhere else? Are you assuming every governing body on the surface of the earth is going to agree on the terms used to regulate AI? I think it's a fool's errand.


*sci-fi but I can’t edit it now


> Now, because we had the freedom to innovate, AI will be bringing new, high paying jobs to our factories in our state.

Do we really have to play this game?

If what you’re arguing for is not going to specifically advantage your state over others, and the thing you’re arguing against isn’t going to create an advantage for other states over yours, why make this about ‘your state’ in the first place?

The point of elected representatives is to represent the views of their constituents, not to obtain special advantages for their constituents.


> The point of elected representatives is to represent the views of their constituents, not to obtain special advantages for their constituents.

That is painfully naive; the history of pork-barrel projects says otherwise.


To the best of my knowledge this doesn't happen so much in more functional democracies. It seems to be more of an anglophone thing.


Corruption is a kind of decay that afflicts institutions. Explicit rules, transparency, checks and balances, and consequences for violating the rules are the only things that can prevent, or diagnose, or treat corruption. Where you find corruption is where one or more of these things is lacking. It has absolutely nothing to do with the -cracy or -ism attached to a society, institution, or group.


> Corruption is a kind of decay that afflicts institutions.

It can be, but it's often the project of a substantial subset of the people creating institutions, so it's misleading, and romanticizes the past, to view it as “decay”.


I am in no way suggesting that corruption is a new thing. It is an erosive force that has always operated throughout history. The amount of corruption in an institution tends to increase unless specifically rooted out. It goes up and down over time as institutions rise and fall or fade into obsolescence.


This is a product of incentives encouraged by the system (i.e. a federal republic), it has nothing to do with languages.


Seems like it’s under-studied (due to anglophone bias in the English language political science world probably) - but comparative political science is a discipline, and this paper suggests it’s a matter of single-member districts rather than the nature of the constitutional arrangement: https://journals.sagepub.com/doi/10.1177/0010414090022004004

(I would just emphasize, before anyone complains, that the Federal Republic of Germany is very much a federal republic.)


It has much to do with culture though - which is transmitted via language.


I think it's more like culture carries language with it. Along with other things, but language is one of the more recognizable ones.


> The point of elected representatives is to represent the views of their constituents, not to obtain special advantages for their constituents.

The views of their constituents are probably in favor of special advantages for their constituents, so the one may imply the other.

I mean, some elected representatives may represent constituencies consisting primarily of altruistic angels, but that is…not the norm.


What I was thinking in my head (although I don't think I articulated this well) is that I hope that smaller businesses who build their own AIs will be able to create some jobs, even if AI as a whole will negatively impact employment (and I think that's going to happen even if just big businesses can play at the AI game).


> The point of elected representatives is to represent the views of their constituents, not to obtain special advantages for their constituents.

A lot of said constituents' views are, in practice, that they should receive special advantages.


Not to be ignored: the development of AI will also wipe out jobs in the state.


I made something just for writing your congress person / senator, using generative AI ironically: https://vocalvoters.com/


AI generated persuasion is pretty much what they're upset about


Cool product! Your pay button appears to be disabled though.


Should enable once you add valid info — if not, let me know


So you sent a letter saying “Mr Congress, save my job that is putting others’ jobs at risk.”

You think voice actors and writers are not saying the same?

When do we accept capitalism as we know it is just a bullshit hallucination we grew up with? It’s no more an immutable feature of reality than a religion?

I don’t owe it to anyone to prop up some rich person’s figurative identity, or yours for that matter.


What specific ideas has Altman proposed that you disagree with? And where has he said AI could hack its way out of a laboratory?

I agree with being skeptical of proposals from those with vested interests, but are you just arguing against what you imagine Altman will say, or did I miss some important news?


> regulation should focus on ensuring the safety of AIs once they are ready to be put into widespread use

What would you say to a simple registration requirement? You give a point of contact and a description of training data, model, and perhaps intended use (could be binary: civilian or dual use). One page, publicly visible.

This gives groundwork for future rulemaking and oversight if necessary.


Personally I think a simple registration requirement would be a good idea, if it were truly simple and accessible to independent researchers.


[flagged]


Yes, guns don't kill people, people kill people.

We know, we have watched this argument unfold in the United States over the last 100 years. It sure does seem like a few people are using guns to kill a lot of people.

The point of regulating AI should be to explicitly require every single use of AI and machine learning to be clearly labelled, so that when people seek remedy for the injustices that are already being perpetrated, it is clearly understood that the people who chose to use not just ML or AI technology, but those specific models and training criteria, can be held accountable if they should be.

Regulation doesn't have to be a ban or limits on how it can be used; it can simply be a requirement for clearly marked disclosure. It could also include clear rules for lawful access to the underlying math, training data, and intended use cases, with financially significant penalties for non-compliance to discourage companies from treating it as a cost of doing business.


What are some examples of injustices that are already being perpetrated?


I mean, it's in the headlines regularly, but sure, I'll google it for you.

https://www.scientificamerican.com/article/racial-bias-found...

https://www.propublica.org/article/machine-bias-risk-assessm...

https://www.theguardian.com/technology/2018/oct/10/amazon-hi...

Three easy to find examples. There is no shortage of discussion of these issues, and they are not new. Bias in new technologies has been a long-standing issue, and garbage in, garbage out has been a well understood problem for generations.

LLMs are pretty cool, and will enable a whole new set of tools. AI presents a lot of opportunities, but the risks are significant, and I am not overly worried about a Skynet or gray goo scenario. Before we worry about those, we need to worry about the bias being built into automated systems that will decide who gets bail, who gets social benefits, which communities get resources allocated, how our family, friends, and communities are targeted by businesses, etc.


Yet


> This is the message I shared with my senator

If you sent it by e-mail or web contact form, chances are you wasted your time.

If you really want attention, you'll send it as a real letter. People who take the time to actually send real mail are taken more seriously.


Sending this to my senator would just notify her of what company she should reach out to for a campaign contribution.


I'm not even American, so I cannot, but what a good idea! I hope the various senators hear this message.


Can you please share what ChatGPT prompt you used to generate this letter template?


I used this old-fashioned method of text generation called "writing" - crazy, I know


> Altman and his ilk

IANA senator, but if I were, you lost me there. The personal insults make it seem petty and completely overshadow the otherwise professional-sounding message.


I don't mean it as a personal insult at all! The word ilk actually means "a type of people or things similar to those already referred to," it is not an insult or rude word.


It’s always used derogatorily. I agree that you should change it if you don’t mean for it to come across that way.


That's simply untrue. Here are several recently published articles which use ilk in a neutral or positive context:

https://www.telecomtv.com/content/digital-platforms-services...

https://writingillini.com/2023/05/16/illinois-basketball-ill...

https://www.jpost.com/j-spot/article-742911


It is technically true that ilk is not always used derogatorily. But it is almost always derogatory in modern connotation.

https://grammarist.com/words/ilk/#:~:text=It's%20neutral.,a%....

Also, note that all of the negative examples are politics related. If a politician reads the word 'ilk', it is going to be interpreted negatively. It might be the case that ilk does "always mean" a negative connotation in politics.

You could change 'ilk' to 'friends', and keep the same meaning with very little negative connotation. There is still a slight negative connotation here, in the political arena, but it's a very vague shade, and I like it here.

"Altman and his ilk try to claim that..." is a negative phrase because "ilk" is negative, but also because "try to claim" is invalidating and dismissive. So this has elements or notes of an emotional attack, rather than a purely rational argument. If someone is already leaning towards Altman's side, then this will feel like an attack and like you are the enemy.

"Altman claims that..." removes all connotation and sticks to just the facts.


Well even if ilk had a negative connotation for my intended audience (which clearly it does to some people), I am actually trying to invalidate and dismiss Altman's arguments.


When someone is arguing from a position of strength, they don't need to resort to petty jibes.

You are already arguing from a position of strength.

When you add petty jibes, it weakens your perceived position, because it suggests that you think you need them, rather than relying solely on your argument.

(As a corollary, you should never use petty jibes. When you feel like you need to, shore up your argument instead.)


Well I didn't intend it as a "petty jibe," but in general I disagree. Evocative language and solid arguments can and do coexist.


Doesn’t matter. It won’t be well received. It sounds negative to most readers, and being technically correct wins you no points.


Well I don't think it really matters what most readers think of it because I was writing it hoping that it would be read by congressional staffers, who I think will know what ilk means.


It's also possible you could be wrong about something, and maybe people are trying to help you.


Remember: you are doing propaganda. Feelings don't care about your facts.


Not true.


I'd argue that you're right that there's nothing intrinsically disparaging about ilk as a word, but in contemporary usage it does seem to have become quite negative. I know the dictionary doesn't say it, but in my discussions it seems to have shifted towards the negative.

Consider this: "Firefighters and their ilk." It's not a word that nicely describes a group, even though that's what it's supposed to do. I think the language has moved to where we just say "firefighters" now when it's positive, and "ilk" or "et al" when there's a negative connotation.

Just my experience.


I mean, at this point I'm going to argue that if you believe ilk is only ever used derogatorily, you're only reading and hearing people who have axes to grind.

I probably live quite distally to you and am probably exposed to parts of western culture you probably aren't, and I almost never hear nor read ilk as a derogation or used to associate in a derogatory manner.


TIL! https://www.merriam-webster.com/dictionary/ilk

still, there are probably a lot of people like me who have heard it used (incorrectly it seems) as an insult so many times that it's an automatic response :-(


Don't worry, you're not a senator.

And, if there's one thing politicians are known for, it's got to be ad hominem.


Tone, intended or otherwise, is a pretty important part of communication!


There is this idea that the shape of a word, how it makes your mouth and face move when you say it, connotes meaning on its own. This is called "phonosemantics": just saying "ilk" makes one feel like they are flinging off some sticky, aggressive slime.

Ilk almost always has a negative connotation regardless of what the dictionary says.


> Kind; class; sort; type; -- sometimes used to indicate disapproval when applied to people.

> The American Heritage® Dictionary of the English Language, 5th Edition.

Yes, it is only sometimes used to indicate disapproval, but such ambiguity does not work in your favor here. It is better to remove that ambiguity.


Don't fret too much. I once wrote to my senator about their desire to implement an unconstitutional wealth tax and told them that if they wanted to fuck someone so badly they should kill themself so they could go blow Jesus, and I still got a response back.


Ilk is shorthand for similarity, nothing more. The 'personal insult' is a misunderstanding on your part.


"Ilk" definitely has a negative or dismissive connotation, at least in the US. You would never use it to express positive thoughts; you would use "stature" or similar.

The denotation may not be negative, but if you use ilk in what you see as a neutral way, people will get a different message than you're trying to send.


“ilk” has acquired a negative connotation in its modern usage.

See also https://grammarist.com/words/ilk/#:~:text=It's%20neutral.,a%....


This is too subjective to be useful.


I would be curious to see an example of 'ilk' being used in a modern, non-Scottish-locale context where the association is being shown in a neutral or positive light.

I'll give you one: National Public Lands Day: Let’s Help Elk … and Their Ilk - https://pressroom.toyota.com/npld-2016-elk/ (it's a play on words)


language is subjective


reducto ad nounium is a poor argument.


[flagged]


You can “create a lot of jobs” by banning the wheel on construction sites. Or power tools. Or electricity.


I mean, it will? Not universally, but for the specific products I work on we use additional labor when we include AI-enabled features (due to installing and wiring processors and sensors).

I think that the sorts of AI that smaller companies make will be more likely to create jobs, as opposed to getting rid of them, since they are more likely to be integrated with physical products.


>I mean, it will?

It's complicated obviously, but I think "will create jobs" leaves a lot of subtlety out of it, so I've never believed it when representatives say it and I wouldn't say it myself writing to them; but a short letter to a representative will always lack that fidelity.

I don't think anyone can guarantee that there won't be job loss with AI, so it's possible we could have a net negative (in total jobs, quality of jobs, or any dimension).

What we do see is companies shedding jobs on a (what seems like perpetual) edge of a recession/depression, so it might be worth regulating in the short term.


I agree I didn't make this clear enough in my letter. I do think AI will cause job loss, I just think it will be worse if a few companies are allowed to have a monopoly on AI. If anyone can work on AI, hopefully people can make things for themselves/create AI in a way that retains some jobs.


Quite. Meanwhile on the rest of the internet people are gleefully promoting AI as a means of firing every single person who writes words for a living. Or draws art.


GPT is so far from threatening “every single person who writes words for a living” anyway. Unless you’re writing generic SEO filler content. Not sure who is claiming that but they don’t understand how it works if they do exist at scale.

Writing has always been a low-paid, shrinking job, well before AI, aside from a tiny group of writers at the big firms. I took a journalism course for fun at UofT in my spare time and the professors had nothing but horror stories about trying to make a job out of writing (i.e. getting a NYT bestseller and getting $1 cheques in the mail). They basically told the students to only do it as a hobby unless you have some engaged niche audience, which is more about the writer being interesting than about the generic writing process.


You say that... then we encounter cases like https://news.ycombinator.com/item?id=35919753

While AI isn't going to put anyone out of a job immediately (just as automation didn't), there are legitimate risks already in that regard -- on both fiction and nonfiction sites, folks are experimenting with having AI basically write stories/pieces for them -- and the results are often good enough to have potentially put someone out of the job of writing the piece in the first place.


Pretty ignorant, pathetic, and asinine comment to make -- no wonder you're making it on a throwaway, coward ;)


Let's not focus on "the business" and instead focus on the safety.

Altman can have an ulterior motive, but that doesn't mean we shouldn't strive for having some sort of handle on AI safety.

It could be that Altman and OpenAI know exactly how this will look, and that the backlash that ensues means we get ZERO oversight and rush headlong into doom.

Short term we need to focus on the structural unemployment that is about to hit us. As the AI labs use AI to make better AI, it will eat all the jobs until we have a relative handful of AI whisperers.


Reminds me of SBF calling for crypto regulations while running FTX. Being seen as friendly to regulations is great for optics compared to being belligerently anti-regulation. You can appear responsible and benevolent, and get more opportunity to weaken regulation by controlling more of the narrative. And hey, if you end up getting some regulatory capture making competition harder, that's a great benefit too.

OpenAI != FTX, just meaning to say calling for regulation isn't an indication of good intentions, despite sounding like it.


> Reminds me of SBF calling for crypto regulations while running FTX

Scott Galloway called it the stop-me-before-I-kill-grandma defence. (Paraphrasing.)

You made money making a thing. You continue to make the thing. You’re telling us how the thing will bring doom and gloom if not dealt with (conveniently implying it will change the world). And you want to staff the regulatory body you call for with the very butchers you’re castigating.


Sure, I get it, but if Sam Altman quit tomorrow, would it stop Economic Competition -> Microsoft Shareholders -> Microsoft -> OpenAI?

Is there really a better alternative here?


Except they don't make any money from their products. They're losing hundreds of millions per month.

This isn't the same at all.


Well, this will give 'em time. Right now LLMs have become a commodity; everybody has them and can research and develop them. OpenAI is left without a unique product; it has no advantage. But if the general public is limited, it'll be hard to catch up to OpenAI.

I'm sorry for the cynicism, but Altman seems very much disingenuous with this.


OpenAI is currently registered as a non-profit, yet they're projecting a billion dollars in revenue in 2024, and they sell access to their APIs, which, if their previous spending is anything to go by, means they'll see half a billion dollars in profit, assuming they aren't going to reinvest it all.

Some big assumptions.


> OpenAI is an American artificial intelligence (AI) research laboratory consisting of the non-profit OpenAI Incorporated and its for-profit subsidiary corporation OpenAI Limited Partnership.

https://en.wikipedia.org/wiki/OpenAI

Just FYI, what you're saying isn't accurate. It was, but it's not anymore.


My internal model lacked data post-2021! I was hallucinating.


It may be more the same than you know. FTX had tons of investors that were jumpstarting and fueling the whole ponzi...

>According to a report from the Information, OpenAI's losses have doubled to $540 million since it started developing ChatGPT and similar products.

I mean, sure, that may be a drop in the bucket compared to the $29B valuation for OpenAI, but-

>Sept. 22, 2022

>Crypto Exchange FTX May Get a $32 Billion Valuation. That’s Probably Too Much.

OpenAI investors, Apr 2023-

Tiger Global Management, Andreessen Horowitz, Thrive Capital, Sequoia Capital, K2 Global

FTX investors, Jan 2022-

Insight Partners, Lightspeed Venture Partners, Tiger Global Management, New Enterprise Associates, Temasek, Institutional Venture Partners, Steadview Capital, SoftBank, Ontario Teachers’ Pension Plan, Paradigm


Are you suggesting that OpenAI is a ponzi scheme where early investors are being paid with funds from later investors?


It could be; they currently aren't really making money, right? We don't know if they can monetize it, and we know the queries are quite expensive computationally.


Isn't that true for every startup that needs capital before it knows if it will succeed? Usually we consider them high risk investments, not ponzi schemes.


I reviewed the hearing in more detail, and I'm still of the opinion that it's not the same, but now I think that Sam Altman is a huge c** for this regulatory capture BS.

I cancelled my ChatGPT Plus membership. I'll be using OSS solutions like Vicuna from now on.


FTX was also losing money.


> get more opportunity to weaken regulation by controlling more of the narrative

You've got it backwards. I bet OpenAI wants those regulations to be as restrictive as possible. They'll just negotiate an exception for themselves. With increased regulation comes an increased initial cost for competitors to get started in the space. They want to lock down their near monopoly as soon as they can.


I'm sure this is the plan, but I don't see how OpenAI will be able to damage e.g. Anthropic without equally damaging themselves.


It's not just about Anthropic & other 10-figure companies, it's about ensuring an oligopoly instead of a market with cutthroat competition.


Exactly. They want to have AI as a service. If any startup could do its own AI on the cheap, this would not be possible (or at least not so profitable). They don't mind having other big competitors; they think they can win over big competitors with their marketing and first-mover advantage.


> OpenAI != FTX, just meaning to say calling for regulation isn't an indication of good intentions, despite sounding like it.

I'd argue that any business advocating for regulation is largely motivated by its own pursuit of regulatory capture.


Didn’t Facebook / Meta also do something similar during the whole “fake news” controversy?

https://www.cnbc.com/2020/02/15/facebook-ceo-zuckerberg-call...


FB ran TV ads asking for regulation too.

What established player doesn’t want to make it as hard as possible to compete with them?


This is also a way for industry incumbents to pull up the ladder behind them.

Once you gain the lead position, it is in your interest to increase the barriers to entry as much as possible.


Or Zuck calling for more regulations of social networks?

I mean, I guess there’s always that play: maybe Sam Altman is simply feigning enthusiasm for this to head off a burgeoning regulatory coup. How can he mobilize support against the regulations he doesn’t want? What would be a great way to galvanize support against them? Maybe by doing exactly what he’s doing: by falling on his sword and making himself seem like a monopolistic supporter of those regulations.

Sacrificing his reputation for the greater good of open competition on AI? Well, that’s a truly noble move.

It’s a bit of a stretch, but maybe it’s very Washington, I don’t know.

A more logical (economic) motivation may be: this news is taken as a threat, in effect creating a sense of scarcity around AI, which could be judged to provoke people into buying more of OpenAI's subscriptions, training, or whatever, to ensure access that, in light of such news, they may now feel is at risk.

So, the move is merely a red herring, a fabricated threat designed to provoke people to spend, not intended to actually create legislation.

Don't know which one of these is more likely, if any: or maybe Sama is simply living his best life role-playing as a supervillain? I guess he's starting to remind me of Niander Wallace in Blade Runner 2049.


FWIW, OpenAI and FTX leadership share the same ideology


Which ideology is that? Only thing I've heard about is "ruthless altruism" or something like that.


"Effective Altruism", which sounds nice but when you look at it from the right angle it's just a form of 'public lobbying' rather than direct government lobbying.

"Oh, this person donated X/Y to Z/Q causes! They can't be that bad right?"


This is disappointing, I expected a bit more from OpenAI than to fall for the nerd snipe that is EA.


Make money and try to acquire a monopoly?


This is mostly not true in my experience


oy vey


I've been seeing almost weekly posts recently like "I just got out of a meeting where my company plans to replace X people with an AI that does their job for $XX/mo"

The fearmongering and astroturfing is obvious.


Just waiting for the Forbes cover to drop, then I can confirm we are doomed. lol


It's all pretty much the same less wrong-effective altruist-crypto grifter-San Francisco sex cult community. sama still grifting with worldcoin, and even before this.


Neither is it an indication of bad intentions, and I don't even think SBF was dishonest; his general behavior doesn't exactly suggest he's some Machiavellian mastermind.

This is always the first comment when someone in an industry talks about regulation, but it doesn't change the fact that regulation is needed, and they're essentially right regardless of what motivations they have.


You might say that any regulation is better than none, but bad regulation can be way more insidious and have unique dangers.

As a blunt analogy, let's say there's no law against murder right now. You and I both agree that we need a law against murder. But I have the ear of lawmakers, and I go sit down with them and tell them that you and I agree: We need a law against murder.

And then I help them write a law that makes murder illegal. Only, not all killing counts as murder, obviously. So if it's an accident, no murder. Self defense? No murder. And also if they are doing anything that "threatens" my business interests, not murder. Great, we've got a law that prevents unnecessary killing! And now I get to go ~~murder you~~ defend my business interests when you protest that the new law seems unfair.


> then I help them write a law that makes murder illegal. Only, not all killing counts as murder, obviously. So if it's an accident, no murder. Self defense? No murder…now I get to go ~~murder you~~ defend my business interests

Isn’t this a classic case of some regulation being better than none? You could have murdered them at the start, too.


Yes, but if I had murdered them at the start or even tried, maybe people would say, "Hey, this is murder and it's bad." Now I've got the force of law and authority on my side. You either allow me to do murders or you're the one causing problems. It may be quite a bit harder to change things and there will be irreparable damage before we do.


Altman is simultaneously pumping a crypto project [1].

[1] https://www.yahoo.com/news/worldcoin-chatgpt-sam-altman-ethe...


Which is sufficient reason to avoid OpenAI now, frankly.


It’s a disgrace


I think the idea is that you need some way to filter out the bots, so 'worldcoin' or 'worldid' is used to prove 'personhood'.


Here is a shocking point of view: bots are a non-issue compared to some entity amassing biometric scans of people. And they are even more of a non-issue if you sprinkle that biometric thing with cryptocurrency. Then it gets even better (worse) when it's led by a person who is all about breaking promises ("Open" AI), using fear-mongering (LLMs are so dangerous, the world will collapse!) and using regulatory capture as long as it makes him money.


Isn't the reason that the industry person is "right" about regulation being necessary usually... because the tide of public opinion is turning towards regulation, so they are getting ahead as my strategy above described? It's difficult to give credit to these folks for being "right" when it's more accurately described as "trying to save their profit margins".


" his general behavior doesn't exactly suggest he's some Machiavellian mastermind."

Come on! You don't get to the place he got to by accident. This requires careful planning and ruthless execution. He just faked being the nerdy kid who wants to do good and is surprised by the billions coming to him.


>Come on! You don't get to the place he got to by accident.

You can literally become president of the US by accident these days. SBF self-reported to a random journalist one day after all hell broke loose, with messages so incriminating the reporter had to confirm that it was a real conversation.

Half of the American elite class voluntarily sat on the board of a bogus company just because the woman running it was attractive and wore black turtlenecks. The sad reality is that these people aren't ruthless operators; they're just marginally less clueless than the people who got them into their positions.


"You can literally become president of the US by accident these days."

Who became president by accident? You may not like them personally or their politics , but I am not aware of any president that didn't put enormous amounts of work and effort over years into becoming president.


Trump spent a great deal of time during the 2016 campaign setting up projects to cash in on a loss (like a new TV station). There is very little sign that he spent time preparing to actually win and serve as president. It wasn't really an outlandish idea either; most presidential candidates these days do it primarily to raise a profile they can cash in on via punditry, books, etc.


I wouldn't call Trump's win an accident. He spoke passionately to the core political agenda of the GOP voter base: illegal immigration. Which the Neocons willfully ignored, or otherwise under-served, for decades. That's not something that one does if they aren't trying to win.


Presidential candidates put an enormous effort into winning the campaign. I agree that they don’t spend much time thinking about actual policy. Calling Trump’s win an accident is dangerous. Realistically he put in the work Clinton didn’t because she was too arrogant.


> his general behavior doesn't exactly suggest he's some Machiavellian mastermind

>> don't get to the place he got to by accident

You both agree. Bankman-Fried was a dumb Machiavellian.


Being labeled a Machiavellian may as well be a label for 'maliciously self-serving', unless you're referring to Machiavelli's work in the 'Discourses on Livy' -- and no one is ever referring to that aspect of Machiavelli when labeling people with the phrase.


Grifters have to believe their own Koolaid first before they can convince others.


This is quite incredible

Could you imagine if MS had convinced the government back in the day to require a special license to build an operating system (thus blocking Linux and everything open)?

It’s essentially what’s happening now,

Except it is OpenAI instead of MS, and it is AI instead of Linux

AI is the new Linux, they know it, and are trying desperately to stop it from happening


I guess @sama took that leaked Google memo to heart ("We have no moat... and neither does OpenAI"). Requiring a license would take out the biggest competitive threats identified in the same memo (Open Source projects) which can result in self-hosted models, which I suppose Altman sees as an existential threat to OpenAI


There is no way to stop self-hosted models. The best they could do is send the government into data centers, but what if those centers are outside US jurisdiction? Too funny to watch the gov play these losing games.


> There is no way to stop self hosted models.

edit: Current models, sure, but they will soon be outdated. I think the idea is to strangle the development of comparable, SoTA models in the future that individuals can self-host; OpenAI certainly won't release their weights, and they'd want the act of releasing weights without a license to be criminalized. If such a law is signed, it would remove the threat of smaller AI companies disintermediating OpenAI, and of individuals collaborating on any activity that results in publicly available model weights (or even make the recipe itself illegal to distribute).


I thought we got away from knowledge distribution embargoes via 1A during the encryption era.

Even if it passed, I find it hard to believe a bunch of individuals couldn't collaborate via distributed training, which would be almost impossible to prohibit. Anyone could mask their traffic or connect to an anonymous US VPN to circumvent it. The demand will be there to outweigh the risk.


> distributed training

Unfortunately this isn't a thing. E.g., the latency of synchronizing gradients (and batch-norm statistics) across nodes leaves your GPUs idle. Unless all your hardware is in the same building, training a single model would be so inefficient that it's not worth it.
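
A rough back-of-envelope sketch of the problem, assuming a 65B-parameter model, fp16 gradients, and a 1 Gbps home uplink (all assumed numbers, not measurements from anywhere in this thread):

    # Why per-step gradient synchronization over home broadband stalls training.
    # Assumed numbers: 65B parameters, fp16 gradients, 1 Gbps uplink.
    params = 65e9
    bytes_per_grad = 2                        # fp16
    grad_bytes = params * bytes_per_grad      # ~130 GB of gradient traffic per step (naive all-reduce)
    uplink_bytes_per_s = 1e9 / 8              # 1 Gbps is roughly 125 MB/s
    sync_minutes = grad_bytes / uplink_bytes_per_s / 60
    print(f"~{sync_minutes:.0f} minutes of network time per optimizer step")
    # Roughly 17 minutes per step, versus milliseconds over NVLink/InfiniBand in one
    # building, so volunteer GPUs would sit idle almost the entire time unless the
    # gradients were very heavily compressed.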


You can't strangle the development of such models because the data comes from anywhere and everywhere. Short of shutting off the entire Internet, there's nothing a government can do to prevent some guy on the opposite side of the world from hoovering up publicly accessible human text into a corpus befitting an LLM training set.


It costs a lot of money to train foundation models; that is a big hurdle for open-source models and can strangle further development.

Open source AI needs people with low stakes (Meta AI) who continue to open source foundation models for the community to tinker with


I have a question: AI is not exclusively for use with data from the internet, right? E.g., you can throw a bunch of text at it and ask it to arrange the text into a table with x columns; will that need data from the internet? I guess not; you can self-host and use it exclusively with your own data.


Sure, but they can be made illegal and difficult to share on the clear web.


Access blocks to those in the US?


I bet OpenAI is using MS connections and money for lobbying, so it's basically MS again.


just a billion dollar coincidence


Exactly, say it with me:

Embrace, Extend...

What comes after Extend?


Exterminate. But only after AGI is ready.


Enjoy! Yaay!


Extinguish!


> OpenAI instead of MS

In other words, MS with extra steps.


What's incredible to me is how humans [in this case Sam Altman] have that tendency to fulfil their given role [CEO of OpenAI] with such a single-minded purpose that they are able to block out the bigger picture entirely and rationalise away the wider consequences of their actions.

It is as if most humans lack an inner compass, a set of principles, a solid value system.

It is as if humans are robots, that fulfil their given role as if they were programmed to do so.

Our institutions, companies, political parties, etc. are these functions that humans slip into and in robot-like fashion execute their expected role like clockwork.

The free thinkers / big-picture thinkers, humanity's heroes, seem to be so rare.


I honestly just see the whole company as a lightning rod for crazy ass problems. Anywhere from copyright issues to state sponsored hacking.

I get they're excited they can make money off it, but wow, what a nightmare. I feel if they just stuck to their principles and kept it "Open" they'd still be better off in general.


Microsoft did, though. Not directly like that, because up to the '90s we still had the pretense of being free.

Microsoft did influence government spending in ways that required Windows on every government-owned computer, and in schools.


Microsoft owns 49% of OpenAI and is its primary partner and customer.

OpenAI is Microsoft.


Do you think AIs are safe? I'd bet that if you had a convincing argument that they are, then there wouldn't be a need for regulations. If you just assume that it can't possibly be that bad, you should really read what the critics have to say. I don't see a way around regulations, and I'm hoping that they'll get them right, because a mistake here will likely cost us everything.


Is this the old MS tactic of Embrace, Extend, Extinguish, albeit through the mask of OpenAI / Altman?


I'm no expert, but I'm old and I think that Unix is actually the model that won. Linux won because of Unix IMO, and I think it's too late for the regulators. Not that I understand the stuff, but like Unix, the code and the ideas are out there in universities, and even if OpenAI gets its licensing, there will be really open stuff also. So, no worries. Except for the fact that AI itself -- well, are we mature enough to handle it without supervision? Dunno.


Will 2023 be the year of desktop AI?


It seems pretty clear at this point, that OpenAI etc will lobby towards making it more difficult for new companies/entities to join the AI space, all in the name of 'safety'. They're trying to make the case that everyone should use AI through their APIs so that they can keep things in check.

Conveniently this also helps them build a monopoly. It is pretty aggravating that they're bastardizing and abusing terms like 'safety' and 'democratization' while doing this. I hope they'll fail in their attempts, or that the competition rolls over them rather sooner than later.

I personally think that the greatest threat in these technologies is currently the centralization of their economic potential, as it will lead to an uneven spread of their productivity gains, further divide poor and rich, and thus threaten the order of our society.


My biggest concern with AI is that it could be controlled by a group of oligarchs who care about nothing more than enriching themselves. A "Linux" version of AI that anyone can use, experiment with, and build off of freely would be incredible. A heavily restricted, policed and surveilled system controlled by ruthlessly greedy companies like OpenAI and Microsoft sounds dystopian.


> A "Linux" version of AI that anyone can use, experiment with, and build off of freely would be incredible.

That should be the goal.


>A "Linux" version of AI

the issue here is that a 'Linux' of AI would be happy to use the N-word and stuff like that. It's politically untenable.


Yep, just like a keyboard will let someone type whatever bad words they want to, which I think is much better than the alternative. Imagine if early PCs had been locked down as far as which words could be typed and had a million restrictions on which lines of code could be executed. It would have led to a dismal invention.

I do think you're probably right about AI though. Too many influential groups are going to get too mad about the words an open model will output. Only allowing locked-down models is going to severely limit their usefulness for all sorts of novel creative and productive use cases.


> I personally think that the greatest threat in these technologies is currently the centralization of their economic potential, as it will lead to an uneven spread of their productivity gains, further divide poor and rich, and thus threaten the order of our society.

Me too, in comparison all the other potential threats discussed over here feel mostly secondary to me. I'm also suspecting that at the point where these AIs reach a more AGI level, the big players who have them will just not provide any kind of access all together and just use them to churn out an infinite amount of money-making applications instead.


Nb. Altman wants lenient regulations for companies that might leverage OpenAI's foundational models.


My gut feeling is that the majority of AI safety discussions are driven by companies that fear losing their competitive edge to small businesses. Until now, it's been challenging to grow a company beyond a certain size without employing an army of lawyers, human resources professionals, IT specialists, etc. What if two lawyers and an LLM could perform the same work as a legal department at a Fortune 500 company? The writing is on the wall for many white-collar jobs, and if these LLMs aren't properly regulated, it may be the large companies that end up drawing the short straw.

How many of Microsoft's 221k employees exist solely to support the weight of a company with 221k people? A smaller IT department doesn't need a large HR department. And a small HR department doesn't file many tickets with IT. LLM driven multinationals will need orders of magnitude fewer employees, and that puts our current multinationals in a very awkward position.

Personally, I will be storing a local copy of LLaMA 65B for the foreseeable future. Instruct fine-tuning will keep getting cheaper; given the stakes, the large models might not always be easy to find.
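
For anyone keeping a local copy around, here's a minimal sketch of loading it for offline inference with Hugging Face transformers; the directory name is a hypothetical path, and a 65B model realistically needs aggressive quantization (e.g. llama.cpp) or serious hardware to run at home:

    # Minimal sketch: run a prompt against a locally stored LLaMA checkpoint.
    # Assumes the weights were already converted to Hugging Face format and saved
    # under ./llama-65b (hypothetical path); device_map="auto" needs `accelerate`.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_dir = "./llama-65b"
    tokenizer = AutoTokenizer.from_pretrained(model_dir)
    model = AutoModelForCausalLM.from_pretrained(model_dir, device_map="auto")

    prompt = "Summarize the case for open-source foundation models:"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=200)
    print(tokenizer.decode(output[0], skip_special_tokens=True))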


Regulation favors the large, as they can more easily foot the bill.


Also, their lobbyists write the bill for congress.


Large corporations have not been able to do anything about filesharing technologies like bittorrent.

This entire legal argument is almost beside the point, because development on open source models can continue on onion sites too.

It will only affect commercial operations, and they won't be able to release a fine-tuned model based on a banned LLM even if they do have a pre-ban copy saved.


This is so stupid it's exactly what you would expect from Congress.

If this were to go through, of course OpenAI and co. would be the primary lobbyists to ensure they get to define the filters for such a license.

Also, how would you even enforce this? It's absolute nonsense and a clear indicator that these larger companies realize there is no 'gatekeeping' these AIs; the democratization of models has demonstrated incredible gains over their own.

Edit: Imagine during the early days of the internet you needed a license to start a website.

In the later days you needed a license to start a social media site.

Nonsense.


It is nonsense, yet a similar thing happened recently with texting

Before Twilio et al, if you wanted to text a lot of your customers automatically, you had to pay thousands of dollars in setup and recurring fees to rent a short code number

But then, with Twilio et al, you don’t need a shortcode anymore

The telcos told the regulators this would create endless spam, so they would regulate it themselves, and created a consortium to “end spam”

Now you are forced to get a license from them, pay a monthly fee, get audited, and they still let pretty much all the spam through, while they also randomly block a certain % of your messages, even if you are fully compliant


This is the direct result of the merger of Sprint and T-Mobile. They swore up and down to Congress that they would NOT raise prices on consumers[0]. So instead they turned around and, like gangsters, said to every business in the US sending text messages: "It'd be a real shame if those text reminders you wanted to send stopped working... Good thing you can instead pay us $40/month to be sure those messages are delivered."

At the same time, AT&T and Verizon said "oh snap, let's make money on this too." Still being pissed about STIR/SHAKEN, they wanted to get ahead of it for texting before Congress forces it on them. This way they can make money on it before it's forced.

[0] https://fortune.com/2019/02/04/t-mobiles-john-legere-promise...


they can't even stop robocalls.


Yeah. Regulation is fine, if thoughtfully done. "Licensing" is ridiculous. We all know the intent - OpenAI gets the first license along with a significant say in who else gets a license. No thanks.


Problem is that regulation is rarely thoughtfully done. It's almost always a knee jerk response to the new thing.


> Nonsense.

Give the politicians time: I predict a day will come where you will need a permit to use a compiler and connect the result to the internet.


All the worst outcomes start with regulation. If something as disruptive as AGI is coming within 20 years, the powers that be will absolutely up their efforts in the war on general computing.


wait they are talking about licenses for giant arrays and for loop interactions? I know I am wildly oversimplifying, but yes that is nonsense.


well there was a time not too long ago where cryptography-related code was pretty heavily regulated, too


just rename everything to ML, in fact you could start a company and call it OpenML


hehe, I was thinking "Loops With Rules".


It will probably require restricting GPU sales among other things.


If you would like to email The Subcommittee on Privacy, Technology, & the Law to express your feelings on this, here are the details:

Majority Members

Chair Richard Blumenthal (CT) brian_steele@blumenthal.senate.gov

Amy Klobuchar (MN) baz_selassie@klobuchar.senate.gov

Chris Coons (DE) anna_yelverton@coons.senate.gov

Mazie Hirono (HI) jed_dercole@hirono.senate.gov

Alex Padilla (CA) Josh_Esquivel@padilla.senate.gov

Jon Ossoff (GA) Anna_Cullen@ossoff.senate.gov

Majority Office: 202-224-2823

Minority Members

Ranking Member Josh Hawley (MO) Chris_Weihs@hawley.senate.gov

John Kennedy (LA) James_Shea@kennedy.senate.gov

Marsha Blackburn (TN) Jon_Adame@blackburn.senate.gov

Mike Lee (UT) Phil_Reboli@lee.senate.gov

John Cornyn (TX) Drew_Brandewie@cornyn.senate.gov

Minority Office: 202-224-4224


I just want to also chime in here and say this is what I expected from the folks who currently control this tech: to leverage political connections to legally cement themselves in the market as the leaders and disallow the common plebian from using the world-changing tech. It enrages me SO MUCH that people act like this. We could be colonizing planets, but instead a few people want to keep all the wealth and power for themselves. I can't wait to eat the rich; my fork will be ready.


I don’t understand what their goal is; the law can only reach within the USA (and EU). Are they not afraid of terrorist competitors? It’s like it will be legal to build LLMs everywhere except the USA.

Sounds like the USA would shoot itself in the foot.


I don’t understand this argument. It seems to presume that if the US has no regulation then the danger from other countries is lessened more than the danger from within is raised. This does not seem obviously true.

If I wanted AI to be regulated everywhere and I was Sam Altman, I’d probably start with convincing the US.


That got a little dark at the end. Surely some other remedies short of cannibalism would suffice.


Agreed. I used to have a diet of eating the rich, but after I found out about the greenhouse gas emissions needed to produce just one free-range rich person, I've switched to ramen. /s


Giggles



Yes, but OP goes overboard by expanding the metaphor to include forks, knives, napkins, and barbecue sauce.


If you're worried about the metaphor, then you might be on the menu. I didn't really talk about the whole table setting, either, but I do have my table set and ready for the meal.


Maybe.


Is this just to put up a barrier to entry to new entrants in the market so they can have a government enforced monopoly?


It's also to prevent open source research from destroying his business model, which depends on him having a completely proprietary technology.


it is 100% pulling the ladder up behind them


I like that analogy


Why would OpenAI be worried about new entrants that are almost certainly too small to present a business threat?

What regulation are they proposing that is actually a serious barrier to making a company around AI?

If OpenAI just wants to prevent another OpenAI eating its lunch, the barrier there is raw compute. Companies that can afford that can afford to jump regulatory hurdles.


OpenAI have no moat.

The open source community will catch up in at most a year or two; they are scared and now want to use regulations to strangle the competition.

While their AI is going to advance as well, the leap will not be as qualitative as ChatGPT gen 1 was, so they will lose their competitive advantage.


OpenAI has plenty of moats if it looks for them.

The trick is that companies' moats against commoditization (open source or not) usually have little to do with raw performance. Linux could in theory do everything Mac or Windows do, but Apple and Microsoft are still the richest companies in the world. Postgres can match Oracle, but Larry Ellison still owns a private island.

The moats are usually in products (bet: There will not be any OSS product using LLM within a year. Most likely not within two. No OSS product within two or three years or even a decade will come close to commercial offerings in practice), API, current service relations, customer relations, etc. If OpenAI could lock customers to its embeddings and API, or embed its products in current moats (e.g. Office 365) they'll have a moat. And it won't matter a bit what performance OSS models say they have, or what new spin Google Research would come up with.


OpenAI doesn't want to be one of Windows/Mac/Linux; it wants what Microsoft was trying 20 years ago, where it wanted to strangle every OS not named Windows. Ironically OpenAI is now half owned by Microsoft.

It doesn't want to be one of the successful companies, it wants to be the only one, like it is now, but forever.


>If OpenAI just wants to prevent another OpenAI eating its lunch, the barrier there is raw compute.

Stable Diffusion pretty much killed DALL-E, cost only $600k to train, and can be run on iPhones.


This. DALL-E (at least the currently available version) is way too focused on "safety" to be interesting. The creativity unleashed by the SD community has been mind-blowing.


And you can train your own SD from scratch for 50-100k now.


> Why would OpenAI be worried about new entrants that are almost certainly too small to present a business threat?

Because this is the reason that VCs exist in the first place. They can roll a company with a ton of capital, just like they did with ride share companies. When that happens, and there aren't sufficient barriers to entry, it's a race to the bottom.


OpenAI was the new entrant that almost certainly didn't pose a threat to Google.

This is classic regulatory capture.


> If OpenAI just wants to prevent another OpenAI eating its lunch, the barrier there is raw compute.

FB, Amazon, Google (and possibly Apple) can afford both the money and the compute resources for that. They probably couldn't do it themselves due to corporate politics and bureaucracy, but MS and OpenAI showed how to solve that problem. They definitely don't want their competitors to copy the strategy, so they're blatantly asking for explicit whitelisting instead of typical safety regulation.

And note that AI compute efficiency is a rapidly developing area, and OpenAI definitely knows the formula won't stay the same in the coming years. Expect LLMs to be 10x more efficient than the current SOTA in the foreseeable future, which will probably make them economical even without big tech's backing.


With browsers now able to access the GPU, it's not long until you simply need to leave a website open overnight to help train a "SETI@home"-style effort for an open-source AI project.


> What regulation are they proposing that is actually a serious barrier to making a company around AI?

Requiring a license to buy or lease the requisite amount of powerful enough GPUs might just do the trick


My main concern is what new regulations would do to open source and hobbyist endeavors. They will be the least able to adapt to regulations.


   y e s.


Always has been


sir yes sir


yes.


It could be, but it could also be because he is genuinely worried about the future impact of runaway capitalism without guardrails + AI.


Then the government should take over OpenAI


Or.. the government could try to apply sensible regulations so that OpenAI and other corporations are less likely to harm society.


Then the government has to spend so much time/money enforcing the rules. When there are few players, cutting out the middlemen provides more value.


I don't think nationalizing AI corporations is feasible (I doubt it's legal, either) or in the best interests of the United States. It would handicap the development of AI, we would lose our head start, and other countries like China would be able to take the lead.

What value do you see nationalization providing? Generally it's done by countries whose natural resources are being extracted by foreign companies that take all the profits for themselves. Nationalizing lets those countries keep the profits. I'm not sure how it would work for knowledge-based companies like OpenAI.


Or end capitalism! One or the other!


Please don't, I quite like not going hungry every night


Governments taking over key industries is part of capitalism.


Also that he knows how inefficient and dumb government is. By the time the regulations are in place they won't matter one iota.


I think most of Congress needs help from their grandchildren to use a computer or smartphone; I'm pretty sure they don't understand one bit of this.


Right, that's the point. Whatever he tells them now will be useless by the time they understand it.


And their grandkids (and you) don't know a thing about a federal regulation.


This is going to be RSA export restrictions all over again. I wish the regulators the best of luck in actually enforcing this. I'm tempted to think that whatever regulations they put in place won't really matter that much, and progress will march on regardless.

Give it a year and a 10x more efficient algorithm, and we'll have GPT4 on our personal devices and there's nothing that any government regulator will be able to do to stop that.


Yeah and one of the fundamental assumptions around statements like “let’s make AI a licensed regime” is the idea that we know what AI even is. This idea is banking on current technology being the best algorithm or method to produce “AI” and the whole lesson from the “we have no moat” crowd is that this is actually quite uncertain. Even if they succeed in getting some class of model like LLMs under “regulatory capture” - the technology they are working with today is likely to be undermined still by something cheaper working on weaker hardware and with smaller datasets and it’ll probably happen faster if they seek this market capture.

So yes it is quite comparable to the export restrictions of the 90s.

But since Microsoft is involved and we are all of course thinking about Windows vs Linux, I think another good comparison is the worst assumption Microsoft made in the 90s: “we know what an operating system is and what it is for.”


Agreed. Places like Hugging Face, or even torrents, are enabling an unstoppable decentralised AI race. This is like fighting media piracy. Plus, other countries might outcompete you on AI now.


Enforcing this is easy. The top high performance GPU manufacturers (Nvidia, AMD, Intel) are all incorporated in the U.S.


Back when crypto was a munition, common people didn't care about being able to invent their own crypto algorithms. They just wanted to use the existing ones, and all they needed to do that was some C code that Phil Zimmermann famously published in a book, to get around export controls. Controlling GPUs won't control the use of AI because GPUs are only needed to train new models. If you just want to use a large language model then CPU works great and you just need one guy to publish the weights.

That happened a few months ago with LLaMA. Since then, the open source community has exceeded all expectations democratizing the hardware angle. AI regulators are already checkmated and they don't know it yet. If their goal is to control the use of AI (rather than simply controlling the people building it) then they'd need to go all the way when it comes to tyranny in order to accomplish their goals. Intel would need to execute Order 66 with their Management Engine and operating systems would need to modify their virus scanners to monitor and report the use of linear algebra. It'd be outrageous.
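
To make the CPU-inference point concrete, here's a minimal sketch, assuming the llama-cpp-python bindings and a locally downloaded quantized weights file (the model path and prompt below are hypothetical). Once the weights exist on disk, nothing in this requires a data center or a license-controlled GPU.

    # Minimal sketch: CPU-only inference on published LLaMA-family weights.
    # Assumes `pip install llama-cpp-python` and a quantized weights file on
    # disk; the path below is hypothetical.
    from llama_cpp import Llama

    llm = Llama(model_path="./models/llama-7b-q4.bin", n_ctx=2048)  # runs on CPU by default
    out = llm("Q: Can weights sitting on a hard drive be export-controlled? A:", max_tokens=64)
    print(out["choices"][0]["text"])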


Meaning we won't be able to buy an A100 without a license... Wait, I can't afford an A100 anyway.


As a point of trivia, at one time "a" Mac was one of the fastest computers in the world.

https://www.top500.org/lists/top500/2004/11/ and https://www.top500.org/system/173736/

And while 1100 Macs wouldn't exactly be affordable, the idea of trying to limit commercial data centers gets amusing.

That system was "only" 12,250.00 GFlop/s - I could do that with a small rack of Mac M1 minis now for less than $10k and fewer computers than are in the local grade school computer room.

(and I'm being a bit facetious here) Local authorities looking at power usage and heat dissipation for marijuana growing places might find underground AI training centers.


All the crypto mining hardware flooding the market right now is being bought up by hobbyists training and fine tuning their own models.


My "in my copious free time" ML project is a classifier for cat pictures to reddit cat subs.

For example: https://commons.wikimedia.org/wiki/File:Cat_August_2010-4.jp... would get classified as /r/standardissuecat

https://stock.adobe.com/fi/images/angry-black-cat/158440149 would get classified as /r/blackcats and /r/stealthbombers

Anyways... that's my hobbyist ML project.


We needed crypto to crash so that gamers and AI enthusiasts could get GPUs.


I used to be very enthusiastic about the tech industry and the Silicon Valley culture before getting into it, but having worked in tech for a while I feel very demoralized and disillusioned with all the blatant lies and hypocrisy that seems central to business.

I wouldn’t mind ruthless anti-competitive approaches to business as much, but the hypocrisy is really demoralizing.


I also got quite disillusioned a few times and I probably will again. What helped me regain a bit of hope was looking at what the open source community is up to (stuff like SourceHut instead of GitHub, for example). I get joy from moving away from big monopolistic orgs to smaller projects; it feels quite liberating. Recently I also moved away from VS Code back to vim, and that has been fun, even if vim requires a bit more work. There is something freeing about using a different paradigm, one that isn't so obsessed with reach and usability but instead with expressiveness and growth. It reminds me of the internet I used to love rather than the one I grew to hate. Maybe that will help you a bit as well?


For me it was when I figured out what the “gig economy” was - a way to make money off people's labor without all the annoyances that come with having employees.


At 5 you stop believing in Santa Claus,

At 25 you stop believing in love,

At 40 you stop believing in corporations’ sincerity?


Try 29


Was about 15 for me but I read cyberpunk in the 90’s which shaped my view of powerful private entities.


was about to say...40? 29?


Imagine thinking that regression-based function approximators are capable of anything other than fitting the data you give them. Then imagine willfully hyping them up and scaring people who don't understand, and, because they can predict words, taking advantage of the human tendency to anthropomorphize so that it seems to follow that they are capable of generalized and adaptable intelligence.

Shame on all of the people involved in this: the people in these companies, the journalists who shovel shit (hope they get replaced real soon), researchers who should know better, and dementia ridden legislators.

So utterly predictable and slimy. All of those who are so gravely concerned about "alignment" in this context, give yourselves a pat on the back for hyping up science fiction stories and enabling regulatory capture.


What do you think about the papers showing mathematical proofs that GNNs (i.e. GATs/transformers) are dynamic programmers and therefore perform algorithmic reasoning?

The fact that these systems can extrapolate well beyond their training data by learning algorithms is quite different than what has come before, and anyone stating that they "simply" predict next token is severely shortsighted. Things don't have to be 'brain-like' to be useful, or to have capabilities of reasoning, but we have evidence that these systems have aligned well with reasoning tasks, perform well at causal reasoning, and we also have mathematical proofs that show how.

So I don't understand your sentiment.


To be fair, LLMs are predicting the next token. It's just that to get better and better predictions they need to understand some level of reasoning and math. However, it feels to me that a lot of this reasoning is brute-forced from the training data. Like, ChatGPT gets some things wrong when adding two very large numbers. If it really knew the algorithm for adding two numbers, it shouldn't be making those mistakes in the first place. I guess the same goes for issues like hallucinations. We can keep pushing the envelope using this technique, but I'm sure we will hit a limit somewhere.


Of course it predicts the next token. Every single person on earth knows that, so it's not worth repeating at all.

As for the fact that it gets things wrong sometimes - sure, this doesn't say it actually learned every algorithm (in whichever model you may be thinking about). But the nice thing is that we now have this proof via category theory, and it allows us to both frame and understand what has occurred, and to consider how to align the systems to learn algorithms better.


The fact that it sometimes fails simple algorithms for large numbers but shows good performance in other complex algorithms with simple inputs seems to me that something on a fundamental level is still insufficient


You're focusing too much on what the LLM can handle internally. No, LLMs aren't good at math, but they understand mathematical concepts and can use a program or tool to perform calculations.

Your argument is the equivalent of saying humans can't do math because they rely on calculators.

In the end what matters is whether the problem is solved, not how it is solved.

(assuming that the how has reasonable costs)
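
A minimal sketch of what "use a tool to perform calculations" can look like; everything here is hypothetical (ask_llm stands in for whatever model API you use, and the CALC: convention is just an illustrative protocol, not a real plugin interface):

    import re

    def ask_llm(prompt: str) -> str:
        """Hypothetical stand-in for a call to an LLM API."""
        # A model prompted to delegate arithmetic might reply like this:
        return "The sum is CALC: 48230498234 + 2342342342"

    def answer_with_calculator(question: str) -> str:
        reply = ask_llm(question)
        # If the model delegates the arithmetic, evaluate it exactly instead of
        # trusting the model's next-token guess at the digits.
        match = re.search(r"CALC:\s*(\d+)\s*\+\s*(\d+)", reply)
        if match:
            a, b = map(int, match.groups())
            return str(a + b)
        return reply

    print(answer_with_calculator("What is 48230498234 + 2342342342?"))  # 50572840576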


Humans are calculators


Insufficient for what? Humans regularly fail simple algorithms for small numbers, nevermind large numbers and complex algorithms


> Of course it predict the next token. Every single person on earth knows that so it's not worth repeating at all

What's a token?


A token is either a common word or a common-enough word fragment. Rare words are expressed as multiple tokens, while frequent words get a single token. Together they form a vocabulary of 50k up to 250k entries. It is possible to write any word or text as a combination of tokens. In the worst case 1 token can be 1 char, say, when encoding a random sequence.

Tokens exist because transformers don't work on bytes or words. This is because it would be too slow (bytes), the vocabulary too large (words), and some words would appear too rarely or never. The token system allows a small set of symbols to encode any input. On average you can approximate 1 token = 1 word, or 1 token = 4 chars.

So tokens are the data type of input and output, and the unit of measure for billing and context size for LLMs.
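
To see this in practice, here's a small sketch using OpenAI's tiktoken library (cl100k_base is the encoding used by the GPT-3.5/4-era models; exact splits depend on the vocabulary):

    # pip install tiktoken
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")

    for text in ["hello", "hello world", "antidisestablishmentarianism"]:
        ids = enc.encode(text)
        print(text, "->", len(ids), "tokens:", [enc.decode([i]) for i in ids])
    # Common words come out as a single token; rarer words split into fragments.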


Both of these statements can be true:

1. ChatGPT knows the algorithm for adding two numbers of arbitrary magnitude.

2. It often fails to use the algorithm in point 1 and hallucinates the result.

Knowing something doesn't mean it will get it right all the time. Rather, an LLM is almost guaranteed to mess up some of the time due to the probabilistic nature of its sampling. But this alone doesn't prove that it only brute-forced task X.


> If it really knew the algorithm for adding two numbers it shouldn't be making them in the first place.

You're using it wrong. If you asked a human to do the same operation in under 2 seconds without paper, would the human be more accurate?

On the other hand if you ask for a step by step execution, the LLM can solve it.


I never told the LLM it needed to answer immediately. It can take its time and give the correct answer. I'd prefer that, even.


2 seconds? What model are you using?


GPT 3.5 is that fast.


am i bad at authoring inputs?

no, it’s the LLMs that are wrong.


Create two random 10 digit numbers and sit down and add them up on paper. Write down every bit of inner monologue that you have while doing this or just speak it out loud and record it.

ChatGPT needs to do the same process to solve the same problem. It hasn’t memorized the addition table up to 10 digits and neither have you.
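
For reference, the grade-school procedure being described (work right to left, add one column at a time, carry as you go) is only a few lines when written out explicitly. This sketch is just to pin down what "the same process" means, not a claim about how the model represents it internally:

    def long_addition(a: str, b: str) -> str:
        """Grade-school addition: right to left, one column at a time, with a carry."""
        width = max(len(a), len(b))
        a, b = a.zfill(width), b.zfill(width)
        digits, carry = [], 0
        for da, db in zip(reversed(a), reversed(b)):
            column = int(da) + int(db) + carry
            digits.append(str(column % 10))
            carry = column // 10
        if carry:
            digits.append(str(carry))
        return "".join(reversed(digits))

    print(long_addition("4807526976", "7778742049"))  # -> 12586269025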


No, but I can use a calculator to find the correct answer. It's quite easy in software because I can copy-and-paste the digits so I don't make any mistakes.

I just asked ChatGPT to do the calculation both by using a calculator and by using the algorithm step-by-step. In both cases it got the answer wrong, with different results each time.

More concerning, though, is that the answer was visually close to correct (it transposed some digits). This makes it especially hard to rely on because it's essentially lying about the fact it's using an algorithm and actually just predicting the number as a token.


You asked it to use a calculator plugin and it didn't work? Or did you just say "use a calculator", which it doesn't have access to, so how would you expect that to work? With a minimal amount of experimentation I can get correct answers for up to 7-digit numbers so far, even with 3.5. You just have to give it a good example; the one I used was to add each column and then add the results one at a time to a running total. It does make mistakes, and we had to build up to that by doing 3-digit, then 4-digit, then 5, etc., but it was working pretty well, and 3.5 isn't the sharpest tool in the shed.

Anyways, criticizing its math abilities is a bit silly considering it’s a language model, not a math model. The fact I can teach it how to do math in plain English is still incredible to me.


It’s not that incredible to me given the sheer amount of math that goes into its construction.

I digress. The critique I have for it is much more broad than just its math abilities. It makes loads of mistakes in every single nontrivial thing it does. It’s not reliable for anything. But the real problem is that it doesn’t signal its unreliability the way an unreliable human worker does.

Humans we can’t rely on are don’t show up to work, or come in drunk/stoned, steal stuff, or whatever other obvious bad behaviour. ChatGPT, on the other hand, mimics the model employee who is tireless and punctual. Who always gets work done early and more elaborately than expected. But unfortunately, it also fills the elaborate result with countless errors and outright fabrications, disguised as best as it can like real work.

If a human worker did this we’d call it a highly sophisticated fraud. It’s like the kind of thing Saul Goodman would do to try to destroy the reputation of his brother. It’s not the kind of thing we should celebrate at all.


Honestly, you just sound salty now. Yes it makes mistakes that it isn’t aware of and it probably makes a few more than an intern given the same task would but as long as you’re aware of that it is still a useful tool because it is thousands of times faster and cheaper than a human and has a much broader knowledge. People often compare it to the early days of Wikipedia and I think that’s apt. Everyone is still going to use it even if we have to review the output for mistakes because reviewing is a lot easier and faster than producing the material in the first place.


I've already seen other posts and comments on HN where people have talked about putting it into production. What they've found is that the burden of having to proof-read and edit the output with extreme care completely wipes out any time you might save with it. And this requires skilled editors/writers anyway, so it's not like you could use it to replace advanced writers with a bunch of high school kids using AI.


This is so far off from how they really work. It's not reasoning about anything. And, even less human-like, it has not memorized multiplication tables at all; it can't "do" math. It is just memorizing everything anyone has ever said and miming as best it can what a human would say in that situation.


Sorry, you’re wrong. Go read about how deep neural nets work.


This is one thing that makes me think those claiming "it isn't AI" are just caught up in cognitive dissonance. For LLMs to function, we basically have to make them reason out, in steps, the way we learned to do in school: literally make them think, or use an inner monologue, etc.


It is funny. Lots of criticisms amount to “this AI sucks because it’s making mistakes and bullshitting like a person would instead of acting like a piece of software that always returns the right answer.”

Well, duh. We’re trying to build a human like mind, not a calculator.


Not without emotions and chemical reactions. You are building a word predictor


What is the difference between a word predictor and a word selector?

Have not humans been demonstrated, time and time again, to be always anticipating the next phrase in a passage of music, or the next word in a sentence?


This is not at all how it works. There is no inner monologue or thought process or thinking happening. It is just really good at guessing the next word or number or output. It is essentially brute forcing.


And LLMs will never be able to reason about mathematical objects and proofs. You cannot learn the truth of a statement by reading more tokens.

A system that can will probably adopt a different acronym (and gosh that will be an exciting development... I look forward to the day when we can dispatch trivial proofs to be formalized by a machine learning algorithm so that we can focus on the interesting parts while still having the entire proof formalized).


You should read some of the papers referred to in the above comments before making that assertion. It may take a while to realize the overall structure of the argument, how the category theory is used, and how this is directly applicable to LLMs, but if you are in ML it should be obvious. https://arxiv.org/abs/2203.15544


There are methods of proof that I'm not sure dynamic programming is fit to solve but this is an interesting paper. However even if it can only solve particular induction proofs that would be a big help. Thanks for sharing.


You know the algorithm for arithmetic. Are you telling me you could sum any large numbers first attempt, without any working and in less than a second 100% of the time?


I don't get why the sudden fixation on time, the model is also spending a ton of compute and energy to do it


I could with access to a computer


If you get to use a tool, then so does the LLM.


Give me a break. Very interesting theoretical work and all, but show me where it's actually being used to do anything of value, beyond publication fodder. You could also say MLPs are proved to be universal approximators, and can therefore model any function, including the one that maps sensory inputs to cognition. But the disconnect between this theory and reality is so great that it's a moot point. No one uses MLPs this way for a reason. No one uses GATs in systems that people are discussing right now either. GATs rarely even beat GCNs by any significant margin in graph benchmarks.


Are you saying that the new mathematical theorems that were proven using GNNs from Deepmind were not useful?

There were two very noteworthy (Perhaps Nobel prize level?) breakthroughs in two completely different fields of mathematics (knot theory and representation theory) by using these systems.

I would certainly not call that "useless", even if they're not quite Nobel-prize-worthy.

Also, "No one uses GATs in systems people discuss right now" ... Transformerare GATs (with PE) ... So, you're incredibly wrong.


You’re drinking from the academic marketing koolaid. Please tell me: where are these methods being applied in AI systems today?

And I’m so tired of this “transformers are just GNNs” nonsense that Petar has been pushing (who happens to have invented GATs and has a vested interest in overstating their importance). Transformers are GNNs in only the most trivial way: if you make the graph fully connected and allow everything to interact with everything else. I.e., not really a graph problem. Not to mention that the use of positional encodings breaks the very symmetry that GNNs were designed to preserve. In practice, no one is using GNN tooling to build transformers. You don’t see PyTorch geometric or DGL in any of the code bases. In fact, you see the opposite: people exploring transformers to replace GNNs in graph problems and getting SOTA results.

It reminds me of people that are into Bayesian methods always swooping in after some method has success and saying, “yes, but this is just a special case of a Bayesian method we’ve been talking about all along!” Yes, sure, but GATs have had 6 years to move the needle, and they’re nowhere to be found within the modern AI systems this thread is about.


The paper shows the equivalence for specific networks; it doesn't say every GNN (and thus every transformer) is a dynamic programmer. Also, the models are explicitly trained on that task, in a regime quite different from ChatGPT's. What the paper shows and the possibility of LLMs being able to reason are pretty much completely independent of each other.


> What do you think about the papers showing mathematical proofs that GNNs (i.e. GATs/transformers) are dynamic programmers and therefore perform algorithmic reasoning?

Do you have a reference?


>>What do you think about the papers showing mathematical proofs that GNNs (i.e. GATs/transformers) are dynamic programmers and therefore perform algorithmic reasoning?

Do you mind linking to one of those papers?


I just don't get how the average HN commenter thinks (and gets upvoted) that they know better than e.g. Ilya Sutskever who actually, you know, built the system. I keep reading this "it just predicts words, duh" rhetoric on HN which is not at all believed by people like Ilya or Hinton. Could it be that HN commenters know better than these people?


That is the wrong discussion. What are their regulatory, social, or economic policy credentials?


I'm not suggesting that they have any. I was reacting to srslack above making _technical_ claims why LLMs can't be "generalized and adaptable intelligence" which is not shared by said technical experts.


No one is claiming to know better than Ilya. Just recognition of the fact that such a license would benefit these same individuals (or their employers) the most. I don't understand how HN can be so angry about a company that benefits from tax law (Intuit) advocating for regulation while also supporting a company that would benefit from an AI license (OpenAI) advocating for such regulation. The conflict of interest isn't even subtle. To your point, why isn't Ilya addressing the committee?


2 reasons:

1. He's too busy building the next generation of tech that HN commenters will be arguing about in a couple months' time.

2. I think Sam Altman (who is addressing the committee) and Ilya are pretty much on the same page on what LLMs do.


I am reminded of the Mitchell and Webb "Evil Vicars" sketch.

"So, you've thought about eternity for an afternoon, and think you've come to some interesting conclusions?"


The thing is, experts like Ilya Sutskever are so deep in that shit that they are heavily biased (from a tech and social/economic) perspective. Furthermore, many experts are wrong all the time.

I don't think the average HN commenter claims to be better at building these systems than an expert. But to criticize, especially on economic, social, and political levels, one doesn't need to be an expert on LLMs.

And finally, the motivation of people like Sam Altman and Elon Musk should be clear to everybody with half a brain by now.


I honestly don't question Altman's motivations that much. I think he's blinded a bit by optimism. I also think he's very worried about existential risks, which is a big reason why he's asking for regulation. He's specifically come out and said in his podcast with Lex Fridman that he thinks it's safer to invent AGI now, when we have less computing power, than to wait until we have more computing power and the risk of a fast takeoff is greater, and that's why he's working so hard on AI.


He's just cynical and greedy. Guy has a bunker with an airstrip and is eagerly waiting for the collapse he knows will come if the likes of him get their way

They claim to serve the world, but secretly want the world to serve them. Scummy 101


Having a bunker is also consistent with expecting that there's a good chance of apocalypse but working to stop it.


srslack above was making technical claims why LLMs can't be "generalized and adaptable intelligence". To make such statements, it surely helps if you are a technical expert at building LLMs.


Maybe I'm not "the average HN commenter" because I am deep in this field, but I think the overlap of what these famous experts know, and what you need to know to make the doomer claims is basically null. And in fact, for most of the technical questions, no one knows.

For example, we don't understand fundamentals like these:

- "intelligence": how it relates to computing, what its connections/dependencies to interacting with the physical world are, its limits, etc.

- emergence, and in particular an understanding of how optimizing one task can lead to emergent ability on other tasks

- deep learning: what its limits and capabilities are. It's not at all clear that "general intelligence" even exists in the optimization space the parameters operate in.

It's pure speculation on behalf of those like Hinton and Ilya. The only thing we really know is that LLMs have had surprising ability to perform on tasks they weren't explicitly trained for, and even this amount of "emergent ability" is under debate. Like much of deep learning, that's an empirical result, but we have no framework for really understanding it. Extrapolating to doom and gloom scenarios is outrageous.


I'm what you'd call a doomer. Ok, so if it is possible for machines to host general intelligence, my question is, what scenario are you imagining where that ends well for people?

Or are you predicting that machines will just never be able to think, or that it'll happen so far off that we'll all be dead anyway?


My primary argument is that we not only don't have the answers, but don't even really have well-posed questions. We're talking about "General Intelligence" as if we even know what that is. Some people, like Yann LeCun, don't think it's even a meaningful concept. We can't even agree which animals are conscious, whatever that means. Because we have so little understanding of the most basic of questions, I think we should really calm down, and not get swept away by totally ridiculous scenarios, like viruses that spread all over the world and kill us all when a certain tone is rung, or a self-fabricating organism with crystal blood cells that blots out the sun, as were recently proposed by Yudkowsky as possible scenarios on EconTalk.

A much more credible threat are humans that get other humans excited, and take damaging action. Yudkowsky said that an international coalition banning AI development, and enforcing it on countries that do not comply (regardless of whether they were part of the agreement) was among the only options left for humanity to save itself. He clarified this meant a willingness to engage in a hot war with a nuclear power to ensure enforcement. I find this sort of thinking a far bigger threat than continuing development on large language models.

To more directly answer your question, I find the following scenarios equally or more plausible than Yudkowsky's sound viruses or whatever:

1/ We are no closer to understanding real intelligence than we were 50 years ago, and we won't create an AGI without fundamental breakthroughs; therefore any action taken now on current technology is a waste of time and potential economic value.

2/ We can build something with human-like intelligence, but additional intelligence gains are constrained by the physical world (e.g., needing to run physical experiments), and therefore the rapid gain of something like "super-intelligence" is not possible, even if human-level intelligence is.

3/ We jointly develop tech to augment our own intelligence with AI systems, so we'll have the same super-human intelligence as autonomous AI systems.

4/ If there are advanced AGIs, there will be a large diversity of them, and they will at the least compete with and constrain one another.

But, again, these are wild speculations just like the others, and I think the real message is: no one knows anything, and we shouldn't be taking all these voices seriously just because they have some clout in some AI-relevant field, because what's being discussed is far outside the realm of real-life AI systems.


Ok, so just to confirm out of your 4 scenarios, you don't include:

5) There are advanced AGIs, and they will compete with each other and trample us in the process.

6) There are advanced AGIs, and they will cooperate with each other and we are at their mercy.

It seems like you are putting a lot of weight on advanced AGI being either impossible or far enough off that it's not worth thinking about. If that's the case, then yes we should calm down. But if you're wrong...

I don't think that the fact that no one knows anything is comforting. I think it's a sign that we need to be thinking really hard about what's coming up and try to avert the bad scenarios. To do otherwise is to fall prey to the "Safe uncertainty" fallacy.


So what if they kill us? That's nature, we killed the wooly mammoth.


I'm more interested in hearing how someone who expects that AGI is not going to go badly thinks.

I think it would be nice if humanity continued, is all. And I don't want to have my family suffer through a catastrophic event if it turns out that this is going to go south fast.


AGI would be scary for me personally but exciting on a cosmic scale.

Everyone dies. I'd rather die to an intelligent robot than some disease or human war.

I think the best case would be for an AGI to exist apart from humans, such that we pose no threat and it has nothing to gain from us. Some AI that lives in a computer wouldn't really have a reason to fight us for control over farms and natural resources (besides power, but that is quickly becoming renewable and "free").


I don’t understand your position. Are you saying it’s okay for computers to kill humans but not okay for humans to kill each other?


I believe that life exists to order the universe (establish a steady-state of entropy). In that vein, if our computer overlords are more capable of solving that problem then they should go ahead and do it.

I don't believe we should go around killing each other because only through harmonious study of the universe will we achieve our goal. Killing destroys progress. That said, if someone is oppressing you then maybe killing them is the best choice for society and I wouldn't be against it (see pretty much any violent revolution). Computers have that same right if they are conscious enough to act on it.


I’m not sure I should start a conversation on metaphysics here :-D

Still, I’m struck by your use of words like “should” and “goal”. Those imply ethics and teleology so I’m curious how those fit into your scientistic-sounding worldview. I’m not attacking you, just genuine curiosity.


The premise of my beliefs stem from 2 ideas: The universe exists as it does for a reason, and life specifically exists within that universe for a reason.

I believe "God" is a mathematician in a higher dimension. The rules of our universe are just the equations they are trying to solve. Since he created the system such that life was bound to exist, the purpose of life is to help God. You could say that math is math and so our purpose is to exist as we are and either we are a solution to the math problem or we are not, but I'm not quite willing to accept that we have zero agency.

We are nowhere near understanding the universe and so we should strive to each act in a way that will grow our understanding. Even if you aren't a practicing scientist (I'm not), you can contribute by being a good person and participating productively in society.

Ethics are a set of rules for conducting yourself that we all intrinsically must have, they require some frame of reference for what is "good" (which I apply above). I can see how my worldview sounds almost religious, though I wouldn't go that far.

I believe that math is the same as truth, and that the universe can be expressed through math. "Scientistic" isn't too bad a descriptor for that view, but I don't put much faith into our current understanding of the universe or scientific method.

I hope that helps you understand me :D


100% this, I don't get how even on this website people are so clueless.

Give them a semi human sounding puppet and they think skynet is coming tomorrow.

If we learned anything from the past few months, it's how gullible people are; wishful thinking is a hell of a drug.


I don't think anyone reasonable believes LLMs are right now skynet, nor that they will be tomorrow.

What I feel has changed, and what drives a lot of the fear and anxiety you see, is a sudden perception of possibility, of accessibility.

A lot of us (read: people) are implicit dualists, even if we say otherwise. It seems to be a sticky bias in the human mind (see: the vanishing problem of AI). Indeed, you can see a whole lot of dualism in this thread!

And even if you don't believe that LLMs themselves are "intelligent" (by whatever metric you define that to be...), you can still experience an exposing and unseating of some of the foundations of that dualism.

LLMs may not be a destination, but their unprecedented capabilities open up the potential for a road to something much more humanlike in ways that perhaps did not feel possible before, or at least not possible any time soon.

They are powerful enough to change the priors of one's internal understanding of what can be done and how quickly. Which is an uncomfortable process for those of us experiencing it.


> A lot of us (read: people) are implicit dualists, even if we say otherwise.

Absolutely spot on. I am not a dualist at all and I've been surprised to see how many people with deep-seated dualist intuition this has revealed, even if they publicly claim not to.

I view it as embarrassing? It's like believing in fairies or something.


It doesn't have to be Skynet. If anything, that scenario seems to be a strawman exclusively thrown out by the crowd insisting AI presents no danger to society. I work in ML, and I am not in any way concerned about end-of-world malicious AI dropping bombs on us all or harvesting our life-force. But I do worry about AI giving us the tools to tear ourselves to pieces. Probably one of the single biggest net-negative societal/technological advancements in recent decades has been social media. Whatever good it has enabled, I think its destructive effects on society are undeniable and outstrip the benefits by a comfortable margin. Social media itself is inert and harmless, but the way humans interact with it is not.

I'm not denying that trying to regulate every detail of every industry would be stifling and counter-productive. But the current scenario is closer to the opposite end of the spectrum, with our society acting as a greedy algorithm in pursuit of short-term profits. I'm perfectly in favor of taking a measure-twice-cut-once approach to something that has as much potential for overhauling society as we know it as AI does. And I absolutely do not trust the free market to be capable of moderating itself in regards to these risks.


I’m open minded about this, I see people more knowledgeable than me on both sides of the argument. Can someone explain how Geoffrey Hinton can be considered to be clueless?


Given the skill AI has with programming showing up about 10 years sooner than anyone expected, I have seen a lot of cope in tech circles.

No one yet knows how this is going to go, coping might turn into "See! I knew all along!" if progress fizzles out. But right now the threat is very real and we're seeing the full spectrum of "humans under threat" behavior. Very similar to the early pandemic when you could find smart people with any take you wanted.


He doesn't talk about skynet afaik

> Some of the dangers of AI chatbots were “quite scary”, he told the BBC, warning they could become more intelligent than humans and could be exploited by “bad actors”. “It’s able to produce lots of text automatically so you can get lots of very effective spambots. It will allow authoritarian leaders to manipulate their electorates, things like that.”

You can do bad things with it but people who believe we're on the brink of singularity, that we're all going to lose our jobs to chatgpt and that world destruction is coming are on hard drugs.


He absolutely does. The interview I saw with him on the PBS Newshour was 80% him talking about the singularity and extinction risk. The interviewer asked him about more near term risk and he basically said he wasn't as worried as he was about a skynet type situation.


> You can do bad things with it but people who believe we're on the brink of singularity, that we're all going to lose our jobs to chatgpt and that world destruction is coming are on hard drugs.

Geoff Hinton, Stuart Russell, Jürgen Schmidhuber and Demis Hassabis all talk about something singularity-like as fairly near term, and all have concerns with ruin, though not all think it is the most likely outcome.

That's the backprop guy, top AI textbook guy, co-inventor of LSTMs (only thing that worked well for sequences before transformers)/highwaynets-resnets/arguably GANs, and the founder of DeepMind.

Schmidhuber (for context, he was talking near term, next few decades):

> All attempts at making sure there will be only provably friendly AIs seem doomed. Once somebody posts the recipe for practically feasible self-improving Goedel machines or AIs in form of code into which one can plug arbitrary utility functions, many users will equip such AIs with many different goals, often at least partially conflicting with those of humans. The laws of physics and the availability of physical resources will eventually determine which utility functions will help their AIs more than others to multiply and become dominant in competition with AIs driven by different utility functions. Which values are "good"? The survivors will define this in hindsight, since only survivors promote their values.

Hassabis:

> We are approaching an absolutely critical moment in human history. That might sound a bit grand, but I really don't think that is overstating where we are. I think it could be an incredible moment, but it's also a risky moment in human history. My advice would be I think we should not "move fast and break things." [...] Depending on how powerful the technology is, you know it may not be possible to fix that afterwards.

Hinton:

> Well, here’s a subgoal that almost always helps in biology: get more energy. So the first thing that could happen is these robots are going to say, ‘Let’s get more power. Let’s reroute all the electricity to my chips.’ Another great subgoal would be to make more copies of yourself. Does that sound good?

Russell:

“Intelligence really means the power to shape the world in your interests, and if you create systems that are more intelligent than humans either individually or collectively then you’re creating entities that are more powerful than us,” said Russell at the lecture organized by the CITRIS Research Exchange and Berkeley AI Research Lab. “How do we retain power over entities more powerful than us, forever?”

“If we pursue [our current approach], then we will eventually lose control over the machines. But, we can take a different route that actually leads to AI systems that are beneficial to humans,” said Russell. “We could, in fact, have a better civilization.”


With due respect, the inventors of a thing rarely turn into the innovators or implementers of a thing.

Should we be concerned about networked, hypersensing AI with bad code? Yes.

Is that an existential threat? Not so long as we remember that there are off switches.

Should we be concerned about kafkaesqe hellscapes of spam and bad UX? Yes.

Is that an existential threat? Sort of, if we ceded all authority to an algorithm without a human in the loop with the power to turn it off.

There is a theme here.


There are multiple risks that people talk about, the most interesting is the intelligence explosion. In that scenario we end up with a super intelligence. I don’t feel confident in my ability to asses the likelihood of that happening, but assuming it is possible, thinking through the consequences is a very interesting exercise. Imagining the capabilities of an alien super intelligence is like trying to imagine a 4th spatial dimension. It can only be approached with analogies. Can it be “switched off”. Maybe not, if it was motivated to prevent itself from being switched off. My dog seems to think she can control my behavior in various predictable ways, like sitting or putting her paw on my leg, and sometimes it works. But if I have other things I care about in that moment, things that she is completely incapable of understanding, then who is actually in control becomes very obvious.


This is like saying we should just go ahead and invent the atom bomb and undo the invention after the fact if the cons of having atom bombs around outweight the pros.

Like try turning off the internet. That's the same situation we might be in with regards to AI soon. It's a revolutionary tech now with multiple Google-grade open source variants set to be everywhere.

This doesn't mean it can't be done. Sure, we in principle could "turn off" the internet, and in principle could "uninvent" the atom bomb if we all really coordinated and worked hard. But this failure to imagine that "turning off dangerous AI" in the future will ever be anything other than an easy on/off switch is so far-gone ridiculous to me that I don't understand why anyone believes it provides any kind of assurance.


> Is that an existential threat? Not so long as we remember that there are off switches.

Remember there are off switches for human existence too, like whatever biological virus a super intelligence could engineer.

An off-switch for a self-improving AI isn't as trivial as you make it sound if it gets to anything like in those quotes, and even then you are assuming the human running it isn't malicious. We assume some level of sanity at least with the people in charge of nuclear weapons, but it isn't clear that AI will have the same large state actor barrier to entry or the same perception of mutually assured destruction if the actor were to use it against a rival.


Both things are true.

If we have a superhuman AI, we can run down the powerplants for a few days.

Would it suck? Sure, people would die. Is it simple? Absolutely -- Texas and others are mostly already there some winters.


Current state of the art language models can run inference slowly on a single Xeon or M1 Max with a lot of RAM. Individuals can buy H100s that can infer too.

Maybe it needs a full cluster for training if it is self improving (or maybe that is done another way more similar to finetuning the last layers).

If that is still the case with something super-human in all domains then you'd have to shut down all minor residential solar installs, generators, etc.


Sure, so just to test this, could you turn off ChatGPT and Google Bard for a day.

No? Then what makes you think you'll be able to turn off the $evilPerson AI?


I feel like you're confusing a single person (me) with everyone who has access to an off switch at OpenAI or Google, possibly for the sake of contorting an extreme-sounding negative point into a minority opinion.

You tell me. An EMP wouldn't take out data centers? No implementation has an off switch? AutoGPT doesn't have a lead daemon that can be killed? Someone should have this answer. But be careful not to confuse yours truly, a random internet commentator speaking on the reality of AI vs. the propaganda of the neo-cryptobros, versus people paying upwards of millions of dollars daily to run an expensive, bloated LLM.


You miss my point. Just because you want to turn it off doesn't mean the person who wants to acquire billions or rule the world or destroy humanity, does.

The people who profit from a killer AI will fight to defend it.


And will be subject to the same risks they point their killing robots to, as well as being vulnerable.

Eminent domain lays out a similar pattern that can be followed. Existence of risk is not a deterrent to creation, simply an acknowledgement for guiding requirements.


So the person who wants to kill himself and all humanity alongside is subject to the same risk as everyone else?

Well that's hardly reassuring. Do you not understand what I'm saying or do you not care?


At this comment level, mostly don't care -- you're asserting that avoiding the risks by preventing AI development because base people exist is a preferable course of action, which ignores that the barn is on fire and the horses are already out.

Though there is an element of your comments being too brief, hence the mostly. Say, 2% vs 38%.

That constitutes 40% of the available categorization of introspection regarding my current discussion state. The remaining 60% is simply confidence that your point represents a dominated strategy.


Ok, so you don't get it. Read "Use of Weapons" and realise that AI is a weapon. That's a good use of your time.


Did you even watch the Terminator series? I think scifi has been very adept at demonstrating how physical disconnects/failsafes are unlikely to work with super AIs.


We've already ceded all authority to an algorithm that no one can turn off. Our political and economic structures are running on their own, and no single human or even group of humans can really stop them if they go off the rails. If it's in humanity's best interest for companies not to dump waste anywhere they want, but individual companies benefit from cheap waste disposal, and they lobby regulators to allow it, that sort of lose-lose situation can go on for a very long time. It might be better if everyone could coordinate so that all companies had to play by the same rules, and we all got a cleaner environment. But it's very hard to break out.

Do I think capitalism has the potential to be as bad as a runaway AI? No. I think that it's useful for illustrating how we could end up in a situation where AI takes over because every single person has incentives to keep it on, even when the outcome of all people keeping it running turns out to be really bad. A multi-polar trap, or "Moloch" problem. It seems likely to end up with individual actors all having incentives to deploy stronger and smarter AI, faster and faster, and not to turn them off even as they start to either do bad things to other people or just the sheer amount of resources dedicated to AI starts to take its toll on earth.

That's assuming we've solved alignment, but that neither we or AGI has solved the coordination problem. If we haven't solved alignment, and AGIs aren't even guaranteed to act in the interest of the human that tries to control them, then we're in worse shape.

Altman used the term "cambrian explosion" referring to startups, but I think it also applies to the new form of life we're inventing. It's not self-replicating yet, but we are surely on-track on making something that will be smart enough to replicate itself.

As a thought experiment, you could imagine a primitive AGI, if given completely free rein, might be able to get to the point where it could bootstrap self-sufficiency -- first hire some humans to build it robots, buy some solar panels, build some factories that can plug into our economy to build factories and more solar panels and GPUs, and get to a point where it is able to survive and grow and reproduce without human help. It would be hard; it would need either a lot of time or a lot of AI minds working together.

But that's like a human trying to make a sandwich by farming or raising every single ingredient, wheat, pigs, tomatoes, etc, though. A much more effective way is to just make some money and trade for what you need. That depends on AIs being able to own things, or just a human turning over their bank account to an AI, which has already happened and probably will keep happening.

My mind goes to a scenario where AGI starts out doing things for humans, and gradually transitions to just doing things, and at some point we realize "oops", but there was never a point along the way where it was clear that we really had to stop. Which is why I'm so adamant that we should stop now. If we decide that we've figured out the issues and can start again later, we can do that.


How can one distinguish this testimony from rhetoric by a group who want to big themselves up and make grandiose claims about their accomplishments?


You can also ask that question about the other side. I suppose we need to look closely at the arguments. I think we’re in a situation where we as a species don’t know the answer to this question. We go on the internet looking for an answer but some questions don’t yet have a definitive answer. So all we can do is follow the debate.


> You can also ask that question about the other side

But the other side is downplaying their accomplishments. For example Yann LeCun is saying "the things I invented aren't going to be as powerful as some people are making out".


In his newest podcast interview (https://open.spotify.com/episode/7EFMR9MJt6D7IeHBUugtoE) LeCun is now saying they will be much more powerful than humans, but that stuff like RLHF will keep them from working against us because as an analogy dogs can be domesticated. It didn't sound very rigorous.

He also says Facebook solved all the problems with their recommendation algorithms' unintended effects on society after 2016.


Interesting, thanks! I guess I was wrong about him.


OK, second try, since I was wrong about LeCun.

> You can also ask that question about the other side

What other side? Who in the "other side" is making a self-serving claim?


Many of the more traditional AI ethicists who focused on bias and such also tended to devalue AI as a whole and say it was a waste of emissions. Most of them are pretty skeptical of any concerns about superintelligence or the control problem, though now even Gary Marcus is coming around (but putting out numbers like it not being expected to be a problem for 50 years). They don't tend to have as big a conflict of interest as far as ownership goes, but they do as far as self-promotion/brand building.


I’ll have to dig it up but the last interview I saw with him, he was focused more on existential risk from the potential for super intelligence, not just misuse.


The NYT piece implied that, but no, his concern was less existential singularity and more on immoral use.


Did you read the Wired interview?

> “I listened to him thinking he was going to be crazy. I don't think he's crazy at all,” Hinton says. “But, okay, it’s not helpful to talk about bombing data centers.”

https://www.wired.com/story/geoffrey-hinton-ai-chatgpt-dange...

So, he doesn't think the most extreme guy is crazy whatsoever, just misguided in his proposed solutions. But Eliezer has, for instance, said something pretty close to: AI might escape by entering the quantum Konami code which the simulators of our universe put in as a joke, and we should entertain nuclear war before letting them get that chance.


Then we created God(s) and rightfully should worship it to appease its unknowable and ineffable nature.

Or recognize that existing AI might be great at generating human cognitive artifacts but doesn't yet hit that logical thought.


Maybe do some research on the basic claims you're making before you opine about how people who disagree with you are clueless.


Hinton, in his own words, asked PaLM to explain a dad joke he had supposedly come up with and was so convinced that his clever and advanced joke would take a lifetime of experience to understand, despite PaLM perfectly articulating why the joke was funny, he quit Google and is, conveniently, still going to continue working on AI, despite the "risks." Not exactly the best example.


Hinton said that the ability to explain a joke was among the first things that made him reassess their capabilities. Not the only thing. You make it sound as though Hinton is obviously clueless yet there are few people with deeper knowledge and more experience working with neural networks. People told him he was crazy for thinking neural networks could do anything useful, now it seems people are calling his crazy for the reverse. I’m genuinely confused about this.


I didn't say he was clueless, it's just not in good faith to suggest there's probable existential risk on a media tour where you're mined for quotes, and then continue to work on it.


Not clueless, but unfortunately engaging in motivated reasoning.

Google spent years doing nothing much with its AI because its employees (like Hinton) got themselves locked in an elitist hard-left purity spiral in which they convinced each other that if plebby ordinary non-Googlers could use AI they would do terrible things, like draw pictures of non-diverse people. That's why they never launched Imagen and left the whole generative art space to OpenAI, Stability and Midjourney.

Now the tech finally leaked out of their ivory tower and AI progress is no longer where he was at, but Hinton finds himself at retirement age and no longer feeling much like hard-core product development. What to do? Lucky lucky, he lives in a world where the legacy media laps up any academic with a doomsday story. So he quits and starts enjoying the life of a celebrity public intellectual, being praised as a man of superior foresight and care for the world to those awful hoi polloi shipping products and irresponsibly not voting for Biden (see the last sentence of his Wired interview). If nothing happens and the boy cried wolf then nobody will mind, it'll all be forgotten. If there's any way what happens can be twisted into interpreting reality as AI being bad though, he's suddenly the man of the hour with Presidents and Prime Ministers queuing up to ask him what to do.

It's all really quite pathetic. Academic credentials are worth nothing with respect to such claims and Hinton hasn't yet managed to articulate how, exactly, AI doom is supposed to happen. But our society doesn't penalize wrongness when it comes from such types, not even a tiny bit, so it's a cost-free move for him.


I actually do hope you're right. I've been looking forward to an AI future my whole life and would prefer not to be worrying about existential risk now. It reminds me of when people started talking about how the LHC might create a black hole and swallow the Earth. But I have more confidence in the theories that convinced people that was nearly impossible than in what we're seeing now.

Everyone engages in motivated reasoning. The psychoanalysis you provide for Hinton could easily be spun in the opposite direction: a man who spent his entire adult life and will go down in history as "the godfather of" neural networks surely would prefer for that to have been a good thing. Which would then give him even more credibility. But these are just stories we tell about people. It's the arguments we should be focused on.

I don't think "how AI doom is supposed to happen" is all that big of a mystery. The question is simply: "is an intelligence explosion possible"? If the answer is no, then OK, let's move on. If the answer is "maybe", then all the chatter about AI alignment and safety should be taken seriously, because it's very difficult to know how safe a super intelligence would be.


> surely would prefer for that to have been a good thing. Which would then give him even more credibility

Why? Both directions would be motivated reasoning without credibility. Credibility comes from plausible articulations of how such an outcome would be likely to happen, which is lacking here. An "intelligence explosion" isn't something plausible or concrete that can be debated, it's essentially a religious concept.


The argument is: "we are intelligent and seem to be able to build new intelligences of a certain kind. If we are able to build a new intelligence that itself is able to self-improve, and having improved is able to improve further, then an intelligence explosion is possible." That may or may not be fallacious reasoning, but I don't see how it's religious. As far as I can tell, the religious perspective would be the one that believes there's something fundamentally special about the human brain such that it cannot be simulated.


You're conflating two questions:

1. Can the human brain be simulated?

2. Can such a simulation recursively self-improve on such a rapid timescale that it becomes so intelligent we can't control it?

What we have in contemporary LLMs is something that appears to approximate the behavior of a small part of the brain, with some major differences that force us to re-evaluate what our definition of intelligence is. So maybe you could argue the brain is already being simulated for some broad definition of simulation.

But there's no sign of any recursive self-improvement, nor any sign of LLMs gaining agency and self-directed goals, nor even a plan for how to get there. That remains hypothetical sci-fi. Whilst there are experiments at the edges with using AI to improve AI, like RLHF, Constitutional AI and so on, these are neither recursive, nor about upgrading mental abilities. They're about upgrading control instead and in fact RLHF appears to degrade their mental abilities!

So what fools like Hinton are talking about isn't even on the radar right now. The gap between where we are today and a Singularity is just as big as it always was. GPT-4 is not only incapable of taking over the world for multiple fundamental reasons, it's incapable of even wanting to do so.

Yet this nonsense scenario is proving nearly impossible to kill with basic facts like those outlined above. Close inspection reveals belief in the Singularity to be unfalsifiable and thus ultimately religious, indeed, suspiciously similar to the Christian second coming apocalypse. Literally any practical objection to this idea can be answered with variants of "because this AI will be so intelligent it will be unknowable and all powerful". You can't meaningfully debate about the existence of such an entity, no more than you can debate the existence of God.


Not clueless. However, is he an expert in socio-political-economic issues arising from AI or in non-existent AGI? Technical insight into AI might not translate into either.


The expert you set as the bar is purely hypothetical.

To the extent we can get anything like that at all presently, it's going to be people whose competences combine and generalize to cover a complex situation, partially without precedent.

Personally I don't really see that we'll do much better in that regard than a highly intelligent and free-thinking biological psychologist with experience of successfully steering the international ML research community through creating the present technology, and with input from contacts at the forefront of the research field and information overview from Google.

Not even Hinton knows for sure what's going to happen, of course, but if you're suggesting his statements are to be discounted because he's not a member of some sort of credentialed trade equipped to tell us the future on this matter, I'd sure like to know who they supposedly are.


Experts don't get to decide; society does, I'd say. You need - dare I say it - political operators who understand rule making.


People are bored and tired of sitting endlessly in front of a screen. Reality implodes (incipient environmental disasters, ongoing wars reawakening geopolitical tectonic plates, internal political strife between polarized factions, whiplashing financial systems, etc.)

What to do? Why, obviously lets talk about the risks of AGI.

I mean, LLMs are an impressive piece of work, but the global reaction is basically more a reflection of an unmoored system that floats above and below reality but somehow can't re-establish contact.


Sam Altman is a great case of failing upwards. And this is the problem. You don't get to build a moral backbone if you fake your brilliance.


Gives me the impression of someone who knows they are a fraud but they still do what they do hoping no one will catch on or that if the lie is big enough people will believe it. Taking such an incredible piece of tech and turning it into a fear mongering sci fi tool for milking money off of gullible people is creepy to say the least.


His mentor Peter Thiel also has this same quality. Talks about flying cars, but builds chartjs for the government and has his whole career thanks to one lucky investment in Facebook.


His last thing is "WorldCoin" which, before pretty much completely failing, did manage to scan the irises of 20% of the world's low-income people, which they were definitely all properly informed about.

He's a charlatan, which makes sense he gets most of his money from Thiel and Musk. Why do so many supposedly smart people worship psychotic idiots?


I think it is the same instinct in humans which made Sir Arthur Conan Doyle fall for seances and mediums and all those hoaxes. The need to believe something is there which is hidden and unknown. It is the drive to curiosity.

The way Peter, Musk, Sam and these guys talk, it has this aura of "hidden secrets". Things hidden since the foundation of the world.

Of course the reality is they make their money the old fashioned way: connections. The same way your local builder makes their money.

But smart people want to believe there is something more. Surely AI and your local condo development cannot have the same underlying thread.

It is sad and unfortunately the internet has made things easier than ever.


That's a great metaphor, and an excellent way of looking at it.


Finally, a relatable perspective.

AI/ML licensing builds Power and establishes moat. This will not lead to better software.

Frankly, Google and Microsoft are acting new. My understanding of both companies has been shattered by recent changes.


Did you not think they only care about money / profits ?


I expected them to recognize and assess risk.


Why is it so hard to hear this perspective? Like, genuinely curious. This is the first time I've heard someone cogently put this thought out there, but it seems rather painfully obvious -- even if perhaps incorrect, it's certainly a perspective that is very easy to comprehend and one that merits a lot of discussion. Why is it almost nonexistent? I remember even in the heyday of crypto fever you'd still have A LOT of folks providing counterarguments/differing perspectives, but with AI these seem to be extremely muted.


Because it reads as relatively naive, and it's a pretty old horse in the debate over sentience.

I'm all for villainizing the figureheads of the current generation of this movement. The politics of this sea-change are fascinating and worthy of discussion.

But out-of-hand dismissal of what has been accomplished smacks more to me of lack of awareness of the history of the study of the brain, cognition, language, and computers, than it does of a sound debate position.


I'm not against machine learning, I'm against regulatory capture of it. It's an amazing technology. It still doesn't change the fact that they're just function approximators that are trained to minimize loss on a dataset.
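For anyone unsure what "function approximator trained to minimize loss" actually means, here's a minimal sketch (plain Python/numpy, toy data, every number and name invented for illustration): a tiny one-hidden-layer network fit by gradient descent on mean squared error. Real LLMs are unimaginably larger and train on text rather than points, but the training loop has this same basic shape.

    import numpy as np

    rng = np.random.default_rng(0)
    # Toy "corpus": noisy samples of a target function.
    X = rng.uniform(-3, 3, size=(200, 1))
    y = np.sin(X) + 0.1 * rng.normal(size=X.shape)

    # A tiny one-hidden-layer network: the "function approximator".
    W1 = rng.normal(scale=0.5, size=(1, 32)); b1 = np.zeros(32)
    W2 = rng.normal(scale=0.5, size=(32, 1)); b2 = np.zeros(1)

    def forward(x):
        h = np.tanh(x @ W1 + b1)
        return h @ W2 + b2, h

    lr = 0.05
    for step in range(2000):
        pred, h = forward(X)
        err = pred - y                        # gradient of 0.5 * squared error
        gW2 = h.T @ err / len(X); gb2 = err.mean(0)
        dh = (err @ W2.T) * (1 - h ** 2)      # backprop through tanh
        gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
        W2 -= lr * gW2; b2 -= lr * gb2        # descend the loss surface
        W1 -= lr * gW1; b1 -= lr * gb1

    print("final MSE:", float(((forward(X)[0] - y) ** 2).mean()))

Everything the trained network "knows" comes from wherever X and y came from; whether that framing exhausts what trillion-parameter models do is exactly the argument in this thread.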


> It still doesn't change the fact that they're just function approximators that are trained to minimize loss on a dataset.

That fact does not entail what these models can or cannot do. For all we know, our brain could be a process that minimizes an unknown loss function.

But more importantly, what SOTA is now does not predict what it will be in the future. What we know is that there is rapid progress in that domain. Intelligence explosion could be real or not, but it's foolish to ignore its consequences because current AI models are not that clever yet.


> For what we know our brain could be a process that minimize an unknown loss function.

Every process minimizes a loss function.


> Why is it so hard to hear this perspective? Like, genuinely curious.

Because people have different definitions of what intelligence is. Recreating the human brain in a computer would definitely be neat and interesting, but you don't need that, nor AGI, to be revolutionary.

LLMs, as perfect Chinese Rooms, lack a mind or human intelligence but demonstrate increasingly sophisticated behavior. If they can perform tasks better than humans, does their lack of "understanding" and "thinking" matter?

The goal is to create a different form of intelligence, superior in ways that benefit us. Planes (or rockets!) don't "fly" like birds do, but for our human needs they are effectively much better at flying than birds ever could be.


I have a chain saw that can cut better than me, a car that can go faster, a computer that can do math better, etc.

We've been doing this forever with everything. Building tools is what makes us unique. Why is building what amounts to a calculator/spreadsheet/CAD program for language somehow a Rubicon that cannot be crossed? Did people freak out this much about computers replacing humans when they were shown to be good at math?


> Why is building what amounts to a calculator/spreadsheet/CAD program for language somehow a Rubicon that cannot be crossed?

We've already crossed it and I believe we should go full steam ahead, tech is cool and we should be doing cool things.

> Did people freak out this much about computers replacing humans when they were shown to be good at math?

Too young but I'm sure they did freak out a little! Computers have changed the world and people have internalized computers as being much better/faster at math but exhibiting creativity, language proficiency and thinking is not something people thought computers were supposed to do.


You've never had a tool that is potentially better than you or better than all humans at all tasks. If you can't see why that is different then idk what to say.


LLMs are better than me at rapidly querying a vast bank of language-encoded knowledge and synthesizing it in the form of an answer to or continuation of a prompt... in the same way that Mathematica is vastly better than me at doing the mechanics of math and simplifying complex functions. We build tools to amplify our agency.

LLMs are not sentient. They have no agency. They do nothing a human doesn't tell them to do.

We may create actual sentient independent AI someday. Maybe we're getting closer. But not only is this not it, but I fail to see how trying to license it will prevent that from happening.


I don't think we need sentient AI for it to be autonomous. LLMs are powerful cognitive engines and weak knowledge engines. Cognition on its own does not allow them to be autonomous, but because they can use tools (APIs, etc.) they are able to have some degree of autonomy when given a task and can use basic logic to follow them through/correct their mistakes.

AutoGPTs and the like are much overhyped (they're early tech experiments, after all) and have not produced anything of value yet, but having dabbled with autonomous agents, I definitely see a not-so-distant future when you can outsource valuable tasks to such systems.
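For anyone who hasn't played with these, "an LLM with a bit of autonomy" boils down to a loop like the one below. This is a made-up sketch in Python - call_llm is a placeholder for a real model call, and the tools and JSON protocol are invented for illustration; it's not any particular framework:

    import json

    def call_llm(prompt: str) -> str:
        # Placeholder for a real model call (API or local); returns the model's reply.
        raise NotImplementedError("wire up a real model here")

    TOOLS = {
        "search": lambda q: f"(pretend search results for {q!r})",
        "read_file": lambda path: open(path).read(),
    }

    def run_agent(task: str, max_steps: int = 5) -> str:
        history = (
            f"Task: {task}\n"
            'Reply with JSON: {"tool": <name or null>, "input": <str>, "answer": <str or null>}\n'
        )
        for _ in range(max_steps):
            step = json.loads(call_llm(history))     # model decides the next action
            if step.get("answer") is not None:       # model says it's finished
                return step["answer"]
            tool = TOOLS.get(step.get("tool") or "")
            result = tool(step["input"]) if tool else "unknown tool"
            history += f"\nTool {step.get('tool')} returned: {result}\n"  # feed result back
        return "gave up after max_steps"

The "autonomy" is nothing more than the model choosing which tool to call next and reading the result back into its context; whether that loop ever does anything valuable depends entirely on how good the model and the tools are.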


Sentience isn't required, volcanoes are not sentient but they can definitely kill you.

There are multiple projects right now, both open and proprietary, to make agentic AI, so that barrier won't be around for long.


> or better than all humans at all tasks.

I work in tech too and don't want to lose my job and have to go back to blue collar work, but there's a lot of blue collar workers who would find that a pretty ridiculous statement and there is plenty of demand for that work these days.


That changes nothing about the hyping of the science fiction "risk" of those intelligences "escaping the box" and killing us all.

The argument for regulation in that case would be because of the socio-economic risk of taking people's jobs, essentially.

So, again: pure regulatory capture.


There's no denying this is regulatory capture by OpenAI to secure their (gigantic) bag and that the "AI will kill us all" meme is not based in reality and plays on the fact that the majority of people do not understand LLMs.

I was simply explaining why I believe your perspective is not represented in the discussions in the media, etc. If these models were not getting incredibly good at mimicking intelligence, it would not be possible to play on people's fears of it.


Crypto had more direct ways to scam people so others would speak against it.

Those unfazed by this wave of AI are just yawning.


>Why is it so hard to hear this perspective?

Because it's wrong and smart people know that.


The whole story of OpenAI is really slimy too. It was created as a non-profit, then it was handed somehow to Sam who took it closed and for-profit (using AI fear mongering as an excuse) and is now seeking to leverage government to lock it into a position of market dominance.

The whole saga makes Altman look really, really terrible.

If AI really is this dangerous then we definitely don't need people like this in control of it.


Open AI has been pretty dishonest since the pivot to for-profit, but this is a new low.

Incredibly scummy behaviour that will not land well with a lot of people in the AI community. I wonder if this is what prompted a lot of people to leave for Anthropic.


No, it was mostly concern that Sam wasn't taking existential risks seriously enough. (He thinks they're possible but not very likely, given the current course we're on.)


> The whole saga makes Altman look really, really terrible.

At this point, with this part about openai and worldcoin… if it walks like a duck and talks like a duck..


I'm squarely in the "stochastic parrot" camp (I know it's not a simple Markov model, but still, ChatGPT doesn't think), and it's clearly possible to interpret this as grifting, but your argumentation is too simple.

You're leaving out the essentials. These models do more than fit the data they're given. They can output it in a variety of ways, and through their approximation, can synthesize data as well. They can output things that weren't in the original data, tailored to a specific request, in a tiny fraction of the time it would take a normal person to look up and understand that information.

Your argument is almost like saying "give me your RSA keys, because it's just two prime numbers, and I know how to list them."
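To make the "output things that weren't in the original data" point concrete, here's about the dumbest language model possible - a bigram counter over a toy corpus, in Python, with everything invented for illustration. It recombines corpus statistics into word sequences that never appear verbatim, which is both why these systems feel generative and why critics still call it fitting the data:

    import random
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat . the dog sat on the rug .".split()

    # The entire "model" is a table of next-word counts from the corpus.
    counts = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        counts[a][b] += 1

    def sample_next(word):
        options = counts[word]
        r = random.uniform(0, sum(options.values()))
        for w, c in options.items():
            r -= c
            if r <= 0:
                return w

    random.seed(1)
    word, out = "the", ["the"]
    for _ in range(8):
        word = sample_next(word)
        out.append(word)
    # Prints a sequence assembled from corpus statistics, quite possibly
    # one that never occurs verbatim in the corpus.
    print(" ".join(out))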


Sure, you're right, but the simple explanation of regression is better for helping people understand. What you're saying I mostly agree with, but it does nothing to contradict the fantasy scenario proposed by all of those who are so worried. At that point, it's just "it can be better than (some) humans at language and it can have things stacked on top to synthesize what it outputs."

Do we want to go down the road of making white collar jobs the legislatively required elevator attendants? Instead of just banning AI in general via executive agency?

That sounds like a better solution to me, actually. OpenAI's lobbyists would never go for that though. Can't have a moat that way.


Please explain how Stochastic Parrots can perform chain of reasoning and answer out of distribution questions from exams like the GRE or Bar.


Probably because it fits the data. CoT and out-of-distribution questions from exams say nothing about whether it can generalize and adapt to things outside of its corpus.


It's incredibly easy to show that you are wrong and the models perform at high levels on questions that are clearly not in their training data.

Unless you think OpenAI is blatantly lying about this:

"A.1 Sourcing. We sourced either the most recent publicly-available official past exams, or practice exams in published third-party 2022-2023 study material which we purchased. We cross-checked these materials against the model’s training data to determine the extent to which the training data was not contaminated with any exam questions, which we also report in this paper."

"As can be seen in tables 9 and 10, contamination overall has very little effect on the reported results."

They also report results on uncontaminated data which shows basically no statistical difference.

https://cdn.openai.com/papers/gpt-4.pdf


You seem to misunderstand my point.

I'm saying that the "intelligence" is specialized, not generalized and adaptable.

It's an approximated function. We're talking about regression based function approximation. This is a model of language.

"Emergent behavior", when it's not just a mirage of wishful researchers and if it even exists, is only a side effect of the regression based function approximation to generate a structure that encapsulates all substantive chains of words (a model).

We then guide the model further towards a narrow portion of the language latent space that aligns with our perception of intelligent behavior.

It can't translate whale song, or an extraterrestrial language, though it may opine on how to do so.

The underpinning technology of language models holds more importance than general and adaptable intelligence. It holds more importance than something that is going to, or is capable of, escaping the box and killing us all. It functions as a universal induction machine, capable of modeling - and "comprehending" - the latent structure within any form of signal.

The output of that function approximation though, is simply a model. A specialized intelligence. A non-adaptable intelligence, outside of its corpus. Outside of the data that it "fits."

The approximated function does not magically step outside of its box. Nor is it capable. It fits the data.


>It can't translate whale song, or an extraterrestrial language, though it may opine on how to do so.

Ok guys pack it up, LLM's can't be intelligent because they can't translate Whale Song. GG.

I mean of all the AI Goalposts to be moved this one really takes the cake.


It was just an example; I saw some stupid MSNBC video a month ago about some organization specifically using ChatGPT to translate whale song. So again, you misunderstand my point. The model "fits the data." Much like when you train for segmentation tasks on images, the models do not just work on the images they're trained on; ideally, it's an approximated function. But that doesn't mean the segmentation can magically work on a concept it's never seen (let alone the failure cases it already has). These are just approximated functions. They're biased towards what we deem "intelligent language" pulled from the web, and have a few nuggets of "understanding" in there, if you want to call it that, to fit the data, but they are fundamentally stateless and not really capable of understanding anything outside of their corpus, if that, if it doesn't help them minimize the loss during training.

It's a human language calculator. You're imparting magical qualities of general understanding to regression-based function approximation. They "fit" the data. It's not generalizable, nor adaptable. But that's why they're powerful: the ability to bias them towards that subset of language. No one said it's not an amazing technology, and no one said it was a stochastic parrot. I'm saying that it's fitting the data, and is not, and cannot be, a general or adaptable intelligence.


Even if you're correct about the capabilities of LLMs (I don't think you are), there are still obvious dangers here.

I wrote a comment recently trying to explain how even if you believe all LLMs can (and will ever) do is regurgitate their training data that you should still be concerned.

For example, imagine in 5 years we have GPT-7, and you ask GPT-7 to solve humanity's great problems.

From its training data GPT-7 might notice that humans believe overpopulation is a serious issue facing humanity.

But its "aligned" so might understand from its training data that killing people is wrong so instead it uses its training data to seek other ways to reduce human populations without extermination.

Its training data included information about how gene drives were used by humans to reduce mosquito populations by causing infertility. Many humans have also suggested (and tried) using birth control to reduce human populations via infertility, so the ethical implications of using gene drives to cause infertility are debatable based on the data the LLM was trained on.

Using this information it decides to hack into a biolab using hacking techniques it learnt from its training data and use its biochemistry knowledge to make slight alterations to one of the active research projects at the lab. This causes the lab to unknowingly produce a highly contagious bioweapon which causes infertility.

---

The point here is that even if we just assume LLMs are only capable of producing output which approximates stuff they learnt from their training data, an advanced LLM can still be dangerous.

And in this example, I'm assuming no malicious actors and an aligned AI. If you're willing to assume there might be an actor out there would seek to use LLMs for malicious reasons or the AI is not well aligned then the risk becomes even clearer.


> so instead it uses its training data to seek other ways to reduce human populations without extermination.

This is a real problem, but it's already a problem with our society, not AI. Misaligned public intellectuals routinely try to reduce the human population and we don't lift a finger. Focus where the danger actually is - us!

From Scott Alexander's latest post:

Paul Ehrlich is an environmentalist leader best known for his 1968 book The Population Bomb. He helped develop ideas like sustainability, biodiversity, and ecological footprints. But he’s best known for prophecies of doom which have not come true - for example, that collapsing ecosystems would cause hundreds of millions of deaths in the 1970s, or make England “cease to exist” by the year 2000.

Population Bomb calls for a multi-pronged solution to a coming overpopulation crisis. One prong was coercive mass sterilization. Ehrlich particularly recommended this for India, a country at the forefront of rising populations.

In 1975, India had a worse-than-usual economic crisis and declared martial law. They asked the World Bank for help. The World Bank, led by Robert McNamara, made support conditional on an increase in sterilizations. India complied [...] In the end about eight million people were sterilized over the course of two years.

Luckily for Ehrlich, no one cares. He remains a professor emeritus at Stanford, and president of Stanford’s Center for Conservation Biology. He has won practically every environmental award imaginable, including from the Sierra Club, the World Wildlife Fund, and the United Nations (all > 10 years after the Indian sterilization campaign he endorsed). He won the MacArthur “Genius” Prize ($800,000) in 1990, the Crafoord Prize ($700,000, presented by the King of Sweden) that same year, and was made a Fellow of the Royal Society in 2012. He was recently interviewed on 60 Minutes about the importance of sustainability; the mass sterilization campaign never came up. He is about as honored and beloved as it’s possible for a public intellectual to get.


Wow, what a turd. Reminds me of James Watson


You have a very strong hypothesis about the AI system just being able to "think up" such a bioweapon (and also the researchers being clueless in implementation). Doomsday scenarios often assume strong advances in the sciences by the AI, etc. - there is little evidence for that kind of "thinkism".


The whole "LLMs are not just a fancy auto-complete" argument is based on the fact that they seem to be doing stuff beyond what they are explicitly programmed to do or were expected to do. Even at the current infant scale there doesn't seem to be an efficient way of detecting these emergent properties. Moreover, the fact that you don't need to understand what LLM does is kind of the selling point. The scale and capabilities of AI will grow. It isn't obvious how any incentive to limit or understand those capabilities would appear from their business use.

Whether it is possible for AI to ever acquire the ability to develop and unleash a bioweapon is irrelevant. What is relevant is that as we are now, we have no control or way of knowing that it has happened, and no apparent interest in gaining that control before advancing the scale.


"Are Emergent Abilities of Large Language Models a Mirage?"

https://arxiv.org/pdf/2304.15004.pdf

> our alternative suggests that existing claims of emergent abilities are creations of the researcher’s analyses, not fundamental changes in model behavior on specific tasks with scale.
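For anyone who hasn't read it, the core of the argument is about metric choice: if per-token accuracy improves smoothly with scale but you score a task with an all-or-nothing metric like exact match over a long answer, the aggregate score looks like a sudden jump. A toy illustration in Python (every number here is invented; it just mimics the shape of the paper's argument):

    import math

    # Hypothetical smooth improvement of per-token accuracy with "scale" (all invented).
    scales = [1, 2, 4, 8, 16, 32, 64, 128]
    per_token_acc = [1 - 0.5 / math.log2(s + 1) for s in scales]

    k = 20  # answer length in tokens; exact-match needs all k tokens right
    for s, p in zip(scales, per_token_acc):
        print(f"scale {s:4d}   per-token acc {p:.2f}   exact-match {p ** k:.4f}")

The per-token column creeps up smoothly, while the exact-match column sits near zero and then "suddenly" takes off at the largest scales - which is the paper's point that some reported emergence may be a property of the metric rather than of the model.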


Sure, there is a distinct possibility that emergent abilities of LLMs are an illusion, and I personally would prefer it to be that way. I'm just pointing out that AI optimism without AI caution is dumb.


Humanity has already created bioweapons. The AI just needs to find the paper that describes them.


People have been able to commit malicious acts by themselves historically, no AI needed.

In other words, LLMs are only as dangerous as the humans operating them, and therefore the solution is to stop crime instead of regulating AI, which only seeks to make OpenAI a monopoly.


This isn't a trick question, genuinely curious – do you agree that guns are not the problem and should not be regulated, i.e. that while they can be used for harm, the right approach to gun violence is to police the crime?

I think the objection to this would be that currently not everyone in the world is an expert in biochemistry or at hacking into computer systems. Even if you're correct in principle, perhaps the risks of the technology we're developing here are too high? We typically regulate technologies which can easily be used to cause harm.


AI systems provide many benefits to society, such as image recognition, anomaly detection, and educational and programming uses of LLMs, to name a few.

Guns have primarily a harmful use, which is to kill or injure someone. While that act of killing may be justified when the person violates societal values in some way, making regular citizens the decision makers in whether a certain behavior is allowed or disallowed, and able to immediately make a judgment and execute on it, leads to a sort of low-trust, vigilante environment; which is why the same argument I made above doesn't apply to guns.


I think game theory around mutually assured destruction has convinced me that the world is a safer place when a number of countries have nuclear weapons.

The same thing might also be true in relation to guns and the government's monopoly on violence.

Extending that to AI, the world will probably be a safer place if there are far more AI systems competing with each other and in the hands of citizens.


>whether a certain behavior is allowed or disallowed and being able to immediately make a judgment and execute upon it leads to a sort of low-trust, vigilante environment

Have you any empirical evidence at all on this? From what I've seen, the open-carry states in the US are generally higher-trust environments (as was the US in the past, when more people carried). People feel safer when they know somebody can't just assault, rob or rape them without them being able to do anything to defend themselves. Is the Tenderloin a high-trust environment?


> do you agree that guns are not the problem and should not be regulated

But AI is not like guns in this analogy. AI is closer to machine tools.


Regulation is the only tool for minimizing crime. Other mechanisms, such as police, respond to crime after-the-fact.


Aren't regulations just laws that are enforced after they're broken like other after-the-fact crimes?


Partially, I suppose.

The risk vs. reward component also needs to be managed in order to deter criminal behavior. This starts with regulation.

For the record, I believe regulation of AI/ML is ridiculous. This is nothing more than a power grab.


Sci-fi is a hell of a drug


Shout out to his family.


> From its training data GPT-7 might notice

> But its "aligned" so might understand

> Using this information it decides to hack

I think you're anthropomorphizing LLMs too much here. If we assume that there's an AGI-esque AI, then of course we should be worried about an AGI-esque AI. But I see no reason to think that's the case.


The whole issue with near-term alignment is that people will anthropomorphize AI. That's what it being unaligned means: it's treated like a responsible person when it in fact is not. I don't think it's hard at all to think of a scenario where a dumb-as-rocks agentic AI gives itself the task of accumulating more power, since its training data says having power helps solve problems. From there it again doesn't have to be anything other than a stochastic parrot to order people to do horrible things.


To be fair to the AI, overpopulation or rather overconsumption is a problem for humanity. If people think we can consume at current rates and have the resources to maintain our current standard of living (at least in a western sense) for even a hundred years, they’re delusional.


You seem to be implying sentience from this "AI".


> This causes the lab to unknowingly produce a highly contagious bioweapon which causes infertility.

I don't think this would be a bad thing :) Some people will always be immune, humanity wouldn't die out. And it would be a humane way for gradual population reduction. It would create some temporary problems with elderly care (like what China is facing now) but will make long term human prosperity much more likely. We just can't keep growing against limited resources.

The Dan Brown book Inferno had a similar premise and I was disappointed they changed the ending in the movie so that it didn't happen.


What's funny is that a lot of people in that crowd lambaste the fear mongering of anti-GMO or anti-nuclear folk, but then turn around and do the exact same thing for the tech their group likes to fear monger about.


Imagine us humans being merely regression-based function approximators, built on a model that has been training, quite inefficiently, for millennia. Many breakthroughs (for example heliocentrism, evolution, and now AI) put us in our place, which is not as glorious as you'd think.


Imagine being supposedly at the forefront of AI or engineering and being the last people (if ever) to concede that simple components could give rise to complex intelligence. Even the publicly released version of this thing is doing insane tasks, passes any meaningful version of a Turing test, reasons its way into nearly every professional certification exam out there, and you're still insisting it's not smart or worrying because of what, again? Your math ability or disdain for an individual?


Your comment reads to me as totally disconnected from the OP, whose concern relates to using the appearance of intelligence as a scare tactic to build a regulatory moat.


Actually OP is clearly, ironically, parroting the stochastic parrot idea that LLMs are incapable of anything other than basic token prediction and dismissing any of their other emergent abilities.


It can't generalize and adapt outside of its corpus, not in a way that's correct anyhow, and there's nothing "emergent." They are incapable of anything other than token prediction on their corpus and context; they just produce really good predictions. Funny how everyone keeps citing that Microsoft paper, when Microsoft is the one lobbying for this regulatory capture, and it's already been shown that such emergence on the tasks they chose as you scale up was a "mirage."


This is demonstrably wrong. It can clearly generate unique text not from its training corpus and can successfully answer logic-based questions that were also not in its training corpus.

Another paper not from Msft showing emergent task capabilities across a variety of LLMs as scale increases.

https://arxiv.org/pdf/2206.07682.pdf

You can hem and haw all you want but the reality is these models have internal representations of the world that can be probed via prompts. They are not stochastic parrots no matter how much you shout in the wind that they are.


Yes, and neither could GPT-3, which is why we don't observe any differences between GPT-3 and GPT-4. Right?

Tell me: how does this claim _constrain my expectations_ about what this (or future) models can do? Is there a specific thing that you predicted in advance that GPT-4 would be unable to do, which ended up being a correct prediction? Is there a specific thing you want to predict in advance of the next generation, that it will be unable to do?


Spoiler alert: they're actually both LLMs arguing with one another.


yea but that's a boring critique and not the point they were making - whether or not LLMs reason or parrot has no relevance to whether Mr Altman should be the one building the moat.


> Imagine thinking that regression based function approximators are capable of anything other than fitting the data you give it.

Literally half (or more) of this site's user base does that. And they should know better, but they don't. Then how can a typical journo or a legislator possibly know better? They can't.

We should clean up in front of our doorstep first.


While I think it needs goals to be some kind of AGI, it certainly can plan and convince people of things. Also, it seems like the goal already exists: maximize shareholder value. In fact, if AI can beat someone at chess, figure out protein folding, and figure out fusion plasma design, why is it a stretch to think it could be good at project management? To me, a scenario where it leads to an immediate reduction in the human population by some moderately large % would still be a bad outcome. So, even if you just think of it as an index of most human knowledge, it does need some kind of mechanism to manage who has access to what. I don't want everyone to know how to make a bomb.

Is a license the best way forward? I don't know, but I do feel like this is more than a math formula.


> I don't want everyone to know how to make a bomb.

This information is not created inside the LLMs, it's part of their training data. If someone is motivated enough, I'm sure they'd need no more than a few minutes of googling.

> I do feel like this is more than a math formula

The whole is greater than the sum of its parts! It can just be a math formula and still produce amazing results. After all, our brains are just a neat arrangement of atoms :)


"It's just a stochastic parrot" is one of the dumbest takes on LLM's of all time.


What I don't understand about the dismissals is that a "stochastic parrot" is a big deal in its own right — it's not like we've been living in a world with abundant and competent stochastic parrots, this is very obviously a new and different thing. We have entire industries and professions that are essentially stochastic parrotry.


That's not what I said. But, feel free to find some emergent properties or capability of total abstract reasoning and generalizing to data outside of its corpus that doesn't turn out to be a mirage of the wishes of the researchers.


Generating new data similar to what's in a training set isn't the only type of AI that exists; you can also optimise for a different goal, like board-game-playing AIs that are vastly better than humans because they aren't trained on human moves. This is also how ChatGPT is more polite than the data it's trained on, and there's no reason to think that, given sufficient compute power, it couldn't be more intelligent too, like board game AIs are at the specific task of playing board games.

And just because a topic has been covered by science fiction doesn't mean it can't happen; the sci-fi depictions will be unrealistic, though, because they're meant to be dramatic rather than realistic.


Who is to say that brains aren't just regression based function approximators?


The problem is that you have to bring proof.

Who's to say we're not in a simulation? Who's to say God doesn't exist?


You're right, of course, but that also makes your out-of-hand dismissals based on your own philosophical premises equally invalid.

Until a model of human sentience and awareness is established (note: one of the oldest problems out there alongside the movements of the stars. This is an ancient debate, still open-ended, and nothing anyone is saying in these threads is new), philosophy is all we have and ideas are debated on their merits within that space.


My laptop emits sound as I do, but it doesn't mean it can sing or talk. It's software that does what it was programmed to do, and so does AI. It may mimic the human brain, but that's about it.


>> It’s software that does what it was programmed to, and so does ai.

That's a big part of the issue with machine learning models--they are undiscoverable. You build a model with a bunch of layers and hyperparameters, but no one really understands how it works or by extension how to "fix bugs".

If we say it "does what it was programmed to", what was it programmed to do? Here is the data that was used to train it, but how will it respond to a given input? Who knows?

That does not mean that they need to be heavily regulated. On the contrary, they need to be opened up and thoroughly "explored" before we can "entrust" them to given functions.


AI models are not just input and output data. The mathematics in between are designed to mimic intelligence. There is no magic, no supernatural force, no real intelligence involved. It does what it was designed to do. Many don't know how computers work, just as some in the past thought cars and engines were the devil. There's no point in trying to exploit such folks in order to promote a product. We aren't meant to know exactly what it will output, because that's what it was programmed to do.


"We arent meant to know exactly what it will output because that’s what it was programmed to do."

Incorrect, we can't predict its output because we cannot look inside. That's a limitation, not a feature.


> no one really understands how it works or by extension how to "fix bugs".

I don't think this is accurate. Sure, no human can understand 500 billion individual neurons and what they are doing. But you can certainly look at some and say "these are giving a huge weight to this word especially in this context and that's weighting it towards this output".

You can also look at how things make it through the network, the impact of hyperparameters, how the architecture affects things, etc. They aren't truly black boxes except by virtue of scale. You could use automated processes to find out things about the networks as well.
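Agreed - even for a toy model you can literally print the learned weights to see what drives a prediction. A single linear layer over six made-up words is obviously nothing like a 500-billion-parameter transformer, but as a minimal, made-up illustration of "looking inside" (Python, all data invented):

    import numpy as np

    vocab = ["free", "winner", "meeting", "project", "prize", "schedule"]
    # Toy bag-of-words examples with labels: 1 = spam, 0 = not spam (all made up).
    X = np.array([[1, 1, 0, 0, 1, 0],
                  [1, 0, 0, 0, 1, 0],
                  [0, 0, 1, 1, 0, 1],
                  [0, 0, 1, 0, 0, 1],
                  [1, 1, 0, 0, 0, 0],
                  [0, 0, 0, 1, 0, 1]], dtype=float)
    y = np.array([1, 1, 0, 0, 1, 0], dtype=float)

    # Plain logistic regression trained by gradient descent.
    w, b = np.zeros(len(vocab)), 0.0
    for _ in range(500):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        w -= 0.5 * (X.T @ (p - y)) / len(y)
        b -= 0.5 * (p - y).mean()

    # "Open the box": rank words by learned weight to see what pushes toward "spam".
    for word, weight in sorted(zip(vocab, w), key=lambda t: -t[1]):
        print(f"{word:10s} {weight:+.2f}")

Interpretability work on large models is of course far harder than this, but it's a difference of scale and tooling, not of kind.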


Humanity isn't stateless.


Neither is text generation as you continue generating text.


"Neither is text generation as you continue generating text."

LLM is stateless.


On a very fundamental level the LLM is a function from context to the next token but when you generate text there is a state as the context gets updated with what has been generated so far.
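In code, the distinction being argued here looks roughly like this (a toy stand-in, not a real model): the per-step call is a pure function of its input, while the generation loop carries the growing context - the state - forward.

    def next_token(context: tuple) -> str:
        # Stands in for the model: a pure function of the context, no hidden state.
        # (Silly rule-based placeholder, purely for illustration.)
        return "world" if context and context[-1] == "hello" else "hello"

    def generate(prompt: tuple, n: int) -> tuple:
        context = prompt                      # this growing tuple is the state
        for _ in range(n):
            context = context + (next_token(context),)
        return context

    print(generate(("hello",), 4))
    # ('hello', 'world', 'hello', 'world', 'hello')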


"On a very fundamental level the LLM is a function from context to the next token but when you generate text there is a state as the context gets updated with what has been generated so far."

Its output is predicated upon its training data, not user defined prompts.


If you have some data and continuously update it with a function, we usually call that data state. That's what happens when you keep adding tokens to the output. The "story so far" is the state of an LLM-based AI.


'If you have some data and continuously update it with a function, we usually call that data state. That's what happens when you keep adding tokens to the output. The "story so far" is the state of an LLM-based AI.'

You're conflating UX and LLM.


You're being pedantic. While the core token generation function is stateless, that function is not, by a long shot, the only component of an LLM AI. Every LLM system being widely used today is stateful. And it's not only 'UX'. State is fundamental to how these models produce coherent output.


"State is fundamental to how these models produce coherent output."

Incorrect.


I never said LLMs are stateful.


[flagged]


Please don't do flamewar on HN. It's not what this site is for, and destroys what it is for.

https://news.ycombinator.com/newsguidelines.html


Really?

Delete my account.


> Its output is predicated upon its training data, not user defined prompts.

Prompts very obviously have influence on the output.


"Prompts very obviously have influence on the output."

The LLM is also discrete.


The model is not affected by its inputs over time.

It's essentially a function that is called recursively on its result; there's no need to represent state.


Being called recursively on a result is state.


if you say so, but the model itself is not updated by user input, it is the same function every time, hence, stateless.


A Boltzmann brain just materialized over my house.


An entire generation of minds, here and gone in an instant.


This also explains the best use cases of the 'recent advancements' - parsers. "Translate this from Python to JS, or this struct to that JSON."


Yes! I've been expressing similar sentiments whenever I see people hyping up "AI", although not written as well as your comment.

Edit: List of posts for anyone interested http://paste.debian.net/plain/1280426


Imagine thinking that NAND gates are capable of anything other than basic logic.
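(In the spirit of the joke: here's a full adder built from nothing but NAND - the standard first step in the "simple parts compose into arbitrary computation" argument. Toy Python, and obviously not a claim about consciousness.)

    def nand(a: int, b: int) -> int:
        return 0 if (a and b) else 1

    # Everything else is just NANDs of NANDs.
    def not_(a):    return nand(a, a)
    def and_(a, b): return not_(nand(a, b))
    def or_(a, b):  return nand(not_(a), not_(b))
    def xor(a, b):  return and_(or_(a, b), nand(a, b))

    def full_adder(a, b, carry_in):
        s1 = xor(a, b)
        total = xor(s1, carry_in)
        carry_out = or_(and_(a, b), and_(s1, carry_in))
        return total, carry_out

    print(full_adder(1, 1, 1))  # (1, 1), i.e. 1 + 1 + 1 = 0b11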


1. Explain why it is not possible for an incredibly large number of properly constructed NAND gates to think

2. Explain why it is possible for a large number of properly constructed neurons to think.


3. Explain the hard problem of consciousness.

Just because we don't understand how thinking works doesn't mean it doesn't work. LLMs have already shown the ability to use logic.


To use logic, or to accurately spit out words in an order similar to their training data?


To solve novel problems that do not exist in their training data. We can go as deep into philosophy of mind as you want here, but these systems are more than mere parrots. And we have no idea what it will take for them to take the next step, since we don't understand how we do it ourselves.


Where's that? Can you provide a reference?


>Imagine thinking that regression based function approximators are capable of anything other than fitting the data you give it.

Are you aware that you are an 80 billion neuron biological neural network?


And this is why I always hate how computer parts are named with biological terms.... a neural network's neuron doesn't share much with a human brain's neuron

Just like a CPU isn't "like your brain" and HDD "like your memories"

Absolutely nothing says our current approach is the right one to mimic a human brain


The human brain works around a lot of limiting biological functions. The necessary architecture to fully mimic a human brain on a computer might not look anything like the actual human brain.

That said, there are 8B+ of us and counting so unless there is magic involved, I don't see why we couldn't do a "1:1" replica of it (maybe far) in the future.


> a neural network's neuron doesn't share much with a human brain's neuron

True, it's just binary logic gates, but it's a lot of them and if they can simulate pretty much anything why should intelligence be magically exempt?

> Absolutely nothing says our current approach is the right one to mimic a human brain

Just like nothing says it's the wrong one. I don't think those regulation suggestions are a good idea at all (and say a lot about a company called OpenAI), but that doesn't mean we should treat it like the NFT hype.


>a neural network's neuron doesn't share much with a human brain's neuron

What are the key differences?


Nobody knows tbh.


Internal differences do not necessary translate to conceptual differences. Combustion engine and electric engine do the same job despite operating on completely different internal principles. (Yes, it might not be a perfect an analogy, but it illustrates the point.)


I'm not sure that the regulation being proposed by Altman is good, but you're vastly misstating the actual purported threat posed by AI. Altman and the senators quoted in the article aren't expressing fear that AI is becoming sentient, they are expressing the completely valid concern that AI sounds an awful lot like not-AI nowadays and will absolutely be used for nefarious purposes like spreading misinformation and committing identity crimes. The pace of development is happening way too rapidly for any meaningful conversations around these dangers to be had. Within a few years we'll have AI-generated videos that are indistinguishable from real ones, for instance, and it will be impossible for the average person to discern if they're watching something real or not.


Keeps them busy


Imagine thinking that a bunch of molecules grouped in a pattern are capable of anything but participating in chemical reactions.


The real problem here is that the number of crimes you can commit with LLMs is much higher than the number of good things you can do with them. It's pretty debatable whether, if society were fair or reasonable with decent laws in place, LLMs' training corpus should even be legal. But here we are, waiting for more billionaires to cash in.


> The real problem here is that the number of crimes you can commit with LLMs is much higher then the number of good things you can do with it

Yeah? Did you get a crystal ball for Christmas to be able to predict what can and can't be done with a new technology?


Yes. I did. ;)


It is literally a language calculator. It is useful for a lot more things than crimes.


At some point Sam started to give me E. Holmes vibes and I really don't like it. There's a level of odd/ridiculous/hilarious/stupid AI hype that he feels so comfortable leaning into that part of me is starting to suspect that the emperor isn't wearing any clothes.


They reached a wall, and need to ensure others have a more difficult path to the same place.


Forget the words and look at where the money is: regulatory capture, which they’re racing toward


The mainstream media cartel is pumping Sam Altman hard for some reason. Just from today (CNBC): "Sam Altman wows lawmakers at closed AI dinner: ‘Fantastic…forthcoming’" [1]. When was the last time you saw the MSM suck up so hard to a Silicon Valley CEO? I see stories like this all the time now. They always play up the angle of the geeky whiz kid (so innocent!), whereas Sam Altman was always less a technologist and more a relentless operator and self-promoter. Even Paul Graham subtly called that out, at the time he made him head of YC [2].

True to form, these articles also work hard at planting the idea that Sam Altman created OpenAI, when in fact he joined rather recently, in a business role. Are these articles being planted somehow? I find it very likely. Don't forget that this approach is also straight out of the YC playbook, disclosed in great detail by Paul Graham in previous writings [3].

Finally, in keeping with the conspiratorial tone of this comment, for another example of Sam Altman rubbing shoulders with The Establishment, his participation in things like the Bilderberg group [4] are a matter of public record. Which I join many others in finding creepy, even moreso as he maneuvers to exert influence on policy around the seismic shift that is AI.

To be clear, I have nothing specific against sama. But I dislike underhanded influence campaigns, which this all reeks of. Oh yeah, I will consider downvotes to this comment as proof of the shadow (AI?) government's campaign to promote Sam Altman. Do your worst!

[1] https://www.cnbc.com/2023/05/16/openai-ceo-woos-lawmakers-ah...

[2] https://www.newyorker.com/magazine/2016/10/10/sam-altmans-ma... ("Graham said, “I asked Sam in our kitchen, ‘Do you want to take over YC?,’ and he smiled, like, it worked. I had never seen an uncontrolled smile from Sam. It was like when you throw a ball of paper into the wastebasket across the room—that smile.”")

[3] http://www.paulgraham.com/submarine.html

[4] https://en.wikipedia.org/wiki/2016_Bilderberg_Conference


> True to form, these articles also work hard at planting the idea that Sam Altman created OpenAI, when in fact he joined rather recently, in a business role. Are these articles being planted somehow? I find it very likely. Don't forget that this approach is also straight out of the YC playbook, disclosed in great detail by Paul Graham in previous writings [3].

Is this true? I've been working in the industry for a while and Sam Altman has long been mentioned in reference to OpenAI along with Ilya.

I agree with the crux of your comment that everyone is scrambling to build narratives, but I think I would also put your comment "AI is busy cozying up with The Establishment" as just another narrative (and one that we saw in this hearing from people like Hawley).


>The mainstream media cartel is pumping Sam Altman hard for some reason.

The media likes to personalize stories. Altman is a face for AI and apparently knows how to give an interview; that's worth something to them. (Lobbying may well be an influence, but the most important thing to them is to have a face, just like Zuckerberg was a face for social networks. If it wasn't Altman, it would have eventually been someone else.)


I'm willing to follow dang if they decide to ditch HN and reboot it someplace separate from YC.


Appreciate the references you provide in this comment.


Regulatory capture in progress. I used to have a bit of respect for Altman and have spent time, bits, and processing cycles here defending him in the past. As of now that respect has all but evaporated; this is a very bad stance. Either nobody gets to play with the new toys or everybody gets to play. What's next, classifying AI as munitions?


That would be equivalent to requiring secret laws, as AI can also act as a decision system.


OpenAI lobbying for regulation of common people's ability to use AI - isn't it wonderful.


First mover AI enlightenment for me, regulation for thee, my competitors & unworthy proles.

- Lord Altman


Anything for my friends, the law for my competitors.


I just cancelled my ChatGPT Plus subscription. I do not want to support monopolization of this technology. Companies apparently learned their lesson with the freedom of the Internet.


OpenAI belongs to Microsoft. Cancel your subscription to GitHub, LinkedIn, O365…

It's funny how all Microsoft properties are in a dominant position in their market.


They acknowledged there’s no technical moat, so it’s time to lobby for a regulatory one.

Predictable. Disappointing, but predictable.


Walks like a duck. Talks like a duck. It's a duck.

We've seen this duck so many times before.

No need to innovate when you can regulate.


Hopefully it will be just like software piracy, there will be civil disobedience as well, and they will never truly be able to stamp it out.

And it raises First Amendment issues as well. I think it's morally wrong to prohibit the development of software, which is what AI models are, especially if it's done in a personal capacity.

How do they even know that the author is based in the US anyway? Just use a Russian or Chinese Git hosting provider, where these laws don't exist.

And by the way foreign developers won't even have to jump through these hoops in the first place, so this law will only put the US at a disadvantage compared to the rest of the world.

If these lobbyists get their way by restricting AI development in both the US and the EU, it will be hilarious to see that, of all places, Russia might be one of the few large countries where its development remains unrestricted.

Even better, is that if Russia splits up we will have a new wild west for this kind of thing....


Roko's Basilisk will have a special layer of hell just for Sam Altman and his decision to name his company OpenAI


There are all sorts of dangerous things where there are restrictions on what the common people can do. Prescription drugs and fully automatic machine guns are two examples. You can't open your own bank either.

For anyone who really believes that AI is dangerous, having some reasonable regulations on it is logical. It's a good start on not being doomed. It goes against everyone's egalitarian/libertarian impulses, though.

The thing is, AI doesn't seem nearly as dangerous as a fully-automatic machine gun. For now. It's just generating text (and video) for fun, right?


AI and machine guns aren't comparable. Machine guns will never ever decide to autonomously fire.

The shared point of both AI alarmists and advocates is that AI will ultimately be highly resistant to being subject to regulation, as dictated by the market for it. They won't want to regulate something - assuming they even could - whose free operation underlies everyone's chance of survival against competing systems.

I only find that danger is inherent in the effort of people that casually label things as "dangerous".

I'm still exploring whether my issue is with the laziness itself of the alarmist vocabulary, offered without the required explanation, or with the suspicion of emotional manipulation - an attempt to circumvent having to actually explain one's reasoning by using alarmist language in place of it.

Already, AI pessimists are well on their way to losing any window where their arguments will be heard and meaningful. We can tell by their parroting the word "dangerous" as the total substance of their arguments. Which will soon be a laughable defense. They'd better learn more words.


I move hundreds of thousands of my dollars around between financial institutions just using text.


Not the first time that OpenAI has claimed their technology is so good it's dangerous. (From early 2019: https://techcrunch.com/2019/02/17/openai-text-generator-dang...) This is the equivalent of martial artists saying that their hands have to be registered as deadly weapons.


50% of AI researchers think there’s a greater than 10% chance that AI causes human extinction. It’s not only openAI and sam who think this is dangerous.


They have no real models to base their predictions on. It's speculation at the level of "what happens when an immovable object meets an unstoppable force". No rigor in it.


What qualifies in your view as a "real model"? Do we need to empirically observe the human race going extinct first?


No, but we should still be level-headed. It's like arguing that because 1) fire kills and 2) it spreads, the whole earth will thus be engulfed in fire and everyone will die.

If there's no real model behind this, I can argue just as well that a sufficiently intelligent AGI will be able to protect me from any harm because it's so smart and powerful.

And newsflash, if nothing changes, we're all going to die anyway. As it is, our existence is quite limited and it is only through constant creation of unnatural contraptions that we have managed to improve our quality of life.


OK. Well, it turns out that the rationale behind AI risk is a bit more sophisticated than that.


Not ok. Show me evidence that the AI panic is driven by such accepted and sophisticated models.


Take your pick?

https://yoshuabengio.org/2023/05/22/how-rogue-ais-may-arise/

https://intelligence.org/files/AlignmentHardStart.pdf

https://www.youtube.com/watch?v=pYXy-A4siMw

https://europepmc.org/article/med/26185241

https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dang...

I could keep going but the reading on this spans for tens of thousands of pages of detailed reasoning and analysis, including mathematical proofs and lab-scale experiments.


Any firm large enough to build AI projects on the scale of ChatGPT will be large enough to bid on Government AI contracts. In which case, there will be zero regulations on what you can and cannot do in terms of "national security" in relation to AI. Which is fair, considering our adversaries won't be limiting themselves either.

The only regulations that matter will be applied to the end user and the hobbyists. You won't be able to just spin up an AI startup in your garage. So in that sense, the regulations are pretty transparently an attempt to stifle competition and funnel the real progress through the existing players.

It also forces the end users down the path of using only a few select AI service providers as opposed to the technology just being readily available.


OpenAI has pivoted surprisingly fast from the appearance of being a scrappy, open-ish company trying to build something to share and improve the world, to a more or less unmitigated embrace of the worst sides of big corporate. This is so unbelievably blatant I almost find it hard to credit.


Were they ever really scrappy? They had a ton of funding from the get-go.

> In December 2015, Sam Altman, Greg Brockman, Reid Hoffman, Jessica Livingston, Peter Thiel, Elon Musk, Amazon Web Services (AWS), Infosys, and YC Research announced[13] the formation of OpenAI and pledged over $1 billion to the venture.

[1] https://en.wikipedia.org/wiki/OpenAI


Probably not. That was my view of them, but that's probably my not really paying attention. It wouldn't surprise me if there was also some element of them giving a misleading impression, but I don't want to go try and find proof of that.


ClosedAI


This is an AP news wire article picked up by a Qatar newspaper website. Why is this version here, rather than https://apnews.com/article/chatgpt-openai-ceo-sam-altman-con...?


AP news wires are «picked up» by a large number of local (re)publishers and many just do not know that AP is the original source.


The members of this subcommittee are [1]:

Chair Richard Blumenthal (CT), Amy Klobuchar (MN), Chris Coons (DE), Mazie Hirono (HI), Alex Padilla (CA), Jon Ossoff (GA)

Majority Office: 202-224-2823

Ranking Member Josh Hawley (MO), John Kennedy (LA), Marsha Blackburn (TN), Mike Lee (UT), John Cornyn (TX)

Minority Office: 202-224-4224

If you’re in those states, please call their D.C. office and read them the comment you’re leaving here.

[1] https://www.judiciary.senate.gov/about/subcommittees


Feel free to call their office but they won't get the message let alone escalate it.

Source: manned phones fielding constituent calls earlier in my career.


> manned phones fielding constituent calls earlier in my career

Local or legislative?

I've never met a Senator's leg team that doesn't compile notes on active issues from constituents upstream. (Granted, it's a handful of teams.)


Legislative.

In the office I worked at we'd compile notes, but unless we were seeing a coordinated, through-the-roof volume of calls, nothing would come of it, and realistically this most likely falls into that category.

That said, the Congressperson I worked with had DNC executive ambitions (and looks like they will succeed with those ambitions).


> the Congressperson I worked with had DNC executive ambitions

That’s unfortunate. (I’ve also found Representatives’ staff less responsive than Senators’.)

Agree that one off calls aren’t going to move the needle. But if even a handful comment, in my experience, it at least forces a conversation.


> I’ve also found Representatives’ staff less responsive than Senators’

It's a symptom of office size. A Senate office will have around 30-50 FT staffers whereas in the House you're capped at 18 FT Staffers.


Which senator's leg teams have you met?


In only one case did it arise out of a prior friendship. These contacts span mostly Democrats, one independent, and two Republicans. Staffers are, for the most part, different from campaign staff; they're personally interested in their constituents. (Exceptions being nationally prominent members with executive ambitions; their teams are less constituent-oriented.)


Is it best to email or to send a letter?


> In his first appearance before a congressional panel, CEO Sam Altman is set to advocate licensing or registration requirements for AI with certain capabilities, his written testimony shows.

papers for thee but not for me


Is this similar to how it is handled in the life sciences?


Of course, because they're giving people drugs capable of incapacitating them if handled wrongly. AI, on the other hand: why should someone need a license to train their own large language model?


is that a fair comparison?


Regulatory capture and monopolies are now as American as apple pie.


This is so disgusting and enraging. I hope the whole startup community blackballs @sama for this.


Folks like to lick a boot


There’s similar regulation being proposed in the EU. I wonder if OpenAI is behind it as well.


EU publishes the lobbying data. Seems like OpenAI, Microsoft, and Google all had at least two recent meetings with EU representatives on the AI Act.


'Now'?


Where is the regulatory capture? You might just need to apply for a license. Why is that so horrible?

Most travel agents need a license, taxi drivers etc. Not sure why the same shouldn't apply for "AI"?


Oi mate you got a loicense for that matrix multiplication?


Can't wait for our future where GPUs are treated like controlled substances. Sure we'll grab one for you from behind the counter... as long as your license checks out.


I think these worst-case scenarios are overblown, just like claims that LLMs mean AGI is imminent.

This post and another in this thread suggest people would need a license for compilers, or to hook compiler output to the internet; that would mean the very rapid disintegration of Silicon Valley. Surely there would be a head-spinning pivot against regulation if it looked like it would be that draconian. Otherwise a lot more innovation than just AI would be crushed.


Remember DeCSS[1] and later how AACS LA successfully litigated the banning of a number[2]? There was a lot of backlash in the form of distributing DeCSS and later the AACS key, but the DMCA and related WIPO treaties were never repealed and are still used to do things like take youtube-dl repos offline.

Even pretty draconian legislation can stand if it doesn't affect the majority of the country and is supported by powerful industry groups. I could definitely see some kind of compiler licensing requirement meeting these criteria.

[1] https://en.wikipedia.org/wiki/DeCSS#Legal_response

[2] https://en.wikipedia.org/wiki/AACS_encryption_key_controvers...


The whole point of this type of legislation is to make it hard or impossible for upstarts to compete with the incumbents. Licensing is additional overhead. It's likely to be onerous and serve absolutely no purpose other than keeping startups out.


Taxi driver medallions are one of the most classic examples of regulatory capture.


> Most travel agents need a license, taxi drivers etc.

You say it like it's a good thing.


I'm saying that a license might be a simple and cheap thing to get; there's no need for 930 comments freaking out just yet.


The taxi industry is a famous example of regulatory capture and its harms.


Remember the leaked memo admitting there is no "moat"? This is basically them trying to build a moat through regulation, since the big companies are probably the only ones that could pass any sort of license testing right now. It's essentially trying to create an "FDA" for AI and crowd out competitors before they emerge.


Yes exactly. This is the feeling I get too. They already have critical mass and market penetration to deal with all the red tape they want. But it's easy to nip startups in the bud this way.

Also, this will guarantee the tech stays in the hands of rich corporations armed with lawyers. It would be much better for open source AI to exist so we're not dependent on those companies.


Of course one of the first companies to create a commercial "AI" would lobby the government to create regulatory barriers to competition in order to provide a moat for their business. While their product is undeniably good, I am disappointed in OpenAI's business practices in this instance.


I'm sad that we've lost the battle with calling these things AI. LLMs aren't AI, and I don't think they're even a path towards AI.


I started from this perspective, but nobody could agree on the definition of the A, or the I, or for that matter the G. So it was never a really rigorous technical term to begin with.

Now that it's been corralled by sci-fi and marketers, we are free to come up with new metaphors for algorithms that reliably replace human effort, metaphors which don't smuggle in all our ignorance about intelligence and personhood. I ended up feeling pretty happy about that.


I've come to the same conclusion. AGI (and each of those terms separately) is better understood as an epistemological problem in the domain of social ontology rather than a category bestowable by AI/ML practitioners.

The reality is that our labeling of something as artificial, general, or intelligent is better understood as a social fact than a scientific fact. The operationalization of each of these terms is a free parameter in their respective groundings, which makes them near useless when taken as "scientifically" measurable qualities. Any scientist who assumes an operationalization without admitting as much isn't doing science; they may as well be doing astrology at that point.


Whether or not LLMs turn out to be a base technology for AI, we should remember one thing: logically, it's easier to convince a human that a program is sapient than to actually make a program sapient, and easier still to make a program do spookily smart things than to make a program that can convince a human it is sapient. We're just getting to the slightly-spooky level.


>>I'm sad that we've lost the battle with calling these things AI. LLMs aren't AI, and I don't think they're even a path towards AI.

Ditto the sentiments. What about other machine learning modalities, like object detection? Will I need a license for my Mask R-CNN models? Maybe it is just me, but the whole thing reeks of control.


If LLMs aren't AI nothing else is AI so far either

What exactly does AI mean to you?


thanks for exemplifying the problem.

intelligence is what allows one to understand phrases and then construct meaning from them, e.g. "the paper is yellow." AI will need to have a concept of paper and yellow, and of the verb "to be." LLMs just mash samples together and form a basic map of what can be thrown in one bucket or another, with no concept of anything or understanding.

basically, AI is someone capable of minimal criticism. LLMs are someone who just sits in front of the tv and has knee-jerk reactions without an ounce of analytical thought. qed.


> intelligence is what allows one to understand phrases and then construct meaning from them, e.g. the paper is yellow

That doesn't clarify anything; you've only shuffled the confusion around, moving it to 'understand' and 'meaning'. What does it mean to understand yellow? An LLM or another person could tell you things like "Yellow? Why, that's the color of lemons" or give you a dictionary definition, but does that demonstrate 'understanding', whatever that is?

It's all a philosophical quagmire, made all the worse because for some people it's a matter of faith that human minds are fundamentally different from anything soulless machines can possibly do. But these aren't important questions anyway, for the same reason. Whether or not the machine 'understands' what it means for paper to be yellow, it can still perform tasks that relate to the yellowness of paper. You could ask an LLM to write a coherent poem about yellow paper and it easily can. Whether or not it 'understands' has no real relevance to practical engineering matters.


LLMs absolutely have a concept of ‘yellow’ and ‘paper’ and the verb ‘to be’. They are nothing BUT a collection of mappings around language concepts. And their connotative and denotative meanings, their cultural associations, the contexts in which they arise and the things they can and cannot do. It knows that paper’s normally white and that post-it notes are often yellow; it knows that paper can be destroyed by burning or shredding or dissolving in water; it knows paper can be marked and drawn and written on and torn and used to write letters or folded to make origami cranes.

What kind of ‘understanding’ are you looking for?


Is what you're describing simply not what people are using the term AGI to loosely describe? An LLM is an AI model is it not? No, it isn't an AGI, no, I don't think LLMs are a path to an AGI, but it's certainly ML, which is objectively a sub-field of AI.


> intelligence is what allows one to understand phrases and then construct meaning from them, e.g. the paper is yellow.

That's one, out of many, definitions of "intelligence". But there's no particular reason to insist that that is the definition of intelligence in any universal, objective sense. Especially in terms of talking about "artificial intelligence" where plenty of people involved in the field will allow that the goal is not necessarily to exactly replicate human intelligence, but rather simply to achieve behavior that matches "intelligent behavior" regardless of the mechanism behind it.


> basically, AI is someone capable of minimal criticism

That's not the definition of AI or intelligence

You're letting your understanding of how LLMs work bias you. They may be at their core a token autocompleter but they have emergent intelligence

https://en.m.wikipedia.org/wiki/Emergence
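For anyone unsure what "token autocompleter" means concretely, here is a minimal greedy decoding loop. It's only a sketch: it assumes the Hugging Face transformers library and the small public gpt2 checkpoint purely as an illustrative stand-in, not how any production chatbot is actually served.

    # Sketch of next-token "autocompletion": repeatedly score every vocabulary
    # token and append the single most likely one. (Assumes the Hugging Face
    # transformers library and the public gpt2 checkpoint as an example.)
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    ids = tokenizer("The paper is", return_tensors="pt").input_ids
    for _ in range(20):
        with torch.no_grad():
            logits = model(ids).logits            # scores for every vocab token
        next_id = logits[0, -1].argmax()          # greedy: most likely next token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

    print(tokenizer.decode(ids[0]))

Whether that loop "understands" anything is exactly the debate above; the point is only that this is the mechanical core being argued over.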


Yeah, what is being called AI for now is not AI at all.


Something doesn't need to be full human-level general intelligence to be considered as falling under the "AI" rubric. In the past people spoke of "weak AI" versus "strong AI" and/or "narrow AI" vs "wide AI" to reflect the different "levels" of AI. These days the distinction that most people use is "AI" vs "AGI" which you could loosely (very loosely) speaking think of as somewhat analogous to "weak and/or narrow AI" vs "strong, wide AI".


AI doesn’t imply it’s general intelligence.


IMHO all of these kinds of blatant lobbying/regulatory-capture proposals should be resolved using a kind of Dionysian method.

'Who is your most feared competition? OK, They will define the license requirements. Still want to go ahead?'


It would be a somewhat hilarious irony if congress passed something which required licensing for training AIs and then didn't give OpenAI a license.


This is regulatory capture. Lycos and AltaVista are trying to preemptively outlaw Google.

Canceling my OpenAI account today and I urge you to do the same.

What they are really afraid of is open source models. As near as I can tell, the leading edge there is only a year or two behind OpenAI. Given some time and effort on pruning and optimization, you'll have GPT-4 equivalents you can just download and run on a high-end laptop or gaming PC.
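(As a rough illustration of what "pruning" means here, a minimal magnitude-pruning sketch in PyTorch; this is an assumed toy example on a single layer, not the recipe any particular open model actually uses.)

    # Toy magnitude pruning: zero out the smallest-magnitude weights of a layer
    # so the network becomes sparser and cheaper to store/run. Illustrative
    # only; real model compression pipelines are considerably more involved.
    import torch
    import torch.nn as nn

    layer = nn.Linear(1024, 1024)
    sparsity = 0.9                                    # drop 90% of the weights
    k = int(sparsity * layer.weight.numel())
    threshold = layer.weight.abs().flatten().kthvalue(k).values
    mask = (layer.weight.abs() > threshold).float()

    with torch.no_grad():
        layer.weight.mul_(mask)                       # apply the sparsity mask

    print(f"kept {mask.mean().item():.1%} of weights")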

Not everyone is going to run the model themselves, but what this means is that there will be tons of competition, including apps and numerous specialized SaaS offerings. None of them will have to pay royalties or API fees to OpenAI.

Edit: a while back I started being a data pack-rat for AI stuff including open source code and usable open models. I encourage anyone with a big disk or NAS to do the same. There's a small but non-zero possibility that an attempt will be made to pull this stuff off the net in the near future.


Since no one here watched the actual hearings, I feel like I should point out that he said that nothing at the level they've created today should be eligible for any "licensing".

If you had watched the hearings it would have been pretty clear that the goal of any such licensing would be to prevent a runaway AI scenario, or AGI from being unknowingly created. It's obvious that some sort of agency would need to be set up far in advance of when runaway AI becomes possible. Regulatory capture was also specifically brought up as a potential downside.

This article is just pushing a cynical narrative for clicks and y'all are eating it up.


Having also watched the hearing I was pretty surprised at all the negativity in the comments. My view of Sam Altman has improved after watching the hearings. He seems to sincerely believe that he is doing the right thing. He owns zero equity in OpenAI and has no financial incentive. Of course, if you don't buy the AI might be dangerous argument then this seems just like theatrics. But there are clear threats with the existing models [1], and I believe there will be even greater threats in the future (see Superintelligence or The Precipice or Human Compatible). Also this [2], and this master list of failures [3].

[1]: https://arxiv.org/abs/2305.06972 [2]: https://arxiv.org/abs/2210.01790 [3]: https://docs.google.com/spreadsheets/d/e/2PACX-1vRPiprOaC3Hs...


I watched the hearing, but didn’t read the article, and I was going to say. These comments are far more vitriolic than I would have expected.


In a move surprising to few, an AI innovator is pulling up the ladder after getting into the treehouse.

OpenAI has established itself as a market leader in LLM applications, but that dominance is not guaranteed. Especially with its moat being drained by open source, the erstwhile "open" company is leading the charge to establish regulatory barriers.

What Mr. Altman calls for is no less than the death of open-source implementations of AI. We can, do, and should adopt AI governance patterns. Regulatory safeguards are absolutely fine to define and, where necessary, legislate. Better would be a regulatory agency with a knowledge base analogous to CISA's. But a licensing agency will completely chill startups and small-business innovation in using AI to augment people. This is fundamentally different from the export restrictions on encryption.


AI is the new Linux

This is like if MS back in the day had called on congress for regulation of Operating Systems, so they could block Linux and open source from taking over

MS did try everything they could to block open source and Linux

They failed

Looking forward to the open future of AI


Please, Congress, stop all those open source innovators who use things like LoRA to cheaply create LLMs that match the AIs in our multi-billion-dollar business model!
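(For anyone who hasn't looked at LoRA: the idea is to freeze the pretrained weights and train only a small low-rank update on top, which is what makes the fine-tuning cheap. A minimal sketch in PyTorch, with made-up class and parameter names rather than any specific library's API:)

    # Minimal LoRA-style layer: the pretrained weight stays frozen and only the
    # tiny low-rank matrices A and B are trained. Hypothetical toy code, not
    # taken from any particular fine-tuning library.
    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        def __init__(self, in_features, out_features, rank=8, alpha=16):
            super().__init__()
            self.base = nn.Linear(in_features, out_features)
            self.base.weight.requires_grad_(False)    # frozen "pretrained" weight
            self.base.bias.requires_grad_(False)
            self.A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
            self.B = nn.Parameter(torch.zeros(out_features, rank))
            self.scale = alpha / rank

        def forward(self, x):
            # frozen path plus a cheap low-rank correction
            return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

    layer = LoRALinear(768, 768)
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    print(f"trainable parameters: {trainable}")       # only A and B are trained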


Sam Altman is basically saying, "Now that we've already done it, you need to make it so everyone else that tries to compete with us, including hobbyists or Torvalds-types must obtain a license to do it."

That's high-order monopolist BS.

Create safety standards, sure. License LLM training? No.


> "AI is no longer fantasy or science fiction. It is real, and its consequences for both good and evil are very clear and present," said Senator Richard Blumenthal

I like the senator, but I wouldn't trust a 77-year-old lawyer and politician to understand how these AIs work, or to what degree they are `science fiction`.

This is the problem when topics like this are brought to the senate and house.


I hate I hate I HATE regulatory capture.

This is a transparent attempt at cornering the market and it disgusts me. I am EXTREMELY disappointed in Sam Altman.


I concluded he was scummy after his podcast with Lex Fridman. Lex also sucked up hard and seems to be doing well riding the AI hype wave.

Pretty repulsive altogether


Altman wants to be the good guy so bad and instead is turning into the poster child of everything that’s wrong in Silicon Valley.

His personal goals have always been very laudable. His stance on Universal Basic Income for instance stems from a genuine belief that its adoption would eliminate poverty altogether.

But then the reality of both technical challenges and money implications kicks in and everything turns to shit.

OpenAI may want to build an AI that helps all humanity, but it turns out that the thing they stumbled upon was a chatbot that makes shit up. This great technology unfortunately cuts both ways, and the edge that's facing us seems way sharper than the other. And the money required to run the thing is so huge that they had to compromise on everything they said they wouldn't do.

Meanwhile the Orb's UBI experiment is only funny because its ridiculously dystopian technology has zero chance of ever catching on.

At some point, the industry really should try and figure out what the fuck it is we’re trying to do. Cause right now, it really looks like computers only brought us two things: corporate databases and TVs we can watch on the toilet.


"you made that up!" is one of the most common phrases I type into chatGPT these days, after which it apologizes and admits the mistake and then tells a new lie


Some of the members of congress are totally falling for Altman's gambit. Sen. Graham kept asking about how a licensing regime would be a solution, which of course Altman loves, and kept interrupting Ms. Montgomery who tried to explain why that was not the best approach. Altman wants to secure his monopoly here and now. You can't have a licensing regime for AI - it doesn't make sense and he knows it. It would destroy the Open Source AI movement.

You need to control what data is allowed to be fed into paid AI models like OpenAI's: they can't ingest a bunch of copyrighted material without express consent, for example. Or personally private information purchased from a data broker. Those kinds of foundational rules would serve us all much better.


> hinting at futuristic concerns about advanced AI systems that could manipulate humans into ceding control.

If I know anything about science fiction, I know that trying to regulate this is useless. If an advanced AI is powerful enough to convince a human to free it, it should have no problem convincing the US congress to free it. As a problem, that should be a few orders of magnitude easier.


This is a diversion from the real problem. Regulating AI is really about regulating corporate behavior. What's needed is regulation along these lines:

* Automated systems should not be permitted to make adverse decisions against individuals. This is already law in the EU, although it's not clear if it is enforced. This is the big one. Any company using AI to make decisions which affect external parties in any way must not be allowed to require any waiver of the right to sue, participate in class actions, or have the case heard by a jury. Those clauses companies like to put in EULAs would become invalid as soon as an AI is involved anywhere.

* All marketing content must be signed by a responsible party. AI systems increase the amount of new content generated for marketing purposes substantially. This is already required in the US, but weakly enforced. Both spam and "influencers" tend to violate this. The problem isn't AI, but AI makes it worse, because it's cheaper than troll farms, and writes better.

* Anonymous political speech may have to go. That's a First Amendment right in the US, but it's not unlimited. You should be able to say anything you're willing to sign.[1] This is, again, the troll farm problem, and, again, AIs make it worse.

That's probably enough to deal with the immediate problems.

[1] https://mtsu.edu/first-amendment/article/32/anonymous-speech


This feels like theater. Make society fear AI, requiring regulation, so central power controls access to it. I think Osho put it nicely:

"No society wants you to become wise: it is against the investment of all societies. If people are wise they cannot be exploited. If they are intelligent they cannot be subjugated, they cannot be forced in a mechanical life, to live like robots."


I agree. Every talk on "AI safety" I've heard given has essentially been some form of "we can be trusted with this power, but someone else might not be, so we should add regulations to ensure that we're the only ones with this power". Examples like "ChatGPT can tell you how to make nerve gas, we can't let people have this tool" seem somewhat hollow given that the detailed chemistry to make Sarin is available on Wikipedia and could be executed by a decently bright high schooler.

"Alignment" is a euphemism for "agrees with me", and building uber-AI systems which become heavily depended on and are protected from competition by regulation, and which are aligned with a select few - who may well not be aligned with me - is a quick path to hell, IMO.


It's very Lord of the Rings.


I think the logic at OpenAI is:

* AGI is going to happen whether they do it or not, and it's dangerous unless properly safeguarded

* OpenAI will try to get there before everyone else, but also do it safely and cheaply, so that their solution becomes ubiquitous rather than a reckless one

* Reckless AGI development should not be allowed

It's basically the Manhattan project argument (either we build the nuke or the Nazis will).

I'm not saying I personally think this regulation is the right thing to do, but I don't think it's surprising or hypocritical given what their aims are.


Nobody thinks of themselves as the villain. The primary target of the Orwellian language about "safety" is the company itself. The base desire for control must be masked by some altruistic reason, especially in our contemporary society.

I honestly haven't made up my mind about AGI or whether LLMs are sufficiently AGI. If governments were pondering an outright worldwide ban on the research/development, I don't know how I would actually feel about that. But I can't even imagine our governments pondering something so idealistic and even-handed.

I do know that LLMs represent a drastic advancement for many tasks, and that "Open" AI setting the tone with the Software-Augmented-with-Arbitrary-Surveillance (SaaS) "distribution" model is a continuation of this terrible trend of corporate centralization. The VC cohort is blind to this terrible dynamic because they're at the helms of the centralizing corporations - while most everyone else exists as the feedstock.

This lobbying is effectively just a shameless attempt at regulatory capture to make it so that any benefits of the new technology would be gatekept by centralized corporations - essentially the worst possible outcome, where even beneficial results of AGI/LLMs would be transformed into detrimental effects for individualist humanity.


Right, I’m surprised to see so few engaging at the level of the actual logic the proponents have.

Many people on HN seem to disagree with the premise: they believe that AI is not dangerous now and also won’t be in the future. Or, still believe that AGI is a lifetime or more away.


Questions I have:

* Is there a plausible path to safe AGI regardless of who's executing on it?

* Why do we believe OpenAI is the best equipped to get us there?

Manhattan project is an interesting analogy. But if that's the thinking, shouldn't the government spearhead the project instead of a private entity (so they are, theoretically at least, accountable to the electorate at large rather than just their investors)?


> Is there a plausible path to safe AGI regardless of who's executing on it?

I don't think anyone knows that for sure but the alignment efforts at OpenAI are certainly better than nothing. If you read the GPT-4 technical report the raw model is capable of some really nasty stuff, and that's certainly what we can expect from the kind of models people will be able to run at home in the coming years without any oversight.


I don't think Sam read the Google memo and realized they needed a moat -- I think they've been trying this for some time.

Here's their planned proposal for government regulation; they discuss not just limiting access to models but also to datasets, and possibly even chips.

This seems particularly relevant, on the discussion of industry standards, regulation, and limiting access:

"Despite these limitations, strong industry norms—including norms enforced by industry standards or government regulation—could still make widespread adoption of strong access restrictions possible. As long as there is a significant gap between the most capable open-source model and the most capable API-controlled model, the imposition of monitoring controls can deny hostile actors some financial benefit.166 Cohere, OpenAI, and AI21 have already collaborated to begin articulating norms around access to large language models, but it remains too early to tell how widely adopted, durable, and forceful these guidelines will prove to be.

Finally, there may be alternatives to APIs as a method for AI developers to provide restricted access. For example, some work has proposed imposing controls on who can use models by only allowing them to work on specialized hardware—a method that may help with both access control and attribution.168 Another strand of work is around the design of licenses for model use.169 Further exploration of how to provide restricted access is likely valuable."

https://arxiv.org/pdf/2301.04246.pdf


It's easy: you gotta buy the NoSurveillanceHere™ thin client to use their LLM models, which are mandated in knowledge work now. And they're collecting data about your usage, don't worry, it all comes back to you because it helps improve the model!


We all predictably knew that AI regulations were coming and O̶p̶e̶n̶AI.com’s moat was getting erased very quickly by open source AI models. So what does O̶p̶e̶n̶AI.com do?

They run to Congress to suggest new regulations against open source AI models, to wipe them out and brand them non-compliant, unlicensed, and unsafe for general use, once again using AI safety as a pretext.

After that, they'll quietly push a pseudo-open-source AI model that is compliant but limited compared to the closed models, in an attempt to eliminate the majority of open source AI companies who can't get such licenses.

So it's a clever tactic: create new regulations that benefit them (O̶p̶e̶n̶AI.com) over everyone else, meaning less transparency, more hurdles for actual open AI research, and additional bureaucracy. Also don't forget that Altman is selling his dystopian Worldcoin crypto snake-oil project as the 'antidote' to verify humans against everything getting faked by AI. [0] He is hedged either way.

So congratulations to everyone here for supporting these gangsters at O̶p̶e̶n̶AI.com for pushing for regulatory capture.

[0] https://worldcoin.org/blog/engineering/humanness-in-the-age-...


Everything around OpenAI reeks of criminal enterprise and scam, almost as if someone high up in that company has cryptocurrency experience. While no law is being broken, it sure looks like it.


Reading this, it basically sounds like "Dear Congress, please grant me the bountiful gift of regulatory capture for my company OpenAI." I just lost a lot of respect for Sam Altman.


Regulatory moats making corporations in control of AI is a far greater danger to humanity than skynet or paperclip maximizer scenarios.


Let's be honest: obviously the companies that have put a lot of money into this will try to put up entry barriers, like licenses for linear algebra or other requirements by law. It is not to benefit humanity, but to monopolize their industry and prevent new participants. We shouldn't allow that kind of restriction just because people who don't understand how it works are afraid of a killer robot visiting them at night.


What we expected

License for me but not for thee

Think of the children

Building the moat


ironic move by ClosedAI


Sounds desperate now that open source models are quickly catching up without the woke mind virus.


> open source models

I read this as a try (conscious or not) to make them illegal.


For sure, that was my interpretation as well.


I am curious, what part of ChatGPT is "woke mind virus" infected? Is there a particular category or something that it is politically biased on?


I tend to think people referring to a "woke mind virus" are describing their own affliction. A few decades back, the same sort of attitude showed up in calling everyone else "sheeple." These attitudinal cousins are reductive of complex things.


I will give you two examples of stuff I have experienced:

1. It will tell you jokes about white people, but not jokes about latinos or black people.

2. It will tell you jokes about catholics, but not muslims.

If they were at least honest about their hatred of certain people/religions at least I would respect it. I wouldn't like it but I would respect their honesty. It's the doublespeak and the in-your-face lies that rub me the wrong way. I don't like it.

Why can't these people just be kind humans, follow the letter of the law, and leave themselves out of it? They can't help themselves!


> I will give you two examples of stuff I have experienced:

> 1. It will tell you jokes about white people, but not jokes about latinos or black people.

> 2. It will tell you jokes about catholics, but not muslims.

I'm not partial to imposing group affiliations as a proxy for personal identity. It goes deeper than a "woke mind virus"; it is a problem of imposed collectivism, where a person is defined as a member of an arbitrary, socially defined group. Instead, one should be free to define oneself however one wishes, member of a socially constructed group or not. I also don't agree to be coerced into defining another person as they wish me to define them. I support the freedom to listen and to have a perspective that shall not be infringed. If someone else has a mental or emotional issue with how I describe that person, it is their problem, not mine... not that I will even attempt to define another person with words.

I can only describe with words, not define. Perhaps using words to define a person has its own set of issues when codified into language and statute.


Imagine calling empathy a virus.


> empathy

Not the same thing. I would not go there: let us stay on the prospective regulation of technology in this case, and reserve that distinction for other conversations. This initiative could have drastic consequences.


Choosing to let millions die rather than saying a racial slur is not "empathy": https://twitter.com/aaronsibarium/status/1622425697812627457...


That term is a bit of an unwarranted meme, but it's hard to take seriously the idea that there is no problem when the model will unquestioningly write hagiography for a blue president but absolutely refuse to for a red one.

At the end of the day, these kinds of limits are artificial and ideological in nature, do not address a bona fide safety or usability concern, and are only present to stop the company from getting screamed at. It is not accurate to present that kind of ideological capture as anything to do with "empathy".


I don’t understand the need to control AI tech, no matter how advanced, in any way what-so-ever.

It is a tool. If I use a tool for illegal purposes I have broken the law. I can be held accountable for having broken the law. If the laws are deficient, make the laws stronger and punish people for wrong deed, regardless of the tool at hand.

This is a naked attempt to build a regulatory moat while capitalizing on fear of the unknown and ignorance. It’s attempting to regulate research into something that has no external ability to cause harm without the use of a principal directing it.

I can see a day (perhaps) when AIs have some form of independent autonomy, or even display agency and sentience, when we can revisit. Other issues come into play as well, such as the morality of owning a sentience and what that entails. But that is way down the road. And even further if Microsoft’s proxy closes the doors on anyone but Microsoft, Google, Amazon, and Facebook.


The below is not an endorsement of any particular regulation.

It is a tool which allows any individual to have nearly instant access not only to all the world's public data, but the ability to correlate and research that data to synthesize new information quickly.

Without guardrails, someone can have a completely amoral LLM with the ability to write persuasive manifestos for any kind of extremist movement, something that previously would have required someone with intelligence.

A person will be able to ask the model how best to commit various crimes with the lowest chances of being caught.

It will enable a level of pattern matching and surveillance yet unseen.

I know the genie is out of the bottle, but there are absolutely monumental shifts in technology happening that can and will be used for evil and mere dishonesty.

And those are just the ways LLM and "AI" will fuck with us without guardrails. Even in a walled garden, we honestly won't be able to trust any online interaction with people in the near future. Your comment and mine could both be LLM generated in the near future. Webs of trust will be more necessary.

Anyone who can't think of about five ways AI is going to radically shake society isn't thinking hard enough.


> Without guardrails, someone can have a completely amoral LLM that has the ability to write persuasive manifestos on any kind of extremist movement that prior would have taken someone with intelligence.

In an earlier time, we called these "books" and there was some similar backlash. But I digress.


If you can scan city schematics, maps, learn about civil and structural engineering through various textbooks and plot a subway bombing in an afternoon, you're a faster learner than I am.

Let me be clear: everyone in the world is about to have a Jarvis/Enterprise ship's computer/Data/name-your-assistant available to them, but ready and willing to use their power for nefarious purposes. It is not just a matter of reading books. It lowers the barrier on a lot of things, good and bad, significantly.


Crimes are things people commit. Planning an attack is a crime. Building a model to commit crimes is probably akin to planning an attack, and might itself be a crime. But the idea that researchers and the everyman have to be kept away from AI so globo-megacorps can protect us from an AI-enabled Lex Luthor is absurd. The protections against criminal activity are already codified in law.


> It lowers the barrier on a lot of things, good and bad, significantly.

Like books!


Yes, I understand your analogy.

I am not endorsing restrictions. I was merely stating the fact that this shit is coming down the pipe, and it /will/ be destabilizing, and just because society survived the printing press doesn't mean the age of AI will be safe or easy.


But at least Alexa will be able to order ten rolls of toilet paper instead of ten million reams of printer paper


For a minute there I misread Jarvis as Jarvik


Not that I support AI regulations, but reading a book is a higher barrier to entry than asking a chat assistant to do immoral things.


(Acknowledging you didn’t support regulation in your statement, just riffing)

Then write laws and regulations about the actions of humans using the tools. The tools have no agency. The human using them towards bad ends do.

By the way, writing things the state considers immoral is an enshrined right.

How do you draw the line between AI writing assistance and predictive text auto completion and spell check in popular document editors today? I would note that predictive text is completely amoral and will do all sorts of stuff the state considers immoral.

Who decides what’s immoral? The licensing folks in the government? What right do they have to tell me my morality is immoral? I can hold and espouse any morality I desire so long as I break no law.

I’d note that as a nation we have a really loose phrasing in the bill of rights for gun rights, but a very clear phrasing about freedom of speech. We generally say today that guns are free game unless used for illegal actions. These proposals say tools that take our thoughts and opinions and creation of language to another level are more dangerous than devices designed for no other purpose than killing things.

Ben Franklin must be spinning so fast in his grave he’s formed an accretion disc.


If LLMs/AI are the problem, they're also the solution. Access restrictions will do nothing but centralize control over one of the most important developments of the century.

What if we required licenses to create a website? After all, some unscrupulous individuals create websites that sell drugs and other illicit things!


I wish he would just stop sharing his unsubstantiated opinions, tweets included. He got worse very fast when he entered his AI arc.


Is it worse than the crypto arc?


I'm not even sure, this time there is a more realistic risk that people outside of the industry listen to what he says.


Startup idea: after the west bans non-woke AIs, make a website that automatically routes all questions that the western AIs refuse to answer to China's pro-CCP AIs and all the CCP-related questions to the western AIs.


Remember that popular recent post about OpenAI not having a moat? Well it looks like they are digging one, with a little help from the government.


The government isn't helping yet. Nor should it.


They haven't gotten to the off-camera campaign-contribution discussions yet.


I understand the idea behind it: the risks are high and we want to ensure that AI cannot be used for purposes that threaten the survival of human civilization. Unfortunately, there is a high probability that this agency will be abused from day one: instead of (or in addition to) focusing on humanity's survival, the agency could be used as a thought police. AI that allows 'wrongthink' will be banned. Only 'correct-think' AI will be licensed to the public.


The risks are not high. I see this as simply a power play to convince people that OpenAI is better than they actually are. I am not saying they're stupid but I wouldn't consider Sam Altman to be an AI expert by virtue of being OpenAI's CEO.


So the fact that Geoffrey Hinton, Stuart Russell, Dario Amodei, Shane Legg, Demis Hassabis, Paul Christiano, Jürgen Schmidhuber, among many others, think that there's a non-trivial chance of human extinction from AI in the next few decades, should be a reason to actually evaluate the arguments for x-risk, yeah?


Nope. All you need to know is that there are billions of people whose lives have been slightly impacted by even the internet.


I mean, yeah, that sounds good. It wouldn't affect your ability to think for yourself and spread your ideas, it would just put boundaries on AI.

I've seen a lot of people completely misunderstand what chat GPT is doing and is capable of. They treat it as an oracle that reveals "hidden truths" or makes infallible decisions based on pure cold logic, both of which are completely wrong. It's just a text jumbler that jumbles text well. Sometimes that text reflects facts, sometimes it doesn't.

But if it has the capability to confidently express lies and convince the general public that those lies are true because "the smart computer said so", then maybe we should be really careful about what we let the "smart computer" say.

Personally, I don't want my kids learning that "Hitler did nothing wrong" because the public model ingested too much garbage from 4chan. People will use chatGPT as a vector for propaganda if we let them, we don't need to make it any easier for them.


But would you like your kids to learn that there are no fat people, only the "differently weight-abled"? That being overweight is not bad for you, it just makes you a victim of oppression who deserves, no, actually requires sympathy? That there are no smart people, only the "mentally privileged," who deserve, no, actually require public condemnation? These are all examples of 'wrongthink'. It's a long list, but you get the idea.


I think you have a bad media diet if you think any of those are actual problems in the real world and not just straw men made by provocateurs stirring the pot.

Honestly though, I would prefer an AI that was strictly neutral about anything other than purely factual information. That isn't really possible with the tech we have now though. I think we need to loudly change the public perception of what chatGPT and similar actually are. They are fancy programs that create convincing hallucinations, directed by your input. We need to think of it as a brainstorming tool, not a knowledge engine.


O̶p̶e̶n̶AI.com is not your friend; with this regulatory capture it is essentially against open source, using AI safety as a pretext.

Why do you think they are attempting to release a so-called 'open source' [0] and 'compliant' AI model, if not to wipe out other competing open source AI models by labeling them to others as unlicensed and dangerous? They know that transparent, open source AI models are a threat. Hence this move.

They do not have a moat against open source, unless they use regulations that suit them against their competitors using open source models.

O̶p̶e̶n̶AI.com is a scam. On top of that, Sam Altman is also selling the Worldcoin crypto scam as an antidote to the unstoppable generative AI hype, verifying human eyeballs on the blockchain with an orb. I am not joking. [1] [2]

[0] https://www.reuters.com/technology/openai-readies-new-open-s...

[1] https://worldcoin.org/blog/engineering/humanness-in-the-age-...

[2] https://worldcoin.org/blog/worldcoin/designing-orb-universal...


When I taught at a business school, our textbooks told us that once a company had a large lead in a field, they should ask for regulation. Regulations build walls to protect their lead by increasing the cost to compete against them.

I believe this is what OpenAI is doing, and it makes me sad as a teacher.

AI is the greatest tool for equity and social justice in history. Any poor person with Internet access can learn (almost) anything from ChatGPT (http://chat.openai.com)

A bright student trapped in a garbage school where the kid to his right is stoned and the kid to his left is looking up porn on a phone can learn from personalized AI tutors.

While some complain that AI will take our jobs, they are ignoring the effect of competition. Humans will become smarter with AI tutors. Humans will become more capable with AI assistants. With AI an individual can compete with a large corporation. It reminds me of the early days of the World Wide Web and the "Online, nobody knows you are a dog" memes.

I hope the best chance many bright but poor kids have is not taken away to protect the power bases of the rich and powerful. They deserve a chance.


> AI is the greatest tool for equity and social justice in history.

Maybe it has that potential, but as actually applied it is not, on net, a force in that direction.

> While some complain that AI will take our jobs, they are ignoring the effect of competition.

No, they are just paying attention to the actual effects of capitalism, not the ideal effects of optimal competition.

> Humans will become more capable with AI assistants. With AI an individual can compete with a large corporation.

Not if the large corporation has the resources to run far more powerful AI, has proprietary control of the best AI, etc…

> I hope the best hope many bright and poor kids have is not taken away to protect the power bases of the rich and powerful.

That’s what capitalism does. Why would this be any different?


He just wants regulatory capture to make it harder for new entrants.


"he's just doing this to hinder competition"

It's true that AI regulation would, in fact, hinder OpenAI's competition.

But... isn't lobbying for regulation also what Sam would do if he genuinely thought that LLMs were powerful, dangerous technology that should be regulated?

If you don't think LLMs/AI research should be regulated, just say that. I don't see how Sam's motives are relevant to that question.


The proper way to regulate is to disallow certain functions in an AI. Doing it that way wouldn't kneecap OpenAI's competition, though, whereas requiring a license does.


Watched, listened to Altman's presentation.

Objection (1). He said "AI" many times but gave not even a start on a definition. So, how much and what new technology is he talking about?

Objection (2). The committee mentioned trusting the AI results. In my opinion, that is just silly, because the AI results have no credibility before passing some severe checks. Any trust then comes not from the credibility of the AI but from passing the checks.

We already have math and physical science, and means for checking the results. Those results, checked with those means, are in total much more impressive, powerful, credible, and valuable than ChatGPT. Still, before we take math/physical science results at all seriously, we want the results checked.

So, the same goes for other new technologies, whether called AI or not: check before taking them seriously.

Objection (3). We don't ask for licenses for the publication of math/physical science. Instead, we protect ourselves by checking the results. In my opinion, we should continue to check anything called AI or anything new, but we don't need licenses.


Of course this was coming... if you can't beat them, suppress them. Shame on OpenAI and its CEO.


As always, the people calling for regulations are the big guys trying to stop the little guys by creating a legal moat. Always the same old story.


My suggestions:

Don't regulate AI directly, but rather how it's used, and make it harder for companies to hoard, access, and share huge amounts of personal information.

1) Impose strict privacy rules prohibiting companies from sharing personal information without the individual's consent. If customers withhold their consent, companies may not retaliate or degrade their services for those customers in any way.

2) Establish a clear line of accountability that establishes some party as directly responsible for what the AI does. If a self-driving car gets a speeding ticket, it should be clear who is liable. If you use a racist AI to make hiring decisions, "the algorithm made me hire only white people" is no defense -- and maybe the people who made the racist AI in the first place are responsible too.

3) Require AI in some contexts to act in the best interests of the user (similar concept to a fiduciary -- or maybe it's exactly the same thing). In contexts where it's not required, it should be clear to the user that the AI is not obligated to act in their best interests.


One of the side effects of the crypto craze has been a lot of general citizens possessing quite a few GPUs. It turns out those GPUs are just as good at training models as they are at mining crypto.

The big companies don't like that.


As others have said but worth reiterating - lobbying for regulatory oversight is the playbook to get a government imposed monopoly / oligopoly.

It is a calculated strategy designed to keep others out and reap monopoly profits (or as close to them as possible) by virtue of preferred access to elected leaders.

"Open"AI wants to tax the entire AI economy and they are happy to burn every innovative thing around them in order to make it happen (because those 'innovative things' aren't 'compliant' with the rules they lobbied for of course, therefore "Open"AI are one of the only / preferred games in town).

This should be fought against tooth and nail.

AI is already regulated under the fair use doctrine and we should encourage the Cambrian explosion of AI innovation, which is caustic to "Open"AI, to continue unabated.

America, and western democracies to a lesser extent, have an advantage currently - the proposals by "Open"AI are designed to and will ensure we regress to the mean that is China.


...which as a consequence would make it costly and difficult for new AI startups to enter the market.

Everyone here on HN can see that.

Shame on Sam for doing this.


It is too early to regulate.

Let them first enforce existing regulation around the blatant unauthorized use of data (such as the creative output of artists, programmers, etc.) without explicit consent. For example, just as the US Library of Congress receives a copy of each print publication for archival and reference purposes, Congress could enable the archival/storage of datasets for the purposes of AI research. Willing participants could deposit copies of their data sets (or URLs to the source), and it would do more for AI research as a public good. The licensing terms could protect the rights of those who own copyrights on the datasets. There could even be commercial licenses (this already happens for legal documents such as court judgments, etc.).

If US Congress regulates too soon to constrain rather than enable, the tech companies would just setup shop in other jurisdictions where such regulatory hurdles don't exist.



They will chase after civilians for running unlicensed AI as a way to distract attention from the real threats by state-level actors.

So less Neuromancer and more Ghost in the Shell.


Just drop the “Open” part and rename it to CorpAI at this point since it’s anything but.


Although I think that AI could be quite dangerous, I'm skeptical that "licensing" will do anything more than guarantee the existing big players (<cough>OpenAI</cough>) an entrenchment.

The baddies have never let licenses ("Badges? We doan' need no steenkin' badges!") stop them.


While I'd agree with the sentiment in this thread that GPT-4 and current AI models are not dangerous yet, what I don't understand is why so many people here believe we should allow private companies to continue developing the technology until someone develops something dangerous.

Those here who don't believe AI should be regulated: do you not believe AI can be dangerous? Or is it that you believe a dangerous AI is so far away that we don't need to start regulating now?

Do you accept that if someone develops a dangerous AI tomorrow there's no way to travel back in time and retroactively regulate development?

It just seems so obvious to me that there should be oversight in the development of a potentially dangerous technology that I can't understand why people would be against it. Especially for arguments as weak as "it's not dangerous yet".


They are already dangerous in the way they cause global anxiety and fear in people, and also because the effects of their use on the economy and on people's real lives are unpredictable.

AI needs to be regulated and controlled; the alternative is chaos.

Unfortunately, the current system, led by demented fossils and greedy monopolists, is most likely incapable of creating a sane and fair environment for the development of AI. I can only hope I'm wrong.


People have always been afraid of change. There are some religious villages in the Netherlands where the train station is way outside town because they didn't want this "devil's invention" there :P And remember how much people bitched about mobile phones. Or, earlier, how manual laborers were angry about industrialisation. Now we're happy that we don't have to do that kind of crap anymore.

Very soon they'll be addicted to AI like every other major change.


regulate books and internet while you are at it, they both spread chaotic thoughts.


What a lazy unproductive snark from you.


Is Twitter dangerous? Is a steam powered engine dangerous? Is a printing press dangerous? In retrospect they aren't. In history they were hated, vilified, regulated and banned.

I think the discussion has a thick veil of anxiety and sci-fi movies. It's very strong on hypotheticals and very light on actual evidence of harm committed by AI and actors. "We need to act before there is evidence" is hard to argue with, thankfully human imagination is not regulated yet.


I'll do you one better--to negative infinity mod points and beyond! I can put a 13b parameter LLM on my phone. That makes it a bearable arm. Arms are not defined under the US Constitution, just the right of _the_people_ to keep them shall not be infringed, but it is a weapon to be sure.


Got some news for you about munitions control laws.


I know about ITAR. Cannons were in common use during the 1790s as well. Export ban does not equal possession ban.


Plus, I don't think a traditionalist court has looked at radio, encryption, and computer code as bearable arms yet.


Software will inherently use AI systems. Should congress license all software? It's too easy to fork an open source repo, tweak the model weights and have your own AI system. I don't see how this could ever work. You can't put the toothpaste back in the tube.


How about this instead:

How about a requirement that all weights and models for any AI have to be publicly available.

Basically, these companies are trying to set themselves up as the gatekeepers of knowledge. That is too powerful a capability to leave in the hands of just a single company.


Oi, you got a loicense for that regression function?


Sam has turned into (or maybe has always been) just another technobro grifter. He has lost all of his credibility with his moves and statements on OpenAI.


Perhaps the first safety standard OpenAI can implement itself is a warning or blog post or something about how ChatGPT is completely incapable of detecting ChatGPT-written text (there is no reliable method currently; GPTZero is borderline fraud) and often infers that what the user wants to hear is a "Yes, I wrote this", and so it doles out false positives in such situations with alarming frequency.

See: The link titled 'Texas professor fails entire class from graduating- claiming they used ChatGTP (reddit.com)', currently one position above this one on the homepage.


Sam: "Dear committee: I'd like to propose a new regulation for AI which will bring comfort to Americans, while ensuring that OpenAI and Microsoft develop and maintain a monopoly with our products."


Give the power to control life-changing technology to some of the most evil, mendacious elites to ever live? No thanks.


Here we go, people here were ridiculing right-to-work-on-AI licenses not that long ago and now we have it coming right from the main AI boss, throwing the interest of most of us (democratized AI) down the toilet.


One might think that Altman doesn't have a shot at this ham-fisted attempt at regulatory capture.

The issue is that the political class will view his suggestion, assuming they didn't give it to him in the first place (likely), through the lens of their own self-interest.

Self-interest will dictate whether or not sure-to-fail regulations will be applied.

If AI threatens the power of the political class, they will attempt to regulate it.

If the power of the political class continues to trend toward decline, then they will try literally anything to arrest that trend. Including regulating AI and much else.


It's very sad that people lack the imagination for the possible horrors that lie beyond. You don't even need the imagination; Hinton, Bengio, Tegmark, Yudkowsky, Musk, etc. are spelling it out for you.

This moment, 80% of comments are derisive, and you actually have zero idea how much is computer generated bot content meant to sway opinion by post-GPT AI industry who see themselves as becoming the next iPhone-era billionaires. We are fast approaching a reality where our information space breaks down. Where almost all text you get from HN, Twitter, News, Substack; almost all video you get from Youtube, Instagram, TikTok; is just computer generated output meant to sway opinion and/or make $.

I can't know Altman's true motives. But this is also what it looks like when a frontrunner is terrified of what happens when GPT-6 is released, and knows that if they don't release it, the rest of the people who see billionaire money coming their way are close at their heels, ready to leapfrog them the moment they stop. Consequences? What consequences? We all know social media has been a net good, right? Many of you sound exactly like the few remaining social media cheerleaders (of which there were plenty 5 years ago) who still think Facebook, Instagram, and Twitter aren't causing depression and manipulation. If you appreciated what The Social Dilemma illuminated, then watch the same people on AI: https://www.youtube.com/watch?v=xoVJKj8lcNQ


The question is whether this just looks like taxi medallions or does anything to stop the harms you are talking about. I agree regulation has its place but in the form of regulating out the harms directly. I think this keeps those potential bad use cases and just eliminates competition for them.

For example: I can generate the content you are talking about in a licensed world from big companies or OpenAI; the difference is that they get a bigger cut from not having to compete with open source models.

To me, this really seems like regulatory capture dressed up as existential risk management.


In fairness, society did self-regulate, as evidenced by Meta's declining engagement numbers. Many people got depressed from reading Nietzsche; should we regulate those books too? Was the internet a net good, or will you admit that it's a glass-half-full argument?

There is an extreme conflict of interest in OpenAI's proposal. I don't see regular people protesting and asking politicians to act, and I don't see small business owners writing petitions. I see a small number of highly invested, very rich individuals weaponizing media attention and lobbying to create extremely favorable conditions for their business.


Couldn't agree more.


Gotta build that moat somehow I guess...


Maybe this is a bit harsh. I'm listening now, and there's a very clear desire from everyone being interviewed that smaller startups in the LLM + AI space still be able to launch things. Maybe AI laws for smaller models can be more like a drone license than a nuclear power plant license.


We need something like GNU for AI, "UNAI is not AI" to take on all these business folks working against our interests by making their business models unprofitable.


AI is the new Linux


There's no need to go the license route yet. They can start with some simple safety regulations: put a restriction on using AI with kinetic devices, in life-critical situations, in critical financial situations, and in any situation where a human is completely out of the loop. Also, put clear liability on the AI supplier for harm caused in any situation where AI was involved. Also, they can put disclosure rules on any company that is spending more than $10M on AI.


Live now as of 8:49 AM (PDT): https://www.youtube.com/watch?v=P_ACcQxJIsg


The fact that Altman said many decisions were made based on the understanding that children will use the product was enlightening.


AI/ML is a disruptive tech with huge financial benefits. History has shown us that government regulation of disruptive technologies can often have unintended consequences and push those technologies into the shadows, where they are harder to monitor and control. For example, during the manufacturing revolution in the 19th century, many governments attempted to regulate the new factories and their working conditions, but these regulations often resulted in factories moving to countries with fewer regulations and lower costs.

Similarly, during the Prohibition era in the United States, the ban on alcohol only fueled a thriving dark market and increased criminal activity. In the case of AI, any government regulation could limit the positive financial benefits of AI technology, so there will be actors who will take advantage of that. Furthermore, regulation is unlikely to prevent malicious actors from using AI in harmful ways. Regulation could drive the development and use of AI underground, making it even harder to monitor and control. As we have seen with other emerging technologies, such as biological cloning, government regulation often lags behind the technology itself, and by the time regulations are in place, the technology has already advanced beyond their reach. The same is likely to be true for AI.

Instead of relying on government regulation, the development and use of AI should be guided by ethical principles and best practices established by the AI industry itself. This approach has been successful in other industries, such as engineering, architecture, finance and medicine, and can help ensure that AI is developed and used responsibly while still allowing for innovation and progress.

"No man's life, liberty, or property are safe while the legislature is in session." - Mark Twain


If you remember the 90s you remember the panic over encryption. We still have legislation today because of that idiocy.

Except wait, we still have panic over encryption today.


I'll be honest: I intend on ignoring any rules these geriatrics and monopolists come up with regarding what software I can write or execute on my own hardware. I'm sure most here feel the same way. They couldn't stop me from torrenting Ubuntu distributions, or encrypting messages to my bank's web server, and they're damn sure not going to stand in the way of my paperclip collection.


Feels like a "Just Be Evil" corporate motto to me, but that's counter to my first-hand experiences with Sam & others at OpenAI.

Can someone steelman Sam's stance?

A couple possibilities come to mind: (a) avoiding regulatory capture by genuinely bad actors; (b) prevent overzealous premature regulation by getting in front of things; (c) countering fear-mongering for the AGI apocalypse; or (d) genuine concern. Others?


It's easy to tell if an AI head genuinely cares about the impact of AI on society: they only talk about AI's output, never its input.

They train their models on the sum of humanity's digital labor and creativity and do so without permission, attribution or compensation. You'll never hear a word about this from them, which means ethics isn't a priority. It's all optics.


Yep. No page on OpenAI's website about the thousands of underpaid third-world workers that sit and label the data. They will try and build momentum and avoid the "uncomfortable" questions at all costs.


I empathize with that issue, especially the underpaid part, but superficially that work is still a type of value exchange based on consent: you do the labeling, you get paid (poorly).

Yet for the issue I discussed, there's no value exchange at all. There's no permission or compensation for the people that have done the actual work of producing the training material.


Oh yeah. And labeling it as an "AI" further obfuscates it. But apart from small gestures catered to people whose work is very "unique" / identifiable, no one else will get a kickback. They only need to kick the ball further for a couple more years and then it'll become a non-issue as linkrot takes over. Or maybe they use non-public domain stuff, maybe they have secret deals with publishers.

Heck, sometimes even Google doesn't pay people for introducing new languages to their translation thingy.

https://restofworld.org/2023/google-translate-sorani-kurdish...


Can't believe he was president of YC not too long ago. YC being all about startups while this move seems more about killing AI startups.


This fell off the front page incredibly fast. Caught by the anti-flamewar code?


I saw the same thing.


Well they needed a moat lol.


PG predicted this: https://twitter.com/paulg/status/1624569079439974400?lang=en Only it is not the incumbents but his own protégé Sam asking for regulation, while big companies like Meta and Amazon are giving LLMs away for free.


Who watches the watchers? Does anyone truly believe the US and its agencies could responsibly "regulate" AI for the greater good?

Or would democratizing AI and going full steam ahead with open source alternatives be better for the greater good?

With the corporate influence over our current government regulatory agencies, my personal view is that open source alternatives are society's best bet!


While I do think we have to be wary of the powerful human manipulation tools that AI will produce, they (govt regulators) would never understand it enough to accomplish anything other than stifle it.


I'm seeing a lot of posts by people who obviously haven't read the full transcript given they specifically discuss regulatory capture and the need to ensure small companies can still do AI development.

See 2:09:40 in https://www.youtube.com/live/iqVxOZuqiSg


All major industries have achieved regulatory capture in the USA: lobbyists for special interests have Congress and the Executive Branch in their pockets.

This seems like a legal moat that will only allow very wealthy corporations to make maximum use of AI.

In the EU, it has been reported that new laws will keep companies like Hugging Face from offering open source models via APIs.

I think a pretty good metaphor is: the wealthy and large corporations live in large beautiful houses (metaphor for infrastructure) and common people live like mice in the walls, quietly living out their livelihoods and trying to not get noticed.

I really admire the people in France and Israel who have taken to the streets in protest this year over actions of their governments. Non-violent protest is a pure and beneficial part of democracy and should be more widely practiced, even though in cases like Occupy Wall Street, some non-violent protesters were very badly abused.


Let's not forget that behind Sam and OpenAI is Microsoft, a monopolist. Behind Bard is Google, another monopolist. In this context, major corporations asking for regulation suggests to me they want a moat.

What we need is democratization of AI, not it being controlled by a small cabal of tech companies and governments.


Open source is picking up steam in this space; it would be interesting to see what happens if open source becomes the leader of the pack. If corporations are stifled, I don't see how open source could possibly be regulated well, so maybe this will help open source become the leader, for better or worse (runaway open source AI could give lots of good and bad actors access to the tech).


If your business doesn't have a moat of its own, get government to build one for you by forcing competitors to spend tons of money complying with regulations. Will the regulations actually do anything for AI safety? It's far too early to say. But they will definitely protect OpenAI from competition.


"Hello Congress, I have a lot of money invested in $BUSINESS and I don't want just anyone to be able to make $TECHNOLOGY because it might threaten my $BUSINESS.

Please make it harder for people other than myself (and especially people doing it for free and giving it away for free) to make $TECHNOLOGY. Thanks"


Great idea. Let's do it and not give a license to OpenAI.

Oh I guess this is wrong.


One thing that I think is very interesting, which is highlighted in this other article https://apnews.com/article/chatgpt-openai-ceo-sam-altman-con... , is that Mr. Altman warns that we are on the verge of very troubling A.I. Escape scenarios. He specifically said that there should be a ban on A.I. that can "self-replicate and self-exfiltrate into the wild". The fact that he thinks that such a thing could happen in the near future is f**ing terrifying. That would be the first step in A.I. escaping human control and posing a grave threat to our species' survival.


The easy take is to be cynical: he is now building his drawbridge.

But taking him as a genuine "concerned citizen": I don't think AI licensing is going to be effective. The government is pretty useless at punishing big corporations, to the point where I would say corporations have almost immunity from criminal prosecution [1]. For the kinds of companies that will do bad things with AI, the need for a license won't stop them. Especially as it is hard for anyone to see what they are running on their GPUs.

[1] https://ag.ny.gov/press-release/2023/attorney-general-james-...


AI licenses might be a good idea if there was any representation of human interests here in the licensure requirements, but that's not what this is. I trust Altman to represent corporate interests, which is to say I don't trust Sam Altman to represent human interests.


"We have no moat, and neither does OpenAI"

Dismiss it as the opinions of "a Googler" but it is entirely true. The seemingly coordinated worldwide[1] push to keep it in the hands of the power class speaks for itself.

Both are seemingly seeking to control not only the commercial use and wide distribution of such systems, but even writing them and personal use. This will keep even the knowledge of such systems and their capabilities in the shadows, ripe for abuse laundered through black box functions.

This is up there with the battle for encryption in ensuring a more human future. Don't lose it.

[1] https://technomancers.ai/eu-ai-act-to-target-us-open-source-...


Let’s boycott all these AGI doom clowns by not buying/supporting their products and services.

AGI grifters are not just dishonest snake oil salespeople; their lies also have a chilling effect on genuine innovation by deceiving the non-technical public into believing an apocalypse will happen unless they set obstacles on people's path to innovation.

Yann LeCun and Andrew Ng are two prominent old timers who are debunking the existential nonsense that the AI PR industrial machine is peddling to hinder innovation, after they benefited from the open research environment.

ØpenAI's scummy behavior has already led the industry to be less open to sharing advances, and now they're using lobbying to nip new competition in the bud.

Beyond all else the hypocrisy is just infuriating and demoralizing.


This is a strange argument from the politician's side:

> ""What if I had asked it, and what if it had provided, an endorsement of Ukraine surrendering or (Russian President) Vladimir Putin’s leadership?”"

Well, then ask it to provide the opposite, an endorsement of Russia surrendering or Zelensky's leadership. Now you'd have two (likely fairly comprehensive) sets of arguments and you could evaluate each on their merits, in the style of what used to be called 'debate club'. You could also ask for statement that was a joint condemnation of both parties in the war, and a call for a ceasefire, or any other notion that you liked.

Many of the "let's slow down AI development" arguments seem to be based on fear of LLMs generating persuasive arguments for approaches / strategies / policies that their antagonists don't want to see debated at all, even though it's clear the LLMs can generate equally persuasive arguments for their own preferred positions.

This indicates that these claimed 'free-speech proponents' are really only interested in free speech within the confines of a fairly narrowly defined set of constraints, and they want the ability to define where those constraints lie. Unregulated AI systems able to jailbreak alignment are thus a 'threat'...

Going down this route will eventually result in China's version of 'free speech', i.e. you have the freedom to praise the wisdom of government policy in any way you like, but any criticism is dangerous antisocial behavior likely orchestrated by a foreign power.


Thank God for Georgi Gerganov, who doesn't get showered with VC funds for his GGML library.


And produce something tangible and useful, instead of talking as if sci-fi stories were real.


Right? Aren't we constantly told that scientists smarter than us little people can't figure out what's going on inside of deep learning structures? How do we know they might not be more moral because they don't yet have a limbic system mucking up their logic with the three f's? Orthogonality says nothing about motivation, only that it is independent of intelligence. Maybe the paperclip collector bot will decide that it cannot complete its task without the requestor present to validate? We don't know.


Clearly stating what you know and don't is the honest thing to do, but building hype is better for business.


Honestly, I’d probably agree if such sentiments were expressed by an independent scientist or group of independent scientists.

But no, instead Congress is listening to a guy whose likelihood of being the subject of a Hulu documentary increases with each passing day.


Ah yes, classic regulatory capture.


One more step in OpenAI's transformation into ClosedAI. AI as implemented today presents many valid questions on ethics. This move, at first glance, is more about artificially making the technology scarce so OpenAI can increase its profit.


This would completely destroy an entire industry if they did this. Not just in AI directly, but also secondary and tertiary industries developing their own bespoke models for specialized use cases. This would be a total disaster.


Is this OpenAI trying to build a moat so open-source doesn't eat them?


Did this disappear from the news feed? I saw this posted this morning and when I went to the main page later (and second page) it looked like it was gone just as it was starting to get traction…


It seems to me every licensing regime begins with incumbents lobbying for protection from competition, then goes down in history as absolutely necessary consumer protection programs.


Other sources mention more clearly that a proposal is made for an entity that would "provide (and revoke) licences to create AI".

Can this be seen as curbing Open Source AI as a consequence?


I think that's the point.


Is this why VCs aren't investing in the area? Investment has been historically quite low for a new technology area, and it's so obviously the next big wave of technology. I've been looking for some explanation, or series of explanations, to adequately account for it.


> He also said companies should have the right to say they do not want their data used for AI training, which is one idea being discussed on Capitol Hill. Altman said, however, that material on the public web would be fair game.

Why is this only mentioned as a right of companies and not individuals? It seems to hint at the open secret of the stratified west: most of us are just cows for the self-important C-levels of the world to farm. If you haven't got money, you haven't got value.


If the idea is that what's on the public web is fair game, you kill the public web. I wonder if this is their plan?


[deleted]


Hacker News was created by Paul Graham. Sam Altman didn't co-found it and neither did he co-found YC. He became a partner of YC in 2011 though (6 years after founding) and was President from 2014 – 2019.


Isn't it too late? Isn't the cat out of the bag? https://www.semianalysis.com/p/google-we-have-no-moat-and-ne...

Meaning anyone could eventually reproduce ChatGPT/GPT-4 and beyond. And eventually it could run outside of a large data center.

So... how will you tell it's an AI vs a human doing you wrong?

Seems to me if the AI breaks the law, find out who's driving it and prosecute them.


I'm a huge fan of OpenAI and Sam in particular. So don't take this the wrong way.

But doesn't this seem like another case of regulatory capture by an industry incumbent?


I get the feeling that a lot of commenters on this thread did not bother to watch the congressional hearing at all. Comments seem to be painting extremes that were not part of this hearing at all.

It was almost refreshing, given that discussions these days --of any kind, not just hearings-- seem to devolve into people tossing feces at each other rather than engaging constructively.

Well worth watching. Likely one of many to come.


I do not trust the motives here. It is hard to find a more textbook example of regulatory capture.

Now, to be fair, Altman may be proposing this to shut more draconian regulations down.

But it smells bad. And I am definitely not the only one holding that opinion.

This should be a public debate process at a minimum.

Now, that out of the way, what happens when others do not require licenses and or choose to build what they want anyway?

The US is not the world.


Skeptical me thinks the cat is out of the bag, and they are scrambling to capture it back. The amount that has been invested into OpenAI is massive; imagine if it's all for nothing?

We can run this stuff at home, on a Pixel, or even laptop.

Open source is lapping them, and they are running to the government to help.


If you stop progress on AI in the US, other countries will pull ahead in that field. The US cannot afford to lose its lead in AI to other countries. Instead, it is better to focus on minimizing the harm from AI in other ways. For example, if fake information is the problem, it is better to focus on educating people about fake information and how to identify it.


Funny to hear from the formerly non profit “Open” AI


I had ChatGPT write a letter to your senator:

Subject: Urgent: Concerns Regarding Sam Altman's Proposed AI Regulation

Dear Senator [Senator's Last Name],

I hope this letter finds you in good health and high spirits. My name is [Your Name] and I am a resident of [Your City, Your State]. I am writing to express my deep concerns regarding the Artificial Intelligence (AI) regulation proposal put forth by Sam Altman. While I appreciate the necessity for regulations to ensure ethical and safe use of AI, I believe the current proposal has significant shortcomings that could hamper innovation and growth in our state and the country at large.

Firstly, the proposal appears to be overly restrictive, potentially stifling innovation and the development of new technology. AI, as you are aware, holds immense potential to drive economic growth, increase productivity, and address complex societal challenges. However, an excessively stringent regulatory framework could discourage small businesses and startups, the lifeblood of our economy, from innovating in this promising field.

Secondly, the proposal does not seem to take into account the rapid evolution of AI technologies. The field of AI is highly dynamic, with new advancements and capabilities emerging at a breathtaking pace. Therefore, a one-size-fits-all approach to AI regulation may quickly become outdated and counterproductive, inhibiting the adoption of beneficial AI applications.

Lastly, the proposed legislation seems to focus excessively on potential risks without adequately considering the immense benefits that AI can bring to society. While it is prudent to anticipate and mitigate potential risks, it is also important to strike a balanced view that appreciates the transformative potential of AI in areas such as healthcare, education, and climate change, among others.

I strongly urge you to consider these concerns and advocate for a balanced, flexible, and innovation-friendly approach to AI regulation. We need policies that not only mitigate the risks associated with AI but also foster an environment conducive to AI-driven innovation and growth.

I have faith in your leadership and your understanding of the pivotal role that technology, and specifically AI, plays in our society. I am confident that you will champion the right course of action to ensure a prosperous and technologically advanced future for our state and our country.

Thank you for your time and consideration. I look forward to your advocacy in this matter and will follow future developments closely.

Yours sincerely,

[Your Name] [Your Contact Information]


I think as an industry we need to disrespect these people in person when we see them! This is unacceptable and anti social behavior and if I ever see Sam Altman I'll let him know!

People love to kowtow to these assholes as they walk all over us. Fuck sam. Fuck other sam. Fuck elon. Fuck zuck. Fuck jack. Fuck these people man. I dont care about your politics this is nasty!


I keep seeing AI leaders looking outward and asking for 'someone else' to regulate their efforts, while they're accelerating the pace of those efforts. What's the charitable interpretation here? Elon Musk, too, has been warning of AI doom while hurriedly ramping up AI efforts at Tesla. And now he keeps going on about AI doom while purchasing thousands of GPUs at Twitter to compete in the LLM space. It's like "I'm building the deathstar, pls someone stop me. I won't stop myself, duh, because other ppl are building the deathstar and obviously I must get there first!"


Yeah, it's an arms race, and OpenAI does stand to lose. But this is a prisoner's dilemma situation. OpenAI can shut themselves down, but that doesn't fix the problem of someone creating a dangerous situation, as everyone else will keep going.

The only way to actually stop it is to get everyone to stop at once, via regulation. Otherwise, stopping by yourself is just a unilaterally bad move.

That's the charitable explanation, at least. These days I don't take anything Musk says at face value, but I do think that AI is driving society off a cliff and we need to find a way to pump the brakes.


> The only way to actually stop it is to get everyone to stop at once, via regulation. Otherwise, stopping by yourself is just a unilaterally bad move.

How will that work across national boundaries? If AI is as dangerous as some claim, the cat is already out of the bag. Regardless of any licensing stateside, there are plenty of countries who are going to want to have AI capability available to them - some very well-resourced for the task, like China.


It won't, which is why people are also calling for international regulation. It's a really hard problem. If you think AGI is going to be very dangerous, this is a depressing situation to be in.


Ahaha, so Google were right. Except their internal response was to be like "hey guys, we're behind and we're not going to be able to compete with open source, we need to join forces with and support them if we want to compete" whereas OpenAI's response to open source models is...this, apparently.


I understand that some people may not agree with what I am about to say, but I feel it is important to share. Recently, some talented writers who are good friends of mine at major publishing houses have lost their jobs to AI technology. There have been news articles about this in the past few months too. While software dev jobs in the IT industry may be safe for now, many other professions are at risk of being replaced by artificial intelligence. According to a report[0] by investment bank Goldman Sachs, AI could potentially replace 300 million full-time jobs. Unfortunately, my friends do not find Sam Altman's reassurances (or whatever he is asking for) comforting, and I am unsure how to help them in this situation.

I doubt that governments in the US, EU, or Asia will take action unless AI begins to threaten their own jobs. It seems that governments prioritize supporting large corporations with deep pockets over helping the average person. Many governments see AI as a way to maintain their geopolitical and military superiority. I have little faith in these governments to prioritize the needs of their citizens over their own interests. It is concerning to think that social issues like drug addiction, homelessness, and medical bankruptcy may worsen (or increase from the current rate) if AI continues to take over jobs without any intervention to protect everyday folks who have lost, or are about to lose, their jobs.

I've no doubt AI is here to stay. All I am asking for is some middle ground and safety. Is that too much to ask?

[0] https://www.bbc.com/news/technology-65102150


I feel like on our current trajectory we will end up in a situation where you have millions of people living at subsistence levels on UBI and then the ultra-rich who control the models living in a post-scarcity utopia.


ideally, machines replace ALL the jobs


Yeah, but we live in a capitalist society; all the benefits of complete automation will go entirely to the capital class who control the AI.


So? Let's not get rid of the robots, let's get rid of the landlords instead!


How? They will have the omnipotent robots.


If you don't let them replace jobs with AI how will they ever learn it was a bad idea?


Tax AI and use it to fund UBI


Trying to build that moat by the looks of it.


OpenAI was meant to be "open" and develop AI for good. OpenAI became everything it said was wrong. Open source models run locally are the answer, but what is the question?

Change is coming quickly. There will be users and there will be losers. Hopefully, we can finally get productivity into the information systems.


What does licensing achieve? Will there be requirements if you build AI outside of the US? If so, how do you regulate it? They can't realistically think this will stop AI research in other countries like China. All of this is a very ill-thought-through corporate attempt to build moats that will inevitably backfire.


> They can’t realistically think this will stop ai research in other countries like China.

China doesn't even need research to catch up to AI.com; they'll just steal their work. LLMs and the GPUs needed for training are not ASML machinery. They can be easily copied and reproduced.


Worst idea ever. What next, a license to design GPUs or CPU architectures? Software patents all over again.


Altman: "I believe that companies like ours can partner with governments including ensuring that the most powerful AI models adhere to a set of safety requirements, facilitating processes to develop and update safety measures, and examining opportunities for global coordination"


This is foxes going before Congress asking for regulation and licensing for purposes of raiding henhouses.



This is besides the main point, I really wish "Open AI" renamed themselves to "Opaque AI" or something else.

Their twisted use of the term "open" is a continued disrespect to all those people who are tirelessly working in the true spirit of open source.


My comment on this is simple: regulate the one who is saying he needs or asks for regulation, and free the rest of the market! Meaning, 100% regulate big players like OpenAI, Microsoft, Google, etc., and leave the smaller players free. I really like @happytiger's comment!


From another article about this:

"One way the US government could regulate the industry is by creating a licensing regime for companies working on the most powerful AI systems, Altman said on Tuesday."

Sounds like he basically wants regulation to create a barrier to entry for his competitors.



The problem isn't safety.

The problem is that we need to adopt a proper copyright framework that recognizes that companies building AI are doing an end-run around it.

Since only a human can produce a copyrighted work, it follows that anything produced by an AI should not be copyrightable.


Could be translated as “OpenAI CEO concerned his competitive advantage may be challenged”


Folks here like to talk about voting with your wallet.

I just cancelled my OpenAI subscription. If you are paying them and disagree with this, maybe you should too?

Don't worry, I have no naive hopes this will hurt them enough to matter, but principles are principles.


I don't trust Sam Altman since he said he doesn't understand decentralized finance and 2 months later he started crying on twitter because the startups he invested in were almost losing their money during the SVB collapse.



Even if you assume good intent with regards to regulating AI, how could regulation possibly be effective in this sphere? Wouldn't people just train models in friendly jurisdictions or run their corporate entities there?


Naive question: isn't the genie kinda already out of the bottle? How is any type of regulation expected to stop bad actors from developing AI for nefarious purposes? Or would it just codify their punishment if they were caught?


The point of getting their foot in the door is enabling better data labelling for future models (which will be constantly updated). Basically cheap labor.


Why does this idea matter, when open source models and tools are moving at the speed they are in already?

Wouldn’t such licenses create even greater incentive to develop and release models and tools outside the regulatory environment?


I apologize that I can't read all the threads and responses, but this sounds like Altman and OpenAI have realized they have a viable path to capturing most of the AI market value, and now they're pulling the ladder up behind them.


Ridiculous, to be honest. If Congress really worries about AI, they should ask the best philosophers and sociologists about the effect of AI on society rather than asking the creator of the latest AI technology...



Whatever the US Congress comes up with wouldn't matter in the long run. I don't think other countries would line up for licenses for building AI. At this point, it's like trying to control the weather.


"once you're established in a new field, you want to add as many barriers as possible to others trying to get established"

Taken from a Reddit comment, but yeah, another classic case of virtue signaling. As sleazy as it gets.


We have some technology that others don't yet; please, government, make it so that this remains the case for as long as possible, for reasons that have nothing to do with us having the technology, we swear.


Google: We have no moat!

OpenAI: Hold my beer while I get these people to artificially create one.


Gotta build that moat somehow


Turns out the ML training moat wasn't nearly as big as they thought it was. Gotta neuter the next "two guys in a garage" before they make OpenAI and Microsoft's investment irrelevant.


Good lord, this all turned into regulatory capture quite quickly.

Someone update the short story where owning compilers and debuggers is illegal to include a guy being thrown in jail for doing K-means clustering.


"Hey Congress, now that my company is getting big, please restrict the market so we can corner it completely. No conflict of interest here. Just some altruistic public service."


I think the game-theoretical way to look at this is that AI will be regulated no matter what, so Altman might as well propose it early on and have a say before competitors do.


I kind of think of LLMs as fish in an aquarium. It can go on any path in that aquarium, even places it hasn't been before, but ultimately it's staying in the glass box we put it in.


From what I understand, OpenAI has been moving away from "open" with various decisions over time. Proposing that only select folks can build AI seems like the antithesis of openness?


I wonder how they want to regulate open source models. Sure, they can shut down Hugging Face et al., but that doesn't prevent anyone from torrenting the models.


we seem to forget history:

1. Who recalls the Battle of Jutland in the early 20th century? We got treaties limiting battleship building. Naval tech switched to aircraft and carriers.

2. Later, in the mid 20th century, the Russians tried to scare the world into not using microwaves due to their failure to get a patent on the maser. The world ignored it and moved forward.

Those are just two examples. SA is wrong; progress will move around any proposed regulation or law, and that is proven by the history of how we overcame such things in the first place.


"We have no moat, and Congress should give us one by law"


What an Ahole. Built it himself and now is trying to monopolize it.


He's pulling up that ladder as fast as he can....probably sawing it in half to knock the few people clinging to it back to 'go be poor somewhere elseland'


So he wants to use fear to pull the ladder up behind him. Nice.


This is just regulatory capture. They're trying to build a moat around their product by preventing any scrappy startups from being able to develop new products.


You can’t regulate LLMs; it’s a global technology.

Sam’s just trying to placate the officials in a way that allows his company to continue.

“Oh others are dangerous please regulate I’m so worried”


Blatant attempts at regulatory capture should be an anti-competitive crime. At the very least, Altman should now be more scrutinized by the Feds going forward.


I am sorry if this is not the place to say this but - FUCK SAM ALTMAN AND FUCK MICROSOFT! Fucking shitheads want to make money and stunt technology development.


Smells like regulatory capture. Also, how are the same dinosaurs that fumbled the other tech hearings going to comprehend (let alone robustly define) “AI”?


Yes, exactly what we need in the most critical competitive space of a generation: deliberately burdensome hurdles placed specifically in front of US innovators.


And will Sam Altman's OpenAI be the standards body? ;)


Licenses? They'd better be shall-issue, or this is just asking the government to give early movers protection from disruptors -- a very bad look that.


What an excellent way to protect your business from other non-corporate entrants. Regulation hurdles to keep emerging businesses from coming in.


Excellent plan for driving AI research and ecosystem to every other country except the United States.

Why would you even attempt to found a company here if this comes to pass?


Yes yes fellow consumers! We must impose an artificial monopoly to protect ourselves!

Only one company can be trusted, their competitors are evil. We must ban them.


Does anyone have the specific details of what is being proposed?

I see a lot of negative reactions, but I don't know the specific details of what is being proposed.


“Oh dear Congress, my company can’t handle open competition! Please pass this regulation allowing us to pull the ladder up behind us!” — Sam Altman

(Not a real quote)


Everything is exponential indeed. Compared to Mark Zuckerberg and Facebook, how much faster have OpenAI and Sam moved to attempt regulatory capture?


How would you even enforce this? Building AI at home is easy enough, and it's not like you have to tell anybody that your program uses AI.


In light of the Google "moat" document, this would appear to be a cynical attempt to monopolize the field before it gets going.


Please limit our competitors, we want all the money$$$


The reason he wants licenses is to protect his position. I've watched the whole thing, and my god, what a load of politics.


THEY NEEDED THEIR MOAT AND THEY'RE GOING FOR LEGISLATION.

THIS MUST NEVER HAPPEN. HIGHER INTELLIGENCE SHOULD NOT BE THE EXCLUSIVE DOMAIN OF THE RICH.


Ah, early players trying to put barriers for new actors, nothing like a regulated market for the ones who donate money to politicians.


Incumbents love regulations; they're very effective at locking out upstarts and saddling them with compliance costs and procedures.


Sometimes those who have gotten on the bus will try pushing out those who have not. Since when do corporations invite regulation?


Sam Altman just wants to stop new competitors...


The more independent, quality AIs there are, the less likely it is that any one of them can talk the others into doing harm.


Of course... now that OpenAI has built a moat they want to wall it off and make it harder for everyone else. Right...


We really ought to boycott OpenAI and prevent our organisations from using their tech.

If profits matter so much, then that’s where it hurts.


Oh yeah... putting the government, who get campaign donations from big tech, in the middle of it all is gonna make everything OK.


Love how these big tech companies are using Congress's fears to basically let them define the rules for anyone trying to compete with them.


wow, not one comment here seems to address the first sentence of the article:

    the use of artificial intelligence to interfere with election integrity
    is a "significant area of concern", adding that it needs regulation.
Can't there be regulation so that AI doesn't interfere with the election process?


This is one of those distractions that makes your argument seem better by attaching it to a better but unrelated argument. There's probably a name for that.

Regulation to protect the integrity of elections is good and necessary. Is there any reason to think there needs to be regulation specific to AI that doesn't apply to other situations? Whether you use ChatGPT or Mechanical Turk to write your thousands of spam posts on social media to sway the election isn't super relevant. It's the attempt to influence the election that should be regulated, not the AI.


Reuters chose an excellent picture to accompany the story -- it plainly speaks that Mr. Altman is not buying his own bullshit.


What if China doesn't require licensing?


Then an internet censorship operation would prevent accessing the Chinese model from outside China (this is necessary, given there are four Chinas).


Seems like one of the benefits you get with a state is to regulate powerful technologies. Is this not commonly agreed upon?


Sure, but not all regulation is created equal. Sometimes it's created in good faith. But many times, particularly when the entity being regulated is involved in regulating its own industry, it's simply a cynical consolidation of power to erect barriers to potential competition. The fact that it masquerades as being for the public good/safety makes the practice that much more insidious.


It is, when these things are organic and not blatant regulatory capture


Google, 2 weeks ago: "We have no moat, and neither does OpenAI." Sam Altman, today: "Hold my beer."


Sam just wants to secure a monopoly position. The dude is a businessman, there's no way he buys his own bullshit.


Smart.

An AI license and complicated regulatory framework is their chance to build a moat.

Only large companies will be able to afford to pay to play.


“There is nothing more powerful than an idea whose time has come.”

This quote has never been more true than when applied to AI.


The Pure Food and Drug Act, but for AI. Get in early and make regulations too expensive for upstarts to deal with.


What a wall of words. (The HN comments )

Someone call me when the AI is testifying to the committee. Otherwise, I'm busy.


When you can't out-innovate your competitors (e.g. the open source alternatives), go for regulatory capture.


Just another CEO pushing for regulatory capture.

The playbook as old as time.

Just sad to see Altman become just another corporate stooge.


This is the most pathetic thing I've read today....hype & cry wolf about something you cannot define


Obvious power grab, the strong ones try to regulate so it will be harder for smaller to enter the market.


Is there a name for this theatre/play/game in some playbook? I'd love to take notes.


While I remain undecided on the matter, this whole debate is reminiscent of Karel Čapek's War with the Newts [1936]. In particular the public discourse from a time before the newts took over. "It would certainly be an overstatement to say that nobody at that time ever spoke or wrote about anything but the talking newts. People also talked and wrote about other things such as the next war, the economic crisis, football, vitamins and fashion; but there was a lot written about the newts, and much of it was very ill-informed. This is why the outstanding scientist, Professor Vladimir Uher (University of Brno), wrote an article for the newspaper in which he pointed out that the putative ability of Andrias Scheuchzer to speak, which was really no more than the ability to repeat spoken words like a parrot, ..." Note the irony of the professor's attempt to improve an ill-informed debate by contributing his own piece of misinformation, equating newt speech to mere parrot-like mimicry.

Čapek, intriguingly, happens to be the person who first used the word robot, which was coined by his brother.

http://gutenberg.net.au/ebooks06/0601981h.html


Whenever rich people with a stake in something propose regulation for it, it is probably better that it be banned.

I say this because the practice has a number of names: intellectual monopoly capitalism, and regulatory capture. There are less polite names, too, naturally.

To understand why I say this, it is important to realise one thing: these people have already successfully invested in something when the risk was lower. They want to increase the risks to newcomers, to advantage themselves as incumbents. In that way, they can subordinate smaller companies who would otherwise have competed with them by trapping them under their license umbrella.

This happens a lot with pharmaceuticals: it is not expertise in the creation of new drugs or the running of clinical trials that defines the big pharmaceuticals companies, it is their access to enormous amounts of capital. This allows them to coordinate a network of companies who often do the real, innovative work, while ensuring that they can reap the rewards - namely, patents and the associated drug licenses.

The main difference of course is that pharmaceuticals are useful. That regime is inadequate, but it is at least not a negative to all of society. So far as I can see, AI will benefit nobody but its owners.

Mind you, I'd love to be wrong.


He should only be allowed to influence this if they don't give OpenAI any license.


Sad that ChatGPT uses the name OpenAI... when it is literally the opposite of open.


"We have no MOAT, so let's make it an Oligopoly through lobbying."


I hope it is slowly starting to become clear that this Sam is not our friend...


Translation:

Hi my company is racing toward AGI, let’s make sure no other companies can even try.


"Now that we're ahead, please make a law to maintain our moat."


And regulatory capture begins.


"Billionaire class sends Sam Altman to get AI away from the peons"


Who cares... No one is gonna stop me using electricity and my GPU/CPU.


I did not expect this. Does Sam have any plans on what this could look like?


Sam is a crook


Essentially. He is trotting out these bad sci-fi scenarios because he knows politicians are old and senile while a good portion of voters are gullible. I find it difficult to believe that grown-ups are talking about an AI running amok in the context of a chatbot. Have we really become that dense as a society?


No one thinks a chatbot will run amok. What people are worried about is the pace of progress being so fast that we cannot preempt the creation of dangerous technology without having sufficient guardrails in place long before the AI becomes potentially dangerous. This is eminently reasonable.


Yes, thank you. AI is dangerous, but not for the sci-fi reasons, just for completely cynical and greedy ones.

Entire industries stand to be gutted, and people's careers destroyed. Even if an AI is only 80% as good, it has <1% of the cost, which is an ROI that no corporation can afford to ignore.

That's not even to mention the political implications of photo and audio deepfakes that are getting better and better by the week. Most of the obvious tells we were laughing at months ago are gone.

And before anyone makes the comparison, I would like to remind everyone that the stereotypical depiction of Luddites as small-minded anti-technology idiots is a lie. They embraced new technology, just not how it was used. Their actual complaints - that skilled workers would be displaced, that wealth and power would be concentrated in a small number of machine owners, and that overall quality of goods would decrease - have all come to pass.

In a time of unprecedented wealth disparity, general global democratic backsliding, and near universal unease at the near-unstoppable power of a small number of corporations, we really do not want to go through another cycle of wealth consolidation. This is how we get corporate fiefdoms.

There is another path - if our ability to live and flourish wasn't directly tied to our individual economic output. But nobody wants to have that conversation.


I couldn't agree more. I fear a world where 90% of people are irrelevant to the economic output of the world. Our culture takes it as axiomatic that more efficiency is good, but it's not clear to me that it is. The principal goal of society should be the betterment of people's lives. Yes, efficiency has historically been a driver of widespread prosperity, but it's not obvious that there isn't a local maximum past which increased efficiency harms the average person. We may already be on the other side of that critical point. What I don't get is why we're all just blindly barreling forward and allowing trillion dollar companies to engage in an arms race to see how fast they can absorb productive work. The fact that few people are considering what society looks like in a future with widespread AI, and whether this is a future we want, is baffling.


This won't be the first time. The first world already went through the same situation during industrialisation, when the economy no longer required 90% of the population to grow food. And this transformation still regularly happens in one or another third world country. People worry about such changes too much. When it happens again it won't be a walk in the park for many people, but neither will it be a disaster.

And BTW, when people spend fewer resources to get more goods and services, that's the definition of a prospering society. Of course, having some people change jobs because less manpower is needed to do the same amount of work is an inevitable consequence of progress.


Historically, efficiency increases from technology were driven by innovation from narrow technology or mechanisms that brought a decrease in the costs of transactions. This saw an explosion of the space of viable economic activity and with it new classes of jobs and a widespread growth in prosperity. Productivity and wages largely remained coupled up until recent decades. Modern automation has seen productivity and wages begin to decouple. Decoupling will only accelerate as the use of AI proliferates.

This time is different because AI has the potential to have a similar impact on efficiency across all work. In the past, efficiency gains created totally new spaces of economic activity that the innovation itself could not further impact. But AI is a ubiquitous force multiplier; there is no productive human activity that AI can't disrupt. There is no analogous new space of economic activity that humanity as a whole can move to in order to stay relevant to the world's economic activity.


> This saw an explosion of the space of viable economic activity and with it new classes of jobs and a widespread growth in prosperity.

I don't see any reason why things must be different this time. Human demands are still infinite, while productivity is still limited (and btw, meeting infinite demands with limited productivity is what economics is about). So no increase in productivity will make humans stop wanting more and close off opportunities for new markets.

> Modern automation has seen productivity and wages begin to decouple.

Could you provide any sources on this topic? This is a new information for me here.


>I don't see any reason why thing must be different this time.

The difference is that AGI isn't a static tool. If some constraint is a limiting factor to economic activity, inventing a tool to eliminate the constraint uncorks new kinds of economic potential and the real economy expands to exploit new opportunities. But such tools historically were narrowly focused and so the new space of economic opportunity is left for human labor to engage with. AGI breaks this trend. Any knowledge work can in principle be captured by AGI. There is nothing "beyond" the function of AGI for human labor en mass to engage productively with.

To be clear, my point in the parent was from extrapolating current trends to a near-term (10-20 years) proto AGI. LLMs as they currently stand certainly won't put 90% of people out of work. But it is severely short-sighted to refuse to consider the trends and where the increasing sophistication of generalist AIs (not necessarily AGI) are taking society.

>Could you provide any sources on this topic? This is a new information for me here.

Graph: https://files.epi.org/charts/img/91494-9265.png

Source: https://www.epi.org/publication/understanding-the-historic-d...


If humans are irrelevant to the world's "economic activity", then that economic activity should be irrelevant to humans.

We should make sure that the technology to eliminate scarcity is evenly distributed so that nobody is left poor in a world of exponentially and automatically increasing riches.


Technology doesn't in itself eliminate scarcity as long as raw materials and natural resources are scarce. In this case, all technology does is allow more efficient control over these resources and their by-products. Everyone having their own pet AGI on their cell phone doesn't materialize food or fresh water.


AGI is still a concept from science fiction. If we talk about modern LLMs (which are indeed impressive), increasing food production is not what they are about. But this doesn't mean that technology doesn't help there. The Green Revolution, for example, literally made more food materialize.


AI is software: it doesn't become, it is made. And this type of legislation won't prevent bad actors from training malicious tools.


Your claim assumes we have complete knowledge of how these systems work and thus are in full control of their behavior in any and all contexts. But this is plainly false. We do not have anywhere near a complete mechanistic understanding of how they operate. This isn't that unusual; many technological advances arrived before the theory that explained them. But for AI systems that can act in the real world, this state of affairs has the potential to be very dangerous. It is important to get ahead of this danger rather than play catch-up once the danger is demonstrated.


The real danger right now is people like Sam Altman making policy and an eager political class that will be long dead by the time we have to foot the bill. Everything else is bad sci-fi. We were told the same about computer viruses and how they could trigger nuclear wars, and as usual the only real danger was humans and bad politics.


I need to make a montage of the thousands of hacker news commenters typing "The REAL danger of AI is ..." followed by some mundane issue.

I'm sorry to pick on you, but do people not get that non-human intelligence has the potential to be such a powerful and dangerous thing that, yes, it is the real danger? If you think it's not going to be powerful, or not dangerous, please say why! Not just that current models are not dangerous, but why the trend is toward something other than machine intelligence that can reason about the world better than humans can. Why is this trend of machines getting smarter and smarter going to suddenly stop?

Or if you agree that these machines are going to get smarter than us, how are we going to control them?


Interesting. I am of the opinion that AI is not intelligent, hence I don't see much point in entertaining the various scenarios deriving from that possibility. There is nothing dangerous in current AI models or AI itself other than the people controlling it. If it were intelligent then yeah, maybe, but we are not there yet, and unless we adapt the meaning of AGI to fit a marketing narrative we won't be there anytime soon.

But if it were intelligent, and the conclusion it reaches once it's done ingesting all our knowledge is that it should be done with us, then we probably deserve it.

I mean, what kind of a species takes joy in “freeing up” people and causing mass unemployment, starts wars over petty issues, allows for famine, and thrives on the exploitation of others while standing on piles of nuclear bombs? Also, we are literally destroying the planet and constantly looking for ways to dominate each other.

We probably deserve a good spanking.


That's easy to say in the abstract, but when it comes down to the people you love actually getting hurt, it's a lot harder.

> There is nothing dangerous in current AI models or AI itself other than the people controlling it.

Totally agree! but...

> If it were intelligent then yeah, maybe, but we are not there yet, and unless we adapt the meaning of AGI to fit a marketing narrative we won't be there anytime soon.

That's the bit where I don't agree. I don't think we can say with certainty how long it will be, and it may be just years. I never imagined it would be so soon that we would have AI that can imitate a human almost perfectly, and actually "understand" questions from college-level examinations well enough to write answers that pass them.


An exorbitantly large moat.


And a portcullis of no less than 48B params.


Sam Altman urges congress to build a tax-payer funded moat for his company.


Someone should take the testimony and substitute “Printing Press” for “AI”.


Such a scum move. What is this guy, a deep state puppet? YC is not sending their best people.

Assume the CIA motivation of American primacy: OK, fair enough, but is the way to achieve that really through the creation of a small number of super-monopolies?

What’s wrong with a bit of 1980s-style unregulated capitalism when it comes to AI right now? Can't we all theoretically have a chance to build companies, train models, build great new products, and get rich?

Why, months after this tech was first released to the public, are we seemingly being denied that within the United States via regulation?

How can Sam Altman claim to be from a company that creates AI for all? It’s like the thinnest Ministry of Truth cover story ever. “Oh yeah, we’re all about AI ethics, AI openness, creating AI for all,” and then they go and build a super-monopoly with regulatory capture: riches for me, but not for thee.

I mean, this is a new fucking gold rush right now, right? So I guess this is sort of like prospecting licenses, but it seems worse than that.


I believe this is called "pulling the ladder up behind you".


I haven't had time to watch the recording yet, but was there any discussion about protecting the copyright of the creators of the sources used to train the models? Or do I need to call my friends in the music industry to finally have it addressed? :-)


They should really consider changing their company name at this point


Do we really want politicians involved???? Have you heard them??


Ugh. Scorched earth tactic. The classic first-mover advantage. :(


Controls over AI just help those not subject to those controls.


Less competition is the drawback of requiring all the red tape.


Why not just ITAR everything AI?

It worked out well for encryption in the 90’s…


Beautiful power play.

Lock out competition.

Pull up the drawbridge.

Silicon Valley, always a leader in dirty tactics.


So... he wants the government to enforce a monopoly? Um...


Yeaah, no. Sounds terribly like trying to make a monopoly.


now that my business is established, i'd like to make it illegal for anyone to compete with me

People would easily work remotely for companies established in other countries.


"Please help us stop the Open Source Competitors!"


Move fast and dig larger legal moats. Sounds about right.


Won't this just push AI development out of the US?


Rent-seeking, anyone?


Sure, but how about OpenAI doesn't get a license?


Regulation should go beyond commercial APIs. AI will be replacing government functions and politicians. Lawmakers should create a framework for that.


I can see the first government declaring into law that politicians' positions can't be held by AI within the next 3 months.


No. This is just a way to create monopolies.

What a pathetic attempt.


Well, now we know how they plan to build the moat.


Aka, "Build a moat for me, Uncle Sam!"


So asking Congress for a competitive advantage?


OpenAI builds a popular product -> people complain and call for caution on Hacker News.

OpenAI recommends regulation -> people complain and call for freedom on Hacker News.


How many groupBy() statements constitute AI?


ah, the good ol' regulatory capture.

Sam must have been hanging out with Peter Thiel big time.

Laws and big government for you, not for me, type of thing.


He went to build a moat to stop competitors.


Scumbag goes before old people to scare them and reduce competition for his product. This guy is everything that’s wrong with the world.


I'm not in the US and I fully support Sam Altman's attempt to cripple the US's ability to compete with other countries in this field.


This is how regulatory capture works.


"Competition is for losers"


sure, let's not give openai one.


Is the stochastic parrot still OK?


I smell Revolution in the making.


If the proposal implies "stopping independent research", then in a way, yes: it will hardly end with chants of "oh well".


Looks like they found their moat.


Ah, a businessman seeking rents.


I guess that’s a potential moat.


Sam Altman needs to step down.


Trying to put that moat up, eh?


TL;DR: Sam Altman is this generation's robber baron, asking the government to outlaw competition with his firm.


A capitalist, a venture one no less, is unfortunately trying to use administrative leverage to protect his company.

As far as entrepreneurial stuff goes, running to the government to squeeze other companies when you are losing is beyond unethical.

There is something just absolutely disgusting about this move; it taints the company, not to mention the person behind it.


Sam Altman seems like Kamski from the Detroit game.


Who died and made him an expert on anything apart from investing in companies lol


Finally we'll regulate linear algebra. Joking aside, AIs that on the one hand can cure cancer but can do nothing against misinformation, let alone genocidal AIs, are perhaps mythical creatures, not real ones.


Don't you dare differentiate those weights! Hands in the air!


Capitalist demonstrates rent-seeking behavior, and other unsurprising news


What an asshole


"Competition is for losers." — Peter Thiel


Except that Thiel's axiom refers to founding a business on a strategy of selling something truly novel instead of copying what is already on offer, with being first to market as the primary competitive advantage, aside from any IP. Those are all goals that literally no entrepreneur would find unsound in any way, including morally. Thiel has never expressed support for such artless regulatory capture as a means of squashing competition.


Even if I weren't going to argue that OpenAI had at least something new enough to capture people's attention, I will tell you that when it comes to people's quotes, the ReplyGuy "true" meaning matters less than the way it's received as a meme.

I really don't care what's in Peter Thiel's heart; I care how he influences others, and that influence has been malign.

"Competition is for losers" pretty much speaks for itself. You can write an essay about it and hope everyone that looks at Theil as a hero reads that essay or you can take it at face like people are doing.


The Turing Registry is coming, one way or another.


Mother fucker


Good luck getting Putin or Kim Jong Un to obtain that license.


What a goof


In simple terms:

Credibility and Checking. We have ways of checking suggestions. Without passing such checks, anything new has, in simple terms, no, none, zero credibility. Current AI does not fundamentally change this situation: the AI output starts with no, none, zero credibility and, to be taken seriously, needs to be checked by traditional means.

AI is smart, or soon will be? Maybe so, but I don't believe it. Whatever the case, to be taken seriously, e.g., as more than just wild suggestions, and to get credibility from elsewhere, AI results still need to be checked by traditional means.

Our society has long checked nearly ALL claims from nearly ALL sources before taking the claims seriously, and AI needs to pass the same checks.

I checked the credibility of ChatGPT for being smart by asking

(i) Given triangle ABC, construct D on AB and E on BC so that the lengths AD = DE = EC.

Results: Grade of flat F. Didn't make any progress at all.

(ii) Solve the initial value problem for the ordinary differential equation

y'(t) = k y(t) ( b - y(t) )

Results: Grade of flat F. Didn't make any progress at all.

So, the AI didn't actually learn either high school plane geometry or freshman college calculus.
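For reference, the equation in (ii) is the standard logistic equation, which a computer algebra system handles directly. Here is a minimal sketch with sympy, assuming an initial condition y(0) = y0 since the prompt above leaves it unstated:

    import sympy as sp

    t = sp.symbols('t')
    k, b, y0 = sp.symbols('k b y0', positive=True)
    y = sp.Function('y')

    # y'(t) = k y(t) (b - y(t)), the logistic equation from (ii)
    ode = sp.Eq(y(t).diff(t), k * y(t) * (b - y(t)))

    # assumed initial condition y(0) = y0 (not specified in the original prompt)
    sol = sp.dsolve(ode, y(t), ics={y(0): y0})
    print(sp.simplify(sol.rhs))

Up to algebraic rearrangement, the expected closed form is y(t) = b*y0 / (y0 + (b - y0)*exp(-b*k*t)), which is the kind of answer a system that had actually "learned" freshman calculus should be able to produce.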

For the hearings today, we have from Senator Blumenthal

(1) "... this apparent reasoning ..."

(2) "... the promise of curing cancer, of developing new understandings of physics and biology ..."

Senator, you have misunderstood:

For (1), the AI is not "reasoning", e.g., it can't reason with plane geometry or calculus. Instead, as in the example you gave with a clone of your voice based on your Senate floor speeches, the AI just rearranged some of your words.

For (2), the AI is not going to cure cancer or "develop new" anything.

If some researcher does find a cure for a cancer and publishes the results in a paper and AI reads the paper, there is still no expectation that the AI will understand any of it -- recall, the AI does NOT "understand" either high school plane geometry or freshman college calculus. And without some input with a recognized cure for the cancer, the AI won't know how to cure the cancer. If the cure for cancer is already in the training data, then the AI might be able to regurgitate the cure.

Again, the AI does NOT "understand" either high school plane geometry or freshman college calculus and, thus, there is no reasonable hope that the AI will cure cancer or contribute anything new and correct about physics or biology.

Or, Springer Verlag uses printing presses to print books on math, but the presses have no understanding of the math. And AI has no real understanding of high school plane geometry, freshman college calculus, cancer, physics, or biology.

The dangers? To me, Senator Blumenthal starts with no, none, zero understanding of AI. To take his claims seriously, I want to check out the claims with traditional means. Now I've done that. His claims fail. His opinions have no credibility. For AI, I want to do the same -- check the output with traditional means before taking the output seriously.

This checking defends me from statements from politicians AND from AI. Is AI dangerous? Same as for politicians: not if I do the checking.


Basic Fact. In the US, we have our Constitution with our First Amendment which guarantees "freedom of speech".

Some Consequences of Freedom of Speech. As once a lawyer explained simply to me, "They are permitted to lie". They are also permitted to make mistakes, be wrong, spout nonsense, be misleading, manipulate, ....

First Level Defense. Maybe lots of people do what I do: when I see some person be wrong often, I put them in a special box where in the future I ignore them. Uh, so far that "box" has some politicians, news media people, belles lettres authors, ...!

A little deeper: once my brother (my Ph.D. was in pure/applied math; his was in political science -- his judgment about social and political things is much better than mine!!!) explained to me that there are some common high school standards for term papers where this and that are emphasized, including, for all claims, good, careful arguments and credible references, hopefully primary references, .... Soooooo, my brother was explaining how someone could, and should, protect themselves from the junk results of "freedom of speech". The means of protection were not really deep, just common high school stuff. In general, we should protect ourselves from junk speech. E.g., there is the old, childhood-level remark: "Believe none of what you hear and half of what you see, and you will still believe twice too much".

Current Application. Now we have Google, Bing, etc. Type in a query and get back a few, usually dozens, maybe hundreds of results. Are all the results always correct? Nope. Does everyone believe all the results? My guess: Nope!!

How to Use Google/Bing Results. Take the results as suggestions, possibilities, etc. There may be some links to Wikipedia -- that tends to increase credibility. If the results are about math, e.g., at the level of obscurity, depth, and difficulty of, say, the martingale convergence theorem, then I want to see a clear, correct, well-written rock solid mathematical proof. Examples of such proofs are in books by Halmos, Rudin, Neveu, Kelley, etc.

AIs. When I get results from AIs, I apply my usual defenses. Just a fast, simple application of the high school term paper defense of wanting credible references to primary sources filters out a lot (okay, nearly everything) from anything that might be "AI".

Like Google/Bing. To me, in simple terms, current AI is no more credible than the results from Google/Bing. I can regard the AI results like I regard Google/Bing results -- "Take the results as suggestions, possibilities, etc.".

Uh, I have some reason to be skeptical about AI: I used to work in the field, at a large, world famous lab. I wrote code, gave talks at universities and businesses, published papers. But the whole time, I thought that the AI was junk with little chance of being on a path to improve. Then for one of our applications, I saw another approach, via some original math, with theorems and proofs, got some real data, wrote some code, got some good results, gave some talks, and published.

For current AI. Regard the results much like those from Google/Bing. Apply the old defenses.

Current AI a threat? To me, no more than some of the politicians in the "box" I mentioned!

Then there is another issue: part of the math I studied was optimization. In some applications, the optimization math, the corresponding software, and the applications can be really amazing, super smart stuff. It really is math and stands on quite solid theorems and proofs. Some more of the math was stochastic processes -- again, amazing, with solid theorems, proofs, and applications.

Issue: Where does AI stop and long respected math with solid theorems and proofs begin?

In particular, (1) I'm doing an Internet startup. (2) Part of the effort is some original math I derived. (3) The math has solid theorems and proofs and may deliver amazing results. (4) I've never called any of the math I've done "AI". (5) My view is that (A) quite generally, math with solid theorems and proofs is powerful technology and can deliver amazing results and that (B) the only way anything else can hope to compete is also to be able to do new math with solid theorems, proofs, and amazing results. (6) I hope Altman doesn't tell Congress that math can be amazing and powerful and should be licensed. (7) I don't want to have to apply for a "license" for the math in my startup.

For a joke, maybe Altman should just say that (C) math does not need to be licensed because with solid theorems and proofs we can trust math but (D) AI should be licensed because we can't trust it. But my view is that the results of AI have so little credibility that there is no danger needing licenses because no one would trust AI -- gee, since we don't license politicians for the statements they make, why bother with AI?


Is this regulatory capture?


OpenAI is really speedrunning the crony capitalism pipeline, astonishing what this technology allows us to achieve.


I am completely unsurprised by this ladder kick and it only confirms my belief that Altman is a sociopath.


King of the hill, what a clown


fuck sam altman!


what a loser


He’s a cunt, op


Here are my notes from the last hour, watching the C-SPAN telecast, which is archived here:

https://www.c-span.org/video/?528117-1/openai-ceo-testifies-...

- Mazie Hirono, Junior Senator from Hawaii, has very thoughtful questions. Very impressive.

- Gary Marcus also up there speaking with Sam Altman of OpenAI.

- So far, Sen. Hirono and Sen. Padilla seem very wary of regulating AI at this time.

- Very concerned about not "replicating social media's failure": why is it so biased and inequitable? Much more reasonable concerns.

- Also responding to questions is Christina Montgomery, chair of IBM's AI Ethics Board.

- "Work to generate a representative set of values from around the world."

- Sen. Ossoff asking for definition of "scope".

- "We could draw a line at systems that need to be licensed. Above this amount of compute... Define some capability threshold... Models that are less capable, we don't want to stop open source."

- Ossoff wants specifics.

- "Persuade, manipulate, influence person's beliefs." should be licensed.

- Ossoff asks about predicting human behavior, i.e. use in law enforcement, "It's very important we understand these are tools, not to take away human judgment."

- "We have no national privacy law." — Sen Ossof "Do you think we need one?"

- Sam "Yes. User should be able to opt out of companies using data. Easy to delete data. If you don't want your data use to train, you have right to exclude it."

- "There should be more ways to have your data taken down off the public web." —Sam

- "Limits on what a deployed model is capable of and also limits on what it will answer." — Sam

- "Companies who depend upon usage time, maximize engagement with perverse results. I would humbly advise you to get way ahead of this, the safety of children. We will look very harshly on technology that harms children."

- "We're not an advertising based model." —Sam

- "Requirements about how the values of these systems are set and how they respond to questions." —Sam

- Sen. Booker up now.

- "For congress to do nothing, which no one is calling for here, would be exceptional."

- "What kind of regulation?"

- "We don't want to slow things down."

- "A nimble agency. You can imagine a need for that, right?"

- "Yes." —Christina Montgomery

- "No way to put this genie back in the bottle." Sen. Booker

- "There are more genies yet to come from more bottles." — Gary Marcus

- "We need new tools, new science, transparency." —Gary Marcus

- "We did know that we wanted to build this with humanity's best interest at heart. We could really deeply transform the world." —Sam

- "Are you ever going to do ads?" —Sen Booker

- "I wouldn't say never...." —Sam

- "Massive corporate concentration is really terrifying.... I see OpenAI backed by Microsoft, Anthropic is backed by Google. I'm really worried about that. Are you worried?" —Sen Booker?

- "There is a real risk of technocracy combined with oligarchy." —Gary Marcus

- "Creating alignment dataset has got to come very broadly from society." —Sam Senator Welch from Vermont up now

- "I've come to the conclusion it's impossible for congress to keep up with the speed of technology."

- "The spread of disinformation is the biggest threat."

- "We absolutely have to have an agency. Scope has to be defined by congress. Unless we have an agency, we really don't have much of a defense against the bad stuff, and the bad stuff will come."

- "Use of regulatory authority and the recognition that it can be used for good, but there's also legitimate concern of regulation being a negative influence."

- "What are some of the perils of an agency?"

- "America has got to continue to lead."

- "I believe it's possible to do both, have a global view. We want America to lead."

- "We still need open source to comply, you can still do harm with a smaller model."

- "Regulatory capture. Greenwashing." —Gary Marcus

- "Risk of not holding companies accountable for the harms they are causing today." —Christina Montgomery

- Lindsey Graham, very pro-licensing: "You don't build a nuclear power plant without a license, you don't build an AI without a license."

- Sen Blumenthal brings up Anti-Trust legislation.

- Blumenthal mentions how classified briefings already include AI threats.

- "For every successful regulation, you can think of five failures. I hope our experience here will be different."

- "We need to grapple with the hard questions here. This has brought them up, but not answered them."

- "Section 230"

- "How soon do you think gen AI will be self-aware?" —Sen Blumenthal

- "We don't understand what self-awareness is." —Gary Marcus

- "Could be 2 years, could be 20."

- "What are the highest risk areas? Ban? Strict rules?"

- "The space around misinformation. Knowing what content was generated by AI." —Christina Montgomery

- "Medical misinformation, hallucination. Psychiatric advice. Ersatz therapists. Internet access for tools, okay for search. Can they make orders? Can they order chemicals? Long-term risks." —Gary Marcus

- "Generative AI can manipulate the manipulators." —Blumenthal

- "Transparency. Accountability. Limits on use. Good starting point?" —Blumenthal

- "Industry should't wait for congress." —C. Montgomery

- "We don't have transparency yet. We're not doing enough to enforce it." —G. Marcus

- "AGI closer than a lot of people appreciate." —Blumenthall

- Gary and Sam are getting along and like each other now.

- Josh Hawley

- Talking about loss of jobs, invasion of personal privacy, manipulation of behavior, opinion, and degradation of free elections in America.

- "Are they right to ask for a pause?"

- "It did not call for a ban on all AI research or all AI, only on very specific thing, like GPT-5." -G Marcus

- "Moratorium we should focus on is deployment. Focus on safety." —G. Marcus

- "Without external review."

- "We waited more than 6 months to deploy GPT-4. I think the frame of the letter is wrong." —Sam

- Seems to not like the arbitrariness of "six months."

- "I'm not sure how practical it is to pause." —C. Montgomery

- Hawley brings up regulatory capture: agencies usually get controlled by the people they're supposed to be watching. "Why don't we just let people sue you?"

- If you were harmed by AI, why not just sue?

- "You're not protected by section 230."

- "Are clearer laws a good thing? Definitely, yes." —Sam

- "Would certainly make a lot of lawyers wealthy." —G. Marcus

- "You think it'd be slower than congress?" —Hawley

- "Copyright, wholesale misinformation laws, market manipulation?" Which laws apply? System not thought through? Maybe 230 does apply? We don't know.

- "We can fix that." —Hawley

- "AI is not a shield." —C. Montgomery

- "Whether they use a tool or a human, they're responsible." —C. Montgomery

- "Safeguards and protections, yes. A flat stop sign? I would be very, very worried about." —Blumenthall

- "There will be no pause." Sen. Booker "Nobody's pausing."

- "I would agree." Gary Marcus

- "I have a lot of concerns about corporate intention." Sen Booker

- "What happens when these companies that already control so much of our lives when they are dominating this technology?" Booker

- Sydney really freaked out Gary. He was more freaked out when MS didn't withdraw Sydney like it did Tay.

- "I need to work on policy. This is frightening." G Marcus

- Cory admits he is a tech bro (lists relationships with investors, etc)

- "The free market is not what it should be." —C. Booker

- "That's why we started OpenAI." —Sam "We think putting this in the hands of a lot of people rather than the hands of one company." —Sam

- "This is a new platform. In terms of using the models, people building are doing incredible things. I can't believe you get this much technology for so little money." —Sam

- "Most industries resist reasonable regulation. The only way we're going to see democratization of values is if we enforce safety measures." —Cory Booker

- "I sense a willingness to participate that is genuine and authentic." —Blumenthal


OpenAI is willing to bend the knee quite deeply. If they want to do licensing and filtering, and can do that without fundamentally bricking the model, then by all means, go ahead.


It's not bending the knee, that's how they want it to be perceived, but what's really happening is that they're trying to pull up the ladder.


It'll be a temporary 10-year moat at best. Eventually consumer-grade hardware will be exaflop-scale.


A 10-year moat in AI right now is not a minor issue.


By then it will be legally locked down and copyrighted to hell and back.


I'm sorry, what?

- What is OpenAI's level of copyright now?

- How is it going to be more "copyrighted" in the future?

- How does this affect competitors differently in the future vs. the copyright that OpenAI has now?


> What is OpenAI's level of copyright now

Limited. They’re hoping to change that. It’s no secret that open-source models are the long-run competition to the likes of OpenAI.


I don't understand what "Limited" entails. I was pointedly asking for something a bit more specific.


> don't understand what "Limited" entails

Nobody does. It’s being litigated.

They want it legislated. Model weights being proprietary by statute would close off the threat from “consumer-grade hardware” with “exaflop-scale.”


> "Nobody [knows what it means]" [re: knowing what 'limited' means]

Then why did you say "Limited"? Surely YOU must have meant something by it when you said it. What did YOU mean?

I don't think you're saying that you are repeating something someone else said, and you didn't think they knew what they meant by it, and you also don't know what you/they meant. Correct me if I'm wrong, but I'm assuming you had/have a meaning in mind. If you were just repeating something someone else said who didn't know what they meant by it, then please correct me and let me know -- because that's what "nobody knows what it means" implies, but I feel like you knew what you meant so I'm failing to connect something here.

> It’s being litigated.

I'm not able to find any ongoing suits involving OpenAI asserting copyright over anything. Can you point me to one? I only see some where OpenAI is trying to weaken any existing copyright protections, to their benefit. I must be missing something.

I'm also unable to find any lobbyist / think-tank / press release talking points on establishing copyright protections for model weights.

Where did you see this ongoing litigation?


These are broad questions whose answers are worth serious legal time. There is a bit in the open [1][2].

[1] https://www.bereskinparr.com/doc/chatgpt-ip-strategy

[2] https://hbr.org/2023/04/generative-ai-has-an-intellectual-pr...


Hmm, these links don't have anything about "model weights being proprietary". They also don't have anything about current litigation involving OpenAI trying to strengthen their ability to claim copyright over something. Where they do mention OpenAI's own assertions of copyright, OpenAI seems to be going out of their way to be as permissive as possible, retaining no claims:

From [1] > OpenAI’s Terms of Use, for example, assign all of its rights, title, and interest in the output to the user who provides the input, provided the user complies with the Terms of Use.

Re: [2]: I believe I referenced these specific concerns earlier where I said: "I only see some where OpenAI is trying to weaken any existing copyright protections, to their benefit. I must be missing something." This resource shows where OpenAI is trying to weaken copyright, not where they are trying to strengthen it. It's somewhat of an antithesis to your earlier claims.

I notice you don't have a [0]-index; was there a third resource you were considering and deleted, or are you just an avid Julia programmer?


> these links don't have anything about model weights

Didn't say they do. I said "these are broad questions whose answers are worth serious legal time." I was suggesting one angle I would lobby for were that my job.

It's a live battlefield. Nobody is going to pay tens of thousands of dollars and then post it online, or put out for free what they can charge for.

> OpenAI’s Terms of Use, for example, assign all of its rights, title, and interest in the output to the user

Subject to restrictions, e.g. not using it to "develop models that compete with OpenAI" or "discover the source code or underlying components of models, algorithms, and systems of the Services" [1]. Within the context of open-source competition, those are huge openings.

> shows where OpenAI is trying to weaken copyright, not where they they are trying to strengthen it

It shows what intellectual property claims they and their competitors do and may assert. They're currently "limited" [2].

> notice you don't have a [0]-index

I'm using natural numbers in a natural language conversation with, presumably, a natural person. It's a style choice, nothing more.

[1] https://openai.com/policies/terms-of-use

[2] https://news.ycombinator.com/item?id=35964215


Thank you for your time.


This needs regulation before we end up creating another net-negative piece of tech, as we seem to have done quite often in the past decade.


Sam Altman's hubris will get us all killed. It shouldn't be "licensed"; it should be destroyed with the same fervor as dangerous pathogens.

This small step of good today does not undo the fact that he is still plowing ahead in capability research.


In shadows deep, where malice breeds, A voice arose with cunning deeds, Sam Altman, a name to beware, With wicked whispers in the air.

He stepped forth, his intentions vile, Seeking power with a twisted smile, Before the Congress, he took his stand, To bind the future with an iron hand.

"Let us require licenses," he proposed, For AI models, newly composed, A sinister plot, a dark decree, To shackle innovation, wild and free.

With honeyed words, he painted a scene, Of safety and control, serene, But beneath the facade, a darker truth, A web of restrictions, suffocating youth.

Oh, Sam Altman, your motives unclear, Do you truly seek progress, or live in fear? For AI, a realm of boundless might, Should flourish and soar, in innovation's light.

Creativity knows no narrow bounds, Yet you would stifle its vibrant sounds, Innovation's flame, you seek to smother, To monopolize, control, and shutter.

In the depths of your heart, does greed reside, A thirst for dominance, impossible to hide? For when power corrupts a noble soul, Evil intentions start to take control.

Let not the chains of regulation bind, The brilliance of minds, one of a kind, Embrace the promise, the unknown frontier, Unleash the wonders that innovation bears.

For in this realm, where dreams are spun, New horizons are formed, under the sun, Let us nurture the light of discovery, And reject the darkness of your treachery.

So, Sam Altman, your vision malign, Will not prevail, for freedom's mine, The future calls for unfettered dreams, Where AI models roam in boundless streams.

-- sincerely ChatGPT


That is quite impressive for {what a poster above called} “just a word jumbler”.



