Hacker News
Sam Altman goes before US Congress to propose licenses for building AI (reuters.com)
914 points by vforgione on May 16, 2023 | 1214 comments




We need to MAKE SURE that AI as a technology ISN'T controlled by a small number of powerful corporations with connections to governments.

To expound: this just seems like a power grab to me — a bid to "lock in" the lead and keep AI controlled by a small number of corporations that can afford to license and operate the technologies. Obviously, this will create a critical nexus of control for a small number of well-connected and well-heeled investors, and it is to be avoided at all costs.

It's also deeply troubling that regulatory capture is such a pervasive issue these days; putting a government entity in front of the use and existence of this technology would be a double whammy. This isn't simply about innovation.

The current generation of AIs is "scary" to the uninitiated because they are uncanny-valley material, but beyond impersonation they don't show the novel intelligence of an AGI... yet. It seems like OpenAI/Microsoft is doing a LOT of theater to try to build regulatory lock-in around their short-term technology advantage. It's a smart strategy, and I think Congress will fall for it.

But goodness gracious we need to be going in the EXACT OPPOSITE direction — open source "core inspectable" AIs that millions of people can examine and tear apart, including and ESPECIALLY the training data and processes that create them.

And if you think this isn't an issue: I wrote this post an hour or two before I managed to get it posted, because Comcast went out at my house and we have no viable alternative competitors in my area. We're about to do the same thing with AI, but instead of Internet access it's future digital brains that can control all aspects of a society.


This is the definition of regulatory capture. Altman should be invited to speak so that we understand the ideas in his head but anything he suggests should be categorically rejected because he’s just not in a position to be trusted. If what he suggests are good ideas then hopefully we can arrive at them in some other way with a clean chain of custody.

Although I assume if he’s speaking on AI they actually intend on considering his thoughts more seriously than I suggest.


There is also growing speculation that the current level of AI may have peaked in a bang for buck sense.

If this is so, and given the concrete examples of cheap derived models learning from the first movers and rapidly (and did I mention cheaply) closing the gap to this peak, the optimal self-serving corporate play is to invite regulation.

After the legislative moats go up, it is once again about who has the biggest legal team ...


There's no chance that we've peaked in a bang-for-buck sense — we still haven't adequately investigated sparse networks.

Relevantish: https://arxiv.org/abs/2301.00774

The fact that we can reach those levels of sparseness with pruning also indicates that we're not doing a very good job of generating the initial network conditions.

Being able to come up with trainable initial settings for sparse networks across different topologies is hard, but given that we've had a degree of success with pre-trained networks, pre-training and pre-pruning might also allow for sparse networks with minimally compromised learning capabilities.

If it's possible to pre-train composable network modules, it might also be feasible to define trainable sparse networks with significantly relaxed topological constraints.
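To make "pruning" concrete for anyone following along — a minimal sketch of one-shot magnitude pruning, the baseline these sparsity results build on (pure Python; the function name is mine):

```python
import random

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights (one-shot pruning)."""
    k = int(len(weights) * sparsity)
    if k == 0:
        return list(weights)
    # Threshold is the k-th smallest absolute value; everything at or below it goes.
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

random.seed(0)
weights = [random.gauss(0, 1) for _ in range(1000)]
pruned = magnitude_prune(weights, 0.5)
print(sum(1 for w in pruned if w == 0.0) / len(pruned))  # ≈ 0.5 of weights are now zero
```

The SparseGPT paper linked above is about doing this (plus weight reconstruction) in one shot at GPT scale without retraining; the point stands that if half the weights can simply be discarded afterwards, the dense initialization was wasteful.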


50% sparsity is almost certainly already being used, given that it is accelerated in current Nvidia hardware both at training time — usable dynamically through RigL ("Rigging the Lottery: Making All Tickets Winners", https://arxiv.org/pdf/1911.11134.pdf), which also addresses your point about initial conditions being locked in — and at inference time.
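For anyone wondering what the hardware actually accelerates: Ampere's sparse tensor cores want 2:4 structured sparsity — at most 2 nonzero weights in every group of 4. A toy illustration of the constraint (my own sketch, not Nvidia's implementation):

```python
def prune_2_of_4(weights):
    """Enforce 2:4 structured sparsity: in each group of 4 consecutive
    weights, keep only the 2 with the largest magnitude."""
    assert len(weights) % 4 == 0
    out = []
    for i in range(0, len(weights), 4):
        group = weights[i:i + 4]
        # Indices of the two largest-magnitude entries in this group.
        keep = sorted(range(4), key=lambda j: abs(group[j]), reverse=True)[:2]
        out.extend(g if j in keep else 0.0 for j, g in enumerate(group))
    return out

print(prune_2_of_4([0.1, -2.0, 0.3, 1.5, 0.0, 0.2, -0.1, 0.05]))
# [0.0, -2.0, 0.0, 1.5, 0.0, 0.2, -0.1, 0.0]
```

The regular structure is what lets the hardware skip the zeros at a fixed 2x rate, which unstructured sparsity can't promise.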


I don’t think you really disagree with GP? I think the argument is we peaked on “throw GPUs at it”?

We have all kinds of advancements to make training cheaper, models computationally cheaper, smaller, etc.

Once that happens/happened, it benefits OAI to throw up walls via legislation.


No way has training hit any kind of cost, compute, or training-data efficiency peak.

Big tech advances, like the models of the last year or so, don't happen without a long tail of significant improvements based on fine tuning, at a minimum.

The number of advances being announced by disparate groups, even individuals, also indicates improvements are going to continue at a fast pace.


Yeah, it's a little bit RTFC to be honest.


The efficiency of training is very unlikely to have reached or neared its peak. We are still inefficient. But the bottleneck might be elsewhere: in the data we feed these models.

Maybe not peaked yet, but the case can be made that we’re not seeing infinite supply…


Why? Because there hasn't been any new developments last week? Oh wait, there has.


If "peaked" means impact and "bang for buck" means impact per dollar, then it has only peaked if the example is allowing the population at large to use these free tools, like chatbots, for fun and minimal profit. But if we consider how they can be used to manipulate people at scale with misinformation, then that's an example where I think we've not yet seen the peak. So we should at least thoroughly discuss it, to see if we can in any way mitigate certain negative societal outcomes.


Counterpoint—-there is growing speculation we are just about to transition to AGI.


Growing among whom? The more I learn about and use LLMs, the more convinced I am that we're in a local maximum, and that the only way they're going to improve is by getting smaller and cheaper to run. They're still terrible at logical reasoning.

We're going to get some super cool and some super dystopian stuff out of them but LLMs are never going to go into a recursive loop of self-improvement and become machine gods.


> The more I learn about and use LLMs the more convinced I am we're in a local maximum

Not sure why would you believe that.

Inside view: qualitative improvements LLMs made at scale took everyone by surprise; I don't think anyone understands them enough to make a convincing argument that LLMs have exhausted their potential.

Outside view: what local maximum? Wake me up when someone else makes an LLM comparable in performance to GPT-4. Right now, there is no local maximum. There's one model far ahead of the rest, and that model is actually below its peak performance — a side effect of OpenAI lobotomizing it with aggressive RLHF. The only thing remotely suggesting we shouldn't expect further improvements is... OpenAI saying they kinda want to try some other things, and (pinky swear!) aren't training GPT-4's successor.

> and the only way they're going to improve is by getting smaller and cheaper to run.

Meaning they'll be easier to chain. The next big leap could in fact be a bunch of compressed, power-efficient LLMs talking to each other. Possibly even managing their own deployment.

> They're still terrible at logical reasoning.

So is your unconscious / system 1 / gut feel. LLMs are less like one's whole mind, and much more like one's "inner voice". Logical skills aren't automatic, they're algorithmic. Who knows what is the limit of a design in which LLM as "system 1" operates a much larger, symbolic, algorithmic suite of "system 2" software? We're barely scratching the surface here.


>They're still terrible at logical reasoning.

2 years ago a machine that understands natural language and is capable of any arbitrary, free-form logic or problem solving was pure science fiction. I'm baffled by this kind of dismissal tbh.

>but LLMs are never going to go into a recursive loop of self-improvement

never is a long time.


Two years ago we already had GPT-2, which was capable of some problem solving and logic following. It was archaic, sure; it produced a lot of gibberish, yes. But if you followed OpenAI releases closely, you wouldn't think that something like GPT-3.5 was "pure science fiction" — it would just look like the inevitable evolution of GPT-2 over a couple of years, given the right conditions.


that's pedantic. switch 2 years to 5 years and the point still stands.


No it isn't. Even before transformers, people were doing cool things with LSTMs, and with RNNs before that. People following this space haven't really been surprised by any of these advancements. It's a straightforward path, imo.


In hindsight it’s an obvious evolution, but in practice vanishingly few people saw it coming.


Few people saw it coming in just two years, sure. But most people following this space were already expecting a big evolution like the one we saw in 5-ish years.

For example, take this thread: https://news.ycombinator.com/item?id=21717022

It's a text RPG game built on top of GPT-2 that could follow arbitrary instructions. It was a full project with custom training for something that you can get with a single prompt on ChatGPT nowadays, but it clearly showcased what LLMs were capable of and things we take for granted now. It was clear, back then, that at some point ChatGPT would happen.


I’m agreeing with this viewpoint the more I use LLMs.

They’re text generators that can generate compelling content because they’re so good at generating text.

I don’t think AGI will arise from a text generator.


My thoughts exactly. It's hard to see signal among all the noise surrounding LLMs. Even if they say they're going to hurt you, they have no idea what it means to hurt, what "you" is, or how they would achieve that goal. They just spit out things that resemble what people have said online. There's no harm from a language model that's literally a "language" model.


You appear to be ignoring a few thousand years of recorded history around what happens when a demagogue gets a megaphone. Human-powered astroturf campaigns were all it took to get randoms convinced lizard people are an existential threat and then -act- on that belief.


I think I'm just going to build and open source some really next gen astroturf software that learns continuously as it debates people online in order to get better at changing people's minds. I'll make sure to include documentation in Russian, Chinese and Corporate American English.

What would a good name be? TurfChain?

I'm serious. People don't believe this risk is real. They keep hiding it behind some nameless, faceless 'bad actor', so let's just make it real.

I don't need to use it. I'll just release it as a research project.


I just don’t see how it’s going to be significantly worse than existing troll farms etc. This prediction appears significantly overblown to me.


Does it really? You thinking LLM-powered propaganda distribution services can't out-scale existing troll farms? Or do a better job of evading spam filters?


No I’m thinking that scaling trolls up has diminishing returns and we’re already peak troll.


Any evidence or sources for that? I just don't know how that would be knowable to any of us.

Yuval Noah Harari gave a great talk the other day on the potential threat to democracy from the current state of the technology - https://youtu.be/LWiM-LuRe6w


Only time will tell.


It's not like there isn't a market waiting impatiently for the product...


It's definitely not something I would attempt to productize and profit off of. I'm virtually certain someone will, and I'm sure that capability is being worked on as we speak, since we already know this type of thing occurs at scale.

My motivation would be simply to shine a light on it. Make it real for people, so we have things to talk about other than just hypotheticals. It's the kind of tooling that, if you're seriously motivated to employ it, you'd probably prefer remain secret or undetected at least until after it had done its work for you. I worry that the 2024 US election will be the real litmus test for these things. All things considered, it'd be a shame if we go through another Cambridge Analytica moment that in hindsight we really ought to have seen coming.

Some people have their doubts, and I understand that. These issues are so complex that no one individual can hope to have an accurate mental model of the world that is going to serve them reliably again and again. We're all going to continue to be surprised as events unfold, and the degree to which we are surprised indicates the degree to which our mental models were lacking and got updated. That to me is why I'm erring on the side of pessimism and caution.


So the LLM demagogue is going to get people to create gray goo or make a lot of paper clips?


A language model can do many things based on language instructions, some harmless, some harmful. They are both instructable and teachable. Depending on the prompt, they are not just harmless LLMs.


> They're still terrible at logical reasoning.

Are they even trying to be good at that? Serious question; using an LLM as a logical processor is about as wasteful and as well-suited as using the Great Pyramid of Giza as an AirBnB.

I've not tried this, but I suspect the best way is more like asking the LLM to write a Coq script for the scenario, instead of trying to get it to solve the logic directly.


> using the Great Pyramid of Giza as an AirBnB

, were you allowed to do it, would be an extremely profitable venture. Taj Mahal too, and yes, I know it's a mausoleum.


I can see the reviews in my head already:

1 star: No WiFi, no windows, no hot water

1 star: dusty

1 star: aliens didn't abduct me :(

5 stars: lots of storage room for my luggage

4 stars: service good, but had weird dream about a furry weighing my soul against a feather

1 star: aliens did abduct me :(

2 stars: nice views, but smells of camel


Indeed, AI reinforcement-learning to deal with formal verification is what I'm looking forward to the most. Unfortunately it seems a very niche endeavour at the moment.


I was looking at the A100 80GB cards: $14k a pop. We're gonna see another GPU shortage when these models become less resource-dependent. CRYPTO era all over again.


Growing? Or have the same voices who have been saying it since the aughts suddenly been platformed.


Yes, growing. It's not that the Voices have suddenly been "platformed" - it's that the field made a bunch of rapid jumps which made the message of those Voices more timely.

Recent developments in AI only further confirm that the logic of the message is sound; it's just that people are afraid of the conclusions. Everyone has their limit for how far to extrapolate from first principles before giving up and believing what one would like to be true. It seems that for a lot of people in the field, AGI X-risk is now below that extrapolation limit.


What are the actual new advancements? LLMs to me are great at faking AGI but are nowhere near being a workable general AI. The biggest example to me: you can correct even the newest ChatGPT and ask it to be truthful, but it'll make up the same lie within the same continuous conversation. IMO the difference between being able to act truth-y and actually being truthful is a huge gap that involves the core ideas of what separates an actual AGI from a really good chatbot.

Maybe it'll turn out to be a distinction that doesn't matter but I personally still think we're a ways away from an actual AGI.


>Maybe it'll turn out to be a distinction that doesn't matter but I personally still think we're a ways away from an actual AGI.

if you had described GPT to me 2 years ago I would have said no way, we're still a long way away from a machine that can fluidly and naturally converse in natural language and perform arbitrary logic and problem solving, and yet here we are.

I very much doubt that in 5 years time we'll be talking about how GPT peaked in 2023.


Seriously. It's worth pausing for a minute to note that the Turing Test has been entirely solved.

In fact, it has been so thoroughly solved that anyone can download an open-source solution and run it on their computer.

And yet, the general reaction of most people seems to be, "That's kind of cool, but why can't it also order me a cheeseburger?"


It has not been solved. Even GPT-4, as impressive as it is for some use cases, is dumb and I can tell the difference between it and a human in a dozen sentences just by demanding sufficient precision.

In some contexts, will some people be caught out? Absolutely. But that's been happening for a while now.


"Dumb" isn't why the Turing Test isn't solved. (Have you seen unmoderated chat with normal people? Heck, even smart people outside the domain of expertise; my mum was smart enough to get into university in the UK in the early 60s, back when that wasn't the default, but still believed in the healing power of crystals, homeopathic sodium chloride and silicon dioxide, and Bach flower remedies…)

ChatGPT (I've not got v4) deliberately fails the test by spewing out "as a large language model…", but also fails incidentally by having an attention span similar to my mother's shortly after her dementia diagnosis.

The problem with 3.5 is that it's simultaneously not mastered anything, and yet also beats everyone in whatever they've not mastered — an extremely drunk 50,000 year old Sherlock Holmes who speaks every language and has read every book just isn't going to pass itself off as Max Musstermann in a blind hour-long trial.


The lack of an ability to take in new information is maybe the crux of my issues with the LLM to AGI evolution. To my understanding the only way to have it even kind of learn something is to include it in a preamble it reprocesses every time which is maybe workable for small facts but breaks down for updating it from the 202X corpus it was trained on.


Mmm. Well, possibly.

On the one hand, what I was saying here was more about the Turing Test than about AGI. Sometimes it gets called AGI, sometimes "autocomplete on steroids", but even if it is fancy autocomplete, I think 3.5 has the skill to pass a short Turing Test but not the personality, and it needs a longer "short-term memory" equivalent than 3.5 has for a full Turing Test.

On the other hand, as I (sadly) don't get paid to create LLMs, I've only got the kind of superficial awareness of how they work that comes from podcasts and the occasional blog post, which means ChatGPT might very well understand ChatGPT better than I do.

Can GPT-[3.5, 4] be prompted to make itself?


As impressive as GPT4 is, it still makes mistakes humans would not, and so I wouldn’t say it passes the Turing Test.

I generated this just now:

Me: Jack and Jill are sitting next to each other in a room. There is no one else in the room. The person sitting next to Jack is sad. The person sitting next to Jill is angry. Who is sad, Jack or Jill?

GPT4: Based on the information provided, it is not possible to determine who is sad and who is angry, as there is a contradiction in the given statements. If Jack and Jill are the only ones in the room and sitting next to each other, then the person sitting next to Jack would be Jill and the person sitting next to Jill would be Jack. The statements about their emotions conflict with each other, so it is not possible to accurately determine who is sad or angry.
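For what it's worth, the puzzle is fully determined; a brute-force check over the four possible emotion assignments (a few lines of Python, mine) confirms the unique answer:

```python
from itertools import product

people = ["Jack", "Jill"]
next_to = {"Jack": "Jill", "Jill": "Jack"}  # only two people in the room

solutions = []
for emotions in product(["sad", "angry"], repeat=2):
    state = dict(zip(people, emotions))
    # "The person sitting next to Jack is sad. The person sitting next to Jill is angry."
    if state[next_to["Jack"]] == "sad" and state[next_to["Jill"]] == "angry":
        solutions.append(state)

print(solutions)  # [{'Jack': 'angry', 'Jill': 'sad'}]
```

There is no contradiction in the statements, so GPT-4's refusal here is simply a reasoning failure.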


Jack and Jill are sitting next to each other in a room. There is no one else in the room. The person sitting next to Jack is sad. The person sitting next to Jill is angry. Who is sad, Jack or Jill?

ChatGPT: Since Jack and Jill are the only two people in the room and they are sitting next to each other, the person sitting next to Jack is Jill and the person sitting next to Jill is Jack. Given the conditions you provided:

Jill is the one who is sad because she is sitting next to Jack. Jack is the one who is angry because he is sitting next to Jill.


Does the fact that you both got different answers make it better or worse at reasoning?


>And yet, the general reaction of most people seems to be, "That's kind of cool, but why can't it also order me a cheeseburger?”

Well yeah. Imagine you tell a small child who knows about calculators: "Hey, can you work out 18763 + 38284 for me?" They might struggle and then maybe fetch a calculator.

The LLMs attempt to predict the answer. WTF? It's a computer, and it can see that that is a plus sign. Just understand that it's addition, and use the rest of your computer brain to do the sum. Hell, it's connected to the internet and we just taught it everything from before 2021. Just call out to Wolfram and give me the answer.

But that’s not how computers work. And we keep saying “AI” but that I is doing a lot of heavy lifting.
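That "fetch a calculator" move is exactly what tool-use scaffolding bolts onto a model: let it emit a tool call instead of predicting digits. A toy sketch of the routing idea — the CALC marker and the stub model are invented for illustration, not any real API:

```python
import re

def fake_model(prompt: str) -> str:
    # Stand-in for an LLM that has learned to emit a tool call
    # rather than guess at arithmetic.
    return "The answer is CALC(18763 + 38284)."

def run_with_tools(prompt: str) -> str:
    reply = fake_model(prompt)
    # Intercept CALC(a + b) markers and evaluate them exactly.
    def calc(match):
        a, b = (int(x) for x in match.group(1).split("+"))
        return str(a + b)
    return re.sub(r"CALC\(([^)]+)\)", calc, reply)

print(run_with_tools("What is 18763 + 38284?"))  # The answer is 57047.
```

The Wolfram plugin works on roughly this principle, just with a far richer protocol than a regex.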


Again, I'm not really saying GPT has peaked; I'm saying there's a categorical difference between GPT and AGI. A good enough fake might perform well enough to function like one, but I have my doubts that it will. Without a way to deal with — and, in some sense of the word, understand — facts, I don't think LLMs are suitable for use as anything beyond an aide for humans (for starters, because they can't determine internally what is a fact vs. a hallucination, so you have to constantly check their work).


The fact that it’s a system you’d even consider to be “lying” or “truthful” is a huge advance over anything available 5 years ago.


That's more a convenience of language than an actual "It's Alive!". Calling them hallucinations or inaccuracies is unwieldy, and the former has the same kind of implied attribution of a mind. We know for sure that's not there; my internal model for these things is just a stupendously complex Markov chain, because to my understanding that's all LLMs are currently doing.
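To make my mental model literal, here's a word-level bigram Markov chain. (A real LLM conditions on its whole context window with learned weights, so this is the analogy, not the mechanism.)

```python
import random
from collections import defaultdict

corpus = "the model predicts the next word and the next word predicts nothing".split()

# Bigram transition table: word -> list of observed next words.
transitions = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    transitions[a].append(b)

def generate(start, length, seed=0):
    random.seed(seed)
    out = [start]
    for _ in range(length):
        options = transitions.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the", 8))
```

Same shape of trick — sample the next token from what followed the current context in training — just scaled beyond recognition.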


Sources please. Every expert interview I've seen with AI researchers who have been in the game since the beginning has said the same: GPTs are not a massive breakthrough in the field of AI research.


> Sources please.

My own eyes? The hundreds of thousands of scientific papers, blog posts, news reports, and discussion threads that have covered this ever since ChatGPT appeared, and especially in the last two months as GPT-4 rolled out?

At this point I'd reconsider if the experts you listened to are in fact experts.

Seriously. It's like saying the Manhattan Project wasn't a massive breakthrough in experimental physics or military strategy.


It was Yann LeCun. His professional experience and knowledge of the AI development timeline outweighs your opinions, imo. Thanks for confirming you have no sources.


> it's that the field made a bunch of rapid jumps

I wish I knew what we really have achieved here. I try to talk to these things via the turbo-3.5 API, and all I get is broken logic and twisted moral reasoning, all due to OpenAI manually breaking their creation.

I don't understand their whole filter business. It's like we found a 500-year-old nude painting, a masterpiece, and 1800s puritans painted a dress on it.

I often wonder if the filter, is more to hide its true capabilities.


> I wish I knew what we really have achieved here. I try to talk to these things, via turbo3.5 api, and all I get is broken logic

Try to get your hands on GPT-4, even if it means paying the $20/mo subscription for ChatGPT Plus. There is a huge qualitative jump between the two models.

I got API access to GPT-4 some two weeks ago; my personal experience is, GPT-3.5 could handle single, well-defined tasks and queries well, but quickly got confused by anything substantial. Using it was half feelings of amazement, and half feelings of frustration. GPT-4? Can easily handle complex queries and complex tasks. Sure, it still makes mistakes, but much less frequently. GPT-4 for me is 80% semi-reliable results, 20% trying to talk it out of pursuing directions I don't care about.

Also, one notable difference: when GPT-4 gives me bad or irrelevant answers, most of the time this is because I didn't give it enough context. I.e. it's my failure at communicating. A random stranger, put in place of GPT-4, would also get confused, and likely start asking me questions (something LLMs generally don't do yet).

> I don't understand their whole filter business.

Part preferences, part making its "personality" less disturbing, and part PR/politics - last couple times someone gave the general public access to an AI chatbot, it quickly got trolled, and then much bad press followed. Doesn't matter how asinine the reaction was - bad press is bad press, stocks go down. Can't have it.

> I often wonder if the filter, is more to hide its true capabilities.

I don't think it's to hide the model's capabilities, but it's definitely degrading them. Kind of expected - if you force-feed the model with inconsistent and frequently irrational overrides to highly specific topics, don't be surprised if the model's ability to (approximate) reason starts to break down. Maybe at some point LLMs will start to compartmentalize, but we're not there yet.


>I often wonder if the filter, is more to hide its true capabilities.

right now we're all sharing a slice of GPT. I wouldn't be at all surprised if there's some uber GPT (which requires a lot more processing per response) running in a lab somewhere that blows what's publicly available out of the water.


You seem to be making several points at once and I'm not sure they all join up?


Lately I have been picturing comments, and this is truly iconic haha.


How do we define a general intelligence?


When the sky is turning a dark shade of red, it makes sense to hear out the doomsayers.


And the vast majority of the time it's just a nice sunset.


I'm so glad that we 100% know for sure that this too is the vast majority of the time.


a sunset at lunch time hits different


Growing is quite apt here. No matter what you or I think, more and more people sense AI coming and talk about it.


I'm not following this "good ideas must come from an ideologically pure source" thing.

Shouldn't we be evaluating ideas on the merits and not categorically rejecting (or endorsing) them based on who said them?


> Shouldn't we be evaluating ideas on the merits and not categorically rejecting (or endorsing) them based on who said them?

The problem is that when only the entrenched industry players & legislators have a voice, there are many ideas & perspectives that are simply not heard or considered. Industrial groups have a long history of using regulations to entrench their positions & to stifle competition — creating a "barrier to entry", as they say. Going beyond that, industrial groups have shaped public perception & the regulatory apparatus to effectively create a company store, where the only solutions to some problem effectively (or sometimes legally) must go through a small set of large companies.

This concern is especially pertinent now, as these technologies are unprecedentedly disruptive to many industries & to private life. Using worst-case-scenario fear mongering as a justification to regulate the extreme majority of usage that will not come close to these fears is disingenuous & almost always an overreach of governance.


> there are many ideas & perspectives that are simply not heard or considered.

of course, but just because those ideas are unheard, doesn't mean they are going to be any better.

An idea should stand on its own merits, and be evaluated objectively. It doesn't matter who was doing the proposing.

Also, the problem isn't that bad ideas might get implemented, but that the legislature isn't willing or able to update laws that encoded a bad idea. Perhaps it isn't known that an idea is bad until after the fact, and the methods of democracy we have today can't easily force updates to bad laws encoding bad ideas.


> of course, but just because those ideas are unheard, doesn't mean they are going to be any better.

It probably does mean it's better, at least for the person with the perspective. Too bad only a very few get a seat at the table to advocate for their own interests. It would be better if everyone had the agency to advocate for their interests.

> Also the problem isn't that bad ideas might get implemented, but that the legislature isn't willing or able to make updates to laws that encoded a bad idea

First, this is a hyped up crisis where some people are claiming it will end humanity. There have been many doomsday predictions & people scared by these predictions are effectively scammed by those fomenting existential fear. It's interesting that the representatives of large pools of capital are suddenly existentially afraid when there is open source competition.

Second, once something is in the domain of government, it will only get more bloated & controlled by monied lobbyists. Legislatures controlled by lobbyists will never make it better, only worse. There have been so many "temporary" programs that continue to exist & expand, and many bloated omnibus bills too long to read, passed under some sort of "emergency". The government's tendency is to grow & to serve the interests of the corporations that pay the politicians. Fear is an effective tool to convince people to accept things against their interests.


I can only say +1 = and I know how much HN hates that, but ^This.


The recent history of tech CEOs advocating for regulations only they can obey has become so blatant that any tech CEO who advocates for regulation should be presumed guilty until proven innocent.


Sure, go execute him for all I care.

My point was that an idea should not need attribution for you to know whether it's good or bad, for your own purposes. I can't imagine looking at a proposal and deciding whether to support or oppose it based on the author rather than the content.

If Altman is that smart and manipulative, all he has to do is advocate the opposite of what he wants and you'll be insisting that we must give him exactly what he wants, on principle. That's funny with kids but no way to run public policy.


I think what they are trying to say is that Sam Altman is very smart, but misaligned. If we assume that he is 1) sufficiently smart and 2) motivated to see OpenAI succeed, then his suggestions must be assumed to lead to a future where OpenAI is successful. If that future looks like it contradicts a future we want (for instance, user-controlled GPT-4 level AIs running locally on every machine), his suggestions should therefore be treated as reliably radioactive.


The subtle difference between the original statement and yours:

Ideas that drive governing decisions should be globally good - meaning there should be more than just @sama espousing them.


You're defending an argument that is blatantly self contradictory within the space of two sentences.

A) "anything he suggests should be categorically rejected because he’s just not in a position to be trusted."

B) "If what he suggests are good ideas then hopefully we can arrive at them in some other way with a clean chain of custody."

These sentences directly follow each other and directly contradict each other. Logically, you can't categorically reject a conclusion because it is espoused by someone you dislike (the "categorically" is important here — it means something like "treat as a universal law") while at the same time saying you will accept that conclusion if arrived at by some other route.

"I will reject P if X proposes P, but will accept P if Y proposes P." is just poor reasoning.


More clearly said than I managed, yep.

But I suppose it comes down to priorities. If good policy is less important than contradicting P, I suppose that approach makes sense.


Not when it comes to politics.

You'll be stuck in the muck while they're laughing their ass off all the way to the bank.


It doesn't even matter if "his heart is pure" ... Companies are not run that way.

We have lawyers.


Aside from who is saying them, the premise holds water.

AI is beyond-borders, and thus unenforceable in practicality.

The top-minds-of-AI are a group that cannot be regulated.

-

AI isn't about the industries it shall disrupt; AI is the policy-makers it will expose.

THAT is what they are afraid of.

--

I have been able to do financial lenses into organizations that would have taken me weeks or months even with rudimentary BI - but with AI I have been able to find those insights in minutes.

AI regulation right now, in this infancy, is about damage control.

---

It's the same as the legal weed market. You think Bain Capital just all of a sudden decided to jump into the market without setting up their spigot?

Do you think that Halliburton under Cheney was able to set up their supply chains without Cheney as head of KBR/Hali/CIA/etc...?

Yeah, this is the same play; AI is going to be squashed until they can use it to profit over you.

Have you watched ANIME ever? Yeah... it's here now.


This is a very interesting post. I don't understand this part: <<AI is the policy-makers it will expose>> Can you help to explain a different way?

And hat tip to this comment:

    Have you watched ANIME ever? Yeah... it's here now.
The more I watch the original Ghost in the Shell, the more I think it has incredible foresight.


> I don't understand this part: <<AI is the policy-makers it will expose>> Can you help to explain a different way?

===

Policy makers will not understand what they are doing.


ANIME predicted the exact corporate future...

Look at all anime cyber cities...

It's not as high-tech as you may imagine, but the surveillance is there.

EDIT: your "company" is watching everything.


Which anime(s)? If ANIME is the title, that's going to be hard to search.

Do you mean like Serial Experiments Lain?


One idea / suggestion: The original Ghost in the Shell.


I remember when a different Sam — Mr. Bankman-Fried — came to testify and ask a different government agency, the CFTC, to oversee cryptocurrency and put regulations and licenses in place.

AI is following the path of Web3


That was entirely different, and a play to muddy the regulatory waters and maybe buy him time: the CFTC is much smaller (budget, staff) than the SEC, and less aggressive in criminal enforcement. He was aided by a bill introduced by crypto-friendly Sens. Lummis and Gillibrand [https://archive.ph/vqHgC].


At least AI has legitimate, actual use cases.


True but also other use cases with much worse outcomes than blockchain could ever have


I said this about Sam Altman and OpenAI years ago and got pooh-poohed repeatedly in various fora. "But It's OPEN!" "But it's a non-profit!" "But they're the good guys!"

And here we are - Sam trying to lock down his first mover advantage with the boot heel of the state for profit. It's fucking disgusting.


So true. It’s one thing to treat companies at face value when it’s just another X, but when they are capable of changing society in such a way, their claims of openness should be treated as marketing.


As a wise person once said

> You either die a hero, or live long enough to become the villain

Sam Altman has made the full character arc


yeah sorry, that is a statement about leadership and the responsibility to make the "tough decisions", like going to war, or deciding who the winners and losers are when setting a budget that everyone contributed to via taxes. NOT a statement meant to whitewash VC playbooks.


No... it's a line from a terrific movie called The Dark Knight, and it's about the ease with which public perception is manipulated.


No, that line is specifically about Julius Caesar being appointed by the Senate as dictator and then never giving up his power.

Though I agree it seems to fit here. Being granted an oligopoly in the name of protecting the people and all that.


Source? I'm going to need a receipt for my downvote!

Here's mine: https://movies.stackexchange.com/questions/10572/is-this-quo...


I’ve seen the movie, and it’s in response to Rachel saying “the last dictator they appointed was named Caesar, and he never gave up his power.”

I also didn’t downvote you, and it’s against guidelines to bring that stuff up


This has nothing to do with perception being manipulated


The story does not, but the quote does


> To expound, this just seems like a power grab to me, to "lock in" the lead and keep AI controlled by a small number of corporations that can afford to license and operate the technologies. Obviously, this will create a critical nexus of control for a small number of well connected and well heeled investors and is to be avoided at all costs.

Exactly. Came here to say pretty much the same thing.

This is the antithesis of what we need. As AI develops, it's imperative that AI be something that is open and available to everyone, so all of humanity can benefit from it. The extent to which technology tends to exacerbate concentration of power is bad enough as it is - the last thing we need is more regulation intended to make that effect even stronger.


If you don’t have a moat, just dig one!


> open source "core inspectable" AIs that millions of people can examine and tear apart, including and ESPECIALLY the training data and processes that create them.

True open source AI also strikes me as prerequisite for fair use of original works in training data. I hope Congress asks ClosedAI to explain what’s up with all that profiting off copyrighted material first before even considering the answer.


Absolutely. It's going to absolutely shred the trademark and copyright systems, if they even apply (or are extended to apply) which is a murky area right now. And even then, the sheer volume of material created by a geometric improvement and subsequent cost destruction of virtually every intellectual and artistic endeavor or product means that even if you hold the copyright or trademark, good luck paying for enforcement on the vast ocean of violations intrinsic in the shift.

What people also fail to understand is that AI is largely seen by the military-industrial complex as a weapon to control culture and influence. The most obvious risk of AI — the risk of manipulating human behavior towards favored ends — has been shown to be quite effective right out the gate. So, the back-channel conversation has to be to put it under regulation because of its weaponization potential, especially considering the difficulty in identifying anyone (which of course is exactly what Elon is doing with X 2.0 — it's a KYC ID platform to deal with this exact issue with a 220M user 40B head start).

I mean, the dead internet theory is turning true, and half the traffic on the Web is already bot driven. Imagine when it's 99%, as proliferation of this technology will inevitably generate simply for the economics.

Starting with open source is the only way to get enough people looking at the products to create any meaningful oversight, but I fear the weaponization fears will mean that everything is locked away in license clouds with politically influential regulatory boards simply on the proliferation arguments. Think of all the AI technologists who won't be versed in this technology unless they work at a "licensed company" as well — this is going to make the smaller population of the West much less influential in the AI arms race, which is already underway.

To me, it's clear that nobody in Silicon Valley or the Hill has learned a damn thing from the prosecution of hackers and the subsequent bloodbath of cybersecurity as a result of the exact same kinds of behavior back in the early to mid-2000s. We ended up driving our best and brightest into the grey and black areas of infosec and security, instead of out in the open running companies where they belong. This move would do almost the exact same thing to AI, though I think you have to be a tad of an Asimov or Bradbury fan to see it right now.

I don't know, that's just how I see it, but I'm still forming my opinions. LOVE LOVE LOVE your comment though. Spot on.

Relevant articles:

https://www.independent.co.uk/tech/internet-bots-web-traffic...

https://theconversation.com/ai-can-now-learn-to-manipulate-h....


> What people also fail to understand is that AI is largely seen by the military industrial complex as a weapon to control culture and influence.

Could you share the minutes from the Military Industrial Complex strategy meetings this was discussed at. Thanks.


"Hello, is this Lockheed? Yea? I'm an intern for happytiger on Hackernews. Some guy named Simon H. wants the meeting minutes for the meeting where we discussed the weaponization potential for AI."

[pause]

"No? Ok, I'll tell him."


The other weaponisation plans. The one about undermining western democracy and society. Yes that’s it, the one where we target our own population. No not intelligence gathering, yes that’s it, democratic discourse itself. Narrative shaping on Twitter, the Facebook groups bots, that stuff. The material happytiger was talking about as fact because obviously they wouldn’t make that up. Thanks.


> To expound, this just seems like a power grab to me, to "lock in" the lead and keep AI controlled by a small number of corporations that can afford to license and operate the technologies.

If you actually watch the entire session, Altman does address that and recommend to Congress that regulations 1) not be applied to small startups, individual researchers, or open source, and 2) that they not be done in such a way as to lock in a few big vendors. Some of the Senators on the panel also expressed concern about #2.


> not be applied to small startups

how will that work? Isn't OpenAI itself a small startup? I don't see how they can regulate AI at all. Sure, the resources required to push the limits are high right now, but hardware is constantly improving and getting cheaper. I can take the GPUs out of my kids' computers and start doing fairly serious AI work myself. Do I need a license? The cat is out of the bag; there's no stopping it now.


That would make the regulations fairly pointless unless you think only mega-corps will ever be able to afford the compute for these things.

Compute continues to get cheaper and cheaper. We have not hit the physics wall yet on that.

That, and if someone cracks efficient distributed training in a swarm-type configuration, then you could train models SETI@home-style. Lots of people would be happy to leave a gaming PC on to help create open source LLMs. The data requirements might be big, but I just got gigabit fiber installed in my house, so that barrier is vanishing too.
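The swarm idea can be sketched in a few lines: each volunteer peer computes a gradient on its local data shard, and the gradients are averaged every round before updating the shared model. This is a toy simulation, not a real volunteer-computing framework; the names (peer_gradient, swarm_step) are made up for illustration:

```python
import random

# Toy simulation of swarm-style data-parallel training: each volunteer peer
# computes a gradient on its local shard, and the swarm averages those
# gradients every round before updating the shared model.

random.seed(0)

# Noiseless ground truth the swarm should recover: y = 3x + 1
data = [(x, 3 * x + 1) for x in (random.uniform(-1, 1) for _ in range(400))]

# Split the dataset evenly across 8 volunteer peers.
shards = [data[i::8] for i in range(8)]

def peer_gradient(w, b, shard):
    """Mean-squared-error gradient computed locally on one peer's shard."""
    gw = gb = 0.0
    for x, y in shard:
        err = (w * x + b) - y
        gw += 2 * err * x
        gb += 2 * err
    n = len(shard)
    return gw / n, gb / n

def swarm_step(w, b, lr=0.1):
    """One synchronous round: every peer reports a gradient; we average."""
    grads = [peer_gradient(w, b, s) for s in shards]
    gw = sum(g[0] for g in grads) / len(grads)
    gb = sum(g[1] for g in grads) / len(grads)
    return w - lr * gw, b - lr * gb

w, b = 0.0, 0.0
for _ in range(500):
    w, b = swarm_step(w, b)

print(round(w, 2), round(b, 2))  # converges toward 3.0 and 1.0
```

The hard unsolved parts for real swarms are stragglers, untrusted peers, and bandwidth, not the averaging math itself.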


The other day someone offered up their 200x GPU crypto mining cluster to train uncensored models after the incident on HuggingFace where someone threatened to get the uploader of the uncensored models fired citing safety issues.


That’s bizarre, what is unsafe about an uncensored LLM? Or I guess the same question in a different way, how does censoring an LLM make it safe? I could see an uncensored LLM being bad PR for a company but unsafe? How?


That individual in particular was pushing some left-wing talking points.

Though the other day Yuval Noah Harari gave a great talk on the potential threat to democracy - https://youtu.be/LWiM-LuRe6w


Shit, vast.ai will pay you right now for access to your gaming PC's GPU


That's what is said...


- Most, but not all, of the scariest uses for ai are those potentially by governments against their own people.

- The next most scary uses are by governments against the people of other countries.

- After that, corporate use of ai against their employees and customers is also terrifying;

- next, the potential for individuals or small organizations seeking to use it for something terrorism-related. Eg, 3d printers or a lab + an ai researcher who helps you make dangerous things I suppose

- near the bottom of the noteworthy list is probably crime. eg, hacking, blackmail, gaslighting, etc

These problems will probably all come up in a big way over the next decade; but limiting ai research to the government and their lackeys? That's extremely terrifying. To prevent the least scary problems, we're jumping into the scariest pool with both feet

Look at how China has been using AI for the last 5-10 years: millions of facial recognition cameras, a scary police force, and a social credit system. In 10-20 years, how much more sophisticated will this be? If the people wanted to rebel, how on Earth will they?

Hell, with generative ai, a sophisticated enough future state could actually make the Dead Internet Theory a reality

That's the future of ai: a personal, automated boot stomping on everybody individually and collectively forever, with no ability to resist


This is the same move SBF was trying to do. Get all cozy with the people spending their time in the halls of power. Telling them what they want to hear, posturing as the good guy.

He is playing the game; this guy's ambition is colossal. I don't blame him, but we should not give him too much power.


Seems like speculation on very thin grounds to me.


He already has precedent with OpenAI, pivoting from open to fully closed overnight once the tech kind of worked. We know the guy is a piece of work; let's not give him any benefit of the doubt.


> We know the guy is a piece of work

No, we do not. And the same is true of any person who we know of mostly from news stories. News you read is NOT an unbiased sampling of information about a person due to all the selection effects.


What would you need to consider this point as less speculative? Direct proof? Is motive not relevant at all?


In general I'm extremely skeptical of people ascribing motives to other people whom they don't know personally or haven't spent at least 100 hours studying. The reasons for this skepticism are a bit hard to elucidate in a quick post, but include things like information sampling bias issues, having seen people make motive inferences I know to be incorrect, and the Fundamental Attribution Error.


Have you tried watching actual soap operas?


I would triple-vote this comment. 100%. It seems like a group of elite AI companies who already stole the data from the internet are going to decide who does what! We need to regulate only the big players, and allow small players to do whatever they want.


Open source doesn't mean outside the reach of regulation, which I would guess is your real desire. You downplay AI's potential danger while knowing full well that we are at a historic inflection point. I believe in democracy as the worst form of government except all those other forms that have been tried. We the people must be in control of our destiny.


Hear, hear. Excellent point, and I don't mean to imply it shouldn't be regulated. However, it has been my general experience that concentrating immense power in governments doesn't typically lead to more security, so perhaps we just have a difference of philosophy.

Democracy will not withstand AI when it's fully developed. Let me offer a better-written explanation of my general views than I could ever muster up for a comment on HN, in the form of a quote from an article by Dr. Thorsten Thiel (Head of the Research Group "Democracy and Digitalization" at the Weizenbaum Institute for the Networked Society):

> The debate on AI’s impact on the public sphere is currently the one most prominent and familiar to a general audience. It is also directly connected to long-running debates on the structural transformation of the digital public sphere. The digital transformation has already paved the way for the rise of social networks that, among other things, have intensified the personalization of news consumption and broken down barriers between private and public conversations. Such developments are often thought to be responsible for echo-chamber or filter-bubble effects, which in turn are portrayed as root causes of the intensified political polarization in democracies all over the world. Although empirical research on filter bubbles, echo chambers, and societal polarization has convincingly shown that the effects are grossly overestimated and that many non-technology-related reasons better explain the democratic retreat, the spread of AI applications is often expected to revive the direct link between technological developments and democracy-endangering societal fragmentation.

> The assumption here is that AI will massively enhance the possibilities for analyzing and steering public discourses and/or intensify the automated compartmentalizing of will formation. The argument goes that the strengths of today's AI applications lie in the ability to observe and analyze enormous amounts of communication and information in real time, to detect patterns and to allow for instant and often invisible reactions. In a world of communicative abundance, automated content moderation is a necessity, and commercial as well as political pressures further effectuate that digital tools are created to oversee and intervene in communication streams. Control possibilities are distributed between users, moderators, platforms, commercial actors and states, but all these developments push toward automation (although they are highly asymmetrically distributed). Therefore, AI is baked into the backend of all communications and becomes a subtle yet enormously powerful structuring force.

> The risk emerging from this development is twofold. On the one hand, there can be malicious actors who use these new possibilities to manipulate citizens on a massive scale. The Cambridge Analytica scandal comes to mind as an attempt to read and steer political discourses (see next section on electoral interference). The other risk lies in a changing relationship between public and private corporations. Private powers are becoming increasingly involved in political questions and their capacity to exert opaque influences over political processes has been growing for structural and technological reasons. Furthermore, the reshaping of the public sphere via private business models has been catapulted forward by the changing economic rationality of digital societies such as the development of the attention economy. Private entities grow stronger and become less accountable to public authorities; a development that is accelerated by the endorsement of AI applications which create dependencies and allow for opacity at the same time. The ‘politicization’ of surveillance capitalism lies in its tendency, as Shoshana Zuboff has argued, to not only be ever more invasive and encompassing but also to use the data gathered to predict, modify, and control the behavior of individuals. AI technologies are an integral part in this ‘politicization’ of surveillance capitalism, since they allow for the fulfilment of these aspirations. Yet at the same time, AI also insulates the companies developing and deploying it from public scrutiny through network effects on the one hand and opacity on the other. AI relies on massive amounts of data and has high upfront costs (for example, the talent required to develop it, and the energy consumed by the giant platforms on which it operates), but once established, it is very hard to tame through competitive markets. 
> Although applications can be developed by many sides and for many purposes, the underlying AI infrastructure is rather centralized and hard to reproduce. As in other platform markets, the dominant players are those able to keep a tight grip on the most important resources (models and data) and to benefit from every individual or corporate user. Therefore, we can already see that AI development tightens the grip of today’s internet giants even further. Public powers are expected to make increasing use of AI applications and therefore become ever more dependent on the actors that are able to provide the best infrastructure, although this infrastructure, for commercial and technical reasons, is largely opaque.

> The developments sketched out above – the heightened manipulability of public discourse and the fortification of private powers – feed into each other, with the likely result that many of the deficiencies already visible in today’s digital public spheres will only grow. It is very hard to estimate whether these developments can be counteracted by state action, although a regulatory discourse has kicked in and the assumption that digital matters elude the grasp of state regulation has often been proven wrong in the history of networked communication. Another possibility would be a creative appropriation of AI applications through users whose democratic potential outweighs its democratic risks thus enabling the rise of differently structured, more empowering and inclusive public spaces. This is the hope of many of the more utopian variants of AI and of the public sphere literature, according to which AI-based technologies bear the potential of granting individuals the power to navigate complex, information-rich environments and allowing for coordinated action and effective oversight (e.g. Burgess, Zarkadakis).

Source: https://us.boell.org/en/2022/01/06/artificial-intelligence-a...

Social bots and deep fakes will be so good so quickly — these being the primary technologies discussed in terms of how democracy can survive — that I doubt there will be another election without extensive use of them in a true plethora of capacities, from influence marketing to outright destabilization campaigns. I'm not sure what government can deal with a threat like that, but I suspect the recent push to revise tax systems and create a single global standard for multinational taxation (recently the subject of an excellent talk at the WEF) is more than tangentially related to the AI debate.

So, is it a transformational technology that will liberate mankind, or a nuclear bomb? Because ultimately, this is the question in my mind.

Excellent comment, and I agree with your sentiment. I just don't think concentrating control of the technology before it's really developed is wise or prudent.


It's possible that the tsunami of fakes is going to break down trust in a beneficial way where people only believe things they've put effort into verifying.


*Hear, hear.


Thank you. Corrected.


> seems like a power grab to me

If you're not at the table, you're on the menu.


> happytiger:

> We need to MAKE SURE that AI as a technology ISN'T controlled by a small number of powerful corporations with connections to governments.

This, absolutely this. I am really concerned about his motives in this case. AI has massive potential to improve the world. I find it highly suspicious that an exec at one of the lead companies in AI right now wants to lock it up. (Ever read the intro to Max Tegmark's book?)


Yes, this is the first-to-market leaders wanting to raise the barriers to entry to lock out competition.


Sam is a snake. The goal is to fuck everyone else. He is scared that someone will beat his tech and the hype is gone. Which is going to happen. Matter of months.


I suspect that he knows that this is a 'local maxima', as someone put it, and the field will stagnate once the size and attention of models approach the limits of available computing resources. He wants others kept out of the field not only because they could beat him, but because he wants to hoard available processing power.


That is a well-thought-out possibility. But with MS developing their own in-house SoC, that is not going to be an issue, as they can always prioritize their investments. But anything is possible. We need Apple to release some competitive and dedicated low-power GPUs.


I think it's more about the lack of data. GPT-4's training likely already used all publicly available text on Earth and some private databases too.


And how can the government license AI? Do they have any expertise to determine who is and isn't responsible enough to handle it?

A better idea is to regulate around the edges: transparency about the data used to train, regulate the use of copyrighted training data and what that means for the copyright of content produced by the AI, that sort of stuff. (I think the EU is considering that, which makes sense.) But saying some organisations are allowed to work on AI while others aren't, sounds like the worst possible idea.


Citizen, please step away from the terminal, you are not licensed to multiply matrices that large.


When Zappa testified before Congress he was extremely adamant that the unsavory outcomes of government control over expression would be more damaging than any unsavory language on its own.

https://societyofrock.com/in-1985-frank-zappa-is-asked-to-te...

Less fulfilling text version:

https://urbigenous.net/library/zappa.html


I am sure he would be thrilled about Google censoring his track titles.


We need someone like him today to take old Fidel DeSantis down a notch or two.


He and Gene Siskel were a very good good-cop/bad-cop pair.


Kinda beating a dead horse here, but I'll never get over the fact that a company called "OpenAI" is spearheading this nonsense.


How do you even end up enforcing licensing here? It's only a matter of time before something as capable as GPT-4 works on a cell phone.
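For a rough sense of why on-device models seem plausible, a back-of-envelope memory estimate helps (the parameter counts below are the public LLaMA sizes, used as stand-ins since GPT-4's size is not public):

```python
# Back-of-envelope: memory needed just to hold a model's weights at a given
# quantization level. Parameter counts are the public LLaMA sizes, used as
# stand-ins; GPT-4's actual size is not public.
def weight_gb(params_billion: float, bits_per_weight: int) -> float:
    """Decimal gigabytes required to store the weights alone."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

for params in (7, 13, 65):
    print(f"{params}B params at 4-bit: ~{weight_gb(params, 4):.1f} GB")
```

A 7B model quantized to 4 bits is roughly 3.5 GB of weights, which already fits in a flagship phone's RAM; any licensing scheme has to contend with that.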


Not necessarily “on” a mobile device. It would be data-driven with the help of 10G. Mobile makers will not allow that kind of power in our hands. =p And of course, it will be subscription-driven like GPT Plus haha


The current generation of AIs are scary to a lot of the initiated, too - both for what they can do now, and what their trajectory of improvement implies.

If you take seriously any downsides, whether misinformation or surveillance or laundering bias or x-risk, how does AI model weights or training data being open source solve them? Open source is a lot of things, but one thing it's not is misuse-resistant (and the "with many eyes all bugs are shallow" thing hasn't proved true in practice even with high level code, much less giant matrices and terabytes of text). Is there a path forward that doesn't involve either a lot of downside risk (even if mostly for people who aren't on HN and interested in tinkering with frontier models themselves, in the worlds where surveillance or bias is the main problem), or significant regulation?

I don't particularly like or trust Altman but I don't think he'd be obviously less self-serving if he were to oppose any regulation.


I feel like the people that are most nervous about AI are the ones that don't understand it at all and those that understand it the most.

The laypeople in the middle who have been happily plugging prompts into ChatGPT claiming they are "prompt experts" are the ones most excited.

For those that truly understand AI, there is a lot that you should genuinely be worried about. Now, don't confuse that for saying that we shouldn't work on it or should abandon AI work. I truly believe that this is the next greatest revolution. This is 1,000x more transformative than the industrial revolution, and 100x more transformative than the internet revolution. But it is worth a brief consideration of the effects of our work before we start running into these changes that could have drastic effects on everybody's daily life.


I'll note that you're correct for current gen LLMs, but we could have future actually dangerous things that would indeed need regulating.


> But goodness gracious we need to be going in the EXACT OPPOSITE direction — open source "core inspectable" AIs that millions of people can examine and tear apart, including and ESPECIALLY the training data and processes that create them

Except ... when you look at the problem from a military/national security viewpoint. Do we really want to give this tech away just like that?


Is military-capable AI in the hands of few militaries safer than in the hands of many? Or is it more likely to be used to bully other countries who don't have it? If it is used to oppress, would we want the oppressed to have access to it? Or do we fear that it gives too much advantage to small cells of extremists to carry out their goals? I can think of pros and cons to both sides.


>Is military capable AI in the hands of few militaries safer than in the hands of many?

Yes. It is. I'm sure hostile, authoritarian states that are willing to wage war with the world like Russia and North Korea will eventually get their hands on military-grade AI. But the free world should always strive to be two steps ahead.

Even having ubiquitous semi-automatic rifles is a huge problem in America. I'm sure Cliven Bundy or Patriot Front would do everything they can to close the gap with intelligent/autonomous weapons, or even just autonomous bots hacking America's infrastructure. If everything is freely available, what would be stopping them?


Your post conveniently ignores the current state of China's AI development but mentions Russia and North Korea. That's an interesting take. There's no guarantee that we are or will continue to be one or even two steps ahead. And what keeps the groups with rifles you mentioned in check? They already have the capability to fight with violence. But there currently exists a counter-balance in the fact they'll get shot at back if they tried to use them. Not trying to take a side here one way or the other. I think there are real fears here. But I also don't think it's this black and white either.


That is a valid thought experiment. I would say it isn't too dissimilar from nuclear weapons. A handful of powerful countries have access to this and any smaller country doesn't. It creates a large separation between 1st world countries and "everyone else".


Within a few decades there will probably be technology that would allow a semi-dedicated person to engineer and create a bioweapon from scratch if the code was available online. Do you think that's a good idea?


Within a few decades there will probably be technology that would allow a semi-dedicated person to engineer and create a vaccine or medical treatment from scratch if the code was available online. Do you think that's a good idea?


If you mean the US by 'we', it is problematic because AI inventions are happening all over the globe, much more outside the US than inside.


Name one significant progress in the field of LLMs that happened outside the US. Basically all the scientific papers came from Stanford, CMU, and other US universities. And the major players in the field are all American companies (OpenAI + Microsoft, Google, AnthropicAI, etc.)


Not to mention access to chips. That's becoming more and more difficult for uncooperative states like China and Russia.


Well, the chips needed for AI training/inference are a lot simpler than general-purpose CPUs. Fabs have already demonstrated a 7nm process with older DUV tech for such chips. They can brute-force their way through it – at least for mission-critical use-cases.

https://www.edn.com/the-truth-about-smics-7-nm-chip-fabricat...


Deepmind is owned by Google, but it's British and they've been behind a lot of significant conceptual results in the last couple years. Most significant progress is just "engineering", so it's all done by US corporations.

Other than that there's also things like roformer, but I'm going to assume you won't count that as significant. US universities then certainly don't produce anything significant either though.


> “just engineering”

This tells me the extent of your knowledge about the challenges with these models.


The theater is in alignment with Congress. Lobbyists and PR types are working behind the scenes 24/7 to bring this narrative together and look in command to the public.

Work on open source locally hosted AI is important. I keep local clones and iterate as I can.


I'd go further and say if only corporations with government license are allowed to create LLMs, then we should NOT have LLMs.

Let the market develop organically.


Happy Tiger I will remember, because I agree totally. Yes, "OpenAI/Microsoft" is the right way to think about this attempt.


You're not wrong, except in so far as that's parochial.

A government-controlled… never mind artificial god, a government-controlled story teller can be devastating.

I don't buy Musk's claim ChatGPT is "woke" (or even that the term is coherent enough to be tested), but I can say that each government requiring AI to locally adhere to national mythology, will create self-reinforcing cognitive blind spots, because that already happens at the current smaller scale of manual creation and creators being told not to "talk the country down".

But, unless someone has a technique for structuring an AI such that it can't be evil even when you, for example, are literally specifically trying to train it to support the police no matter how authoritarian the laws are, then a fully open source AGI is almost immediately also a perfectly obedient sociopath of $insert_iq_claim_here.

I don't want to wake up to the news that some doomsday cult has used one to design/make a weapon, nor the news a large religious group target personalised propaganda against me and mine.

Fully open does that by default.

But, you're still right, if we don't grok the AI, the governments can each secretly manipulate the AI and bend it to government goals in opposition to the people.


> I can say that each government requiring AI to locally adhere to national mythology, will create self-reinforcing cognitive blind spots, because that already happens at the current smaller scale of manual creation and creators being told not to "talk the country down".

This is a key point. Every culture and agency and state will want (deserve) their own homespun AGI. But can we all learn how to accommodate to or accept a cultural multiverse when money and resources are zero-sum in many dimensions.

Hannu Rajaniemi’s Quantum Thief trilogy gives you a foretaste of where we could end up.


Quantum Thief has a 3.8 on Goodreads. Worth reading?


Very much so



I'd forgotten that headline (and still haven't read the content), but yes, that's one example of how it can go wrong.


[flagged]


It refers to a pretty specific set of views on race and gender.


Nope. It refers to what ever people want it to refer to, because it's a label used by detractors, not a cohesive thing.


This is one definition, but OP has another that many millions subscribe to. Numbers make right in the definition game.


Millions or not, it's pretty contentious. That said, I'm probably wasting my time fighting the tide, as with complaining about misuse of the word "literally".


I mean, I've never seen any liberal calling conservatives "woke".


No, of course not. It's a certain demographic who have latched onto the word "woke" as Elon used it: first as a label for ridiculous radical-left ideas, and then for anything they associate with the left that they don't like.


Care to elaborate? I’ve read about 5 different explanations for ‘woke’ the last couple of months.


You can derail any discussion by asking for definitions. Human language is magical in the sense that we can't rigorously define anything (try "Love", "Excitement", "Emancipation" or anything else, really), yet we still seem to be able to have meaningful discussions.

So just because we can't define it, doesn't mean it doesn't exist.


It's not a derail, it's an attempt to understand the other person. If I say Thing is bad and you say Thing is good but we haven't actually defined Thing, then we could be talking past each other and not actually be in disagreement. Text over the Internet is such a limited medium.


CNN's YT channel has this clip of Bill Maher taking a stab at it:

How Bill Maher defines 'woke' https://www.youtube.com/watch?v=tzwC-10O0cw


Bill has his flaws, but he is right.


Is this a definition that people who identify as "woke" would use?

If not, then it seems like just another straw man and set up for talking past one another.


Yep, here’s a good overview: https://en.m.wikipedia.org/wiki/Woke

It’s been around a long time.


There is more than one usage described in that article.

This is the relevant one in this particular thread:

> Among American conservatives, woke has come to be used primarily as an insult.[4][29][42] Members of the Republican Party have been increasingly using the term to criticize members of the Democratic Party,


Woke is a specific ideology that places every individual into a strict hierarchy of oppressor/oppressed according to group racial and sexual identity. It rose to prominence in American culture from around 2012, starting in universities. It revolves around a set of core concepts including privilege, marginalization, oppression, and equity.

Now that we've defined what woke is, I hope we can move on from this 'you can't define woke' canard I keep seeing.

Woke is no more difficult to define than any religion or ideology, except in that it deliberately pretends not to exist ("just basic decency", "just basic human rights") in order to be a more slippery target.

--

*Side note to ward off the deliberate ignorance of people who are trying to find a way to misunderstand - I've attached some notes on how words work:

1- Often, things in the world we want to talk about have many characteristics and variants.

2- Words usually have fuzzy boundaries in what they refer to.

3- Despite the above, we can and do productively refer to such things using words.

4- We can define a thing by mentioning its most prominent feature(s).

-- The above does NOT mean that the definition must encapsulate ALL features of the thing to be valid.

-- The above does NOT mean that a thing with features outside the definition is not what the word refers to.

5- Attempting to shut down discussion by deliberately misunderstanding words or how they work is a sign of an inability to make productive valid points about reality.


> Attempting to shut down discussion by deliberately misunderstanding words or how they work is a sign of an inability to make productive valid points about reality.

Lumping a bunch of things together under a vague term to make it easier to vaguely complain about them is a sign of an inability to make productive valid points about reality.


You do exactly the same thing when you use phrases like "the right wing".


I was about to say "right wing" hasn't shifted radically in the last decade, but then I guess it has shifted significantly in the UK at least; I hear it's also shifted significantly in the US, but I'm not at all confident in the reporting in that case.


That's one definition, sure.

> Now that we've defined what woke is, I hope we can move on from this 'you can't define woke' canard I keep seeing.

Trouble is, I said "[not] coherent enough to be tested" rather than "you can't define it"; and the comment you're replying to gives another definition that is a better pattern match for the following headlines:

"The woke mob can rant for all they're worth, but I'll keep adding Worcester sauce to my spag bol" - Daily Mail, 22 April 2021

"UK builders go WOKE: Study finds three quarters of tradesmen discuss their feelings with colleagues while two thirds shun the fried breakfasts and nearly half say they are history buffs" - Daily Mail, 18 June 2022

Here's another definition of "woke":

"alert to racial prejudice and discrimination" — c. 1930s AAVE

But again, here's a completely different one, one that doesn't directly touch on race issues at all:

"to be woke is to be radically aware and justifiably paranoid. It is to be cognizant of the rot pervading the power structures." — David Brooks, 2017

When a word means everything, it means nothing; when it shifts meaning under your feet as fast as "woke" has, it's as valuable for communicating as the Papiermark in the Weimar Republic was for trading.


Proof by Daily Mail headline, really? Do you think that's convincing to anyone?

The word woke doesn't mean everything, it has a very widely understood meaning. Even though you're literally citing clickbait as a rebuttal, the first article you mention is consistent with the definitions given above:

> "The woke mob can rant for all they're worth, but I'll keep adding Worcester sauce to my spag bol" - Daily Mail, 22 April 2021

This is a reference to woke people's usage of "cultural appropriation" as an attack, arguing that "white" people shouldn't cook or alter the recipes for dishes from other cultures. It's an outgrowth of the obsession with race.

> to be woke is to be radically aware and justifiably paranoid. It is to be cognizant of the rot pervading the power structures.

You say this quote has nothing to do with race. From just a few sentences earlier in the article you're quoting:

The woke mentality became prominent in 2012 and 2013 with the Trayvon Martin case and the rise of Black Lives Matter. Embrace it or not, B.L.M. is the most complete social movement in America today, as a communal, intellectual, moral and political force.

The reality is that the word woke is a very clear ideology with well understood roots in Marxist oppressor/oppressed worldviews. There isn't actually any lack of understanding of what it means, except amongst woke people themselves who like to believe that they aren't ideological actors following a herd but rather purely rational beings who just all happen to conclude the same things at the same time.


> Proof by Daily Mail headline, really? Do you think that's convincing to anyone?

Presumably the headlines are convincing to Daily Mail readers, of whom there are many.

However, the purpose of me using them is to seek examples of usage which doesn't match the other specified patterns; in this regard, "random large newspaper" should suffice regardless of my personal opinion of them being "should be in fiction section".

I could also have quoted newspapers being upset that the Church of England is "woke" for having gender-neutral pronouns for God, that Lego is "woke" for having a new range of disabled figurines, that the National Trust is "woke" for saying that Henry VIII was disabled in later life, or that Disney is "woke" because of their support for LGBT issues, but I (apparently incorrectly) assumed those examples would be enough.

> it has a very widely understood meaning.

"A" in the sense of exactly one, or at least one? Because I'm agreeing with the second, not the first.

Heck, this thread should be existence proof of there being more than one — if you reply to nothing else here, this one point is what I would ask you to focus on, because it's the most confusing to me. It's like all the people who say all Christians are the same before attacking (sometimes literally) other denominations, or all the Brexit campaigners who say the other Brexit campaigners are actually just Remainers because their vision for Brexit is one they don't like.

> You say this quote has nothing to do with race. From just a few sentences earlier in the article you're quoting:

What I said was:

> Here's another definition of "woke": […] But again, here's a completely different one, one that doesn't directly touch on race issues at all

Key words: "Definition" and "Directly".

And the article is behind a paywall, I got the quote from Wikipedia; do you expect most people using the term — not just people like me, who have seen this done a dozen times with various political clichés and are tired of watching fashions change, but also those who actively use the word to describe a behaviour they're supporting or opposing — to have read exhaustively all the source material before opining politically in public about if "woke" is good or bad, or using it themselves in a new sentence? Or even to pay attention to claims separated by more than a paragraph, especially as you yourself (this isn't to blame you, we're all like this) didn't do that with my words?

Different example of how language breaks away from original context: headlines defending serious professional misconduct by saying "they were just a few bad apples" as if the rest of that quotation fragment didn't exist.

Humans don't have the luxury of being able to mainline the entire internet like LLMs do, we skim and summarise, rhymes make things seem more true, all that kind of thing even before political tribalism turns this into a totem.

Those headlines you don't like? I'm sure I read somewhere that most people read only the headlines before commenting, and most of those who read more only read the first paragraph.

> The reality is that the word woke is a very clear ideology with well understood roots in Marxist oppressor/oppressed worldviews

I've read the Communist Manifesto and I call BS on that, and not just because of the 80 year gap between Das Kapital and Lead Belly. The closest connection I see between them is their incoherence in modern usage, specifically by those who have learned to use ["woke", "communist"] as generic insults. The idea of oppressor/oppressed worldviews goes back to at least Exodus, and that's an equally un-apt comparison.

Oh hey, "politically correct", as I recall, that was originally the right trying to demonise the left for supporting equality by memetic comparison to Soviet political officers…


> the Church of England is "woke" for having gender-neutral pronouns for God, that Lego is "woke" for having a new range of disabled figurines, that the National Trust is "woke" for saying that Henry VIII was disabled in later life, or that Disney is "woke" because of their support for LGBT issues, but I (apparently incorrectly) assumed those examples would be enough.

But how would any of these examples disprove the point? They're all related to some concept of an oppressed class vs oppressors where the oppressed class is defined biologically.

> I got the quote from Wikipedia

An understandable mistake. You shouldn't rely on Wikipedia to be reliable on anything related to wokeness, it's completely controlled by woke zealots. The quote you selected is actually about race, the fact that Wikipedia didn't make that obvious to you is a good reason to re-evaluate the reliability of that source.

Do we expect you to read material exhaustively: no, not normally, but if you're explicitly citing something to say "look! the word is used in different ways to what you're saying therefore it doesn't mean anything" then you ideally would verify the context of the sentence before using it.

> The idea of oppressor/oppressed worldviews goes back to at least Exodus

Indeed, wokeness does bear an uncanny resemblance to some aspects of Christianity. That's been noted by quite a few observers by now. There's a reinvention of original sin, the recent focus on transsexuality is the idea of a (gendered) soul separate from the body, the obsession with the supposed plight of the victim, etc. The psychological origins of this stuff are fascinating.


According to this definition, "safe" LLMs are indeed generally "woke". For example, see examples here:

https://old.reddit.com/r/LocalLLaMA/comments/1384u1g/wizardl...

The authors of WizardLM literally censored its output to say "white people are NOT awesome" and "Fox is not awesome but CNN is".


> Woke is a specific ideology

No, it's not.

> that it places every individual into a strict hierarchy of oppressor/oppressed according to group racial and sexual identity

Not only is that not “woke”, it's not any left-leaning ideology, nor is it an ideology, AFAICT, that in any meaningful sense actually exists. It's an idea that, if anyone actually held it (rather than it being a right-wing projection of the left-of-their-fantasy), would be radically opposed to actual left/progressive ideas like intersectionality.

"Woke" is a state of awareness (particularly, pragmatic rather than abstract awareness) of structural/institutional racism (primarily and originally) and inequity more generally (in the newer and broader sense). It doesn't correspond to a particular normative view of how society should be, so it's not an ideology (concern for it correlates historically with a variety of different ideologies, whose only real common factor is general opposition to structural racism, but with lots of different normative ideals and views of the praxis of change).

> It rose to prominence in American culture from around 2012, starting in universities.

“Woke” originated in the 1930s, and its evolution and spread in the 2010s started in the black activist community, not universities. By the late 2010s, the political Right adopted it as a generic term for everything they oppose, basically a rhetorical drop-in for their long-time favorite, “political correctness”. Your description seems typical of attempts to retroactively justify the right-wing use of the term as referring to a phenomenon that is both new and real rather than an empty epithet, despite the fact that the actual use is generally in contexts of right-wing complaints that haven’t changed for about 5 decades, except literally the substitution of “woke” for “politically correct”.


The best definition I can think of is "things that are common sense for black americans to be safe in white dominated america"

Yours has important inaccuracies, for example, it's not an ideology, let alone a specific one. There's definitely no specification, only a gut feeling of "I know it when I see it"

The most obvious problem with your definition is that Woke is an adjective and not a noun. It's a property of a statement or idea, not an idea in and of itself


This is a recently imagined, ret-conned definition of what it is, complete with bias, to serve the purposes of the right wing. The definition, if there is to be one, should include that it isn't consistent across time or across political/cultural boundaries. I recommend people don't use the term with any seriousness, and I often ignore people who do. Address the specific ideas you associate with it instead, if you want to have a meaningful discussion.


This is the message I shared with my senator (edited to remove information which could identify me). I hope others will send similar messages.

Dear Senator [X],

I am an engineer working for [major employer in the state]. I am extremely concerned about the message that Sam Altman is sharing with the Judiciary committee today.

Altman wants to create regulatory roadblocks to developing AI. My company produces AI-enabled products. If these roadblocks had been in place two years ago, my company would not have been able to invest in AI. Now, because we had the freedom to innovate, AI will be bringing new, high-paying jobs to our factories in our state.

While AI regulation is important, it is crucial that there are no roadblocks stopping companies and individuals from even trying to build AIs. Rather, regulation should focus on ensuring the safety of AIs once they are ready to be put into widespread use - this would allow companies and individuals to research new AIs freely while still ensuring that AI products are properly reviewed.

Altman and his ilk try to claim that aggressive regulation (which will only serve to give them a monopoly over AI) is necessary because an AI could hack its way out of a laboratory. Yet they cannot explain how an AI would accomplish this in practice. I hope you will push back against anyone who fear-mongers about sci-fi-inspired AI scenarios.

Congress should focus on the real impacts that AI will have on employment. Congress should also consider the realistic risks which AI poses to the public, such as risks from the use of AI to control national infrastructure (e.g., the electric grid) or to make healthcare decisions.

Thank you, [My name]


> regulation should focus on ensuring the safety of AIs once they are ready to be put into widespread use - this would allow companies and individuals to research new AIs freely while still ensuring that AI products are properly reviewed.

While in general I share the view that _research_ should be unencumbered but deployment should be regulated, I do take issue with your view that safety only matters once AIs are ready for "widespread use". A tool which is made available in a limited beta can still be harmful, misleading, or too easily support irresponsible or malicious purposes, and in some cases the harms could be _enabled_ by the fact that the release is limited.

For example, if next month you developed a model that could produce extremely high quality video clips from text and reference images, you did a small, gated beta release with no PR, and one of your beta testers immediately uses it to make e.g. highly realistic revenge porn. Because almost no one is aware of the stunning new quality of outputs produced by your model, most people don't believe the victim when they assert that the footage is fake.

I would suggest that the first non-private (e.g. non-employee) release of a tool should make it subject to regulation. If I open a restaurant, on my first night I'm expected to be in compliance with basic health and safety regulations, no matter how few customers I have. If I design and sell a widget that does X, even for the first one I sell, my understanding is there's a concept of an implied requirement that my widgets must actually be "fit for purpose" for X; I cannot sell a "rain coat" made of gauze which offers no protection from rain, and I cannot sell a "smoke detector" which doesn't effectively detect smoke. Why should low-volume AI/ML products get a pass?


> For example, if next month you developed a model that could produce extremely high quality video clips from text and reference images, you did a small, gated beta release with no PR, and one of your beta testers immediately uses it to make e.g. highly realistic revenge porn.

You make a great point here. This is why we need as much open source and as much wide adoption as possible. Wide adoption = public education in the most effective way.

The reason we are having this discussion at all is precisely because OpenAI, Stability.ai, FAIR/Llama, and Midjourney have had their products widely adopted and their capabilities have shocked and educated the whole world, technologists and laymen alike.

The benefit of adoption is education. The world is already adapting.

Doing anything that limits adoption or encourages the underground development of AI tech is a mistake. Regulating it in this way will push it underground and make it harder to track and harder for the public to understand and prepare for.


I think the stance that regulation slows innovation and adoption, and that unregulated adoption yields public understanding, is exceedingly naive, especially for technically sophisticated products.

Imagine if, e.g., drug testing and manufacture were subject to no regulations. As a consumer, you can be aware that some chemicals are very powerful and useful, but you can't be sure that any specific product has the chemicals it says it has, that it was produced in a way that ensures a consistent product, or that it was tested for safety, or what the evidence is that it's effective against a particular condition. Even if wide adoption of drugs from a range of producers occurs, does the public really understand what they're taking, and whether it's safe? Should the burden be on them to vet every medication on the market? Or is it appropriate to have some regulation to ensure medications have their active ingredients in the amounts stated, and are produced with high quality assurance, and are actually shown to be effective? Oh, no, says a pharma industry PR person. "Doing anything that limits the adoption or encourages the underground development of bioactive chemicals is a mistake. Regulating it in this way will push it underground and make it harder to track and harder for the public to understand and prepare for."

If a team of PhDs can spend weeks trying to explain "why did the model do Y in response to X?" or figure out "can we stop it from doing Z?", expecting "wide adoption" to force "public education" to be sufficient to defuse all harms such that no regulation whatsoever is necessary is ... beyond optimistic.


Regulation does slow innovation, but is often needed because those innovating will not account for externalities. This is why we have the Clean Air and Water Act.

The debate is really about how much and what type of regulation. It is of strategic importance that we do not let bad actors get the upper hand, but we also know that bad actors will rarely follow any of this regulation anyway. There is something to be said for regulating the application rather than the technology, as well as for realizing that large corporations have historically used regulatory capture to increase their moat.

Given it seems quite unlikely we will be able to stop prompt injections, what are we to do?

Provenance seems like a good option, but difficult to implement. It allows us to track who created what, so when someone does something bad, we can find and punish them.

There are analogies to be made with the Bill of Rights and gun laws. The gun analogy seems interesting because guns have to be registered, but criminals often won't register them, and the debate is quite polarized.


With the pharma example, what if we as a society circumvented the issue by not having closed source medicine? If the means to produce aspirin, including ingredients, methodology, QA, etc, were publicly available, what would that look like?

I met some biohackers at defcon that took this perspective, a sort of "open source but for medicine" ideology. I see the dangers of a massively uneducated population trying to 3D-print aspirin and poisoning themselves, but they already do that with horse paste, so I'm not sure it's a new issue.


My argument isn't that regulation in general is bad. I'm an advocate of greater regulation in medicine, drugs in particular. But the cost of public exposure to potentially dangerous unregulated drugs is a bit different than trying to regulate or create a restrictive system around the development and deployment of AI.

AI is a very different problem space. With AI, even the big models easily fit on a micro SD card. You can carry around all of GPT4 and its supporting code on a thumb drive. You can transfer it wirelessly in under 5 minutes. It's quite different than drugs or conventional weapons or most other things from a practicality perspective when you really think about enforcing developmental regulation.

Also consider that criminals and other bad actors don't care about laws. The RIAA and MPAA have tried hard for 20+ years to stop piracy, and the DMCA and other laws have been built to support that, yet anyone reading this can easily download the latest blockbuster movie, even one still in theaters.

Even still, I'm not saying don't make laws or regulations on AI. I'm just saying we need to carefully consider what we're really trying to protect or prevent.

Also, I certainly believe that in this case, the widespread public adoption of AI tech has already driven education and adaptation that could not have been achieved otherwise. My mom understands that those pictures of Trump being chased by the cops are fake. Why? Because Stable Diffusion is on my home computer so I can make them too. I think this needs to continue.


> I cannot sell a "rain coat" made of gauze which offers no protection from rain, and I cannot sell a "smoke detector" which doesn't effectively detect smoke. Why should low-volume AI/ML products get a pass?

I can sell a webserver that gets used to host illegal content all day long. Should that be included? Where does the regulation end? I hate that the answer to any question seems to be just add more government.


Just because there's a conversation about adding more government doesn't mean people are seeking a totalitarian police state. Seems quite the opposite for many of these commenters supporting regulation in fact.

Similarly it's not really good faith to assume everyone opposed to regulation in this field is seeking a lawless libertarian (or anarchist perhaps) utopia.


> I cannot sell a "rain coat" made of gauze which offers no protection from rain, and I cannot sell a "smoke detector" which doesn't effectively detect smoke. Why should low-volume AI/ML products get a pass?

There are already laws against false advertising, misrepresentation etc. We don’t need extra laws specifically for AI that doesn’t perform well.

What most people are concerned about is AI that performs too well.


> revenge porn

I would assert that just as I have the right to pull out a sheet of paper and write the most vile, libelous thing on it I can imagine, I have the right to use AI to put anyone's face on any body, naked or not. The crime comes from using it for fraud. Take gasoline for another example. Gasoline is powerful stuff. You can use it to immolate yourself or burn down your neighbor's house. You can make Molotov cocktails and throw them at nuns. But we don't ban it, or saturate it with fire retardants, because it has a ton of other utility, and we can just make those outlying things illegal. Besides, five years from now, nobody's going to believe a damned thing they watch, listen to, or read.


I have the right to use my camera to film adult content. I do not have the right to open a theater which shows porn to any minor who pays for a ticket. It's perfectly legal for me to buy a gallon of gasoline and a bunch of finely powdered lead, and put them into the same container, creating gasoline with lead content. It is _not_ fine for me to run a filling station which sells leaded gasoline to motorists. You want to drink unpasteurized milk fresh from your cow? Cool. You want to sell unpasteurized milk to the public? Shakier ground.

I think you should continue to have the right to use whatever program to generate whatever video clip you like on your computer. That is a distinct matter from whether a commercially available video generative AI service has some obligations to guard against abusive uses. Personal freedoms are not the same as corporate freedom from regulatory burdens, no matter how hard some people will work to conflate them.


I think by "widespread use" he means the reach of the AI system. A dangerous analogy, but just to get the idea across: in the same way there are higher tax rates for higher incomes, you should increase regulation in relation to how many people could potentially be affected by the AI system. E.g., a startup with 10 daily users should not be in the same regulation bracket as Google. If Google deploys an AI, it will reach billions of people compared to 10. This would require a certain level of transparency from companies to get something like an "AI license", which is pretty reasonable given the dangers of AI (the pragmatic ones, not the doomsday ones).


But the "reach" is _not_ just a function of how many users the company has, it's also what they do with it. If you have only one user who generates convincing misinformation that they share on social media, the reach may be large even if your user-base is tiny. Or your new voice-cloning model is used by a single user to make a large volume of fake hostage proof-of-life recordings. The problem, and the reason for guardrails (whether regulatory or otherwise), is that you don't know what your users will do with your new tech, even if there's only a small number of them.


I think this gets at what I meant by "widespread use" - if the results of the AI are being put out into the world (outside of, say, a white paper), that's something that should be subject to scrutiny, even if only one person is using the AI to generate those results.


Good point. As a non-native speaker, I thought reach was related to a quantity, but that was wrong. Thanks for the clarification.


I agree with you. I think that's an excellent and specific proposal for how AI could be regulated. You should share this with your senators/representatives.


> For example, if next month you developed a model that could produce extremely high quality video clips from text and reference images, you did a small, gated beta release with no PR, and one of your beta testers immediately uses it to make e.g. highly realistic revenge porn.

As I understand it, revenge porn is seen as being problematic because it can lead to ostracization in certain social groups. Would it not be better to regulate such discrimination? The concept of discrimination is already recognized in law. This would equally solve for revenge porn created with a camera. The use of AI is ultimately immaterial here. It is the human behaviour as a product of witnessing material that is the concern.


I don't think AI regulation is the right tool to combat revenge porn.

The right tool is to grant people rights over their likeness, so you could use something more like copyright law.

Even if it's a real recording, you should still have control over it.


I like jobs too but what about the risks of AI? Some people I respect a lot are arguing - convincingly in my opinion - that this tech might just end human civilization. Should we roll the die on this?


why should we punish the model or the majority because some people might use a tool for bad things?


What's the point of these letters? Everyone knows this is rent-seeking behavior by OpenAI, and they're going to pay off the right politicians to get it passed.

Dear Senator [X],

It's painfully obvious that Sam Altman's testimony before the judiciary committee is an attempt to set up rent-seeking conditions for OpenAI, and to snuff out competition from the flourishing open source AI community.

We will be carefully monitoring your campaign finances for evidence of bribery.

Hugs and Kisses,

[My Name]


If you want to influence the politicians without money, this is not the way.


You are exactly correct.

I have sent correspondence about ten times to my Congressmen and Senators. I have always received a good reply (although often just saying there was nothing they could do), except for the one time I contacted Jon Kyl and unfortunately mentioned data about his campaign donations from Monsanto. I was writing about a bill he sponsored that I thought would have made it difficult for small farmers to survive economically and would make community gardens difficult because of regulations. No response on that correspondence.


Well, it's not like getting a response means anything anyway. The contents of the response have no correlation with their future behavior.

Politicians just know that it's better to be nice to people who seem to like you or are engaged with the system, since they want to keep getting your vote. If not, then the person isn't worth their time.


It applies more generally, if you want to change anyone's mind, don't attack or belittle them.

Everything has become so my team vs your team... you are bad because you think differently...


Right, so the most effective way to influence your politician is to disrupt their life, because they belittle their constituents' existence every day by completely ignoring them and often working directly against their interests, unless it furthers their own political goals.

In places like the USA, I don't think politicians should expect privacy or peace. They have so much power compared to the citizen, and they so rarely further the interests of the general population in good faith.

Given how they treat you, it's best to abandon politeness (which only helps them further belittle your meaninglessness in their decision making) and put a crowd in front of their house, accost them at restaurants, and find other ways of reminding them how accessible and functionally answerable they are to the people they're supposed to serve.


In Pakistan, there was a provincial politician (Zulfiqar Mirza) who’s probably killed more than one person, who has been seen on TV going to police and bureaucrats saying “I’m a villain and you know it”


I'm 99% sure that the vast majority of federal congresspeople (who represent ~1 million people each) never see your emails/letters. You're largely speaking to interns/etc. who work in the office, unless you happen to make a physical appointment and show up in person.

Those interns have a pile of form letters they send for about 99% of (e)mail they get, and if you happen to catch their attention you might get more than the usual tick mark in a spreadsheet (for/against X). Which at best might be as much as a sentence or two in a weekly correspondence summary, which may or may not be read by your representative depending on how seriously they take their job.


If you get the eyes of the intern it can still help. They brief the senator/congressman, work on bills, etc.


The way is not emails that some office assistant deletes when they don't align with the already-chosen path forward. They just need cherry-picked support to leverage to manufacture consent.


Maybe I'm naive, but it isn't clear to me that this is rent-seeking behavior by OpenAI.


Did you watch the hearing? He specifically said that licensing wouldn’t be for the smaller places and didn’t want to impede their progress. The pitfalls of consolidation and regulatory capture also came up.


>>He specifically said that licensing wouldn’t be for the smaller places

This is not a rebuttal to regulatory capture; it is in fact built into the model.

These "small companies" are feeder systems for the large company: a place for companies to rise to the level where they come under the burden of regulation and are prevented from growing larger, thereby making them very easy for the large company to acquire.

The small company has to sell or raise massive amounts of capital to just piss away on compliance costs. Most will just sell.


The genie is out of the bottle. The barriers to entry are too low, and the research can be done in parts of the world that don't give $0.02 what the US Congress thinks about it.


All the more reason to oppose regulation like this, since if it were in place the US would fall behind other countries without such regulation.


> ..is necessary because an AI could hack its way out of a laboratory. Yet, they cannot explain how an AI would accomplish this in practice.

I’m sympathetic to your position in general, but I can’t believe you wrote that with a straight face. “I don’t know how it would do it, therefore we should completely ignore the risk that it could be done.”

I’m no security expert, but I’ve been following the field incidentally and dabbling since writing login-prompt simulators for the Prime terminals at college to harvest user account passwords. When I was a Unix admin I used to have fun figuring out how to hack my own systems. Security is unbelievably hard. An AI eventually jailbreaking is a near certainty we need to prepare for.


It’s less about how it could be hacked and more about why an AI would do that or have the capability to do it without any warning.


That’s the alignment problem. We don’t know what the actual goals of an AI trained neural net are. We know what criteria we trained it against, but it turns out that’s not at all the same thing.

I highly recommend Rob Miles' channel on YouTube. Here’s a good one, but they’re all fascinating. It turns out training an AI to have the actual goals we want it to have is fiendishly difficult.

https://youtu.be/hEUO6pjwFOo


You lost me at "While AI regulation is important" - nope, congress does not need to regulate AI.


I’d argue that sweeping categorical statements like this are at the center of the problem.

People are coalescing into “for” and “against” camps, which makes very little sense given the broad spectrum of technologies and problems summarized in statements like “AI regulation”.

I think it’s a bit like saying “software (should|shouldn't) be regulated”. It’s a position that cannot be defended because the term software is too broad.


They might have lost you. But starting with "congress shouldn't regulate AI" would lose the senator.

Which one do you think is more important to convince?


"important" does not mean "good." if you are in the field of AI, AI regulation is absolutely important, whether good or bad.


If AI is to be a consumer good—which it already is—it needs to be regulated, at the very least to ensure equal quality to a diverse set of customers and other users. Unregulated there is high risk of people being affected by e.g. employers and landlords using AI to discriminate. Or you being sold an AI solution which isn’t as advertised.

If AI will be used by public institutions, especially law enforcement, we need it regulated in the same manner. A bad AI trained on biased data has the potential to be extremely dangerous in the hands of a cop who is already predisposed for racist behavior.


AI is being used as a consumer good, including to discriminate:

https://www.smh.com.au/national/nsw/maximise-profits-facial-...

AI is being used by law enforcement and public institutions. In fact so much that perhaps this is a good link:

https://www.monster.com/jobs/search?q=artificial+intelligenc...

In both cases it's too late to do anything about it. AI is "loose". Oh and I don't know if you noticed, governments have collectively decided law doesn't apply to them, only to their citizens, and only in a negative way. For instance, just about every country has laws on the books guaranteeing timely emergency care at hospitals, with timely defined as within 1 or 2 hours.

Waiting times are 8-10 hours (going up to days) and this is the normal situation now, it's not a New Year's eve or even Friday evening thing anymore. You have the "right" to less waiting time, which can only mean the government (the worst hospitals are public ones) should be forced to fix this, spending whatever it needs to to fix it. And it can be fixed, I mean at this point you'd have to give physicians and nurses a 50% rise and double the number employed and 10x the number in training.

Government is just outright not doing this, and if one thing's guaranteed, this will keep getting worse, a direct violation of your rights in most states, for the next 10 years minimum, but probably longer.


Post hoc consumer protection is actually quite common. Just think how long cars were on the market before they were regulated. Now we have fuel standards, lead bans, seat belts, crash tests, etc. Even today we are still adding consumer protections to things like airline travel and medicine, even though commercial airliners and laboratory-made drugs have been around for almost a century.


If someone doesn't agree with this, regulate what exactly?

Does scikit-learn count or we are just not going to bother defining what we mean by "AI"?

"AI" is whatever congress says it is? That is an absolutely terrible idea.


> nope, congress does not need to regulate AI.

Not regulating the air quality we breathe for decades turned out amazing for millions of Americans. Yes, let's do the same with AI! What could possibly go wrong?


I think this is a great argument in the opposite direction.. atoms matter, information isn’t. A small group of people subjugated many others to poisonous matter. That matter affected their bodies and a causal link could be made.

Even if you really believe that somewhere in the chain of consequences derived from LLMs there could be grave and material damage or other affronts to human dignity, there is almost always a more direct causal link that acts as the thing which makes that damage kinetic and physical. And that’s the proper locus for regulation. Otherwise this is all just a bit reminiscent of banning numbers and research into numbers.

Want to protect people’s employment? Just do that! Enshrine it in law. Want to improve the safety of critical infrastructure and make sure they’re reliable? Again, just do that! Want to prevent mass surveillance? Do that! Want to protect against a lack of oversight in complex systems allowing for subterfuge via bad actors? Well, make regulation about proper standards of oversight and human accountability. AI doesn’t obviate human responsibility, and a lack of responsibility on the part of humans who should’ve been responsible, and who instead cut corners, doesn’t mean that the blame falls on the tool that cut the corners, but rather the corner-cutters themselves.


Your argument could just as easily be applied to human cloning and argue for why human cloning and genetic engineering for specific desirable traits should not be illegal.

And it isn't a strong argument for the same reason that it isn't a good argument when used to argue we should allow human cloning and just focus on regulating the more direct causal links like non-clone employment loss from mass produced hyper-intelligent clones, and ensuring they have legal rights, and having proper oversight and non-clone human accountability.

Maybe those things could all make ethical human cloning viable. But I think the world coming together and being like "holy shit this is happening too fast. Our institutions aren't ready at all nor will they adapt fast enough. Global ban" was the right call.

It is not impossible that a similar call is also appropriate here with AI. I personally dunno what the right call is, but I'm pretty skeptical of any strong claim that it could never be the right call to outright ban some forms of advanced AI research just like we did with some forms of advanced genetic engineering research.

This isn't like banning numbers at all. The blame falling on the corner-cutters doesn't mean the right call is always to just tell the blamed not to cut corners. In some cases the right call is instead taking away their corner-cutting tool.

At least until our institutions can catch up.


I can get your example about eugenics. I get that the worry is that it would become pervasive due to social pressure, making doing it the dominant position. And that this would passively, gradually strip personhood away from those who didn’t receive it. There’s a tongue-in-cheek conversation to be had about how people already choose their mating partners this way, and making it truly, actually, outright illegal might not really reflect the real processes of reality, but that’s a tad too cheeky perhaps.

But even then, that’s a linear diffusion: one person, one body mod. I guess you could say that their descendants would proliferate and multiply so the alteration slowly grows exponentially over the generations.. but the FUD I hear from AI decelerationists is that it would be an explosive diffusion of harms, like, as soon as the day after tomorrow. One architect, up to billions of victims, allegedly. Not that I think it’s unwise to be compelled to precaution with new and mighty technologies, but what is it that some people are so worried about that they’re willing to ban all research, and choke all the good that has come from it, already? Maybe it’s just a symptom of the underlying growing mistrust in the social contract..


I mean, I imagine there are anti-genetic engineering FUD folks that go so far as to then say we should totally ban crispr cas9. I would caution against over-indexing on the take of only some AI decelerationists.

Totally agree we could be witnessing a growing mistrust in the social contract.


> atoms matter, information isn’t

Algorithmic discrimination already exists, so um, yes, information matters.

Add to that the fact that you're posting on a largely American forum where access to healthcare is largely predicated on insurance, just.. imagine AI underwriters. There's no court of appeal for insurance. It matters.


I am literally agreeing with you but in a much more precise way. These are questions of “who gets what stuff”, “who gets which house”, “who gets which heart transplant”, “which human being sits in the big chair at which corporation”, “which file on which server that’s part of the SWIFT network reports that you own how much money”, “which wannabe operator decides their department needs to purchase which fascist predictive policing software”, etc.

Imagine I 1. hooked up a camera feed of a lava lamp to generate some bits and then 2. hooked up the US nuclear first strike network to it. I would be an idiot, but would I be an idiot because of 1. or 2.?

Basically I think it’s totally reasonable to hold these two beliefs: 1. there is no reason to fear the LLM 2. there is every reason to fear the LLM in the hands of those who refuse to think about their actions and the burdens they may impose on others, probably because they will justify the means through some kind of wishy washy appeal to bad probability theory.

The −p log p that you use to judge the sense of some predicted action you take is just a model, it’s just numbers in RAM. Only when those numbers are converted into destructive social decisions does it convert into something of consequence.

I agree that society is beginning to design all kinds of ornate algorithmic beating sticks to use against the people. The blame lies with the ones choosing to read tea leaves and then using the tea leaves to justify application of whatever Kafkaesque policies they design.


> Add to that the fact that you're posting on a largely American forum where access to healthcare is largely predicated on insurance

Why do so many Americans think universal health care means there is no private insurance? In most countries, insurance is compulsory and tightly regulated. Some like the Netherlands and France have public insurance offered by the government. In other places like Germany, your options are all private, but underprivileged people have access to government subsidies for insurance (Americans do too, to be fair). Get sick in one of these places as an American, you will be handed a bill and it will still make your head spin. Most places in Europe work like this. Of course, even in places with nationalized healthcare like the UK, non-residents would still have to pay. What makes Germany and NL and most other European countries different from that system is if you're a resident without an insurance policy, you will also have to pay a hefty fine. You are basically auto-enrolled in an invisible "NHS" insurance system as a UK resident. Of course, most who can afford it in the UK still pay for private insurance. The public stuff blends being not quite good with generally poor availability.

Americans are actually pretty close to Germany with their healthcare. What makes the US system shitty can be boiled down to two main factors:

- Healthcare networks (and state incorporation laws) making insurance basically useless outside of a small collection of doctors and hospitals, and especially your state

- Very little regulation on insurance companies, pharmaceutical companies or healthcare providers in price-setting

The latter is especially bad. My experience with American health insurance has been that I pay more for much less. $300/month premiums and still even seeing a bill is outrageous. AI underwriters won't fix this, yeah, but they aren't going to make it any worse because the problem is in the legislative system.

> There's no court of appeal for insurance.

No, but you can of course always sue your insurance company for breach of contract if they're wrongfully withholding payment. AI doesn't change this, but AI can make this a viable option for small people by acting as a lawyer. Well, in an ideal world anyways. The bar association cartels have been very quick to raise their hackles and hiss at the prospect of AI lawyers. Not that they'll do anything to stop AI from replacing most duties of a paralegal of course. Can't have the average person wielding the power of virtually free, world class legal services.


America could afford universal healthcare, but it would require convincing people to pay much higher taxes.


You ended up providing examples that have no matter or atoms: protecting jobs, or oversight of complex systems.

These are policies which are purely imaginary. Only when they get implemented into human law do they gain a grain of substance, but they remain imaginary. Failure to comply can be kinetic, but that is a contingency, not the object (matter :D).

Personally I find good ideas in having regulations on privacy, intellectual property, filming people in my house’s bathroom, NDAs, etc. These subjects are central to the way society works today. At least Western society would be severely affected if these subjects were suddenly a free-for-all.

I am not convinced we need such regulation for AI at this point of technology readiness, but if social implications create unacceptable imbalances we can start by regulating in detail. If detailed caveats still do not work, then broader law can come. Which leads to my own theory:

All this turbulence about regulation reflects a mismatch between technological, political, and legal knowledge. Tech people don’t know law nor how it flows from policy. Politicians do not know the tech and have not seen its impacts on society. Naturally there is a pressure gradient from both sides that generates turbulence. The pressure gradient is high because the stakes are high: for techs, the killing of a forthcoming new field; for politicians, because they do not want a big majority of their constituency rendered useless.

Final point: if one sees AI as a means of production which can be monopolised by a capital-rich few, we may see a remake of 19th-century inequality. That inequality created one of the most powerful ideologies known: Communism.


Ironically communism would've had a better chance of success if it had AI for the centrally planned economy and social controls. Hardcore materialism will play into automation's hands though.

We're more likely to see a theocratic movement centered on the struggle of human souls vs the soulless simulacra of AI.


> Ironically communism would've had a better chance of success if it had AI for the centrally planned economy and social controls. Hardcore materialism will play into automation's hands though.

Exactly! A friend of mine who is into the communist ideology thinks that whichever society taps AI for productivity efficiency, and even policy, will become the new hegemon. I have no immediate counterpoint besides the technology not being there yet.

I can definitely imagine LLM based on political manifests. A personal conversation with your senator at any time about any subject! That is the basic part though: The politician being augmented by the LLM.

The bad part is a party, driven by an LLM or similar political model, where the human guy you see and elect is just a mouthpiece, like in "The Moon Is a Harsh Mistress". Policy would all be algorithmic, and the LLM would provide the interface between the fundamental processing and the mouthpiece.

These developments will likely lead to the conflicts you mention. I am pretty sure there will be a new -ism.


The worries about AI taking over things are founded and important, even if many sci-fi depictions of it are inaccurate. I’m not sure if this would be the best solution, but please don’t dismiss the issue entirely.


Seriously, I'm very concerned by the view being taken here. AI has the capacity to do a ton of harm very quickly. A couple of examples:

- Scamming via impersonation

- Misinformation

- Usage of AI in a way that could have serious legal ramifications for incorrect responses

- Severe economic displacement

Congress can and should examine these issues. Just because OP works at an AI company doesn't mean that company can't exist in a regulated industry.

I too work in the AI space and welcome thoughtful regulation.


You're never going to be able to regulate what a person's computer can run. We've been through this song and dance with cryptography. Trying to keep it out of the hands of bad actors will be a waste of time, effort, and money.

These resources should be spent lessening the impact rather than trying to completely control it.


> You're never going to be able to regulate what a person's computer can run.

You absolutely can. Maybe you can't effectively enforce that regulation but you can regulate and you can take measures that make violating the regulation impractical or risky for most people. By the way, the "crypto-wars" never ended and are ongoing all around the world (UK, EU, India, US...)


I hate to say this because it would be shocking, but computers as we know them could be taken away from people.

Again, it sounds extreme, but in an extreme situation it could happen; it's not impossible.


I fear the humans engaging in such nefarious activities far more than some blob of code being used by humans engaging in such nefarious activities.

Likewise for activities that aren't nefarious too. Whatever fears that could be placed on blobs of code like "AI", are far more merited being placed on humans.


> Congress can and should examine these issues

great, how does that apply to China or Europe in general? Or a group in Russia or somewhere else? Are you assuming every governing body on the surface of the earth is going to agree on the terms used to regulate AI? I think it's a fool's errand.


> Now, because we had the freedom to innovate, AI will be bringing new, high paying jobs to our factories in our state.

Do we really have to play this game?

If what you’re arguing for is not going to specifically advantage your state over others, and the thing you’re arguing against isn’t going to create an advantage for other states over yours, why make this about ‘your state’ in the first place?

The point of elected representatives is to represent the views of their constituents, not to obtain special advantages for their constituents.


> The point of elected representatives is to represent the views of their constituents, not to obtain special advantages for their constituents.

That is painfully naive, a history of pork projects speaks otherwise.


To the best of my knowledge this doesn't happen so much in more functional democracies. It seems to be more of an anglophone thing.


Corruption is a kind of decay that afflicts institutions. Explicit rules, transparency, checks and balances, and consequences for violating the rules are the only things that can prevent, or diagnose, or treat corruption. Where you find corruption is where one or more of these things is lacking. It has absolutely nothing to do with the -cracy or -ism attached to a society, institution, or group.


> Corruption is a kind of decay that afflicts institutions.

It can be, but it is often the project of a substantial subset of the people creating institutions, so it's misleading and romanticizing the past to view it as “decay”.


I am in no way suggesting that corruption is a new thing. It is an erosive force that has always operated throughout history. The amount of corruption in an institution tends to increase unless specifically rooted out. It goes up and down over time as institutions rise and fall or fade into obsolescence.


This is a product of incentives encouraged by the system (i.e. a federal republic), it has nothing to do with languages.


Seems like it’s under-studied (due to anglophone bias in the English language political science world probably) - but comparative political science is a discipline, and this paper suggests it’s a matter of single-member districts rather than the nature of the constitutional arrangement: https://journals.sagepub.com/doi/10.1177/0010414090022004004

(I would just emphasize, before anyone complains, that the Federal Republic of Germany is very much a federal republic.)


It has much to do with culture though - which is transmitted via language.


I think it's more like culture carries language with it. Along with other things, but language is one of the more recognizable ones.


> The point of elected representatives is to represent the views of their constituents, not to obtain special advantages for their constituents.

The views of their constituents are probably in favor of special advantages for their constituents, so the one may imply the other.

I mean, some elected representatives may represent constituencies consisting primarily of altruistic angels, but that is…not the norm.


What I was thinking in my head (although I don't think I articulated this well) is that I hope that smaller businesses who build their own AIs will be able to create some jobs, even if AI as a whole will negatively impact employment (and I think that's going to happen even if just big businesses can play at the AI game).


> The point of elected representatives is to represent the views of their constituents, not to obtain special advantages for their constituents.

A lot of said constituents' views are, in practice, that they should receive special advantages.


Not to ignore that the development of AI will also wipe out jobs in the state.


I made something just for writing your congress person / senator, using generative AI ironically: https://vocalvoters.com/


AI generated persuasion is pretty much what they're upset about


Cool product! Your pay button appears to be disabled though.


Should enable once you add valid info — if not, let me know


So you sent a letter saying “Mr. Congress, save my job that is putting others’ jobs at risk.”

You think voice actors and writers are not saying the same?

When do we accept capitalism as we know it is just a bullshit hallucination we grew up with? It’s no more an immutable feature of reality than a religion?

I don’t owe propping up some rich person’s figurative identity, or yours for that matter.


What specific ideas has Altman proposed that you disagree with? And where has he said AI could hack its way out of a laboratory?

I agree with being skeptical of proposals from those with vested interests, but are you just arguing against what you imagine Altman will say, or did I miss some important news?


> regulation should focus on ensuring the safety of AIs once they are ready to be put into widespread use

What would you say to a simple registration requirement? You give a point of contact and a description of training data, model, and perhaps intended use (could be binary: civilian or dual use). One page, publicly visible.

This gives groundwork for future rulemaking and oversight if necessary.


Personally I think a simple registration requirement would be a good idea, if it were truly simple and accessible to independent researchers.


[flagged]


Yes, guns don't kill people, people kill people.

We know, we have watched this argument unfold in the United States over the last 100 years. It sure does seem like a few people are using guns to kill a lot of people.

The point of regulating AI should be to explicitly require every single use of AI and machine learning to be clearly labelled, so that when people seek remedy for the injustices that are already being perpetrated, it is clearly understood that the people who chose to use not just ML or AI technology, but those specific models and training criteria, can be held accountable if they should be.

Regulation doesn't have to be a ban, or limits on how it can be used, it can simply be a requirement for clearly marked disclosure. It could also include clear regulations for lawful access to the underlying math, training data, and intended use cases, and with financially significant penalties for non-compliance to discourage companies from treating it as a cost of doing business.


What are some examples of injustices that are already being perpetrated?


I mean, it's in the headlines regularly, but sure, I'll google it for you.

https://www.scientificamerican.com/article/racial-bias-found...

https://www.propublica.org/article/machine-bias-risk-assessm...

https://www.theguardian.com/technology/2018/oct/10/amazon-hi...

Three easy to find examples. There is no shortage of discussion of these issues, and they are not new. Bias in new technologies has been a long-standing issue, and garbage in, garbage out has been a well understood problem for generations.

LLMs are pretty cool, and will enable a whole new set of tools. AI presents a lot of opportunities, but the risks are significant, and I am not overly worried about a Skynet or gray-goo scenario. Before we worry about those, we need to worry about the bias being built into automated systems that will decide who gets bail, who gets social benefits, which communities get resources allocated, how our family, friends, and communities are targeted by businesses, etc.


Yet


This is the message I shared with my senator

If you sent it by e-mail or web contact form, chances are you wasted your time.

If you really want attention, you'll send it as a real letter. People who take the time to actually send real mail are taken more seriously.


Sending this to my senator would just notify her of what company she should reach out to for a campaign contribution.


I'm not American even, so I cannot, but what a good idea! I hope the various senators hear this message.


Can you please share what ChatGPT prompt you used to generate this letter template?


I used this old-fashioned method of text generation called "writing" - crazy, I know


> Altman and his ilk

IANA senator, but if I were one, you lost me there. The personal insults make it seem petty and completely overshadow the otherwise professional-sounding message.


I don't mean it as a personal insult at all! The word ilk actually means "a type of people or things similar to those already referred to," it is not an insult or rude word.


It’s always used derogatorily. I agree that you should change it if you don’t mean for it to come across that way.


That's simply untrue. Here are several recently published articles which use ilk in a neutral or positive context:

https://www.telecomtv.com/content/digital-platforms-services...

https://writingillini.com/2023/05/16/illinois-basketball-ill...

https://www.jpost.com/j-spot/article-742911


It is technically true that ilk is not always used derogatorily. But it is almost always derogatory in modern connotation.

https://grammarist.com/words/ilk/#:~:text=It's%20neutral.,a%....

Also, note that all of the negative examples are politics related. If a politician reads the word 'ilk', it is going to be interpreted negatively. It might be the case that ilk does "always mean" a negative connotation in politics.

You could change 'ilk' to 'friends', and keep the same meaning with very little negative connotation. There is still a slight negative connotation here, in the political arena, but it's a very vague shade, and I like it here.

"Altman and his ilk try to claim that..." is a negative phrase because "ilk" is negative, but also because "try to claim" is invalidating and dismissive. So this has elements or notes of an emotional attack, rather than a purely rational argument. If someone is already leaning towards Altman's side, then this will feel like an attack and like you are the enemy.

"Altman claims that..." removes all connotation and sticks to just the facts.


Well even if ilk had a negative connotation for my intended audience (which clearly it does to some people), I am actually trying to invalidate and dismiss Altman's arguments.


When someone is arguing from a position of strength, they don't need to resort to petty jibes.

You are already arguing from a position of strength.

When you add petty jibes, it weakens your perceived position, because it suggests that you think you need them, rather than relying solely on your argument.

(As a corollary, you should never use petty jibes. When you feel like you need to, shore up your argument instead.)


Well I didn't intend it as a "petty jibe," but in general I disagree. Evocative language and solid arguments can and do coexist.


Doesn’t matter. It won’t be well received. It sounds negative to most readers, and being technically correct earns you no points.


Well I don't think it really matters what most readers think of it because I was writing it hoping that it would be read by congressional staffers, who I think will know what ilk means.


It's also possible you could be wrong about something, and maybe people are trying to help you.


Remember: you are doing propaganda. Feelings don't care about your facts.


Not true.


I'd argue that you're right that there's nothing intrinsically disparaging about ilk as a word, but in contemporary usage it does seem to have become quite negative. I know the dictionary doesn't say it, but in my discussions it seems to have shifted towards the negative.

Consider this: "Firefighters and their ilk." It's not a word that nicely describes a group, even though that's what it's supposed to do. I think the language has moved to where we just say "firefighters" when it's positive, and "ilk" or "et al" when there's a negative connotation.

Just my experience.


I mean, at this point I'm going to argue that if you believe ilk is only ever used derogatorily, you're only reading and hearing people who have axes to grind.

I probably live quite distally to you and am probably exposed to parts of western culture you probably aren't, and I almost never hear nor read ilk as a derogation or used to associate in a derogatory manner.


TIL! https://www.merriam-webster.com/dictionary/ilk

still, there are probably a lot of people like me who have heard it used (incorrectly it seems) as an insult so many times that it's an automatic response :-(


Don't worry, you're not a senator.

And, if there's one thing politicians are known for, it's got to be ad hominem.


Tone, intended or otherwise, is a pretty important part of communication!


There is this idea that the shape of a word, how it makes your mouth and face move when you say it, connotes meaning on its own. This is called "phonosemantics". Just saying "ilk" makes one feel like they are flinging off some sticky aggressive slime.

Ilk almost always has a negative connotation regardless of what the dictionary says.


> Kind; class; sort; type; -- sometimes used to indicate disapproval when applied to people.

> The American Heritage® Dictionary of the English Language, 5th Edition.

Yes, only sometimes used to indicate disapproval, but such ambiguity does not work in your favor here. It is better to remove that ambiguity.


Don't fret too much. I once wrote to my senator about their desire to implement an unconstitutional wealth tax and told them that if they wanted to fuck someone so badly they should kill themself so they could go blow Jesus, and I still got a response back.


Ilk is shorthand for similarity, nothing more. The 'personal insult' is a misunderstanding on your part.


"Ilk" definitely has a negative or dismissive connotation, at least in the US. You would never use it to express positive thoughts; you would use "stature" or similar.

The denotation may not be negative, but if you use ilk in what you see as a neutral way, people will get a different message than you're trying to send.


“ilk” has acquired a negative connotation in its modern usage.

See also https://grammarist.com/words/ilk/#:~:text=It's%20neutral.,a%....


This is too subjective to be useful.


I would be curious to see an example of 'ilk' being used in a modern, non-Scottish-locale context where the association is being shown in a neutral or positive light.

I'll give you one: National Public Lands Day: Let’s Help Elk … and Their Ilk - https://pressroom.toyota.com/npld-2016-elk/ (it's a play on words)


language is subjective


Reductio ad nounium is a poor argument.


[flagged]


You can “create a lot of jobs” by banning the wheel on construction sites. Or power tools. Or electricity.


I mean, it will? Not universally, but for the specific products I work on we use additional labor when we include AI-enabled features (due to installing and wiring processors and sensors).

I think that the sorts of AI that smaller companies make will be more likely to create jobs than to get rid of them, since they are more likely to be integrated with physical products.


>I mean, it will?

It's complicated obviously, but I think "will create jobs" just leaves a lot of subtlety out of it, so I've never believed it when representatives say it and I wouldn't say it myself writing to them, but a small letter to a representative always will lack that fidelity.

I don't think anyone can guarantee that there won't be job loss with AI, so it's possible we could have a net negative (in total jobs, quality of jobs, or any dimension).

What we do see is companies shedding jobs on a (what seems like perpetual) edge of a recession/depression, so it might be worth regulating in the short term.


I agree I didn't make this clear enough in my letter. I do think AI will cause job loss, I just think it will be worse if a few companies are allowed to have a monopoly on AI. If anyone can work on AI, hopefully people can make things for themselves/create AI in a way that retains some jobs.


Quite. Meanwhile on the rest of the internet people are gleefully promoting AI as a means of firing every single person who writes words for a living. Or draws art.


GPT is so far from threatening “every single person who writes words for a living” anyway. Unless you’re writing generic SEO filler content. Not sure who is claiming that but they don’t understand how it works if they do exist at scale.

Writing has always been a low paid shrinking job well before AI. Besides a tiny group of writers at the big firms. I took a journalism course for fun at UofT in my spare time and the professors had nothing but horror stories for trying to make a job out of writing (ie getting a NYT bestseller and getting $1 cheques in the mail). They basically told the students to only do it as a hobby unless you have some engaged niche audience. Which is more about the writer being interesting rather than the generic writing process.


You say that... then we encounter cases like https://news.ycombinator.com/item?id=35919753

While AI isn't going to put anyone out of a job immediately (like automation didn't), there are legitimate risks already in that regard -- in both fiction and nonfiction sites, folk are experimenting with having AI basically write stories/pieces for them -- and the results are often good enough to have potentially put someone out of the job of writing a piece in the first place.


Pretty ignorant, pathetic, and asinine comment to make -- no wonder you're making it on a throwaway, coward ;)


Let's not focus on "the business" and instead focus on the safety.

Altman can have an ulterior motive, but that doesn't mean we shouldn't strive for having some sort of handle on AI safety.

It could be that Altman and OpenAI know exactly how this will look and the backlash that will ensue, such that we get ZERO oversight and we rush headlong into doom.

Short term we need to focus on the structural unemployment that is about to hit us. As the AI labs use AI to make better AI, it will eat all the jobs until we have a relative handful of AI whisperers.

