I would not really classify them as "bad" actors, but there are definitely real research efforts in this direction. This Freakonomics podcast episode (https://freakonomics.com/podcast/how-to-poison-an-a-i-machin...) is a pretty good interview with Ben Zhao at the University of Chicago, who runs a lab that is trying to figure out how to trip up model training when copyrighted material is used.
That's a policy decision by the Chinese government. They still have the authority, but the Streisand effect makes blunt censorship counterproductive in an open society. For example, TikTok took down the viral "Uighur makeup tutorial" but quickly reinstated it after the backlash. That backlash couldn't occur in China, but it can in the USA for as long as uncensored outlets exist.
Subtler manipulation still works great, and the opacity of algorithmic content recommendation makes that an ideal instrument. Nobody outside ByteDance knows to what extent the CCP is putting its thumb on that scale already, but they certainly have the power to.
A different account operated by the same user was banned for something relating to an image of bin Laden in a different video. I've been unable to locate that video. I haven't found any reference stating that she praised him. She described her use of that image as satirical, and TikTok itself seems to recognize that (but stands by that ban):
> While we recognize that this video may have been intended as satire, our policies on this front are currently strict.
In any case, the video in question is the Uighur one. TikTok quickly stated that one was a "human moderation error" and reversed it. My point stands irrespective of whether their rules were morally correct or correctly applied, though: whatever those merits, they clearly drew more attention to the topic by censoring here, not less. So it's not surprising they don't apply blunt Chinese-style censorship outside China, since it's counterproductive without Chinese-style control of all major media.
What does replacing programmers have to do with typing speed? You mean that the stuff Copilot does was never the limiting factor for productivity? I agree, but Copilot is not replacing programmers; replacing programmers takes more than typing, and proponents know that. It might get there.
I still don't understand how it would hurt if you can focus on the parts that do influence your productivity and have the rest appear automatically. But that's just me.
> This is the fundamental truth that those pushing AI as a replacement for programmers miss (intentionally or not).
I think this kind of comment shows a good dose of ignorance about the role of AI as a replacement for programmers.
It's not like PMs are suddenly seeing engineers vanish from software projects. It's that AI makes developers so much more productive that you only need a subset of them to meet your work requirements.
To give you an example, AI tools can indeed help you write whole modules. Yes, the code can be buggy. Yet the "typing" part is not where developers benefit from AI. Developers can iterate far faster on designs and implementations by prompting LLMs to generate new components from new requirements, which saves them the job of refactoring the code themselves. LLMs can instantly review changes and suggest ways to improve them, which would otherwise require either reading up on the topic or asking a fellow engineer on payroll to spend their time doing the same job. LLMs can explain entire codebases to you without your having to ask veteran engineers on the team a single question. LLMs can write all your unit tests and rewrite them again and again as you see fit. LLMs can even recommend best practices, explain the tradeoffs of different approaches, and suggest suitable names for methods and variables based on specific criteria.
This means AI can do a multitude of jobs that previously required a whole team, so you no longer need a whole team to get the work done.
> LLMs can instantly review changes and suggest ways to improve them, which would otherwise require either reading up on the topic or asking a fellow engineer on payroll to spend their time doing the same job.
If we train ourselves out of being able to do these tasks, won't we find it harder to recognise when the AI makes mistakes?
> If we train ourselves out of being able to do these tasks, won't we find it harder to recognise when the AI makes mistakes?
We are not skipping these tasks. We are using tools to help us avoid doing drudge work that can be automated away.
Code linters eliminate the need to manually prettify code. Do developers find it harder to recognize indentation inconsistencies? Syntax highlighters simplify picking out code constructs. Do developers find it harder to read code? Template engines simplify generating new source files full of programming constructs. Do developers find it harder to write those files by hand? Heck, autocomplete helps developers write whole code blocks faster. Do developers find it harder to write a function?
I think the first couple of those are not like the last one. I think it's highly likely that developers who rely on AI to write code for them find it harder to write a function themselves, yes.
> It's that AI makes developers so much more productive that you only need a subset of them to meet your work requirements.
My experience so far is that it takes away the upfront thinking and leads to a mess of code that I then have to sit down with them and work through.
> So, your argument is that AI does not replace programmers, it just... replaces programmers?
I pointed out the fact that AI does not replace programmers. You still need people between keyboards and chairs delivering code and maintaining systems.
What AI does is make developers far more efficient at their job.
If you have employees who do their work in less time, they do not get more free time; they get more work. And the moment the workforce is more productive, employers start to need fewer employees to deliver the same volume of work.
It wasn't, and if it had been, I would wager it would have been written more elegantly and coherently than I managed. That goes to show how much I use Claude/Copilot/ChatGPT at my job.
>One idea is merely a form of control flow (like 'if' or 'goto') that could jump down the call stack.
Interestingly, this is more or less how Symbian handled exceptions back in the day. It was a giant mess. You basically called setjmp in a surrounding function (via a macro, IIRC) and then longjmp (in the form of User::Leave()), which unwound the stack without calling any destructors.
It was very fast and the source of endless memory leaks.
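Roughly, the pattern looked like this; a minimal sketch in plain standard C++, not the actual Symbian macros (the names here are illustrative):

    #include <csetjmp>
    #include <cstdio>

    static std::jmp_buf trapPoint;    // what the surrounding trap macro would set up

    void DoSomethingL() {
        char* buf = new char[64];     // raw allocation, no RAII wrapper
        std::longjmp(trapPoint, 1);   // the "leave": jumps straight back to the trap,
                                      // so the delete[] below never runs -> leak
        delete[] buf;
    }

    int main() {
        if (setjmp(trapPoint) == 0) {
            DoSomethingL();
        } else {
            std::puts("leave trapped; the 64 bytes above were never freed");
        }
    }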
Not only were they fast but the codegen was very small. It had exception support before there was any real standardisation on how they should work.
It was also before RAII was a thing and so you had to manage the ‘CleanupStack’ [0] yourself.
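For anyone who never used it: the idea was that you registered allocations on a global stack before calling anything that might leave, and the trap harness destroyed whatever was still registered when a leave happened. A toy stand-in in plain C++, not the real CleanupStack API; the calls below are only rough analogues of things like CleanupStack::PushL() and PopAndDestroy():

    #include <csetjmp>
    #include <cstdio>
    #include <vector>

    // Toy stand-in: a global stack of pointers that the trap harness
    // frees if a leave happens before they are popped.
    static std::vector<char*> cleanupStack;
    static std::jmp_buf trapPoint;

    void Leave() { std::longjmp(trapPoint, 1); }   // stands in for User::Leave()

    void MightLeaveL() {
        char* buf = new char[64];
        cleanupStack.push_back(buf);   // roughly CleanupStack::PushL(buf)
        Leave();                       // error path: the lines below never run
        cleanupStack.pop_back();       // roughly CleanupStack::Pop()
        delete[] buf;
    }

    int main() {
        if (setjmp(trapPoint) == 0) {
            MightLeaveL();
        } else {
            // the trap frees whatever was still registered when the leave happened
            for (char* p : cleanupStack) delete[] p;
            cleanupStack.clear();
            std::puts("leave trapped; registered allocations cleaned up");
        }
    }

Forget to push something before a leave, or push and then free it yourself anyway, and you were back to leaks or double frees.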
At least for the development of the OS itself, as opposed to frameworks such as the awful Series60 UI, we had unit tests that brute-forced memory correctness by getting the memory allocator to deliberately fail at the first allocation, then the second, and so on, until the code under test either succeeded or panicked.
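The same trick is easy to reproduce outside Symbian; here is a sketch with made-up names, using a replaced global operator new in place of the Symbian heap-failure macros:

    #include <cstdio>
    #include <cstdlib>
    #include <memory>
    #include <new>

    static int g_failAt = -1;     // which allocation (1-based) should fail
    static int g_allocCount = 0;  // allocations seen so far in the current run

    void* operator new(std::size_t size) {
        if (++g_allocCount == g_failAt) throw std::bad_alloc();
        if (void* p = std::malloc(size)) return p;
        throw std::bad_alloc();
    }
    void operator delete(void* p) noexcept { std::free(p); }

    // Code under test: must not leak no matter which allocation fails.
    void CodeUnderTest() {
        std::unique_ptr<int> a(new int(1));
        std::unique_ptr<int> b(new int(2));  // if this throws, a is still freed
    }

    int main() {
        // Fail allocation 1, then 2, ... until a run gets through untouched.
        for (int n = 1; ; ++n) {
            g_failAt = n;
            g_allocCount = 0;
            try {
                CodeUnderTest();
            } catch (const std::bad_alloc&) {
                std::printf("allocation %d failed: trapped the OOM\n", n);
                continue;
            }
            std::printf("no failure triggered after %d allocations; done\n", n - 1);
            break;
        }
    }

A real harness would also inspect the heap after each trapped failure and flag anything that leaked.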
In addition, I would need to see some justification for the idea that water stored in a clean tank without access to light is somehow worse than water fresh out of a filter.
> To be eligible for membership in the Benevolent and Protective Order of Elks, you must be a citizen of the United States over the age of 21 who believes in God (whatever that means to you).
Here's a helpful atheistic definition of "God" I picked up somewhere: the force that enables good things to happen to groups of people who act upon a belief that they should do good to one another.
I.e. if you say "thank God", you're really saying you're thankful that you and the people around you are making decisions that benefit everyone.
(Not an invitation for theistic discussion, just trying to be helpful to a fellow atheist)
>> If my interlocutor isn't prepared to make any distinctions between a human being and a machine designed to mimic one, I think that I can't meaningfully discuss this with them. I hate that this is the case; it must be seen as a personal failing. I can't do it, though. These models aren't people. They don't know anything. They don't want anything. They don't need anything. I won't privilege it over a person. And I certainly won't privilege its master.
Perhaps if you had read Gabe's post, you could have saved yourself the trouble of making this comment.
Seems lazy to differentiate between types of intelligence without providing any substance as to why you believe one type is more worthy of learning than another, while criticizing others for the same laziness.
>Seems lazy to differentiate between types of intelligence without providing any substance as to why you believe one type is more worthy of learning than another, while criticizing others for the same laziness.
Seems lazy to call a genAI model "intelligence" without providing any substance as to why one should believe that, but OK.
The model isn't an "intelligence" because it's not making a choice about which data to train on and be "inspired" by, as people here say.
The humans that operate those models do. It's those humans that are exploiting the artists.
The AI, as many have said here, is just a tool.
Funny how it's "just a tool" when a human uses it to create art, but it's "intelligence" that gets "inspired" by art when a human uses it to create a software product.
> The model isn't an "intelligence" because it's not making a choice […]
There’s no evidence any of us are making choices either. I don’t think LLMs are intelligences in the way most people mean the word (I think that would be vastly overestimating their abilities) but appealing to choice when it’s so poorly understood — as poorly understood as intelligence, even — is not useful.
My point is that there’s no scientific way to demonstrate that one has made a choice vs that outcome being determined (perhaps probabilistically) or being random. We can’t “go back” and do it a different way to demonstrate we’ve actually chosen.
We don’t understand what it means to be intelligent in the way people generally mean, and we don’t understand (scientifically) what it means to choose. So we can’t use choice to usefully define intelligence.
>My point is that there’s no scientific way to demonstrate that one has made a choice
"I'm sorry, your honor, there is no scientific way to demonstrate that I have made a choice to shoot the victim dead. Therefore, I am not guilty!"
The way that an AI gets a data set to be trained on is via a choice made by humans. You can't reason your way out of the fact that a choice is made, that it's not made by the AI, and that it has consequences.
One of the best programming lessons I've learned came from an attempt to learn something about design. Typography (which deals quite a lot with the presentation of information hierarchies) is an essential skill in making code actually readable. It gives you a framework for thinking about how to draw a reader's attention to the places you want and how to signal that something is important, which you can do even with tabs and plain text.