2005 piracy had little to do with making art accessible. For the most part it seemed more like getting for free the digital things we couldn't pay for and/or felt entitled to, with many justifications layered on top.
And in 2004 you had a tape deck with two bays meant for copying, and none of your tapes or CDs were real. You’d make copies from other people, or even from the radio or TV. People forget how piracy was actually the norm before the digital age attempted to crack down on it. Even just passing a book you enjoyed to a friend to read, can you even do that with ebook DRM?
> How about you put it up to a national vote and see what democracy gets you? I highly suspect that vast majorities of the electorate would want to nationalize this tech to benefit everyone rather than benefiting the few.
You're probably right -- except for the billions in massive PR campaigns that will be spent to successfully convince enough of them that it's in their best interest to let the companies keep ownership.
This is in addition to the billions in PR already being spent to make AI palatable in spite of the societal and economic costs.
Their billions in PR aren't stopping people from rejecting data centers being built in their communities.
What you have to understand about advocacy is that it's the worst form of politics and it only goes so far. Paid canvassers aren't convincing compared to actual humans organizing with one another.
I think it's often genuine excitement to share a thing - without quite processing that anybody with the same idea can now build it (for simple- to mid-complexity projects).
The novelty of "new thing! That would have been incredibly hard a decade ago!" hasn't worn off yet.
This isn't the first time something like this has happened.
I would imagine that people had similar thoughts about the first photographs, when previously the only way to capture an image of something was via painting or woodcutting.
When movies first came out they would film random stuff because it was cool to see a train moving directly at you. The novelty didn't wear off for years.
There was something someone said in a comment here, years and years ago (pre AI), which has stuck with me.
Paraphrased, "There's basically no business in the Western world that wouldn't come out ahead with a competent software engineer working for $15 an hour".
Once agents, or now claws I guess, get another year of development under them they will be everywhere. People will have the novelty of "make me a website. Make it look like this. Make it so the customer gets notifications based on X Y and Z. Use my security cam footage to track the customer's object to give them status updates." And so on.
AI may or may not push the frontier of knowledge, TBD, but what it will absolutely do is pull up the baseline floor for everybody to a higher level of technical implementation.
And the explosion in software produced with AI by lay-people will mean that those with offensive security skills, who can crack and exploit software systems, will have incredible power over others.
I think that when a software system is used by more people and has more eyes on it, it's more likely to have its security flaws be found and fixed. Then all the users will benefit from the fix.
The more that software is fragmented into bespoke applications used by small numbers of people, the less people benefit from security network effects.
I believe the security vulnerability issues will be addressed by companies using cloud-based vibe-code platforms, or by an AI security auditor agent that runs through the codebase and flags security issues.
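To make the "auditor agent" idea concrete: here's a toy sketch of what an automated audit pass over a codebase might look like, using a few regex heuristics. The rule names and patterns are my own illustrative assumptions; a real AI auditor would reason about code semantics far beyond pattern matching.

```python
import re

# Hypothetical audit rules: each maps a finding name to a heuristic pattern.
# These are illustrative, not any real product's checks.
RULES = {
    "hardcoded-secret": re.compile(r"(?i)(password|api_key)\s*=\s*['\"][^'\"]+['\"]"),
    "shell-injection": re.compile(r"os\.system\(|subprocess\..*shell=True"),
}

def audit(source: str) -> list[tuple[int, str]]:
    """Return (line_number, rule_name) pairs for every flagged line."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in RULES.items():
            if pattern.search(line):
                findings.append((lineno, name))
    return findings

snippet = '''
password = "hunter2"
subprocess.run(cmd, shell=True)
print("hello")
'''
print(audit(snippet))
```

An agent-based auditor would presumably run something like this shape of scan-and-flag loop, but with an LLM judging each site in context instead of fixed regexes.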
Sure it is. AI software development is here. It's not good enough for everything, but it's good enough for a majority of the changes made by most software engineers.
That's now. Right now, the tooling exists so that for >80% of software devs, 80% of the code they produce could be created by AI rather than by hand.
You can always find some person saying that it'll destroy all jobs in a year, or make us all rich in a year, or whatever, but your cynicism blinds you to the actual advances being made. There is an endless supply of new goalpost positions, they will never all be met, and an endless supply of charlatans claiming unrealistic futures. Don't confuse that with "and therefore results do not exist".
No, it isn't. There is a gigantic chasm of difference between "80% of code they produce could be created by AI" and "80% of commits they produce could be created by AI".
Mixing the two up is how we get a massive company like Microsoft to continually produce such atrocious software updates that destroy hardware or cause BSODs for their flagship Operating System.
That's not replacing software development. That's dysfunction masquerading as capability.
And none of what I said is goalpost moving. They are the goalposts constantly made by the AI industry and their hype-men. The very premise of replacing a significant amount of human labor underlies the exorbitant valuation AI has been given in the market.
It appears that your understanding of AI code generation reflects the state of 1-2 years ago. In that case, of course what people are describing as current reality feels 1-2 years away.
> There is a gigantic chasm of difference between "80% of code they produce could be created by AI" and "80% of commits they produce could be created by AI".
This is exactly the goalpost moving I am talking about. I said 80% of code could be AI-written, you agreed, and followed up with "oh but it doesn't matter because now we're measuring by % of commits".
> That's now. Right now, the tooling exists so that for >80% of software devs, 80% of the code they produce could be created by AI rather than by hand.
Technically 100% of the code they could produce could be created by a ton of very specific AI prompts. At that level of control it would be slower than typing the code out though.
Just throwing out random numbers like this is complete nonsense since there's about a million factors which determine the effectiveness of an LLM at generating code for a specific use case. And it also depends on what you consider producing by hand versus LLM output. Etc.
Today I fed to Opus 4.6 five screenshots with annotations from the client and told it to implement the changes. Then told it to generate real specs, which it did. I never even looked at the screenshots, I just checked and tested against the generated specs. Client was happy.
I have a similar feeling to people who upload their AI art to sites like danbooru. Like I guess I can understand making it for yourself but why do you think others want to see it
xkcd turned stick figure drawings into an art form. sometimes it is not about how something was created, but about the story being told.
some people build apps to solve a problem. why should they not share how they solved that problem?
i have written a blog post about a one line command that solves an interesting problem for me. for any experienced sysadmin that's just like a finger painting.
do we really need to argue if i should have written that post or not?
No, it is a famously coherent concept over millennia.
Quis custodiet ipsos custodes?
"Who will guard the guards themselves?" or "Who will watch the watchmen?"
>>A Latin phrase found in the Satires (Satire VI, lines 347–348), a work of the 1st–2nd century Roman poet Juvenal. It may be translated as "Who will guard the guards themselves?" or "Who will watch the watchmen?". ... The phrase, as it is normally quoted in Latin, comes from the Satires of Juvenal, the 1st–2nd century Roman satirist. ...In its modern usage the phrase has wide-reaching applications to concepts such as tyrannical governments, uncontrollably oppressive dictatorships, and police or judicial corruption and overreach... [0]
The point is a government that is not overseen by the people devolves into tyranny.
So yes, the point is to regulate the regulators and oversee the oversight committee.
Anthropic was happy to have its AI used for military purposes, with two exceptions: 1) no automated killing, there had to be a human in the "kill chain" of command, and 2) no use for mass surveillance. This govt "Dept of War" is demanding Anthropic drop those two safety requirements, or it threatens to make Anthropic a pariah. These demands by the govt are both immoral and insane. The "regulator and overseer" needs to be regulated and overseen.
Alas, historically speaking, most governments have been tyrannies. In recent decades, some of them have been less so, or slightly more representative or transparent. I think in Switzerland they go to referendums often. Beyond that, once you vote for a party due to an issue you deeply care about, they get to do whatever they want day to day, without citizens having a regular recourse to stop them. Yes, people can go to the streets and fight the police that defends the government. But there's no constitutional mechanism like "citizens can push this button to override the senate and/or veto what the president wants" or "all security forces are subordinated first and foremost to citizen consensus in the area where they operate".
But can you see the difference if you only include "you are a senior engineer"? It seems like the comparison you're making is between "write the tests" and "write the tests following these patterns using these examples. Also btw you’re an expert. "
> No, he called some Mexican migrants into the US rapists.
It was more than that.
In his own words, 'some' of those migrants[1] are good people (/maybe/ - he's apparently never met one), but everyone else...
"They're not sending their best. They're sending people with lots of problems. And they're bringing those problems with us[sic]. They're bringing drugs, they're bringing crime, they're rapists, and some - I assume - are good people."
I would personally expect review to evaluate for correctness. Such a review would have stopped this from being published. This diagram as published is literal nonsense.
I think it's funny that we're moving in the direction of providing extremely fine-grained permissions models to serve AI and prevent it from accessing things it should not - but that's a level of control we will never have (or even expect to have) over third parties that use our sensitive data.
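For the sake of illustration, a fine-grained permission model of the kind being built for AI agents often amounts to per-tool, per-resource allow/deny rules. Here's a minimal sketch; the tool names, policy shape, and default-deny behavior are my assumptions, not any vendor's actual API.

```python
from dataclasses import dataclass

# Hypothetical permission rule for an AI agent's tool calls:
# a tool name, a resource-path prefix it covers, and an allow/deny flag.
@dataclass(frozen=True)
class Rule:
    tool: str          # e.g. "read_file", "run_shell" (illustrative names)
    path_prefix: str   # only resources under this prefix are covered
    allow: bool

# First matching rule wins, so the more specific deny comes first.
POLICY = [
    Rule("read_file", "/workspace/.env", False),  # never read secrets
    Rule("read_file", "/workspace/", True),       # may read project files
    Rule("run_shell", "", False),                 # no shell access at all
]

def is_allowed(tool: str, resource: str) -> bool:
    """First matching rule wins; anything unmatched is denied by default."""
    for rule in POLICY:
        if rule.tool == tool and resource.startswith(rule.path_prefix):
            return rule.allow
    return False

print(is_allowed("read_file", "/workspace/src/app.py"))  # allowed
print(is_allowed("read_file", "/workspace/.env"))        # denied
print(is_allowed("run_shell", "ls"))                     # denied
```

Nothing in this sketch is AI-specific, which is rather the commenter's point: we could have demanded this level of control over third parties handling our data all along.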