Does everyone just use AI to write these days? Or is the style so infectious that I just see it everywhere? I swear there needs to be some convention around labeling a post with how much AI was used in its creation.
I'd be embarrassed to put my name on AI prose without a disclaimer, and I'd also be annoyed to read it as a reader.
IMO it's insulting to the audience: it says your time and attention aren't worth the author's own time and attention spent putting their thoughts in their own words.
If you're going to do that, at least mention it's LLM output, or just give me your outline prompts. I don't care what your LLM has to say; I'm perfectly capable of running your outline through my own model if I feel like it.
> If you're going to do that at least mention it's LLM output
Yes, this! Please label AI-generated content. Pull request written by an AI? Label it as AI-generated. Blog post? Article generated with AI? Say so! It’s OK to use AI models, especially if English is your second language. But put a disclaimer in. Don’t make the reader guess.
Eg:
> This content was partially generated by ChatGPT
Or
> Blog post text written entirely by human hand, code examples by Claude Code
I'm not a fan of AI and try to avoid it, but there is a difference between AI output published by someone knowledgeable and AI output you run yourself. If an expert looked at the result and found it to be OK, then you have some assurance that it at least makes sense. Your own AI run means nothing: it could be 100% hallucination, and a non-expert will buy it as truth.
LLMs were trained on stuff that people wrote. I get that there are "tells", but I don't really think people are as good at identifying AI-generated text as they think they are...
I wouldn't have picked this article as AI until I got an agent to do some writing for me and read a bunch of its output to figure out whether I could stand behind it. Now I see the tells everywhere; "It's not this. It's that." is particularly common, and I can't unsee it. (FWIW, I rewrote most of the writing it generated, but it did help me figure out my structure and narrative.)
The problem, I think, with AI-generated posts is that you feel you can't trust the content once you know it's AI. It could be partly hallucinated or misrepresented.
Yeah, but "it's not X. It's Y" is a common idiom that LLMs picked up from people. That's the point I was making. And it's starting to feel like every post gets at least one comment claiming it was AI generated.
Good chunks of the article don't trigger this for me, but I would bet money on the final paragraph involving AI:
> That's not a technical argument. It's a values argument. And it's one that the filesystem, for all its age and simplicity, is uniquely positioned to serve. Not because it's the best technology. But because it's the one technology that already belongs to you.
As the author, I can assure you there’s a human behind these words. Interesting times we live in, though. I often find myself questioning what’s AI and what’s not too, and at the moment we’ve offloaded that responsibility to the good will of authors or platform policy, which might have to change soon.
I thought it was a great post tying together a lot of things I’ve been reading and thinking about. I couldn’t care less if you used AI if it helps my brain expand and/or make connections I wouldn’t have otherwise.
What's wild is that a few minutes of manual editing would give outsized returns. For instance, a lead sentence in your section saying "here's why X" when X was already stated in your subheading is unnecessary and could have been removed entirely.
That’s pretty presumptuous about how obviously the author could improve it. As someone who writes a lot of docs, I find feedback and preferences vary wildly. They may just as well have made it “worse” by your preferences by hand-editing it more.
Does everyone just easily accuse genuine, literate humans of "cheating" with AI when there's no way they could know that?
There are a lot of unique aspects of the writing in this post that LLMs don't typically generate on their own.
And there's not a "delve" or "tapestry" or even a bullet point to be found.
Also, accusations and complaints like this are off-topic and uninteresting.
We should be talking about filesystems here, not your gut instinct AI detector that has a sky-high false-positive rate.
I swear there needs to be some convention around throwing wild accusations at people you don't know based exclusively on vibes and with zero actual evidence.
Imagine, if you would, that the strict libertarians had much more influence in shaping the country. So much so that the roads are toll roads, the parks require a fee, and almost no libraries exist because the ROI just isn’t there.
Furthermore, there is no anti-trust legislation, and as a result, there are only a few companies that control all meeting places: the parks, the coffee shops, the roads, the pubs. And they have set up constant monitoring technology.
If you want to set up a protest on a street corner, it better align with the corporation’s views, or they will ban your access to the roads. If you want to talk with friends at the pub, don’t say anything out of line or you’re not coming back. Events can take place in parks, but make sure you only discuss the weather.
Of course, this is fine: you can always just meet at your own home and say what you think, because that is your own property.
…
I realize the analogy is overwrought, but there just doesn’t exist an online equivalent of a public space, and ideological enforcement is trivial. Comparing it to the rules we have for physical spaces means we need to imagine what those physical spaces would be like if they operated like online spaces, and frankly the result is dystopian (in my opinion).
Surely the solution isn’t just to dismiss it as a non-problem? Or, I suppose, to stop looking for a solution because… solutions so far considered have negative side effects, which feels (practically speaking) the same to me.
Physical public spaces are regulated. Laws still apply there.
There are countless online spaces which operate like physical public spaces, where anything legal goes. Move off of the mainstream web and even the illegal stuff is allowed. You can literally run your own instance of whatever application on the Fediverse and follow whomever you want. No matter how radical or extremist your ideology is, someone will happily host it.
It's only a problem if one insists that all online spaces must be run under the same anarchic principles and must be forced to give anyone a platform, but that's far more dystopian than what we have now.
Having been in both roles, I believe it is important that each side of the “1” give the other a little grace.
When you are going from “1” to stable, there is some breathing room because you have a 1 that works, mostly. Sort of. Dealing with it may be a slow slog of sordid substitutions, but the pressure is different.
Going from 0 to 1 may involve working 80+ hour weeks with little sleep and enormous stress. It may mean meeting deadlines that make or break the product in a mad rush to fulfill a contract that saves or dooms the company. It may mean taking minutes to decide designs when days or months of consideration would have been more appropriate. And it may mean getting a lot of things wrong, but hopefully not so wrong that a version 2 is impossible.
As a final note: often v1 has substantial problems, that’s true. But sometimes it’s actually not that bad, and v2 fails because it was trying to shove the tech du jour (k8s cough cough) where it wasn’t needed so someone could get that shiny architect promotion.
As much as I love sarcasm that is done well, I do find that it translates very poorly to written text unless explicitly noted with /s or something like that. Even when annotated, it's extremely rare that a sarcastic comment actually furthers discussion or makes a meaningful point. If a person is using sarcasm, odds are pretty high that they aren't engaging substantively anyway. Given the difficulties with detection (which even many humans fail at) it seems like trying to detect sarcasm would just make the tool a lot less useful and would be mostly antithetical to the project goals anyway.
I would be willing to bet money they used AI to scrape and curate the comments. The justifications have that feel of knowing the sentiment is negative while lacking any understanding of whether it's accurate.
Sort of a meta-observation, but consistently folks on the left have that take and then are confused when they lose.
“If only all those idiots on the right and in the center could see they should vote for the bumbling but well-intentioned candidate over the obvious liars and thieves” is an explanation that feels good to tell yourself, but also incredibly patronizing and prevents actually understanding why people vote the way they do.
I find the arrogance of the left pretty abhorrent. I also despise aspects of the right, but boy does the left rub me the wrong way.
If you find the arrogance abhorrent, I wonder how you characterize some of the actually bad stuff that politicians get up to.
Personally, I don't expect people on the right to come around. I am mystified by people in the center who looked at Trump and Harris and decided Trump was the way to go, or even just didn't care. If you'd like to enlighten me as to why they did that, I'd be interested.
My real confusion is people on the left who did this. They decided that Harris didn't say the right things about Israel, or they were upset at not having a primary, or they were still upset about Bernie, and decided to stay home. That is baffling.
> already was president for 4 years, which - aside from a lot of crazy talk - was a pretty stable and prosperous time.
You did NOT just write this seriously?! :) I hope you are being as sarcastic as one can be, or did you sleep through it? Just check how much of the total national debt comes from his first term… it is arguably the least “stable and prosperous” four years any living American has ever seen.
I think people forget when in the year the election is held and associate 2020 with Biden. Certainly much of that year's craziness was not Trump's fault, but his absolute uselessness was on full display, and he was the guy in charge.
We want Presidents who step up when the shit hits the fan, not ones who take our kids out of school, lock everyone up in their homes, and then add trillions of dollars of debt we will eventually have to pay off.
It is easy to be President when things are easy. I don't follow politics at all anymore; I stopped right around the time someone like Donald was able to get a major party's nomination in the United States, so this isn't a liberal bashing Donald. These are just facts: he was about as bad a President in his first four years as we've ever had. The jury is still out on these four; we'll analyze that in 2029 :)
I would characterize a lot of the behavior of politicians as despicable, antisocial, and un-American.
The short answer to your question is that the Democratic establishment in general and Harris in particular repeatedly lied throughout the Biden administration, culminating in the bald-faced lie that Joe Biden was completely competent. This was done with the attitude of “well what are you going to do? Vote for the other team? Don’t be ridiculous.” There were so, so many other things throughout the Biden administration, it felt (feels) like a race to the bottom.
So Trump, who is notorious for lying, won. To be fair to Republicans, Trump's lies are more like crazy exaggerations sprinkled with outright bullshit, which somehow is more palatable than being gaslit.
If the defense of the Democrats is “Well look at how bad Trump is!” it should at least be acknowledged that is one of the worst defenses possible. And in general, if my options are to be stabbed by person A twice, or by person B once but person B expects me to be grateful, I might just go with person A.
The end result is we will keep toggling between the two parties until one of them decides to run sane candidates. I sincerely hope that will be the Democrats this year.
Befuddlement at the choices of the American voters is not a defense of Democrats. They could do so much better. But even with the choices we have, I don't understand how people come to the conclusions they do.
Drive.
Even though it’s only 50 m, the car itself has to be at the wash bay—otherwise the staff or machines can’t clean it. Rolling or pushing the vehicle that distance isn’t practical or safe (you’d still need someone inside to steer and brake), so just hop in, creep over, and shut the engine off right away.
I agree with the author that there is a sense of hypocritical outrage in Pike’s post.
My viewpoint is similar. Google has done many negative things, and at this point it can easily be argued they have caused net harm. By choosing to remain employed there, Pike tacitly admits he believes that they land on the net positive side.
There is at least as much nuance to AI as a technology, but his level of outrage indicates he is not evaluating it through the lens of trade-offs. His reaction then raises the question: why would he be nuanced in the case of his employer but not in the case of AI? And the answer seems obvious: he profits from Google directly, not from AI.
If you reach that conclusion, his words ring pretty hollow.