
This seems like odd timing with Apple's new cheap MacBook.

On a tangential note, like with most NYT diagrams, the flowchart in that article makes very little sense, and there's zero explanation for what any of it is supposed to mean. Did an LLM generate it? The irony of that possibility for yet another article blaming AI for all the world's problems would be thick enough to spread on toast.


> This seems like odd timing with Apple's new cheap MacBook

600 Euros cheap?


Here, I think (hope) you dropped this: /s

> that the cost of catching cheaters is significantly higher than the cost of cheating

This is tackling the problem from the wrong direction. The right direction would be to make it harder to cheat in the first place. For example: if the student submits an essay, and that student is able to coherently and accurately answer any questions asked about the essay in a face-to-face conversation, then that student is probably the genuine author of that essay.


I agree with you that a face-to-face Q&A is a reasonably good way to detect low-effort cheating, but I'll still quibble a bit:

- I don't think this lowers the cost of detection as much as you imagine. You still need to know the paper better than the student and have to sacrifice already tight instruction/planning/grading time to have all of these conversations. Even if you catch enough to successfully deter most, it likely means not covering something else. It won't be too hard to catch low-effort cheaters who can't be bothered to read the paper, but you're on the low-leverage side of an arms race with the remaining students. You have experience on your side and they can't know what you'll ask, but they outnumber you and can certainly read the paper and use LLMs to quiz them on it. You have to invest your effort without knowing how each student prepared, so you'll spend about as much effort on every low-effort cheat as you do on the highest-effort cheat you are prepared to catch.

- Not sure it is "from the wrong direction" since both approaches raise the cost of cheating and lower the cost of detecting it.

- While this does avoid encouraging students to dumb down their work, it does still raise the cost of not-cheating. Unless you surprise the students with these conversations, the ones that care most will still anxiously prepare.


> but the tools are pretty good. I encourage you to give them a try.

I have given them a try and can confirm the exact opposite. Plenty of others have given them tries and have confirmed the exact opposite.

Regardless, the “better for a hundred guilty men to go free than for one innocent man to hang” principle applies here.


No it doesn't.

This is armchair philosophy when pragmatism and problem solving serve better.

Fundamentals - Teaching is expensive, and we don't have enough teachers.

Verifying if someone has the skills is difficult.

Given the shortage of teachers, and the difficulty of verification, we need ways to bridge the gap.

The first step is always going to be to spend more on education, especially in underserved areas.

The new option we have with LLMs is to increase the rate of testing, and to test out the benefits of low-stakes testing at scale.
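
As a concrete sketch of what that higher-rate, low-stakes testing could look like, here's a hypothetical Python example; the model name, prompt, and workflow are all assumptions on my part, not a description of any real system:

    # Hypothetical sketch: auto-generate a short low-stakes quiz from a
    # student's essay. Everything here (model name, prompt, file name)
    # is an assumption for illustration.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def quiz_from_essay(essay_text: str, n_questions: int = 5) -> str:
        """Ask the model for comprehension questions that only the
        genuine author could answer easily."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model; any chat model would do
            messages=[
                {"role": "system",
                 "content": "You write short comprehension quizzes."},
                {"role": "user",
                 "content": f"Write {n_questions} questions that only the "
                            f"genuine author of this essay could answer "
                            f"easily:\n\n{essay_text}"},
            ],
        )
        return response.choices[0].message.content

    print(quiz_from_essay(open("submission.txt").read()))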


> This is armchair philosophy when pragmatism and problem solving serve better.

Punishing innocent people out of negligence is not pragmatism, and refusing to tolerate such punishment is not armchair philosophy.


— dozens!

I agree with the article's general thrust: use AI if you want to, don't use it if you don't want to, whether or not you'd want to will probably change as AI continues to evolve, and most people seem to be getting pushed to use AI in the dumbest ways imaginable.

----

I rather strongly disagree with the framing around the environmental impacts, though; the article would make a much stronger point if it resisted the urge to peddle the same “muh water and electricity” disinformation that gets parroted all over the place by people who can't be bothered to put numbers into the context of other numbers.

For example:

> A single DGX B200 AI server is rated to consume 14,300 watts of electrical power at peak. You can cram about four of these on a rack if you like to live on the edge, and these four units might draw something like 200 amps of current combined. For a point of comparison, a typical single-family home in the United States will have wires from the utility company that are thick enough to provide a 200 amp service.

Cool, and how many households' worth of AI queries would that single B200 (let alone the rack of 'em) be able to handle? Probably a lot more than any individual household could ever hope to produce per second, even assuming a household consisting entirely of hardcore AI stans (let alone someone like the author, or like myself, who uses AI sparingly). Each of those servers is handling requests from thousands upon thousands of users; those power and water requirements get amortized over such a large quantity of requests (and people making them) that if you've ever eaten a single hamburger in your entire life then you've done more harm to the environment than hundreds (if not thousands) of those AI queries.
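
To put rough numbers on that amortization argument (every input below is an assumption except the article's own 14,300-watt peak rating, so treat this as back-of-envelope math, not a measurement):

    # Back-of-envelope: energy per query for a rack of four DGX B200s.
    # Every input here is an assumption except the 14,300 W peak rating
    # quoted in the article.
    rack_watts = 4 * 14_300          # four servers at peak, per the article
    concurrent_queries = 500         # assumed batch/concurrency level
    seconds_per_query = 5            # assumed wall time per response

    joules_per_query = rack_watts * seconds_per_query / concurrent_queries
    wh_per_query = joules_per_query / 3600

    print(f"{joules_per_query:.0f} J = {wh_per_query:.3f} Wh per query")
    # -> 572 J = 0.159 Wh per query: a few minutes of an LED bulb,
    #    even before accounting for the rack rarely running at peak.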

This all comes after this quip in the margins:

> You think you’re just gonna self-host an open weight model like GLM-5 on your personal hardware and cut out the hosting costs? Well, alright, hope you have 1,727 GB of VRAM lying around.

and like… the author does understand that not everyone needs such a large model with such a large VRAM requirement, right? Or that VRAM itself ain't even strictly necessary (it just happens to make things faster — which is more important for a server handling requests from thousands of users than it is for my laptop handling requests from exactly one user: me)? That's indeed part of the issue the author correctly identifies with people using AI in seemingly the dumbest way possible: that dumbness includes the demand for instantaneous responses, and the consequent demand for throwing more and more VRAM and SSDs at the problem, when “just make a cup of coffee while the LLM ‘thinks’ about what you asked of it” is a perfectly workable approach. As I'm typing out this comment, I've got Olmo 3.1 on this same exact machine doing a bunch of thinking about how to respond to me asking it “How much wood would a woodchuck chuck if a woodchuck could chuck wood?”¹, and it's totally fine that it's taking multiple minutes because there are other things I can do while I wait.
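
For anyone curious what that looks like in practice, here's a minimal sketch of CPU-only local inference via llama-cpp-python; the model file name is a placeholder and the settings are assumptions, but the point is that zero VRAM is involved:

    # Minimal sketch of CPU-only local inference with llama-cpp-python.
    # The model path is a placeholder; any small GGUF-quantized model works.
    from llama_cpp import Llama

    llm = Llama(
        model_path="olmo-small.Q4_K_M.gguf",  # placeholder file name
        n_ctx=4096,       # modest context window
        n_gpu_layers=0,   # everything on CPU: zero VRAM required
    )

    out = llm.create_chat_completion(
        messages=[{"role": "user",
                   "content": "How much wood would a woodchuck chuck "
                              "if a woodchuck could chuck wood?"}],
        max_tokens=256,
    )
    print(out["choices"][0]["message"]["content"])
    # Slower than a datacenter GPU, but "make a cup of coffee while it
    # thinks" is the whole point.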

This all ain't to say that we shouldn't care about AI's power and water usage. We should absolutely be pushing for better efficiency. That includes acknowledging that there are options besides “throw more and more VRAM at it and hope for the best”; the article instead prefers to assume that the big beefy servers are the only option, dismissing the notion of self-hosting with little thought, and that dismissal does the article's broader point a disservice.

----

The discussion around AI being considered a “tool” also rubbed me the wrong way a bit:

> This unlocks a common refrain from the booster class: “A true craftsperson uses every tool at their disposal!” Which, if you think about it for more than three seconds, is ridiculous on its face. Gotta dig some holes for fence posts? Okay! Bring along every shovel on the truck, the Ditch Witch, a box of ANFO and the Bagger 293. Have the people who echo this kind of stuff ever built anything in the physical world? Your average craftsperson has one real good compound miter saw that they use for basically every cut on the jobsite. They’ll use it until it breaks down, then they’ll replace it with a newer model of substantially the same thing. In what world is constantly switching tools for the sake of switching tools a remotely smart use of time?

That's pretty blatantly a strawman, and seemingly the exact opposite of how even the most vibe-codey of vibe-coders use AI. They're largely using AI as that miter saw; they might switch out blades/models for a given job, but at the end of the day it's the same tool. That's indeed yet another part of that “people using AI in the dumbest ways imaginable” problem that's otherwise correctly-identified: AI maximalists having a hammer called ChatGPT and seeing everything as a nail.

And also: who cares whether or not someone brings along every shovel + the Ditch Witch + the ANFO + the Bagger 293 if it's easy enough to bring them all? That's only a problem to the extent that carrying one tool comes at the expense of one's ability to carry another tool. If you've got a big enough truck to carry all that gear around, and you're okay with taking the time to load and unload it all, then fuck it, might as well full send — and then if there happens to be a boulder blocking the path of your fence, then it's a good thing you have that ANFO handy, right?

And of course, most software developers ain't doing their work in a pickup truck in the middle of nowhere (though some are, and that's fucking rad). Most are doing their work at their desks, in their offices or homes, wherein they're probably in close proximity to the entirety of their collection of tools. Hell, even if they are doing their work in a pickup truck in the middle of nowhere, the vast majority of the tools they need are probably already present (or could readily be made present) on whatever laptop they're bringing along for the job. Toby and Lyle don't need to worry about the logistics of carrying their tools (in particular Lyle's trusty lathe) because they do their jobs in a workshop wherein those tools already live; I don't need to worry about the logistics of carrying around my compilers and editors and manpages and such (or even an LLM!) because I do my job on a laptop wherein those tools already live.

----

¹ For the record, Olmo 3.1 concluded (like most models do these days) that “If a woodchuck could chuck wood, it would chuck as much as it could—but given its actual habits, it would probably just dig a very efficient burrow instead.”


In the second demo, the audio for “Mama Tried” ended up in the track titled “Row Jimmy” instead of the track titled “Mama Tried” ;)

Also, the demo UI drops the playback state if you switch from one tab to the other; if you play a track in one tab, switch to the other tab, and switch back to the first tab, there's no option to pause the already-playing song. Thanks to this I'm currently listening to five instances of the Grateful Dead singing “Mama Tried” offset by about 5 seconds lmao


> plagiarized

Unless you can point to specific works that something allegedly plagiarizes, the “plagiarism” allegation is meaningless.


not at all true

depending on what you ask for, it might generate something 99% similar to an existing artist's work


Per your link, only 6% of those games made more than $10k.

Also, the AI disclaimer covers the very broad category of “AI has touched some part of this project at some point, no matter how minor, and no matter if it was eventually replaced with non-AI assets”. The original article seems to be more about the narrower category of “AI is a significant part of this project”, which would exclude nearly all of the top-12-grossing games that your link covers.


> I don't see the discourse of indie or single game developers being ostracized in some public shaming trend

Not specifically “game” developers, but I do see attempts at that ostracization on the OSDev subreddit; at least one participant there has posted progress updates on a vibe-coded hobby OS, and each of those updates ends up deluged with people complaining specifically about the AI use.


> and each of those updates ends up deluged with people complaining specifically about the AI use

I would genuinely like to see this thread, because if the comments are legitimate and backed up by examples, e.g. "This is XSS vulnerable", then even with the "AI slop" prefix I'm fine with it.

I think it's fair that people don't get too comfortable just trusting vibe-coded agents, when in my own experience the bugs they leave around are often harder to identify in a simple review than a simple architecture misalignment is.


It would elevate the conversation significantly if people didn't use "vibe coding" and AI use interchangeably.

I don't use Reddit, but you don't have to look for a specific thread. It often feels like there's a mob of people just waiting for fresh meat to wander into their camp. Literally any thread referencing AI on this site is full of people who appear to have nothing but venom and contempt for people who use these tools.

It's not everyone, but loud minorities are still loud.


I'll try to only use the term AI, as vibe coding is more of a derivative. There definitely are some people who are just doomers about AI entirely, and I think that's always the case with any new technology. That said, you can't deny there is just an unholy number of useless applications of AI tooling that really aren't providing anything other than generated 'slop'.

Using AI tools for protein folding or medical breakthroughs, for example, will impact the world in a positive way. People will champion that. Using them to automate your creativity hasn't been in demand from anyone except shareholders or people looking to milk quick ad revenue for little to no effort. So of course there's a negative sentiment.

