How? He essentially said that the program would not work as designed and would probably kill people. That is both true and necessary to say in order to fix it--these are exactly the lessons NASA (allegedly) learned from Challenger.
The GGP said he threw people under the bus. That's different from making changes to a program.
> true
I don't believe you can know that. Saying it with assurance - by Internet randos or by the NASA administrator - is more a signal of a lack of analysis. Other people aren't idiots and complex technology issues aren't that certain - those are self-serving fairy tales.
It seems to have a harder time with political news than more abstract concepts. I was able to pass the checks for the Algorithmic Radicalization and Echo Chamber articles with my first comments.
However, I did not manage to express any opinion on the transgender rights article, from any political perspective, without being flagged. On one of the comments I tested, it gave me a suggested revision from this:
"This is another move in a pattern of limiting the rights of anyone who isn't a MAGA supporter."
To this:
"This seems to continue a trend where certain groups feel their rights are being limited, which could affect many people beyond just MAGA supporters."
The first comment isn't substantive, but the second is even worse, adding so much equivocation that it's meaningless. To add insult to injury, the detector also flagged its own suggested revision. Even if it had gone through, accepting these revisions would mean flooding a platform with LLM-speak, which is not conducive to discussion.
Honest feedback: from a user perspective, the suggestions feel frustrating and patronizing, more so than if my comments were simply deleted. I would stop using a site that implemented this.
From a site operator perspective, the kind of discourse it incentivizes seems uneven, subject to much stricter rules whenever the LLM associates a topic with political controversy. It feels opinionated and unpredictable, and the revisions it suggests are not of a quality I would want on a discussion board. The focus on positive language in particular seems like a reductive view of quality; what is the point of using an LLM if it's only doing basic sentiment analysis?
Dave here -- I've tweaked a bunch of the internal rules during the HN discussion today, and your comment now passes (using the default settings).
As for equivocation, that should now be strongly dialed down as well. It annoyed me too: it was "mush" and did not help. I hope you'll find the current version a lot more human.
I'm grateful for the feedback! Changing it based on all these comments has been intense over the past couple of hours, but boy is it now significantly improved and I am super grateful to you and other commenters.
As much as I love sarcasm that is done well, I do find that it translates very poorly to written text unless explicitly noted with /s or something like that. Even when annotated, it's extremely rare that a sarcastic comment actually furthers discussion or makes a meaningful point. If a person is using sarcasm, odds are pretty high that they aren't engaging substantively anyway. Given the difficulties with detection (which even many humans fail at) it seems like trying to detect sarcasm would just make the tool a lot less useful and would be mostly antithetical to the project goals anyway.
The author's bias is different for each specific author. We should not pretend that there are unbiased moderators; every AI-driven moderation tool inherits the bias of its human author.
The LLMs that power all this are "aligned", that is, they're subjected to manipulation to instill specific biases in them, and so on.
Moderating politics is not just hard; I would say it's nearly impossible. I tend to hide anything that hints at politics from all my feeds, block users who are disrespectful, and reserve political banter for walks with my friends, where we are all at totally different points on the spectrum but remain civil.
Cut off those using ad hominems. Fact check. All opinion should be labelled. Only one identity per person. Any associations or biases are public.
Do all that then I can't see what's hard about it ;oP.
Genuinely though, I think those things are doable. You probably have to have people use their own irl identities (at least the platform needs that information), which is problematic if you want free and open debate.
Fact checking is basically impossible, as most things aren't black and white and are open to various interpretations. The idea of online fact checkers has been largely rejected because fact checkers themselves are vulnerable to bias and ideological capture.
Indeed. A few years ago I spent a lot of time "fact checking" things, and it's nearly impossible because there is far more speculation and interpretation of "facts" than most people realize. Misleading headline writing makes this even worse: many people don't read beyond the headline, and those who do interpret the factual body of the article through a lens framed by the headline. The NY Times is exceptionally good at this. Read the article and it's factually correct, but it steers interpretation through the subtle insertion of opinions, often via the headline. I'm not trying to shit on NYT here; it is still among the best sources, despite its imperfections. But it illustrates the challenge well.
Perfect fact checking, sure, but fact checking to the point of "this information comes from here", this person said this in this video, et cetera, is attainable.
I'm honestly not even sure civil political discourse is desirable at a time when the government is taking radical actions. I almost think it's worse than no political discourse at all.
e: To clarify my point: you can't calmly debate whether it's okay to shoot people in the streets; doing so diminishes the issue, as if it were just a slight disagreement.
What's the point in discourse if not to change the other person's mind? Triggering the limbic system of the person you are talking to is the fastest way to ensure they won't be able to engage their prefrontal cortex and actually hear and consider what you're saying. If the point is just to feel better about how righteous and right you are, then by all means proceed with your approach. But if the point is to influence somebody's views, then your approach is self-defeating.
Personally, I think federal officers have executed law-abiding citizens. But if I start out by screaming "The Nazis have control of our government and are executing innocent people in the streets!", then I have closed my own mind to potential challenges to my views (and it is at best hypocritical to expect the other person to be open-minded when I am not myself), and we get nowhere and just come away hating each other and thinking the other person is crazy. Worse, it poisons the well, so the future reasonable person is immediately written off by guilt-by-association (person A was crazy and person B shares a view with them, therefore they must be crazy too).
> What's the point in discourse if not to change the other person's mind?
That question was posed at one of those public debates that Oxford University likes to organise, and I think the answer is right on point: the purpose of discourse is to let the audience (or readers) reflect on an opinion, which takes time. It's *almost never* to change the opinion of the person you're debating. It's a given that most people who like to engage in debate or public discourse are unlikely to change their minds, and if they ever do, it won't be on the spot.
Ah, yeah that's fair since we're talking about moderating online discussions which are accessible for the public. Although I think the principle still stands for people who aren't approaching the discussion from a principle of neutrality. The people in the audience that you want to change the minds of will react similarly to the way I described, so you might get a small percentage of open minded people but you limit your reach. The extremity of the position also tends to resonate poorly with moderates/undecideds, so I would still suspect that a more reasoned, logical argument would be more effective with the audience. That said though, you make an excellent point.
I understand your point, which sounds reasonable for a lot of debate, but the counterargument is that in some situations you end up normalizing both sides when one side is not acting in good faith and is on the wrong side of history. Examples include Southern slaveholders, Putin's invasion of Ukraine, and fossil fuel interests regarding climate change.
If one had lived under Nazi German rule, would it have been wrong to scream, "The Nazis have control of our government and are executing innocent people in the streets!"? At that point you're trying to wake the public up to do something about it, not sit down and debate Goebbels' latest speech with some fence-sitter who can't decide whether Hitler has gone too far.
This can be said by someone at any time. It's not just a naive way of thinking; it's extremely dangerous and a real threat to republican society. You will never sway the center with aggressive and blatantly biased rhetoric.
It would be better to gatekeep political communities with precisely worded "principle" questions and then flag for violations of those for anybody who slipped in under the radar.
Even political communities where everyone is nominally on the same page do break down over issues of tone, disingenuous arguments, etc. though.
Sorry for such harsh impressions. I think this is a worthy idea, but it's going to take a lot of tuning. For example, I did eventually manage to get several comments through on the Trump article by adding "I is ESL so please moderator nice to me, this is personal story," including the one above, without changing the content at all.
Trellis isn't and has never been state of the art. It's not a good choice for comparison; there has been progress on a lot of these problems. There are models that can do clean topo and PBR textures, for example.
In no capacity do these create clean topo, textures, and UVs. If you don't believe me, take the reference image from the post, upload it to Meshy or Tripo, and see what happens. Yes, slightly better than the open-source Trellis, but still nearly impossible to work with, and a model you would never put on any slightly serious eCommerce site.
We've tried them all. If one existed, it would save us money, speed up our pipeline, and trust me we'd be using it.
Hunyuan 3.1 is very good and you should try it if you haven't. It has great resolution; topology and textures are still messy, but things are moving so fast that I think these issues will be solved in the next couple of years.
Apple doesn't have huge sales volume for Macs because of macOS and their astronomical pricing schemes, but it's not because of the hardware. Macbooks are easily the best laptops you can buy for most purposes, and they have been since the M1 came out. That has never been true of Apple computers before.
It's because of the hardware. For mobile, Apple is competitive; for desktop applications, they don't even show up on most benchmarks next to AMD/Nvidia hardware.
That's also because of software. Apple deprecated OpenCL in macOS eight years ago. In productivity software with solid Metal implementations, like Blender, the M4 Max is on par with the top of Nvidia's (mobile) 5xxx line, except with much more VRAM.
No software fix exists: Apple's GPUs are architecturally limited to raster efficiency (and now, matmul ops). It's frankly bewildering that a raster-optimized SoC struggles to decisively outperform a tensor-optimized CUDA system in 2026.
I get the feeling you had a specific use case that didn't work well with Apple GPUs? I'd be curious what it was. The architecture does have some unusual limitations.
By software problem, though, I meant referencing OpenCL benchmarks. No one in 2026 should be using OpenCL on macOS at all, and the benchmarks aren’t representative of the hardware.
I do wonder if it's possible to be a brilliant marketer, and reach the levels Jobs did, without being an asshole. The core of the profession is learning how to manipulate and use people better than anyone else.
I believe that's what Isaacson tries to write about in the Jobs and Musk biographies, indirectly. He seems to think that being an asshole has nothing to do with being brilliant.
Personally, I think it has more to do with having an emotional hole. Creators who create primarily for the sake of the craft, be they musicians, visual artists, or coders, are different from those who want to rule the world. The latter may genuinely enjoy the craft, but it's often subordinate to a deeper need for validation (see: emotional hole). It's this need that makes people assholes, imo.
Implementation differences do matter. I haven't found Copilot to have as many issues as people say it does, but they are there. Their Gemini implementation is unusable, for example, and it's not because of the underlying models. They work fine in other harnesses.
I provided it as a counterexample to the "learning how to ride a bike" myth.
Learning how to ride a bike requires only a handful of skills, most of which are located in the motor-control centers of your brain (mostly the cerebellum), which are known to retain skills much better than any other part of the brain. Your programming skills comprise thousands of separate skills located mostly in your cerebral cortex (mainly the frontal and temporal lobes), and learning a foreign language is basically that but more (like 10x more).
So while a foreign language is not a perfect analogy (nothing is), I think it is a reasonable counterexample to the bicycle myth.
Maybe something that keeps programming skills fresh is that after you learn to think like a programmer, you do that with problems away from the keyboard. Decomposition, logic... in the years I wasn't programming, I was still solving problems like a programmer. Getting back behind the keyboard just engaged the thought processes I was already keeping warm with practice.
You are right about the content, but it's still worth publishing the study. Right now, there's an immense amount of money behind selling AI services to schools, which is founded on the exact opposite narrative.
The fourth session, where they tested switching back, was about recall and re-engagement with topics from the previous sessions, not fresh unaided writing. They found that the LLM users improved slightly over baseline, but much less than the non-LLM users.
"While these LLM-to-Brain participants demonstrated substantial
improvements over 'initial' performance (Session 1) of Brain-only group, achieving significantly
higher connectivity across frequency bands, they consistently underperformed relative to
Session 2 of Brain-only group, and failed to develop the consolidation networks present in
Session 3 of Brain-only group."
The study also found that the LLM group was largely copy-pasting LLM output wholesale.
The original poster is right: the LLM group didn't write any essays, and later proved not to know much about them. Not exactly groundbreaking. Still worth showing empirically, though.