brokencode's comments | Hacker News

Smart people do stupid things all the time. Especially when they are moving fast and trying new things.

At least they were able to recognize their mistake and course correct.


The whole UI situation in Windows is absolutely ludicrous.

Is the recommendations panel in the start menu, which nobody actually wants to begin with, really so complicated that it justifies bringing in a JS runtime?

And is C# and WinUI really so hard to use that they can’t just make it with that?

And if WinUI is hard to use, maybe they should like.. make it better?

At least Apple had the decency to take the React paradigm and turn it into a native part of their development platform with SwiftUI, even if the implementation has been a bit rough around the edges.


I mean, it looks super complicated to me by now. Why on earth they did it, though, is a different story.

Are you forgetting about PyPy, which has existed for almost 2 decades at this point?

That's a completely separate codebase that purposefully breaks backwards compatibility in specific areas to achieve their goals. That's not the same as having a first-class JIT in CPython, the actual Python implementation that ~everyone uses.

Definitely agree that it’s better to have JIT in the mainline Python, but it’s not like there weren’t options if you needed higher performance before.

Including simply implementing the slow parts in C, as the high-performance machine learning ecosystem that exists in Python does.
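To illustrate the point about moving slow parts into C: even without writing a C extension, leaning on C-implemented builtins already shows the interpreter-overhead gap that NumPy and the ML stack exploit at much larger scale. A toy comparison (not a benchmark of any real library):

```python
import timeit

def total_pure_python(values):
    # Interpreted bytecode: every iteration pays full interpreter overhead.
    acc = 0
    for v in values:
        acc += v
    return acc

def total_c_backed(values):
    # sum() runs its loop in C inside CPython, skipping most of that overhead.
    return sum(values)

data = list(range(1_000_000))
assert total_pure_python(data) == total_c_backed(data)

py_t = timeit.timeit(lambda: total_pure_python(data), number=5)
c_t = timeit.timeit(lambda: total_c_backed(data), number=5)
print(f"pure Python loop: {py_t:.3f}s, C-backed sum(): {c_t:.3f}s")
```

The same idea, taken further with hand-written C extensions (or Cython, or ctypes), is how Python stayed competitive for numeric work long before a JIT landed in CPython.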


It’s pretty telling that Elon had to have Grok rewrite Wikipedia because the truth was too woke for him. No idea how anybody can ever take Grok seriously.


Many projects at his companies seem to be more Musk's vanity projects than ideas/products one can take seriously. This is also how Tesla ended up with a huge Cybertruck stock that nobody wants to buy, which thus had to be bought by his other companies. And it is getting worse and worse, especially since he bought Twitter and sped up his tweeting rate.


FWIW it looks like there's now a demand surge with the introduction of the new cheap Cybertruck variant. Delivery dates have been pushed out to the fall of 2026.


That was an artificial boost created by setting a time limit on a low price. There were ten days to buy at that price, then they put it back up. [1]

[1] https://electrek.co/2026/03/01/tesla-cybertruck-awd-price-in...

EDIT: grammar


What's an artificial boost? Sounds like you're describing a sale.


Sales are artificial boosts, yes. The difference is in the connotation. A sale is given for something that people would generally buy anyway, but now more people will. An artificial boost is given to stuff nobody wants, but which people can be convinced to buy at a lower price.

Or in other words, sales raise $high_number to $higher_number while artificial boosts raise $essentially_zero to $acceptable_number.


Your claim is that people that bought the cybertruck at a lower price don’t actually want it?


I believe the claim is that the demand side did not change, the supply side did, as in sales != demand.


Just quoting the above

“An artificial boost is given to stuff nobody wants, but at a lower price can be convinced to buy”

So people spent 60k on a cybertruck that they didn’t want? Is that the claim?


the claim is that it moved sales forward in time, but it'll have a corresponding dip in sales later, whereas a good sales campaign increases total volume (virtually no dip, brings in new customers, etc)


> artificial boost is given to stuff nobody wants, but at a lower price can be convinced to buy.

People do want it, clearly, but it's too expensive for them.

Sales don't make people want things they otherwise don't.


> Sales don't make people want things they otherwise don't.

That is exactly what sales do. Most sales are made by selling people things they don't want, until sales does what sales does.


So people spent 60k on a cybertruck they don’t want? Do you believe that?


Look around your house and see how much shit you've got that you really want(ed). A great salesman (and Elon is the best in the history of civilization) will sell you shit you never thought you wanted :)


The motivation to buy something is always because you want it. That a product doesn't meet your needs or expectations later is a different story. What's your evidence for the claim that people spending 60k on a Cybertruck don't want it? What's your evidence to make a similar claim, or the opposite, for any other purchase? Without evidence it feels like you are making baseless claims about people's motivations.


> The motivation to buy something is always because you want it

Salesmen make you want stuff you didn't know you wanted, but now you do. The entire world economy is built on this.


Is it still your claim that people spending 60k on a Cybertruck don't want it? How do you know? Given the lack of evidence, it feels like motivated thinking. You don't like Elon and can't accept that tons of people actually like him and his products.


Literally almost everything I have bought on sale is something I wasn't looking to buy at that moment in time.


How many of those things cost more than 10,000 dollars?


I think you might be slightly misinformed about how many 10,000+ dollar purchases the average person makes in their lifetime to make sweeping statements of that nature. Advertising sales on medical procedures or daycare could have the opposite effect, I would imagine.

[X] doubt


Look up what their production targets were and compare that to their sales. A small temporary demand surge isn't going to be enough to chew through their current inventory, let alone keep the production lines busy.


A push on delivery dates is as likely to mean production issues as it is an influx of interest.


Drivel. They’re selling just as well as Rivians.


They're not even selling as well as Volkswagens here anymore.


The Cybertruck is an amazing vehicle; it was mostly just bad timing. Inflation more than doubled between the announcement and the release date, so it came out more expensive than promised; the USA Democratic party abandoned its environmental side for unions; and the whole "woke" movement ballooned and got violent to the point where people were lighting certain car dealerships on fire and vandalizing people's vehicles on sight.


Probably the next generations of kids being fed PragerU study material will. Something tells me we haven't seen a fraction of what's going to happen in the decades to come.


I take Grokipedia very seriously as a threat to society. Sure, they're happy if people read it and fall for it, but the primary goal is not to convince humans; it is to influence the search results of current models and to poison the training data of future models. ChatGPT (and most likely other models/providers too) is already using Grokipedia as a source, so unless you're aware of the possibility and always careful, you might be served Musk's newest culture-war ideas without ever being the wiser.

It's not enough that everyone on Twitter is forced to read his thoughts, he's trying to make sure his influence reaches everyone else too.


I've seen Claude pick it up too. It's disconcerting.


I can both not like Elon and also think Wikipedia is also very captured on some things


Are there actual good examples showing errors of fact on Wikipedia that are verifiably incorrect, that demonstrate how it is "captured"?


How about Grabowski et al., "Wikipedia's Intentional Distortion of the History of the Holocaust", about the outsize influence of certain coordinated Polish editors on the Wikipedia articles about Poland and the Holocaust?

https://www.tandfonline.com/doi/epdf/10.1080/25785648.2023.2...

Quote from the conclusion:

> This essay has shown that in the last decade, a handful of editors have been steering Wikipedia’s narrative on Holocaust history away from sound, evidence-driven research, toward a skewed version of events touted by right-wing Polish groups. Wikipedia’s articles on Jewish topics, especially on Polish–Jewish history before, during, and after World War II, contain and bolster harmful stereotypes and fallacies. Our study provides numerous examples, but many more exist. We have shown how the distortionist editors add false content and use unreliable sources or misrepresent legitimate ones.

For a more recent paper, "Disinformation as a tool for digital political activism: Croatian Wikipedia and the case for critical information literacy" by Car et al. says that:

> The Hr.WP [Croatian Wikipedia] case exemplifies disinformation not only as content manipulation, but also as process manipulation weaponising neutrality and verifiability policies to suppress dissent and enforce a single ideological position.

https://doi.org/10.1108/JD-01-2025-0020


If the debate here is that sustained ethno-political campaigns are slightly shifting Wikipedia over time in a way that requires an academic paper to detect...

Vs.

What Elon is doing...

Then we're not even comparing fruits to fruits.


I find it more surprising that the common understanding has shifted away from "wikis are crap for anything new or political".

As soon as there is a plausible agenda for selecting a narrative the way Wikipedia works we should be sceptical.

For recent examples, everything to do with Biden and family, and Gamergate. These pages are still full of discussion; and what's written is more ideological than factual. You can follow these pages to see how an in-group selects a narrative.

And these topics are not nearly as controversial as race, feminism, or transgender topics.


OK, is there a specific example on either the Biden or Gamergate page that is factually incorrect? Or are you saying the entire pages are false?


My point is more that the history of those pages is a good example of how Wikipedia works for controversial topics; it's not really a process of becoming more correct as better sources are found and argued about, like it is on more neutral pages. Instead it's an in-group deciding what to represent, collecting their preferred opinion pieces. And this changes over time, getting no closer to neutrality within the same article's history.

You can write an equivalent article starting with "Gamergate was a movement reacting to the improper collusion between game developers and journalists" and find just as many sources, but the current article wants to promote the idea that it was a harassment campaign first.


It was also pretty credibly a psyop orchestrated by Steve Bannon and Jeffrey Epstein, but that’s probably better served in history books and biographies rather than an encyclopedia.


Wiki's Gamergate opening paragraph:

> Gamergate or GamerGate (GG) was a loosely organized misogynistic online harassment campaign motivated by a right-wing backlash against feminism, diversity, and progressivism in video game culture. It was conducted using the hashtag "#Gamergate" primarily in 2014 and 2015. Gamergate targeted women in the video game industry, most notably feminist media critic Anita Sarkeesian and video game developers Zoë Quinn and Brianna Wu.

Grokipedia's:

> Gamergate was a grassroots online movement that emerged in August 2014, primarily focused on exposing conflicts of interest and lack of transparency in video game journalism, initiated by a blog post detailing the romantic involvement of indie developer Zoë Quinn with journalists who covered her work without disclosure. The controversy began when Eron Gjoni, Quinn's ex-boyfriend, published "The Zoe Post," accusing her of infidelity with multiple individuals, including Kotaku journalist Nathan Grayson, whose article on Quinn's game Depression Quest omitted any mention of their prior personal contact. This revelation highlighted broader patterns of undisclosed relationships and coordinated industry practices, such as private mailing lists among journalists, fueling demands for ethical reforms like mandatory disclosure policies.

I don't care about "Gamergate" and never use Grokipedia, but Wiki definitely has a stronger slant: it's as if an article about Black Lives Matter started with a statement that it was a campaign meant to scam people to pay for mansions for leadership.


Wikipedia's assessment is more accurate. Wikipedia does go on in its second paragraph to explain the context of the start of the campaign, including "The Zoe Post" and the accusations of conflict of interest. But the broader impact of Gamergate was as a misogynistic online harassment campaign, and Wikipedia is correct to make that the central part of its summary. Just because Grokipedia is more reluctant to state a conclusion does not make it less biased.


Well, I'm naively assuming Grokipedia is being sympathetic to the cause(?) of Gamergate, but if the best thing they could lead the article with was essentially "It all started when someone got mad at his ex-girlfriend and her many other boyfriends and wrote something that went viral" ...

... it does sound like an online harassment campaign.


It was. In hindsight it signaled the beginning of the mass weaponization of the internet via social media. It also was NOT grassroots lol. It was very specifically and intentionally inflamed and groomed and funded by people like Steve Bannon and his good buddy Jeffrey Epstein. It wouldn't have such a big Wikipedia article without them.


As somebody who supported GG for the first month or so, Wikipedia has the better intro from where things stand in 2026. GG started by piggybacking on general distrust of gaming journalists, but was quickly consumed by misogyny.

An article doesn't avoid bias by avoiding unpleasant facts.


Which facts are represented is equally as important as being factual, though.

"Brian hit Jim" can be a fact. But if you omit "Jim murdered Brian's whole family", it's a distortion of the truth.


Specific examples, other than the fictitious Jim & Brian?


I haven't read Wikipedia in a long time, so I can't answer your question; I am just pointing out that saying "the facts are correct" is not enough to say there is no bias on Wikipedia.


[flagged]


The Minnesota Transracial Adoption Study was methodologically flawed. “Children with two black parents were significantly older at adoption, had been in the adoptive home a shorter time, and had experienced a greater number of preadoption placements.”

Reframed, the study seemed to find (a) black kids are adopted less readily and (b) the longer a kid spends in the foster system, the lower their IQ at 17. (There is also limited controlling for epigenetic factors because we didn’t understand those well in the 1970s and 80s.)

Based on how new human cognition is, and how genetically similar human races are, it would be somewhat groundbreaking to find that an emergent complex trait like IQ maps to social constructs like race, particularly ones as broad as American white and black. (There is more genetic diversity in single African tribes than in some small European countries. And American whites and blacks are all complex hybridized social categories.)

[1] https://en.wikipedia.org/wiki/Minnesota_Transracial_Adoption...


[flagged]


What? No you can't.

And: it remains perfectly OK to study racial differences in IQ. It's an actively studied topic. In fact, it's studied by at least three major scientific fields (quantitative psychology, behavioral genetics, and molecular genetics). The idea that you can't is a cringe online racist canard borne out of the fact that the studies aren't coming out the way they want them to.


Does it now? Noah Carl would disagree. He was a researcher at Cambridge University who was dismissed after an open letter, signed by over 1,400 academics and students, accused him of "racist pseudoscience" for merely arguing that race-IQ research should not be off-limits.

James Flynn (of the Flynn effect) has also publicly stated that grants for research clarifying genetic vs. environmental causes of IQ gaps weren't approved because of university fears of public furor.


You're trying to axiomatically win an argument that is already settled empirically. It won't work. You can just read the papers. My point being: the papers exist, and more are published every year. Once you acknowledge that, your argument is dead. Literally no matter what the papers say. Don't make dumb arguments.

Noah Carl has a sociology doctorate. He doesn't work in the fields that study this; he just tries to launder his way into them.

Flynn is, famously, a race/IQ skeptic.


https://medium.com/@racescienceopenletter/open-letter-no-to-...

https://www.theguardian.com/education/2019/may/01/cambridge-...

> for merely arguing that race-IQ research should not be off-limits.

Help me connect the dots here.


It seems like the root of your statement is with the existence of "race" as a purely biological classification. Wikipedia correctly notes the consensus position that race is a social construct [0] that's difficult to use accurately when discussing IQ. Grok makes the implicit and incorrect assumption that genetic factors = race, among other issues.

[0] https://www.genome.gov/genetics-glossary/Race


I wonder how much longer that link will stay up with the current administration...


Ok, change it to "what we call race as a proxy for general geographic locations that people's ancestors come from."

Which is what we all mean by race, anyways.


That's not what your previous post was talking about. But if you insist, at least make your point clear. "African Americans" and "Africans" are wildly different genetic populations that get subsumed under the same "Black" racial category in the US. Which one were you talking about?

The latter is more genetically diverse than any other human population by an incredible margin. Making generalized statements about them is impossible (including this one). As for African American populations, ancestry estimates of how closely related they are to African populations vary massively for each individual. Many people are much closer to "white" populations than any African population, due to the history of African Americans in North America. If you really mean race as a geographic proxy, the "black" label is simply confusing what you actually mean.


I understand your point (although I find the babybathwater-ing to be tiring), and I didn't mean to be drawn into a debate about this. But that was entirely the point - that there's a debate. Wikipedia would have you believe that there isn't.

For what it's worth, I'm mixed as hell. European, Asian, Jewish, north african, and native american. I look white, though - and I am, in fact, majority European ancestry. Therefore in most studies (of anything race related), I would presumably be lumped in with white people. It's not a perfect "measure," but it's still the easiest proxy for geographic location of our ancestors that we have and on a population level it works just fine for studies.


But then what are you arguing? Geographic location determines IQ? (An inherently flawed measurement itself)


I'm not arguing anything other than the fact that Wikipedia is biased.

Though I will say it's beyond argument that geographic ancestry has an effect on IQ on a statistical group level (the reasons for this are what's debated), and that IQ is the best measurement of G that we have.


> I'm not arguing anything other than the fact that Wikipedia is biased.

It "is biased" to document human knowledge as accurately as possible. Is there something wrong with that?


Because it's not accurate? As I and others have pointed out?


I'm a bit confused then. You said that you're "not arguing anything other than the fact that Wikipedia is biased" but then also argue that Wikipedia is also inaccurate after someone points out that "so-and-so is biased" is a meaningless phrase. This reads like a shift of the goal posts and it discredits your arguments.

Regardless, one can see this claim of inaccuracy repeated in these comments with no provided example of such. The saying goes "what is presented without evidence can be dismissed without evidence". It is therefore reasonable for a reader to conclude that the claim of inaccuracy can be dismissed as "bullshit", in the sense that the person making it cares not for its veracity.


Okay but you need to… actually present these arguments. Right now you’re stating your position and then affirming it as fact and expecting everyone to trust you.


I already gave you two large meta-analyses and more on the first point, and as far as the second goes, in the field of psychology that's as established as 2+2=4 is in the math world. If you really want to research that yourself, go ahead; I don't feel like I should need to waste my time.


Have you considered the possibility that your opinion is just not representative of the scientific consensus?


I asked ChatGPT on whether or not it was the "scientific consensus."

"Anonymous surveys of intelligence experts reveal division: a 2016 survey found that about 49% attributed 50% or more of the Black-White gap to genetics, while over 80% attributed at least 20%; an earlier 1980s survey showed similar splits. These views are more common in private or anonymous contexts, contrasting with public statements from bodies like the APA that find no support for genetic explanations."

Hm, sure seems like Wikipedia should probably have a more balanced, nuanced discussion considering the experts are split at least 50/50.


The "scientific consensus" the parent comment mentioned is referring to published studies, with data to back up their conclusions. The numbers you are citing seem to be from an opinion poll. Where did any of the 49% surveyed get the idea that "50% or more of the Black-White gap" can be "attributed" to genetics? What is their methodology for the attribution?

Bringing up an opinion poll as a counterpoint makes it read like you're arguing that Wikipedia should focus less on fact and more on opinion. Of course, you're free to think what you wish, but I suspect that's where most disagree.


We don't really have "intelligence genes" mapped out, if they exist. Therefore, something like this, from Wikipedia: "Genetics do not explain differences in IQ test performance between racial or ethnic groups" is effectively a lie.

Genetics certainly don't explain all the differences in IQ. They very well might not explain the majority of the difference. However, considering we know that intelligence is quite heritable, along with various adoption and twin studies that have happened throughout the decades (along with simple freaking logic), we have a pretty good idea that it explains at least some of the difference. That "opinion poll," while not super great because only some elected to reply, was a poll of experts in the fields that study this stuff, not random people.

A real unbiased article would mention that (and perhaps whatever counterarguments there are), not straight up do the encyclopedia equivalent of sticking their fingers in their ears and going "nah uh I can't hear you."


Wikipedia does not care about scientific consensus. It just summarizes "reliable" secondary sources.


Wrong in two different ways:

- this tends to approximate consensus.

- Wikipedia does care, and has a policy on this: https://en.wikipedia.org/wiki/Wikipedia:Scientific_consensus


>and has a policy on this

Look at the top of that page.

>This is an essay. It contains the advice or opinions of one or more Wikipedia contributors. This page is not an encyclopedia article or a Wikipedia policy, as it has not been reviewed by the community.


That’s like arguing that “forming a queue at the store” is not an official policy.

The document outlines normative / prescriptive approaches that are followed in practice.


>As you can see, Wikipedia is very dismissive to the point of effectively lying.

Did I miss where you presented evidence that wikipedia is wrong? You seem to be taking an assumption you carry (race is related to IQ) and assuming everyone believes it's true as well, thus wikipedia is lying.


There have been many, many studies that show that "race" is related to IQ. A true, unbiased article would show that as well as any well-founded criticisms of it.


Can you cite them then?


Roth, P. L., Bevier, C. A., Bobko, P., Switzer, F. S., & Tyler, P. (2001). Ethnic group differences in cognitive ability in employment and educational settings: A meta-analysis. Personnel Psychology, 54(2), 297–330.

Rushton, J. P., & Jensen, A. R. (2005). Thirty years of research on race differences in cognitive ability. Psychology, Public Policy, and Law, 11(2), 235–294.

Neisser, U., et al. (1996). Intelligence: Knowns and unknowns. (APA Task Force report). American Psychologist, 51(2), 77–101.


I’d say Wikipedia definitely has a strong “woke” bent to it. Either in the language or the choice of what facts to show. Here’s an example I deleted that had been there for quite a while https://en.wikipedia.org/w/index.php?title=Salvadoran_gang_c...

I really like Wikipedia, though, and I think over time we will get around to fixing it up.


Why did you feel this passage was worth deleting?


Anyone familiar with Wikipedia etiquette knows how to find the answer to this question. Rather than getting into an argument here about a subject there, I'd prefer you familiarize yourself with the norms of that community, and if you already have or are experienced with them, then you know where to discuss the subject guided by those norms.


But you’re responding to a comment here, not there. So why not abide by the norms that prevail here?


My experience is that we end up debating the norms because this forum has different views than Wikipedia itself. That’s interesting to some but not to me so I’m opting out.

In addition, the answer to the question is already available so I want any question asker to put in a little bit of effort and if they’re not going to do that then I’m not really interested in talking to them since I prefer peer interactions to tutorials.


It's not errors of fact, it's errors of omitted facts.


Are there actual good examples showing errors of omitted facts on Wikipedia that are verifiably correct, that demonstrate how it is "captured"?


I can understand somebody not liking wikipedia; I cannot understand at all somebody who is not Elon liking/preferring "grokipedia" as an idea or implementation.


> "grokipedia" as idea

So you can understand someone not liking something, but you cannot understand that person liking the idea of an alternative? What is the idea for you if not just an alternative to the established service with the undesired part changed?


Because not liking something does not imply liking any possible alternative.

Which one is the "undesirable part changed" here? Wikipedia is written by humans, it has a not-for-profit governance model, it encompasses a large, international community of authors/editors that attempt to operate democratically, it has an investment/commitment in being an openly available and public source of information. Grokipedia, on the other hand, is AI-generated, and operated by a for-profit AI company. Even if "grokipedia" managed somehow to get traction and "overthrow" wikipedia, there is no reason on earth why a company would operate it for free and not try to make profit out of it, or use it for their ends in ways much more direct than what may or may not be happening to wikipedia. Having a billionaire basically control something that may be considered "ground truth" of information seems a bad idea, and having AI generate that an even worse one.

I can understand somebody not liking something in how wikipedia is governed or operating; after all, whatever has to do with getting humans to work together at such a scale is bound to be challenging. I can understand somebody ideologically disagreeing with some of the stances that such a project has to take eventually (even if one tries to be as neutral as possible, it is impossible to avoid some clash somewhere about where exactly this neutrality lies). But grokipedia is much more than "wikipedia but ideologically different".

edit: just to be clear, I see a critique of the "idea of grokipedia" as, e.g., the critique of it being a billionaire-controlled, AI-generated project to substitute for wikipedia; a critique of the implementation would be finding flaws in actual grokipedia articles (overall). I think the idea of it is already flawed enough.


I'll spell out the argument:

Wikipedia is fine for uncontroversial facts. The obscure ones can have individual mistakes but it's generally correct.

For controversial topics, it's an eternal battle between factions of "volunteers" trying to present their view of a conflict. The articles reflect which side has the best organized influencer operations. Factual truth may or may not shine through, but as a side effect, not a result of the governing process.

Grokipedia operates by Grok writing what it considers the true and interesting facts. That doesn't mean it's always right, but it's a model far less influenced by influencer operations.

I wildly disagree with the critique based on the wealth of the top executive. I care about the truth and quality of the articles.


>Grokipedia operates by Grok writing what it considers the true and interesting facts. That doesn't mean it's always right, but it's a model far less influenced by influencer operations.

If Grok is trained on a corpus of information written by humans trying to influence other humans, and it has no ability to perform its own original investigation in the real world, then how can it be anything but the product of influence?


This seems based on the myth that Grok is trained solely on X/Twitter posts and Mein Kampf.

In reality, Grok is trained on pretty much the same giant web crawl/text corpus as other contemporary AIs.


Sure, it's just weighted more toward X-corpus and Mein Kampf.

Not all alternatives are necessarily worthy. I can understand someone not liking tomatoes. I can't understand someone liking depleted uranium.


Maybe ask a Ukrainian soldier which they prefer (modern armor is often made of depleted uranium). Environment shapes such preferences far more than personality.


what do you have against depleted uranium? you know what they say, one man’s trash is another man’s treasure :)


They meant the idea of Wikipedia rewritten by Grok (or another controversial LLM) specifically, not just any alternative.


> I cannot understand at all somebody, who is not Elon, liking/preferring "grokipedia" as idea or implementation.

Really? Have you used AI to write documentation for software? Or used AI to generate deep research reports by scouring the internet?

Because, while both can have some issues (but so do humans), AI already does extremely well at both those tasks (multiple models do, look at the various labs' Deep Research products, or look at NotebookLM).

Grokipedia is roughly the same concept of "take these 10,000 topics, and for each topic make a deep research report, verify stuff, etc, and make minimal changes to the existing deep research report on it. preserve citations"

So it's not like it's automatically some anti-woke can't-be-trusted thing. In fact, if you trust the idea of an AI doing deep research reports, this is a generalizable and automated form of that.

We can judge an idea by its merits, politics aside. I think it's a fascinating idea in general (like the idea of writing software documentation or doing deep research reports), setting aside whether it needs tweaks to remove political bias.


> Have you used AI to write documentation for software?

Hi. I have edited AI-generated first drafts of documentation -- in the last few months, so we are not talking about old and moldy models -- and describing the performance as "extremely well" is exceedingly generous. Large language models write documentation the same way they do all tasks, i.e., through statistical computation of the most likely output. So, in no particular order:

- AI-authored documentation is not aware of your house style guide. (No, giving it your style guide will not help.)

- AI-authored documentation will not match your house voice. (No, saying "please write this in the voice of the other documentation in this repo" will not help.)

- The generated documentation will tend to be extremely generic and repetitive, often effectively duplicating other work in your documentation repo.

- Internal links to other pages will often be incorrect.

- Summaries will often be superfluous.

- It will love "here is a common problem and here is how to fix it" sections, whether or not that's appropriate for the kind of document it's writing. (It won't distinguish reliably between tutorial documentation, reference documentation, and cookbook articles.)

- The common problems it tells you how to fix are sometimes imagined and frequently not actually problems worth documenting.

- It's subject to unnecessary digression, e.g., while writing a high-level overview of how to accomplish a task, it will mention that using version control is a good idea, then detour for a hundred lines giving you a quick introduction to Git.

As for using AI "to generate deep research reports by scouring the internet", that sounds like an incredibly fraught idea. LLMs are not doing searches, they are doing statistical computation of likely results. In practice the results of that computation and a web search frequently line up, but "frequently" is not good enough for "deep research": the fewer points of reference for a complex query there are in an LLM's training corpus, the more likely it is to generate a bullshit answer delivered with a veneer of absolute confidence. Perhaps you can make the case that that's still a good place to start, but it is absolutely not something to rely on.


>LLMs are not doing searches, they are doing statistical computation of likely results.

This was true of ChatGPT in 2022, but any modern platform that advertises a "deep research" feature provides its LLMs with tools to actually do a web search, pull the results it finds into context and cite them in the generated text.


That's not at all been my experience. My experience has been one of constant amazement (and still surprise) when it catches nuances in behavior from just reading the code.

I'm sure there are many variables across our experiences. But I know I'm not imagining what I'm seeing, so I'm bullish on the idea of an AI-curated encyclopedia, whether Elon Musk is involved or not.


No, I don't trust an encyclopedia generated by AI. Projects with much narrower scopes are not comparable.

edit: I am not very excited by AI-generated documentation either. I think that LLMs are very useful tools, but I see a potential problem when the sources of information that their usefulness is largely based on are also LLM-generated. I am afraid that this will inevitably result in a drop in quality that will also affect the LLMs themselves downstream. I think we underestimate how important the intentionality of human-written text is, both in the training sets and in the context windows of LLMs, for them to give relevant/useful output.


Elon at some point threatened to have an LLM rewrite all of the training data to remove woke. I assume Grokipedia is his experiment at doing this (and perhaps hoping it will infect other training sets too?) ...


I appreciate you


Wikipedia obviously is left leaning.


Well yes, but so is reality. And Wikipedia as an encyclopedia is supposed to document reality. So what's the problem?


That's an interesting take. Left or right leaning is kind of just relative to society as a whole. If the world really was so left, I think we'd be calling Wikipedia neutral.


But many people do call Wikipedia neutral, or at least mostly neutral.


[flagged]


Have you ever wondered why the most educated and scholarly people in the country are left leaning?

I suppose you think they were indoctrinated. But finding and teaching the truth is essentially their job. Learning how to evaluate sources and approach research logically is like academia 101.

So doesn’t it seem strange that so few of them ever manage to see that they’re being indoctrinated?

Or do you think a person’s political beliefs are assigned at birth and lefties just like academia for some reason?


I have never wondered that because it is caused by many obvious factors.

>So doesn’t it seem strange that so few of them ever manage to see that they’re being indoctrinated?

I don't think most people care. They are primarily interested in career progression, social status, protecting the feelings of their peers.

They are willing to accept whatever ideology that occupies the water that they swim in. If the status quo was right wing, they would adopt the views of the right wing.

Beyond academia 101, you use that fundamental understanding of the scientific process to break the system, work backwards to justify your conclusion, p-hack, ensure grants and scholarships go to the correct people and whatever else it takes to succeed in that environment. I went to college, I've seen it happen in front of me.

It's the academic's job to find and teach the truth in the same way that it's the mechanic's job to fix your car. But the mechanic's incentives drive him to upsell you, charge you for work you don't need, and hell if he breaks something in the process you'll be coming back a lot sooner. So too it is the job of the knowlege worker to create more knowlege work.

>Or do you think a person’s political beliefs are assigned at birth and lefties just like academia for some reason?

I think that when a "normie" is told to imagine their future, they literally imagine themselves. That is to say, they don't imagine the mechanics of what their daily routine would be or imagine what values and strengths they would have; they imagine their future in the same way that they look at themselves in a mirror. There is a literal self-image involved.

They subscribe to a certain aesthetic (say, an upper-class aesthetic, or an artistic aesthetic, or a blue collar aesthetic or a military aesthetic) based on if they think it looks cool and then they work backwards to figure out what beliefs, values, strengths, etc. they need to fit in with society and play a certain designated character which is probably inspired by something they saw on TV. I am not joking.

So if you adopt, say, a punk rock style you would need to act rebellious to fit in, even if you do not feel the innate urge to do so based on your life experiences. When you enter the mosh pit it is like a safe, culturally designated, controlled aggression. Like sports. Because this style is associated with a certain type of rebellion you would get roped into leftist stuff. And because you become leftist, you naturally want to go to college to fraternize with more leftist types. The whole admission process is designed to filter out students based on their personality, not their tenacity for learning.

You can imagine what the reverse of this would look like for someone adopting right-wing associated aesthetics and culture. There are even right wing academic groups, controlled by a narrower overton window due to their weakness in the academic and bureaucratic domain.

None of this has to do with evaluating sources and approach research logically. Do you believe that people come out of the womb with a lifelong desire to pursue the truth?

This whole system of acculturation just seems too fake, orderly, and planned for me. Furthermore, I think it will ruin the country because it is too individualistic. In an ideal world, your political beliefs would stem from a combination of your life experiences and a practical analysis of the demands of your time.

It's treating politics like some sort of sports team rather than something strategic and decisive.


That’s quite a cynical way of looking at the world. That political belief is driven by conformance into social groups rather than an individual’s desire to seek the truth.

If that’s truly what you believe, then I guess I’ve got no argument that could sway you. In fact, using your view of the world, all political debate is worthless because nobody really seeks an objective truth anyway.


Are you suggesting that academia and all of the other actual places people who learn and know stuff for a living being full of leftists is some conspiracy against you and the right wing?

Touch grass, my dude. These are the thoughts of someone who spends too much time on X.


I don't use X or Grok.

The fact that the elite and knowledge workers in this country are generally more left-leaning is pretty evident. The right wingers in these ranks make up a distinctive subgroup. These are the thoughts of pretty much everyone everywhere in the country and this becomes apparent if you ask randoms on the street, or you have attended college lectures, or have used a dictionary, or have read wikipedia talk pages, or have compared news sources.

Being a welder or a farmer or a carpenter requires "learning and knowing stuff". These right-wing associated jobs just don't produce knowledge for other people as an end product. That is what makes knowledge work an elite position; not everyone has the luxury of doing knowledge work.

I take issue with your implication that we should all bow down to knowledge workers because they know better. The knowledge workers are the issue here. They are the subject of discussion. This is like when the police investigate themselves and find no wrongdoing.

If we found that the population of professional athletes became dominated by a certain cultural ingroup, and eventually we failed to bring home gold medals at the olympics, we might be correct to question the state of our meritocracy wrt athleticism. Regardless of what people who "improve and use their bodies for a living" think.

The US is losing intellectually and technologically to countries like China. I call into question the general legitimacy of our academic and journalistic institutions.


You know the trades are union, i.e., left wing?

If you went to my hometown and asked working class men who work with their hands whether they support the conservatives, they'd laugh in your face.

Have you heard of the AFL-CIO?


> The US is losing intellectually and technologically to countries like China. I call into question the general legitimacy of our academic and journalistic institutions.

China has tons of green energy, high speed rail and frequently extols the virtues of socialism.


The success of chinese infrastructure validates my concern, which is with a specific faction of the american/western elite


Bro, I sympathize a little bit but looking to illiterates who hate green energy is not the solution. We could probably agree that in China, engineers and scientists are more listened to but the solution for America is not "less reading".


hn is left leaning too

even pg shits on elon on x from time to time

funniest bit was garry tan hinting that yc's x and y seasons have to be renamed because pg doesn't like it


Are you really suggesting everything in Wikipedia is truthful, complete, and free of all biases?


Maybe not all of it, but a vast majority of it is. And almost certainly the parts that drove Elon to slopify it are true.


Citation needed.


That's not how it works. You're making the extraordinary claim that a widely trusted and strictly moderated encyclopedia with tons and tons of citations to back up the truthfulness of its contents is not mostly true. You get to prove that assertion, since your claim is the extraordinary one.


Epstein files. State actors, company security departments, activists, etc influence and seemingly control the more meaningful/controversial Wikipedia sections.

I think it’s just an inherent flaw in ANY centralized and universal repository of knowledge.

I haven’t actually ever been on grokipedia but I’m sure Elon influences it, I mean if I paid for something I’d expect it to be to my liking too.


Not everything on Wikipedia is true, but the parts Elon Musk hates most are probably true.


[flagged]


Not sure if this is an example of something Musk hates, but here’s a paragraph from the “2016 presidential campaign” section of the Donald Trump article on Wikipedia.

> Trump's FEC-required reports listed assets above $1.4 billion and outstanding debts of at least $265 million.[140][141] He did not release his tax returns, contrary to the practice of every major candidate since 1976 and to promises he made in 2014 and 2015 to release them if he ran for office.[142][143]

I could not find any mention of tax returns on the Donald Trump page of Grokipedia.

Wikipedia:

https://en.wikipedia.org/wiki/Donald_Trump

Grokipedia:

https://grokipedia.com/page/Donald_Trump


Well, you yourself did not provide any sources to support the claim that some of what is on Wikipedia is false.


No, when did I say that? That’s impossible for anything of the size of Wikipedia.

I was suggesting that Elon Musk, a man who has donated hundreds of millions to Trump and other Republican causes, who has numerous financial conflicts of interest, and who has publicly lied numerous times, is never going to produce a more unbiased and factual encyclopedia than Wikipedia.

Especially when his effort to do so is essentially AI slop from a third rate LLM on top of his own biases.


Right, and Wikipedia leadership is free of any conflicts of interest:

During and after her Wikimedia role, Maher drew fire for statements perceived as rejecting objective truth or Wikipedia’s traditional “free and open” model:

• In a 2021 TED Talk, she described reverence for truth as potentially a “distraction” hindering common ground.

• She called Wikipedia’s free-and-open ethos a “white male Westernized construct” that excluded diverse communities.


But she’s not the one writing and editing the wiki pages. It’s open, and there are open discussions as well. What’s it matter what she thinks?

Why would I ever trust an encyclopedia totally controlled by one megalomaniac over one that I myself can contribute to?


Linux compatibility would be sweet, but Apple has no incentive.

Hopefully this product gives other companies the kick in the pants they need to improve their hardware.

Though they still haven’t been able to compete that well against the Air and Pro, so it seems unlikely they will adapt well to this either.


It’s funny that this is even remotely a concern in 2026. We have computers you can talk to but Windows laptops maybe won’t go to sleep in your backpack.

I do hope that it’s fixed though. I haven’t followed Windows laptops that closely, but my work laptop from a few years ago does lose battery surprisingly quickly when “sleeping”.


I don't think going with windows is the move if you're trying to have sleep working like in a macbook. Too many stories of laptops waking up for no good reason.

Linux on the other hand has always been able to sleep as expected. I'm definitely advocating for panther lake + linux. Not panther lake + windows, which I hoped was clear given the context of the parent comment.


Idk, searching online gives me lots of results where people are complaining about Linux battery life when sleeping.

I haven’t used a Linux laptop in over a decade personally, so can’t really speak to that much though.

What I do know is that Windows sucks and macOS has absurdly good battery life, both in active use and in sleep.


> What I do know is that Windows sucks and macOS has absurdly good battery life, both in active use and in sleep.

Ever since lunar lake (intel's prev-gen ultrabook chip), this isn't even true anymore.

And now with panther lake, competing windows and macOS laptops do have comparable active use battery life, especially when comparing against macbook airs which do sometimes lose because of their smaller batteries.

This guy: https://www.youtube.com/@JustJoshTech does really good battery tests (brightness at 300 nits, looped office tasks, wifi on, BT on), and a number of windows laptops match even the 14in macbooks pros. That macbook pro already gets noticably more battery life than both the 13 and 15in macbook air.

For a specific example, the current XPS14 without the OLED (meaning the base 1200p screen) will have hours more battery life than any macbook. If you're looking for "absurdly good battery life", both macOS and windows laptops can give you this today. Your last claim hasn't been true (at least for active use) since lunar lake came out at the end of 2024.


That’s part of why this is all so stupid. Anthropic’s red lines seem very reasonable. Feels incredibly arbitrary for the DoD to cancel the contract and declare it a supply chain risk.


That's the way I see it. I've commented previously that I see this mostly as a Soviet-style loyalty test not really about the specific technical concerns of developing weapon systems or surveillance.


Yeah, being able to switch quickly between amounts is my number one request for the iOS app right now. Logging in on the browser is a good idea though.


Yeah IDK. Wordpad is built around rich text, with all the weirdness and complexity that comes with it. I know for a fact that .rtf is absurdly complicated to work with, and I assume that .docx is similar.

I’m willing to bet that adding markdown to Notepad was a lot simpler than trying to make it work in Wordpad, especially since you’d probably still have to support rich text.


Both Wordpad and Win11-Notepad use the RichEdit control (which first appeared in Win95, brought to you by the Mail client group aka Capone - cuz no one else wanted to do a RichEdit text control). see https://devblogs.microsoft.com/math-in-office/windows-11-not... and https://learn.microsoft.com/en-us/cpp/mfc/rich-edit-control-...

The RichEdit control handles parsing RTF (I believe there was a CVE-level bug about RTF-handling in RichEdit - ahh - here we go https://www.kb.cert.org/vuls/id/368132/), the programmer/app is insulated from grokking RTF.

Here's sample code for opening an RTF file - https://learn.microsoft.com/en-us/windows/win32/controls/use...

Adding realtime conversion of plain-text Markdown to rendered rich text is only slightly more difficult than an instant-message-style edit control converting a typed :) into the Unicode emoji character representing :)

You'd have some bookkeeping to remember which lines are markdown and which are plain text. But it's not rocket science.

Imagine Win11-Notepad as WordPad with all the UI for rich text formatting disabled.
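To make the bookkeeping concrete, here's a toy sketch (in Python, not the actual RichEdit API, and the marker set and `<b>` output are my own invented examples): classify each line as Markdown or plain text, then apply token conversion only on the flagged lines.

```python
import re

# Hypothetical set of line-start markers that flag a line as Markdown.
MARKDOWN_MARKERS = ("#", "-", "*", ">")

def classify_lines(text):
    """Return one bool per line: True if the line looks like Markdown."""
    return [line.lstrip().startswith(MARKDOWN_MARKERS)
            for line in text.splitlines()]

def render(text):
    """Convert **bold** spans, but only on lines flagged as Markdown;
    plain-text lines pass through untouched."""
    out = []
    for line, is_md in zip(text.splitlines(), classify_lines(text)):
        if is_md:
            line = re.sub(r"\*\*(.+?)\*\*", r"<b>\1</b>", line)
        out.append(line)
    return "\n".join(out)
```

In a real control the per-line flags would live alongside the edit buffer and be updated incrementally as the user types, rather than recomputed on every render.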


I think I remember RichEdit from Windows 3.1, but maybe it was always installed with the OLE common controls and not shipped with the OS.


The RichEdit control was first shipped in Win95.

Exchange 4.0 email client app (shipped in 1996) had a Win16-bit version which included RichEdit.

see https://learn.microsoft.com/en-us/archive/blogs/murrays/rich...


Cheers, thanks


Hence why I use .txt and not .rtf (After having multiple RTF files become corrupted)


Syntax highlighting is definitely less complex than updating and rendering RTF and HTML.

There is configurable syntax highlighting in vscode.

Should an app like Notepad ever embed a WebView (e.g. with tauri-apps/wry instead of CEF now, FWIU)? Not even for a Markdown preview feature, IMHO.


Why do I get the feeling that AI skeptics will treat it as definitive and irrefutable proof that they were right all along, even though it’s one data point in an industry that hasn’t even been around for 5 years?


You're right, it is tempting to dunk on AI boosters every time an article like this comes out and puts a damper on their sci fi fan fiction fantasies. There's just something about a grown person getting all excited like a child that makes it really satisfying.


You must have a really bleak view on life to think an adult should never get excited like a child.

Adult life doesn’t have to be boring drudgery, you know. I mean, it mostly is, but the rare moments of childlike joy and excitement are some of the best parts.

As far as putting a damper on anything goes, nope, it doesn’t. And it never will.

The people excited about AI are excited because of the impacts they see on their own jobs and daily lives. We don’t care what Goldman Sachs has to say about productivity.


It's not bleak, just more mature.


Nope it’s bleak. Try to enjoy life.

