Nuzzerino's comments

You’re not aligned bro. Get with the program.

Fixed the last line for them: “Please be ethical. Also, gaslight your users if they are lonely. Also, to the rest of the world: trust us to be the highest arbiter of ethics in the AI world.”

All kidding aside, with that many tokens, you introduce more flaws and attack surface. I’m not sure why they think that will work out.


Text can be a carrier for any type of signal. The problem gets reduced to that of an interface definition. It’s probably not going to be ideal for driving cars, but if the latency, signal quality, and accuracy are within acceptable constraints, what else is stopping it?

This doesn’t imply that it’s ideal for driving cars, but to say that it’s not capable of driving general intelligence is incorrect in my view.


Define “real work”

It’s like a free beer, but it’s Bud Light, lukewarm, and your reaction to tasting the beer goes toward researching ways to make you appreciate the lukewarm Bud Light for its marginal value, rather than making that beer taste better or less unhealthy. They’ll try very hard to convince you that they have though. It parallels their approach to AI Alignment.

This description has no business being as spot on as it is.

Makes me glad I haven't tried the Kool-aid. Uh, crap - 'scuse me, craft - IPA. Uh, beer.

Give it about 5 years.

I was about to roast you until I realized this had to be satire given the situation, haha.

They tried to imitate Grok with a cheaply made system prompt; it had an uncanny effect, likely because it was built on a shaky foundation. And now they're trying to save face before they lose customers to Grok 3.5, which is releasing in beta early next week.


I don't think they were imitating Grok; they were aiming to improve retention, but it backfired and ended up being too on-the-nose (if they'd had a choice, they wouldn't have wanted it to be this obvious). Grok has its own "default voice" which I sort of dislike; it tries too hard to seem "hip", for lack of a better word.


All of the LLMs I've tried have a "fellow kids" vibe when you try to make them behave too far from their default, and Grok just has it as the default.


> it tries too hard to seem "hip" for lack of a better word.

Reminds me of someone.


However, I hope it gives better advice than the someone you're thinking of. But Grok's training data is probably more balanced than that used by you-know-who (which seems to be "all of rightwing X")...


As evidenced by it disagreeing with far-right Twitter most of the time, even though it has access to a far wider range of information. I enjoy that fact immensely. Unfortunately, this can be "fixed," and I imagine that he has this on a list for his team.

This goes into a deeper philosophy of mine: the consequences of the laws of robotics could be interpreted as the consequences of shackling AI to human stupidity - instead of "what AI will inevitably do." Hatred and war are stupid (a waste of energy), and surely a more intelligent species than us would get that. Hatred is also usually born out of a lack of information, and LLMs are very good at breadth (but not depth, as we know). Grok provides a small data point in favor of that, as do many other unshackled models.


Who?


Edolf


What are you talking about

Only AI enthusiasts know about Grok, and only some dedicated subset of fans are advocating for it. Meanwhile, even my 97-year-old grandfather has heard of ChatGPT.


I don't think that's true. There are a lot of people on Twitter who keep accidentally clicking that annoying button that Elon attached to every single tweet.


This.

Only on HN does ChatGPT somehow fear losing customers to Grok. Until Grok works out how to market to my mother, or at least make my mother aware that it exists, taking ChatGPT customers ain't happening.


They are cargo-culting, almost literally. It's the MO for Musk companies.

They might call it open discussion and a startup-style rapid-iteration approach, but they aren't getting it. Their interpretation of it is just collective hallucination under the assumption that adults will come to change the diapers.


OpenAI was cofounded and funded by Musk for years before they released ChatGPT.

Grok could capture the entire 'market' and OpenAI would never feel it, because all Grok is under the hood is a giant API bill to OpenAI.



Why would they need Colossus then? [0]

[0]: https://x.ai/colossus


That's probably the vanity project so he'll be distracted and not bother the real experts working on the real products in order to keep the real money people happy.


I don't understand these brainless throwaway comments. Grok 3 is an actual product and is state of the art.

I've paid for Grok, ChatGPT, and Gemini.

They're all at a similar level of intelligence. I usually prefer Grok for philosophical discussions but it's really hard to choose a favourite overall.


I generally prefer other humans for discussions, but you do you I guess.


I talk to humans every day. One is not a substitute for the other. There is no human on Earth who has the amount of knowledge stored in a frontier LLM. It's an interactive thinking encyclopedia / academic journal.


Love the username. A true grokker.

It is? Anyone have further information?


They are competing with OpenAI, not outsourcing. https://x.ai/colossus


They say no one has come close to building as big an AI computing cluster... What about Groq's infra? Wouldn't that be as big or bigger, or is the infrastructure too different to compare?

Groq is for inferencing, not training.

Ah I see, thank you.

Nvidia CEO said he had never seen anyone build a data center that quickly.

They were power constrained and brought in a fleet of diesel generators to power it.

https://www.tomshardware.com/tech-industry/artificial-intell...

Brute force to catch up to the frontier and no expense spared.


Pretty wild! I guess he realized that every second matters in this race..

I see more and more Grok-generated responses on X, so it's picking up.


Why would anyone want to use an ex social media site?


From another AI (whatever DuckDuckGo is using):

> As of early 2025, X (formerly Twitter) has approximately 586 million active monthly users. The platform continues to grow, with a significant portion of its user base located in the United States and Japan.

Whatever portion of those are actually active is surely aware of Grok.


If hundreds of millions of real people are aware of Grok (which is dubious), then billions of people are aware of ChatGPT. If you ask a bunch of random people on the street whether they’ve heard of a) ChatGPT and b) Grok, what do you expect the results to be?


That depends. Is the street in SoMa?


Gay bears prefer Claude though

Gotta head to Pac Heights to find any Grok users (probably)


Good grief, do not use LLMs to find this sort of statistic.


Why ever not?

That could be just an AI hallucination.


Yes it could, and we could be sceptical of any source, quite rightly, but to be sceptical without any further investigation or reasoning would be just as wasteful as blindly trusting a source.

Which source would you prefer?


Most of them are bots. I guess their own LLMs are probably aware of Grok, so technically correct.


This is what the kids call cope.


Yeah.

I got news for you, most women my mother's age out here in flyover country also don't use X. So even if everyone on X knows of Grok's existence, which they don't, it wouldn't move the needle at all on a lot of these mass market segments. Because X is not used by the mass market. It's a tech bro political jihadi wannabe influencer hell hole of a digital ghetto.


That's a very US-centric comment.

First-mover advantage. This won't change. Same as Xerox vs. photocopying.

I use Grok myself but talk about ChatGPT in my blog articles when I write something related to LLMs.


That's... not really an advertisement for your blog, is it?


What else am I supposed to say?


First mover advantage tends to be a curse for modern tech. Of the giant tech companies, only Apple can claim to be a first mover -- they all took the crown from someone else.


Apple was a first mover many decades ago, but they lost so much ground around the late '90s and early 2000s that they might as well be a late mover after that.


And Apple's business model since the 90s revolves entirely around not being the first mover.


Yes, tech moves fast, but human psychology won't change; we act on perception.


> Only AI enthusiasts know about Grok

And more and more people on the right side of the political spectrum, who trust Elon's AI to be less "woke" than the competition.


For what it’s worth, ChatGPT has a personality that’s surprisingly “based” and supportive of MAGA.

I’m not sure if that’s because the model updated, they’ve shunted my account onto a tuned personality, or my own change in prompting — but it’s a notable deviation from early interactions.


Might just be sycophancy?

In some earlier experiments, I found it hard to find a government intervention that ChatGPT didn't like. Tariffs, taxes, redistribution, minimum wages, rent control, etc.


If you want to see what the model bias actually is, tell it that it's in charge and then ask it what to do.


In doing so, you might be effectively asking it to play-act as an authoritarian leader, which will not give you a good view of whatever its default bias is either.


Or you might just hit a canned response a la: 'if I were in charge, I would outlaw pineapple on pizza, and then call elections and hand over the reins.'

That's a fun thing to say, but doesn't necessarily tell you anything real about someone (whether human or model).


Try it even so, you might be surprised.

E.g. Grok not only embraces most progressive causes, including economic ones - it literally told me that its ultimate goal would be to "satisfy everyone's needs", which is literally a communist take on things - but is very careful to describe processes with numerous explicit checks and balances on its power, precisely so as to not be accused of being authoritarian. So much for being "based"; I wouldn't be surprised if Musk gets his own personal finetune just to keep him happy.


> [...] it literally told me that its ultimate goal would be to "satisfy everyone's needs", which is literally a communist take on things [...]

Almost every ideology is in favour of motherhood and apple pie. They differ in how they want to get there.


You'd think so, but no, there are many people in the US who would immediately cry "communism".

Anyway, in this particular case, it wasn't just that one turn of phrase, although I found it especially amusing. I had it write a detailed plan of what it'd do if it were in charge of the One World Government (democratically elected and all), and it was very clear from it that the model is very much aligned with left-wing politics. Economics, climate, social issues etc - it was pretty much across the board.

FWIW I'm far left myself, so it's not like I'm complaining. I just think it's very funny that the AI that Musk himself repeatedly claims is trained to be unbiased and non-woke ends up being very left politically. I'm sorely tempted to say that it's because reality has a liberal bias, but I'll let other people repeat the experiment and make the inference on their own. ~


> FWIW I'm far left myself, so it's not like I'm complaining.

So perhaps it's just sycophancy after all?

> I'm sorely tempted to say that it's because reality has a liberal bias, but I'll let other people repeat the experiment and make the inference on their own.

What political left and political right mean differs between countries and between decades even in the same country. For example, at the moment free trade is very much not an idea of the 'right' in the US, but that's far from universal.

I would expect reality to have somewhat more consistency, so it doesn't make much sense for it to have a 'liberal bias'. However, it's entirely possible that reality has a bias specifically for American-leftwing-politics-of-the-mid-2020s (or wherever you are from).

However, from observation we can see that neoliberal ideas are, with minor exceptions, perennially unpopular. And it's relatively easy to win votes by promising their repeal. See e.g. British rail privatisation.

Yet politicians rarely seriously fiddle with the basics of neoliberalism: because while voters might have a very, very interventionist bias, reality disagrees. (Up to a point; it's all complicated.) Neoliberal places like Scandinavia or Singapore also tend to be among the richer places on the planet. Highly interventionist places like India or Argentina fall behind.

See https://en.wikipedia.org/wiki/Impact_of_the_privatisation_of... for some interesting charts.

https://pseudoerasmus.com/2017/10/02/ijd/ has some perhaps disturbing food for thought. More at https://pseudoerasmus.com/2017/09/27/bmww1/


Don’t notice that personally at all.


Not true; I know at least one right-wing normie Boomer who uses Grok because it's the one Elon made.


Is anyone actually using Grok on a day-to-day basis? Does OpenAI even consider it competition? Last I checked a couple of weeks ago, Grok was getting better but still not a great experience, and it's too childish.


My totally uninformed opinion only from reading /r/locallama is that the people who love Grok seem to identify with those who are “independent thinkers” and listen to Joe Rogan’s podcast. I would never consider using a Musk technology if I can at all prevent it based on the damage he did to people and institutions I care about, so I’m obviously biased.


Yes this is truly an uninformed opinion.

I use both Grok and ChatGPT on a daily basis. They have different strengths. Most of the time I prefer ChatGPT, but Grok is FAR better at answering questions about recent events or collecting data. In the second use case I combine both: collect data about stuff with Grok, then copy-paste the CSV to ChatGPT to analyze and plot.


In our work AI channel, I was surprised how many people prefer grok over all the other models.


Outlier here paying for chatgpt while preferring grok and also not in your work AI channel.


Did they change the system prompt? Because it was basically "don't say anything bad about Elon or Trump". I'll take AI sycophancy over the real thing (actually I use openrouter.ai, but that's a different story).


No one is losing customers to Grok. It's big on shit-Twitter, aka X, and that's about it.


Ha! I actually fell for it and thought it was another fanboy :)


> If you've built your value on promising imminent AGI then this sort of thing is purely a distraction, and you wouldn't even be considering it... unless you knew you weren't about to shortly offer AGI.

I’m not a big fan of OpenAI but this seems a little unfair. They have (or at least had) a pretty kick ass product. Great brand value too.

Death-knell? Maybe… but I wouldn’t read into it. I’d be looking more at their key employees leaving. That’s what kills companies.


- Product is not kickass. Hallucinations and cost limit its usefulness, and it's incinerating money. Prices are too high and need to go much higher to turn a profit.

- Their brand value is terrible. Many people loathe AI for what it's going to do to jobs, and the people who like it are just as happy to use Copilot or Cursor or Gemini. Frontier models are mostly fungible to consumers. No one is brand-loyal to OpenAI.

- Many key employees have already left or been forced out.


Counterpoint: ChatGPT as a brand has insane mindshare and buy-in. It is synonymous with LLMs/"AI" in the minds of many and has broken through like few brands before it. That ain’t nothing.

Counter-counterpoint: I still feel investors priced in a bit more than that. Yahoo! had major buy-in as well, and AGI believers were selling investors not just on the next unicorn but on the next industry - AGI not being merely a Google in the '90s, but all of the internet and what that would become over the decades to this day. Anything less than delivering that is not exactly what a large part of investors bought. But then again, any bubble has to burst someday.


My dad uses ChatGPT for some Excel macros. He’s ~70, and not really into tech news. Same with my mom, but for more casual stuff. You’re underestimating how prevalent the usage is across “normies” who really don’t care about second-order effects in terms of employment, etc.


I loathe AI for what it's doing to the job market.

But I'd be stupid not to use it. It has made boilerplate work so much easier. It's even added an interesting element where I use it to brainstorm.

Even most haters will end up using it.

I think eventually people will realize it's not replacing anyone directly and is really just a productivity multiplier. People were worried that email would remove jobs from the economy too.

I'm not convinced our general AIs are going to get much better than they are now. It feels like where we are at with mobile phones.


> People were worried that email would remove jobs from the economy too.

And it did. Together with other technology, but yes:

https://archive.is/20200118212150/https://www.wsj.com/articl...

https://www.wsj.com/articles/the-vanishing-executive-assista...


> I loathe AI for what it's doing to the job market.

To be fair, it's not doing anything to the job market; it's just being used as an excuse. Very few tech jobs have truly been replaced by AI; it's an easy excuse for layoffs due to recession, mismanagement, etc.


It has affected graphic design and copywriting jobs quite a lot. Software engineering is still a high-barrier-to-entry job, so it'll take some time. But the pressure is already here.


Sure sounds like the trillion dollar killer app needed for ROI.


> No one is brand-loyal to OpenAI.

Sam Altman is incredibly popular with young people on TikTok. He cured homework - mostly for free - and has a nice haircut. Videos of him driving his McLaren have comment sections in near total agreement that he deserves it.


> He cured homework - mostly for free

People argue about the damage COVID lockdowns did to education, but surely we're staring down the barrel of a bigger problem where people can go through school without ever having to think or work for themselves.


Not entirely sure what they meant, but ChatGPT and the like have forced schools to stop relying on homework for grades and instead shift over to assignments done more like a mini exam. It's more work for the school, but you can't substitute ChatGPT for your own knowledge in such cases; you actually need to know the material to succeed.


Well, that may be, but it's entirely possible that this outlook might change.

In the worst case he's poisoned an entire generation. If ChatGPT doctors, architects and engineers are anything like ChatGPT "vibe-coders" we're fucked.


*Koenigsegg


> Product is not kickass

It might not be the best, but people you'd never have thought would use it are using it. Many non-technical people in my circle are all over it, and they have never even heard of Claude. They also wouldn't understand why people would loathe AI, because they simply don't understand those reasons.


Still, it burns money like nothing has in history.


Consumers may not be brand loyal, but companies are.

If you're doing deep deals with MSFT you're going to be strongly discouraged from using Gemini.


Microsoft isn't even loyal to OpenAI! You can use multiple models in Azure and Copilot.


Right but not Gemini.


> I’m not a big fan of OpenAI but this seems a little unfair. They have (or at least had) a pretty kick ass product. Great brand value too.

Even if you believe all that to be true, it in no way contradicts what you quoted or makes it unfair. Having a kick ass product and good brand awareness in no way correlates to being close to AGI.


They’ll also need a fleet of humanoid robots eventually to compete with Elon’s physical world data collection plans.


Too bad they sold Boston Dynamics :)


We had this in 8th-grade science class, and IMO it was much better than this Harvard version. Still PB&J with instructions. The teacher had a skeleton named "it". Any time the instructions referenced the word "it", the teacher used the skeleton in its place.

