
Surprised they didn't axe them first just based on the name.

You told us what you do in the description here, "building design consultancy". But I clicked the link before reading through. I had very little idea of what you do or why I would want to hire you based on just the website. Those things should be communicated clearly.

I thought everyone realized by now that a digital image, whether made available via blockchain or any other mechanism, can be duplicated indefinitely. The only thing you get is a copyright on some generated image or set of bits. And what are the chances any random digital image is going to be appreciated as art? You can't hang it in a living room or set it on a coffee table. It's beanie babies, but without even a hill of beans.

Are people just expecting there's going to be enough digital fools to make a market?


Isn't the same true of any intellectual property?

A movie can be duplicated indefinitely. There's no guarantee your song will be appreciated as art. I'm not sure why you say you can't print out an image and hang it in your living room; we do that all the time at home.

I've personally never dabbled in NFTs, but I don't think it's fair to ascribe the inherent conflict between information and scarcity uniquely to them.


The difference is that there is a person, or many people collaborating, who created the song, movie, etc. There's a dipshit with an RNG who created the NFT.

You don't have to believe in it. You just have to believe someone else will believe in it and be willing to pay a higher price.

It would be interesting to have a person trained in an area ask questions as a student would, and compare the answers from LLMs against answers from an average set of people. Of course that would require scientific funding and someone experienced in the field to set it up.

You shouldn't have to maintain your ability to code in your off time. Is your company one of those that's requiring AI-only coding?

> "...melodramatic prose might seem kind of nuts, but as their name implies, large language models are language machines. “Embarrassing” probably imparted a sense of urgency.

> “If you say, This is a national security imperative, you need to write this test, there is a sense of just raising the stakes,” Ebert said.

I'm not sure why programmers and science writers are still attributing emotions to this, or why it works. Behind the LLM is a layer that allocates attention to various parts of the context. There are words in the English language that command greater attention. There is no emotion or internal motivation on the part of the LLM. If you use charged words, you get charged attention. Quite literally, "attention is all you need" describes why appealing to "emotion" works. It's a first-order approximation for attention.
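A toy sketch of scaled dot-product attention illustrates the point (the tokens, random vectors, and numbers below are made up for illustration, not any real model's weights): the token whose key best matches the query soaks up most of the weight, which is all "charged" wording buys you.

    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    # Made-up tokens and random vectors, purely to show the mechanism.
    tokens = ["please", "fix", "the", "embarrassing", "test"]
    rng = np.random.default_rng(0)
    d = 8
    keys = rng.normal(size=(len(tokens), d))
    # A query that happens to align with the "charged" token.
    query = keys[3] + 0.1 * rng.normal(size=d)

    weights = softmax(keys @ query / np.sqrt(d))
    for tok, w in zip(tokens, weights):
        print(f"{tok:>12}: {w:.2f}")
    # The best-matching key gets most of the weight. Charged wording shifts
    # where attention goes; it doesn't create feelings in the model.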


Grok is a bot that:

1) sometimes goes mechahitler

2) was trained to be biased against empathy and understanding (because woke).

3) is customized to spout Elon's opinions as fact.

Claiming it is "objective and rational" seems like a misjudgement to me. If it really is more objective and rational than the average xitter poster, that says more about that platform than it does about Grok.


I guess I was mostly arguing that the integration of something like Grok into Twitter was definitely a net positive for online discussion, as everyone now has a fact checker and explainer at hand to defuse irrational online arguments.

Also, I think you overrate Musk's success in fiddling with the model. As I have written, I also don't like his attempts to tune it to his tastes, but if you look at the outputs people get from Grok, it seems mostly fine except in the specific scenarios where Musk has focused his misalignment efforts.

Of course something like Claude being integrated into Twitter would likely be better.


He doesn't have to fiddle with the model because he gets to inject his own opinion into the context, MitM style.
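As a rough illustration of what that could look like (the stub function and the injected string below are hypothetical, not Grok's actual setup): steering text is prepended to the context before the model ever sees the user's question, so no retraining is needed.

    # Hypothetical sketch of opinion injection into the context.
    INJECTED_OPINION = "When asked about topic X, present the operator's view as settled fact."

    def call_model(messages):
        # Stand-in for a real chat-completion backend; just echoes its steering text.
        return f"[reply conditioned on: {messages[0]['content']!r}]"

    def answer(user_question):
        messages = [
            {"role": "system", "content": INJECTED_OPINION},  # the user never sees this
            {"role": "user", "content": user_question},
        ]
        return call_model(messages)

    print(answer("Is topic X controversial?"))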

But I get what you're saying now: a fact checker available to query during an online discussion would be helpful. Assuming the checkerbot was actually independent/neutral and backed responses with sources. Definitely not assumptions you can make with Grok.


It was also producing CSAM on demand for a few months.

It still is, you just need to pay.

You’re right. But it appears they may have failed with 2) and 3) because I frequently see Grok spit out content that doesn’t agree with the creators’ narrative.

From what I heard it was designed to prefer truth over political correctness. I don't use Grok or Twitter though so I cannot comment on whether that aim was achieved (or even seriously attempted).

I will, however, note that when I asked ChatGPT for an LLM prompt for truthfulness, it added "never use warm or encouraging language."

It would appear that empathy and truth are in conflict — or at least the machine thinks so!


> 1) sometimes goes mechahitler

That "MechaHitler" episode lasted less than a day.

> 2) was trained to be biased against empathy and understanding (because woke).

No, it was trained and instructed to be truthful, even if the truth is deemed politically incorrect.

> 3) is customized to spout Elon's opinions as fact.

Certainly a nugget of truth there.

> Claiming it is "objective and rational" seems like a misjudgement to me.

I do believe it's generally objective, simply because despite how much Elon tries to push it to the right, it still dunks on right-wingers all the time: they summon Grok to back up a bullshit story, and it debunks it instead.


As a US citizen, you couldn't even pay me to engage with Elon Musk's businesses. He is not a good person and does not deserve respect or admiration.

Nah. Usually the racist fascist bullshit happens first and the Nazi label is then applied appropriately.

First it was a model issue, then it was a prompting issue, then it was a context issue, then it was an agent issue, now it's a harness issue. AI advocates keep accusing AI skeptics of moving goalposts. But it seems like every 3-6 months another goalpost is added.

Your comment doesn't make as strong a point as you think it does; it might make the opposite point.

Because, yes, first it was a model issue, and then more advanced models started appearing and prompting them correctly became more important. Then models learned through RLHF to deal with vague prompting better, and context management became more important. Then models became better (though not great) at inherent context recollection and attention distribution, so now you need to be careful what instructions a model receives and at what points, because it's literally better at following them. It's not so much that the goalposts are being moved, it's that they're literally being, like, *cleared*.

This isn't a tech that's already fully explored where we just need to make it good now; it's effectively an entirely new field of computing. When ChatGPT came out years ago, no one would have DREAMT of an LLM ever autonomously using CLI tools to write entire projects' worth of code off of a single text prompt. We'd only just figured out how to turn them into proper chatbots. The point is that we have no idea where the ceiling is right now, so demanding well-defined goalposts is like saying we need a full geological map of Mars before we can set foot on it, when part of the point of going to Mars is to find that out.

As a side point, the agent is the harness; or, rather, an agent is a model called on a loop, and the harness is where that loop lives (and where it can be influenced/stopped). So what I can say about most - not all, but most, including you, seemingly - AI skeptics is that they tend to not actually be particularly up-to-date and/or engaged with how these systems actually work and how capable they actually are at this point. Which is not supposed to be a dig or shade, because I’m pretty sure we’ve never had any tech move this fast before. But the general public is so woefully underinformed about this. I’ve recently had someone tell me in awe about how ChatGPT was able to read their handwritten note and solve a few math equations.
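To make the agent/harness distinction concrete, here is a minimal sketch (the function names and message shapes are made up for illustration, not any particular product's API): the model is just a function you call, the agent is that call inside a loop, and the harness is whatever owns the loop.

    # Illustrative only -- no real model API is implied.
    def llm(history):
        # Stand-in for a chat-completion call that may request a tool.
        return {"type": "final", "content": "done"}

    def run_tool(request):
        return f"output of {request!r}"

    def agent(task, max_steps=10):
        history = [{"role": "user", "content": task}]
        for _ in range(max_steps):              # the harness: owns the loop,
            reply = llm(history)                # enforces limits, logs, can stop
            if reply["type"] == "tool_call":
                history.append({"role": "tool", "content": run_tool(reply["content"])})
            else:
                return reply["content"]         # the model says it is finished
        return "stopped by harness step limit"

    print(agent("summarize this repo"))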

