Hacker News | Culonavirus's comments

In the post-GPT world? Yes. Wait, maybe actually no? Depends on context. https://www.reddit.com/r/ChatGPT/comments/16jvl4x/wait_actua...

> Well, @Reuters is known to be the absolute dumbest news organization

- Elon

Also:

> The Guardian sucks donkey eggplantemoji

> Strong candidate for most misanthropic publication on Earth!

- Elon

Tell us how you really feel, Elon :D


2.5 million?

Apparently you have time to "do a little research" but not to read the entire article you're reacting to? It specifically mentions TCP_QUICKACK.
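For context (this sketch is mine, not from the article): on Linux, TCP_QUICKACK disables delayed ACKs, and unlike TCP_NODELAY the kernel may silently clear it again, so latency-sensitive code typically re-arms it after each read. A minimal Python sketch:

```python
import socket

# TCP_QUICKACK is Linux-only, so guard for portability.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
if hasattr(socket, "TCP_QUICKACK"):
    # Ask the kernel to ACK immediately instead of delaying. The kernel
    # can reset this flag on its own, so real code usually sets it again
    # after every recv().
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_QUICKACK, 1)
sock.close()
```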

Most of your comments, at least recently, are [dupe] type of posts too. You should [dupe] your comments also :P


A few years of hard work made possible by substantial injections of cash from a lot of the big players in the industry: Epic, Nvidia, AMD, Amazon, Intel, Meta, etc. I'm sure the $150k/mo of continuous donation support also helps.

In this respect, Blender is a unique OSS project.


Substance Designer is a de facto industry standard in procedural texture creation. As of 5.4, UE also has a texture graph editor. They're all node-based editors combining a bunch of PCG techniques and patterns to produce textures in a parametric, non-destructive way.
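The core idea can be sketched in a few lines of Python (a toy illustration of the pattern, not any real tool's API): each node is a pure function over UV coordinates, and a graph of composed nodes can be re-evaluated with new parameters at any time, which is what makes the workflow non-destructive.

```python
# Toy node-graph sketch: nodes are pure functions (u, v) -> value in [0, 1].

def checker(scale):
    # Generator node: a checkerboard pattern parameterized by tile count.
    return lambda u, v: float((int(u * scale) + int(v * scale)) % 2)

def blend(a, b, t):
    # Combinator node: linear mix of two upstream nodes by factor t.
    return lambda u, v: (1 - t) * a(u, v) + t * b(u, v)

def render(node, size):
    # Evaluate the whole graph over a size x size grid of UV samples.
    return [[node(x / size, y / size) for x in range(size)] for y in range(size)]

# Non-destructive: tweak any parameter and simply re-render the graph.
graph = blend(checker(4), checker(8), t=0.5)
img = render(graph, 8)
```

Real tools add many more node types (noise, warp, blur, curvature), but the composition model is the same.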


> They often use it because it's cheaper than

That's pretty much what Anduril does. "You don't have to use a Patriot missile for that!" etc.


Which is a legitimate strategic imperative in the world of asymmetric warfare. It is not good that someone can launch a $1,000 drone that requires $10,000,000 in air defense munitions to counter.


I don't know, but if it was me paying the bill I would care a lot about this:

https://twitter.com/anduriltech/status/1780987267630395454 "the prototype was delivered ahead of schedule, and on budget"

and statements like this:

https://twitter.com/anduriltech/status/1783241461976256553 "We look forward to delivering CCAs on schedule, within budget, and at scale."

And the scale part is important. The whole point of these CCA fleets is to have A LOT of them and for that they have to be relatively cheap to build and easy enough to produce in decent numbers. The quality of an individual CCA doesn't matter that much.


Yep, but the killer feature for me is the composability. Take a dozen components made by different authors, slap them together, and it all just works, with no CSS conflicts and messes.


> is exactly what the AI safety field is attempting to address

Is it though? I think it's pretty obvious to any neutral observer that this is not the case, at least judging based on recent examples (leading with the Gemini debacle).


Yes, avoiding creating societally-harmful content is what the Gemini "debacle" was attempting to do. It clearly had unintended effects (e.g. generating a black Thomas Jefferson), but when these became apparent, they apologized and tried to put up guard rails to keep those negative effects from happening.


> societally-harmful content

Who decides what is "societally-harmful content"? Isn't literally rewriting history "societally-harmful"? The black T.J. was a fun meme, but that's not what the alignment's "unintended effects" were limited to. I'd also say that if your LLM condemns right-wing mass murderers, but "it's complicated" with the left-wing mass murderers (I'm not going to list a dozen other examples here; these things are documented and easy to find online if you care), there's something wrong with your LLM. Genocide is genocide.


This isn't the unanswerable question you've framed it as. Society defines what is and isn't acceptable all the time.

> Who decides what is "societally-harmful theft"?

> Who decides what is "societally-harmful medical malpractice"?

> Who decides what is "societally-harmful libel"?

The people who care to make the world a better place and push back against those that cause harm. Generally a mix of de facto industry standard practices set by societal values and pressures, and de jure laws established through democratic voting, legislature enactment, and court decisions.

"What is "societally-harmful driving behavior"" was once a broad and undetermined question but nevertheless it received an extensive and highly defined answer.


> The people who care to make the world a better place and push back against those that cause harm.

This is circular. It's fine to just say "I don't know" or "I don't have a good answer", but pretending otherwise is deceptive.


Read the entire comment before replying, please. I'm not interested in lazy comments like this, and they're not appropriate for HN.


> Who decides what is "societally-harmful content"?

Are you stupid, or just pretending to be?


What Gemini was doing -- what it was explicitly forced to do by poorly considered dogma -- was societally harmful. It is utterly impossible that these effects were "unintended"[1], as they were revealed by even the most basic usage. They aren't putting up guardrails to prevent it from happening; they quite literally removed instructions that explicitly forced the model to do certain bizarre things (like white erasure, or white quota-ing).

[1] - Are people seriously still trying to argue that it was some sort of weird artifact? It was blatantly overt and explicit, and absolutely embarrassing. Hopefully Google has removed everyone involved with that from having any influence on anything for perpetuity as they demonstrate profoundly poor judgment and a broken sense of what good is.


I didn't say the outcome wasn't harmful. I said that the intent of the people who put it in place was to reduce harm, which is obvious.


Yeah, I don’t think there’s such thing as a “neutral observer” on this.


An LLM should represent a reasonable middle of the political bell curve, where Antifa is on the far left and the Alt-Right is on the far right. That is what I meant by a neutral observer. Any kind of political violence should be considered deplorable, which was not the case with some of the Gemini answers. Though I do concede that right-wingers cooked up questionable prompts and were fishing for a story.


> An LLM should represent a reasonable middle of the political bell curve where Antifa is on the far left and Alt-Right is on the far right. That is what I meant by a neutral observer.

This is a bad idea.

Equating extremist views with those seeking to defend human rights blurs the ethical reality of the situation. Adopting a centrist position without critical thought obscures the truth since not all viewpoints are equally valid or deserve equal consideration.

We must critically evaluate the merits of each position (anti-fascists and fascists are very different positions indeed) rather than blindly placing them on equal footing, especially as history has shown the consequences of false equivalence perpetuate injustice.


All of this is political. It always is. Where does the LLM fall on trans rights? Where does it fall on income inequality? Where does it fall on tax policy? "Any kind of political violence should be considered deplorable" - where does this fall on Israel/Gaza (or Hamas/Israel)? Does that question seem non-political to you?

50 years ago, the middle of American politics considered homosexuality a mental disorder - was that neutral? Right now if you ask it to show you a Christian, what is it going to show you? What _should_ it show you? Right now, the LLM is taking a whole bunch of content from across society, which is why it turns back a white man when you ask it for a doctor - is that neutral? It's putting lipstick on an 8-year-old, is that neutral? Is a "political bell curve" with "antifa on the left" and "alt-right on the right" neutral in Norway? In Brazil? In Russia?


Speaking as somebody from outside the United States, please keep the middle of your political bell curve away from us.

