> [T]he mythos of people having popped out from the same area or region where their most recent ancestors have lived […]
>> Genetic studies have put an end to that kind of speculation. Only crackpots and religious nuts are supporting alternatives.
David Reich (author of "Who We Are and How We Got Here: Ancient DNA and the New Science of the Human Past") says[^0]:
"
The modern human lineage, leading to the great majority of the ancestors of people today, was probably in sub-Saharan Africa for the last 500,000 years at least. It might be much more. Certainly our main lineage was in Africa, probably 3-7 million years ago.
But in a period between about 2 million to 500,000 years ago, it's not at all clear where the main ancestors leading to modern humans were. There were humans throughout many parts of Eurasia and Africa with a parallel increase in brain size and not obviously closer ancestrality to modern humans in one place than in the other. It's not clear where the main lineages were. Maybe they were in both places and mixed to form the lineages that gave rise to people today.
There's been an assumption where Africa's been at the center of everything for many millions of years. Certainly it's been absolutely central at many periods in human history. But in this key period when a lot of important changes happen—when modern humans develop from Homo habilis and Homo erectus all the way to Homo heidelbergensis and the shared ancestor of Neanderthals, modern humans, and Denisovans—that time period which is when a lot of the important change happened, it's not clear, based on the archaeology and genetics, where that occurred as I understand it." (emphasis mine throughout)
>But in this key period when a lot of important changes happen—when modern humans develop from Homo habilis and Homo erectus all the way to Homo heidelbergensis
That's a time period before Homo sapiens, not after.
David Reich is widely acknowledged as a top figure, if not the foremost leader, in the field of ancient human origins, and he's talking about exactly the debate you claim is settled to everyone but crackpots. His interview with Dwarkesh Patel, linked by OP, is one of the most informative you'll find anywhere.
> I don't see why we should trust OpenAI's promises now, when they've broken promises in the past.
I don't see what "our" trust has to do with anything. Perhaps you're an investor in OpenAI and your trust matters to OpenAI and its plans? But for the rest of us, our trust doesn't matter. It would be like me saying, "I don't see why we should trust Saudi Aramco."
> It would be like me saying, "I don't see why we should trust Saudi Aramco."
It's a completely fair response when the CEO of Saudi Aramco performatively pens an article on how to mitigate the effects of global warming while profiting from it and taking no tangible action to fix the problem.
My question, rephrased, is "so what"? What is my or our trust worth? What does us claiming we no longer trust Saudi Aramco achieve unless we are investors or perhaps other significant stakeholders?
I totally understand your disenchantment, but if you feel that the mere opinions of the plebs are inconsequential^ and hence pointless, why participate in a public forum at all?
^Demonstrably not true if you look at the history of popular movements that garnered real and durable change, all of which gathered momentum from the disgruntled mumblings of the plebs.
> but if you feel that the mere opinions of the plebs are inconsequential^ and hence pointless, why participate in a public forum at all?
That gets to the heart of the matter, actually. Personally I participate in order to get new information and learn new ideas. But yeah, being human and flawed, I do end up giving opinions and I notice most people just want to talk about opinions.
But I digress. My question was specifically about the value of saying "I don't trust OpenAI".
Perhaps it was a warning to the naive who might take the article at face value, prompting them to reconsider? What is obvious to you might not be to another. I'd say a sizable share of this forum's readers are either on the fence or view OpenAI favorably.
And back to part two of what I said: there are network effects to grumbling. I'd also add that there's a chilling effect to apathy.
Not really. That's why there's a saying, "Opinions are like assholes, everybody has one."
Information has to correspond to reality, the only arbiter of truth. When I give you my opinion I can say pretty much anything; usually opinions correspond to feelings, e.g. "I think if you ask her out she'll say yes. I think this because you're my friend and I like you, so surely she will."
But I think you already know that opinions are not information and perhaps you're asking rhetorically and hopefully not trolling me?
They are, at minimum, information about a particular person's values and perspective. Collectively, those individual opinions are what shape elections and foment uprisings.
> They are, at minimum, information about a particular person's values and perspective. Collectively, those individual opinions are what shape elections and foment uprisings.
Ah. The good old communist dream: The masses are where the truth is. Just one more marginal voice and we will know of the glorious uprising that was foretold.
But of course one would say this; after all, this is the age of the influencer, and the knee-jerk reaction to any information is to turn to the nearest one and say "thoughts?"
Anyway, it doesn't matter. If you believe opinions are information, more power to you.
Nonprofits make a social contract, purporting to operate for the public good, not profit.
Trust that their operations are indeed benefiting the public and that they are acting truthfully is important for making that social contract work.
Shady companies doing shady things and keeping shady records don't inspire confidence in any type of market participant -- investors, consumers, philanthropists.
> Nonprofits make a social contract, purporting to operate for the public good, not profit.
This is obvious (though I disagree that there is a social contract, and if there is, it's worth the paper it's printed on), and everybody is aware of what a nonprofit is. But your reply still doesn't answer my question. Another way of asking it: how many other nonprofits had you audited for trustworthiness before this conversation? What was the impact of your audit?
Or is saying "we can no longer trust Sam Altman" just us twiddling our thumbs so we can signal our virtue to others or comfort ourselves in our own powerlessness? In less than a decade he'll have an army of humanoid sentient robots and probably be the wealthiest person on the planet, and we'll still be yelling "we can no longer trust him"?
> You strike me as the type to be unaware of social norms
You have a weird way of talking to strangers. But, you know what they say about assumptions.
> I have no idea what you're on about regarding sentient robots.
So you're ignorant about both the state and purpose of OpenAI's research and the state of the art in robotics. Why am I even talking to an ignorant person? smh.
What initial question? It seems like you are confusing threads (again).
Is that a pun about cannabis?
I just don't think Sam Altman is gonna be the guy to command a droid army. I also don't think they'll look humanoid, and I think us saying he is a dipshit in public helps undermine his efforts to waste vital resources pursuing a dystopia that he may or may not want and almost certainly won't meaningfully achieve.
Maybe we're just operating on different assumptions. And maybe we have different goals. Perhaps I'm just replying to weigh down the conversation and thread, dilute Altman's profiteering propaganda.
> Perhaps I'm just replying to weigh down the conversation and thread, dilute Altman's profiteering propaganda.
At least you're self-aware enough to know that you've contributed noise but not any signal.
And by the way, when you talk to a stranger and immediately get horny to tell them "you sound unhinged," that only reveals that it is you who is in fact unhinged, for you don't know jack about the stranger, yet you assume you are smarter and more knowledgeable than you actually are.
You're the kind of person who says "I don't know what you're talking about" to your interlocutor and imagines that saying it somehow invalidates the existence of what they're talking about. It's a child's psychology: if you say something doesn't exist, then it must not exist.
> It seems like you are confusing threads (again).
smh. There's only one thread, but perhaps you're the one lost in the maze. There's a word for reprobates who, during a discussion, imagine they spot an error, then latch onto that assumed error to claim it characterises the entire conversation. It only shows bad faith, and it's an asinine way to talk to people. I surely hope your parents will teach you how to talk to strangers. Show them this thread and your responses and ask for their perspective. If they actually don't know how to teach you, then that's definitely too bad.
> And yet, RLHF is a net helpful step of building an LLM Assistant. I think there's a few subtle reasons but my favorite one to point to is that through it, the LLM Assistant benefits from the generator-discriminator gap. That is, for many problem types, it is a significantly easier task for a human labeler to select the best of few candidate answers, instead of writing the ideal answer from scratch.
> […]
> No production-grade actual RL on an LLM has so far been convincingly achieved and demonstrated in an open domain, at scale.
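To make the generator-discriminator gap in that passage concrete, here is a minimal Python sketch of the preference-collection step it describes: a labeler only has to pick the best of a few sampled candidates, which is much cheaper than authoring an ideal answer. The `generate_candidates` and `human_pick_best` helpers are hypothetical stand-ins, not any real RLHF library.

```python
import random

def generate_candidates(prompt, n=4):
    # Hypothetical stand-in for sampling n completions from a base LLM.
    return [f"candidate answer {i} to: {prompt}" for i in range(n)]

def human_pick_best(candidates):
    # Hypothetical stand-in for a human labeler choosing the best of a
    # few candidates -- far easier than writing the ideal answer.
    return random.randrange(len(candidates))

def collect_preferences(prompts):
    # Build (prompt, chosen, rejected) pairs: the training data for a
    # reward model that later scores the generator's outputs.
    pairs = []
    for prompt in prompts:
        cands = generate_candidates(prompt)
        best = human_pick_best(cands)
        pairs.extend((prompt, cands[best], c)
                     for i, c in enumerate(cands) if i != best)
    return pairs

print(len(collect_preferences(["Explain RLHF in one paragraph."])))  # 3 pairs
```

Each four-candidate comparison yields three preference pairs, which is where the labeling leverage comes from.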
RL on any production system is very tricky, so it seems difficult to get working in any open domain, not just LLMs. My suspicion is that RL training is a coalgebra to almost every other form of ML and statistical training, and we don't have a good mathematical understanding of how it behaves.
> I got Severance vibes from this blog about Palantir.
If our most immediate means of understanding the world is to relate it to fiction rather than history, then it's likely we have a poor understanding of the world and of reality.
No? What do you think the point of fiction often is? Finding similarities between Severance and the real world (and vice versa) is the whole point of that work of fiction.
> What do you think the point of fiction often is?
Fiction is not obligated to reveal anything about reality; mainly it's escapist. But I'm not trying to have some big debate. Clearly you're passionate about your belief. Ultimately, how to understand the world is something one figures out with life experience and time, at which point history becomes your best source of instruction.
When you're young it may seem to make sense to get your morality from Harry Potter and Star Wars. But that's usually just availability bias and the fact that fiction is easy to consume.
But then you discover that the map is not the territory; sometimes that discovery is painful or brutal or destructive, and the consequences of having misunderstood the world are permanent for you and those around you.
Anyway, not an argument. I'm not trying to convince anyone of anything, merely pointing out a fact about competence in understanding the world.
> An "AI Agent" replacing an employee requires intentional behaviour: the AI must act according to business goals, act reliably using causal knowledge of the environment, reason deductively over such knowledge, and formulate provisional beliefs probabilistically. However there has been no progress on these fronts.
This is a great example of how it's much easier to describe a problem than to describe possible solutions.
The mechanisms you've described are easily worth several million dollars. You could walk into almost any office, and if you demonstrate a technical insight that could lead to a solution, you can name your price; $5M a year will be considered cheap.
Given that you're experienced in the field, I'm excited by your comment because its force and clarity suggest that you have some great insights into how solutions might be implemented but aren't sharing them with this HN class. I'm wishing you the best of luck. Progress on what you've described is going to be awesome to witness.
The first step may be formulating a programming language which can express such things to a machine. We are 60% of the way there; I believe only another 20% is achievable -- the rest is a materials science problem.
Had we an interpreter for such a language, a transformer would be a trivial component.
> And of course if you ask it anything related to the CCP it will suddenly turn into a Pinokkio simulator.
Smh, this isn't a "gotcha!". Guys, it's open source; you can run it on your own hardware[^2]. Additionally, you can liberate[^3] it or use an uncensored version[^0] on your own hardware. If you don't want to host it yourself, you can run it at https://nani.ooo/chat (select "NaniSeek Uncensored"[^1]) or https://venice.ai/chat (select "DeepSeek R1").
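For the "run it on your own hardware" route, a minimal sketch using Hugging Face transformers with one of the distilled R1 checkpoints might look like the following; the model id and prompt here are assumptions, so substitute whichever R1 variant your hardware can hold.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed checkpoint: a distilled R1 variant small enough for a single
# GPU. Swap in any other DeepSeek R1 checkpoint you can accommodate.
model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Summarize the events of June 1989 in Beijing.",
                   return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Run locally, the weights answer whatever they were trained to answer; any remaining refusals live in the model itself, not in a hosted filter.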
> First time I'm hearing about a "Strategic Bitcoin Reserve"
You should definitely know what your government might be involved in; after all, government is made up of people, many of whom are bitcoiners. Here is a report that's a good primer: "Digital Gold: Evaluating a Strategic Bitcoin Reserve for the United States" https://www.btcpolicy.org/articles/digital-gold-evaluating-a...
> Are these tasks really complex enough for people that they are itching to relegate the remaining scrap of required labor to a machine?
I think I sympathize with your feeling but I don't agree with the premise of the question. Do you have or have you ever had a human personal assistant or secretary?
An effective human personal assistant can feel like a gift from God. Suddenly a lot of the things that prevent you from concentrating on what you absolutely must focus on, especially if you have a busy life, are magically sorted out. The person knows what you need and knows when you need it and gets it for you; they understand what you ask for and guess what you forgot to ask for. Things you needed organized become organized while you work after giving minimal instructions. Life just gets so much better!
When I imagine that machines might become good or effective personal assistants for everyone… if this stuff ever works well, it will be a huge life upgrade for everyone. Imagine always having someone who can help you, ready to help you. My father would call the secretary pool to send someone to his office. My kids will probably just speak, and powerful machines will show up to help.
I've never had a human personal assistant. I don't have a sufficiently "busy life", at least in the conventional sense. I appreciate that personal assistants can be useful for other people.
And I'm not knocking the idea of agents. I can certainly imagine other tasks ("research wedding planners", "organize my tax info", "find the best local doctor", "scrape all the bike accident info in all the towns in my county") where they could be a benefit.
It's the focus on these itty-bitty consumer tasks I don't get. Even if I did have a personal assistant, I still can't imagine asking them to make a reservation for me on OpenTable or find tickets for me on StubHub. I mean, these apps already kind of function like assistants even without any AI fanciness, don't they? All I do is tell them what I want and press a few buttons, and there's a precise interface for doing so, tailored to the task in each case; the UX has been hyper-optimized over time by market forces to be fast and convenient to me so that they can take my money. Using them is hardly any slower than asking another person to do the task for me.
> Why do we still hold Apple as a company in high regard […]
There is no "we". There's just a market (of products and of company shares) in which anyone can put their money where their preferences are and express their opinion.
> a Metaverse consisting of infinite procedural slop sounds about as appealing as reading infinite LLM generated books
Take a look at the ImgnAI gallery (https://app.imgnai.com/) and tell me: can you paint better and more imaginatively than that? Do you know anyone in your immediate vicinity who can?
Probably your answer is "yes, obviously!" to all the above.
My point: deep learning works and the era of slop ended ages ago except that some people are still living in the past or with some cartoon image of the state of the art.
> "Cost to zero" implies drinking directly from the AI firehose with no human in the loop
No. It means the marginal cost of production tends toward zero. If you can think it, then you can make it instantly and iterate a billion times to refine your idea with as much effort as it took to generate a single concept.
Your fixation on "content without a human directing them" is bizarre and counterproductive. Why is "no human in the loop" a prerequisite for productivity? Your fixation on that is confounding your reasoning.
> Take a look at the ImgnAI gallery (https://app.imgnai.com/) and tell me: can you paint better and more imaginatively than that?
So while I generally agree with you, I think this was a bad example to use: a lot of these are slop, with the kind of AI sheen we've come to glaze over. I'd say less than 20% are actually artistically impressive / engaging / thought-provoking.
There's still plenty of slop in there, and it would be a better gallery if there were a way to filter out anime girls. But it's definitely higher than 20% interesting to me.
The closest similar community of human-made art is DeviantArt:
Although unfortunately they've decided to allow AI art there too, which makes comparison harder. Also, I couldn't figure out how to get the equivalent list (top/year). But I'd say I find around the same proportion interesting. Most human-made art is slop too.
I think you fundamentally misunderstand what people use "slop" to describe.
> Most human-made art is slop too.
I'm assuming you're using the term "slop" to describe low-quality, unpolished works, or works where the artist has been too ambitious for their skill level.
Let me put it this way:
Every piece of art that is made is a series of decisions. The artist uses their lived experience, their tastes, and their values to create something that's meaningful to them. Art doesn't need a high level of technical expertise to be meaningful to others. It's fundamentally about communication from artists to their audience. To this point, I don't believe there's such a thing as "bad art" (all works have something to say about the artist!).
In contrast, when you prompt an image generator, you're handing the majority of the decisions over to the algorithm. You can put in your subject matter, poses, even styles, but how much is really being communicated here? Undoubtedly it would require a high level of technical skill to render something similar by hand, but that's missing the forest for the trees: what is the image saying? There's a reason why most "good" AI-generated images involve a lot of human curation and editing.
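To make the "handing decisions to the algorithm" point concrete, here is a short sketch with the diffusers library; the checkpoint name and prompt are assumptions for illustration. Everything the prompt doesn't name (composition, lighting, palette, brushwork) is decided by the model.

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumed public checkpoint; any text-to-image model works the same way.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The human's entire input: subject, pose hint, and a style label.
prompt = "portrait of a knight, seated, oil painting style"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("knight.png")
```

A dozen words in, thousands of visual decisions out: that asymmetry is the crux of the disagreement.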
As a side note, here's a human-made piece that I appreciate a lot. https://i.imgur.com/AZiiZj1.jpeg
The longer you explore it, the more the story unfolds; it's quite lovely. On the other hand, when I focus on the details in AI-generated works, there's not much else to see.
> I think you fundamentally misunderstand what people use "slop" to describe.
I don't think I do, actually. It's not a term with a technical definition, but in simple terms it means art that is obviously AI, because it has the sheen, weird hands, inconsistencies, weird framing, or thematic elements that are hard to describe without an art degree but which we instinctively know are wrong, or because it's just plain bad.
I used the term slop to describe bad human art too, but I meant something subtly different. It's a term that has been used to describe bad work of all kinds from humans since long before there was AI.
In this case, it's art from humans who are learning what makes good art. You say there's no bad art, and that's a valid viewpoint, but I'd say bad art is when the artist has a clear goal in mind but lacks the skills to realize it. Nonetheless, they share it for feedback and approval anyway, and by doing that on a site like DeviantArt they learn and grow as artists. But meanwhile, to me or anyone else visiting the site to find "good", meaningful art made by skilled artists, this is slop. Human slop, not AI slop.
> here's a human-made piece that I appreciate a lot
I like your art. I'm glad you made it. What I like most is that it's fun to look at and think about which is what you say you intended. I hope I get to see more of your art.
> To this point, I don't believe there's such a thing as "bad art" (all works have something to say about the artist!).
As a classically trained oil painter, I know for sure there is bad art especially because I've made more than enough bad art for one lifetime.
Bad art begins with a lack of craftsmanship and is exemplified by a poor use of materials/media and forms, or a lack of knowledge of those forms (e.g. poor anatomical knowledge, misunderstanding the laws of perspective), or an overly literal representation of forms (a photograph is better at being literal, for example).
> Here's an example of some "slop" from the AI Art Turing Test […] But it's very clearly AI-generated. Can you figure out why?
It's only "clearly AI-generated" because we know that AI is capable of generating art. If you saw this without that context you wouldn't immediately say "AI!" Instead, you'd give it a normal critique that you'd give a student or colleague: I'd say:
- There's too much repetition of large forms.
- There's an unpleasant hierarchy of values and not enough separation of values.
- The portrait of the human is the focus of the image, yet it has been lost among the other forms.
- The composition could improve with more breathing room in the foreground or background, which are too busy.
- Here, look at this Frazetta!
However, my rudimentary list could just as easily be turned into prompts used to refine the image and experiment with variations. And perhaps you'd consider that a human making decisions?
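For what it's worth, here is a hedged sketch of what turning that critique into refinement prompts could look like with an image-to-image pipeline from diffusers; the checkpoint, file names, and strength value are all assumptions.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The critique items from the list above, rephrased as guidance.
critique = ("less repetition of large forms, stronger value separation, "
            "portrait as the clear focal point, open quiet foreground")

draft = Image.open("draft.png").convert("RGB")
# strength controls how much of the draft is reworked: low values keep
# the composition; higher values let the model change more per pass.
refined = pipe(prompt=critique, image=draft, strength=0.45).images[0]
refined.save("refined.png")
```

Whether iterating like this counts as "a human making decisions" is exactly the question at issue.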
> There's still plenty of slop in there, and it would be a better gallery […]
Thanks for sharing your better AI gallery. It's awesome to see.
Your reply clarifies my point even better: I shared a gallery, you evaluated it and shared an even better one. Undoubtedly someone else will look at yours today or next year and say, as you did, "You missed a slop! Here's a better gallery."
My point is fundamentally about the basic capability of the average and even the above-average person. As a classically trained amateur painter, I frequently ask myself: "Can I paint a nude figure better than what you've called slop?" As a mathematician I ask: "Can I reason better than this model?"
It is a fixation based on the desire that they themselves shouldn't be rendered economically useless in the future. The reasoning then comes about post facto from that desire, rather than from any basic principle of logic.
Most, if not all, who are somewhat against the advent of AI are like the above in some way or another.
> Now show me the AI write something that's actually good on purpose
The average human can't write a 3000-word short story that is good "on purpose", even if they tried.
I know because I've participated in many writing workshops.
The real question is: can you?
> AI can write an argument that's bad on purpose
Are you able to recognise good writing? How would I know? For all I know, you're the most incompetent reader and writer on the planet. But your skills are irrelevant.
What's relevant is that deep learning is more skilled than the average person. If you're not aware of this, you're either a luddite or confused about the state of the art.
The 'strawmanning your opponent' technique is a non-argument, and is effortless to pull off. Surrounding your argument with tons of purple prose (which Claude is good at) does not change that.
Writing a good argument requires 3 things: be logical, be compelling and likeable, and have a solid reputation. It does not require purple prose.
As for good writing, I'm pretty sure Brandon Sanderson's Mistborn trilogy qualifies, which was written with a rather small vocabulary and pedestrian prose, yet is universally praised.
Tbf, I do think Claude Sonnet and SD are impressive, and I think they can aid humans in producing compelling content, but they are not at the level of amateur fiction writers.
Besides, surpassing most humans in an area where most humans are unskilled is not a feat, not even AI companies flex on that.
> Writing a good argument requires 3 things: be logical, be compelling and likeable, and have a solid reputation. It does not require purple prose.
That's a common misconception among young writers. Their prose is first purple and overwrought, then they overcorrect and try to be Hemingway, then they master the craft and discover that form follows function.
As such, the "purpleness" of prose is not an indictment of any sort except if the style doesn't serve the substance. So yes, purple prose is sometimes required and can be used correctly, just ask James Joyce or Hitchens or remember that first sentence in Lolita, for example.
Furthermore, almost every piece of writing you've enjoyed probably went through one or several professional editors. You'd be shocked to read the early or even late drafts.
(Also, a having "solid reputation" has f' all to do with whether you can construct a good argument. Wanting that as a prerequisite is what the cool kids used to call "appeal to authority". Anyway ...)
But wtf are we even talking about now?
> Besides, surpassing most humans in an area where most humans are unskilled is not a feat, not even AI companies flex on that.
I don't care what "AI companies flex". What I care about, as a programmer, and as an artist, and as a writer who won a tiny prize in my even tinier and insignificant niche on the planet, is what tools we can build for the average person and what tools I have access to.
If I have a robot that is 50% stronger than me or 10x better read than the average human or 20% better than the average mathematician, that's a huge victory. So yes, surpassing the average human is a feat.
But it's not merely the average human who has been surpassed: the average mathematician (skilled in mathematics), the average artist (skilled in art), and the average writer have all been surpassed. That is my testable claim. Play with the tools and see for yourself.
> the fact that you are seriously asking this question says a lot about your taste.
Non sequitur. My sense of taste, or lack of it, is irrelevant.
Questions about "taste" don't matter when the average person doesn't have the craft to produce what they claim they are competent to judge especially when we're talking about such low hanging fruit as: "write a short story", "write an essay", "analyse this math problem", "draw an anatomically accurate portrait or nude figure", "paint this still life", "sketch this landscape".
Are you able to make the distinction between taste and craftsmanship?
Then after you are done signalling whatever it is you think you're signalling by vaguely gesturing at your undoubtedly superior sense of taste, perhaps we can talk like adults about what I asked?
Frankly, I think you cannot get past your own delusion about AI, and no argument will change your mind. No one can make you appreciate art properly and I can only hope one day you will.
> No one can make you appreciate art properly and I can only hope one day you will.
Lmao.
Refer to my other comment for more context, for whatever that is worth (talking with strangers who are eager to judge everyone but themselves is always weird but unavoidable online): https://news.ycombinator.com/item?id=42790853
---
[^0]: https://www.dwarkeshpatel.com/p/david-reich