Hacker News | hackit2's comments

In his own words, he already got early feedback from his family.

"I tried the local Iranian market. I showed it to friends, family, and potential clients. Their response: "Nobody in Iran will pay $500/month for this. The Persian language quality isn't perfect. We'll use free ChatGPT instead.""

Which should have been free feedback on the risk vs. reward.


Most of the internet still assumes you're using a 96 DPI monitor. Though the rise of mobile phones has changed that, the vast majority of content consumed on mobile lends itself to being scaled to any DPI, e.g. movies, pictures, YouTube, etc.
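A rough sketch of what that scaling means in practice. The helper below is hypothetical; in a browser the ratio would come from `window.devicePixelRatio`, where 1 corresponds to the classic 96 DPI assumption:

```javascript
// Hypothetical helper: map CSS pixels to physical device pixels.
// dpr stands in for window.devicePixelRatio (1 == the 96 DPI baseline).
function cssToDevicePixels(cssPx, dpr) {
  return Math.round(cssPx * dpr);
}

// A 100 CSS px image needs 200 physical px on a 2x ("Retina") display,
// which is why assets authored for a fixed 96 DPI look blurry when
// scaled up, while video and photos resample cleanly to any density.
console.log(cssToDevicePixels(100, 2));    // 200
console.log(cssToDevicePixels(100, 1.5));  // 150
```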


Sad to see what happened to the kid, but to point the finger at a language model is just laughable. It shows a complete breakdown of society and the caregivers entrusted with responsibility.


People are (rightly) pointing the finger at OpenAI, the organization composed of human beings, all of whom made decisions along the way to release a language model that encouraged a child to attempt and complete suicide.


You forgot to add the cognitive load of needing to learn their business domain. Programming or working on code is very much like assembling replaceable Lego blocks, made up of your typical functions, procedures, queues, dictionaries, linked lists, and your data model. The mental load really comes from needing to learn an often narrow niche with all its idiosyncratic edge cases and data models.

I've worked on titling systems, game development (C/C++), integration systems, and backend database systems. All those niche data models and systems live rent-free in my head. They are absolutely worthless to my current employer or the people around me, because those people are focused on solving their own unique problems, which at the end of the day just become another piece of worthless business procedure in my head. It is worthless because businesses and people only care about solving their problem; once it's solved, they just move on to the next.


Counterpoint: reorgs do happen. Even if someone is doing a fine job, they can find themselves on a completely new team working in a completely new domain just because bodies needed to be buried.


I think you answered your own question.

Question: How do people figure out how to deal with this world?

Answer: People choose to plug themselves into a world of their choosing.


I do agree there is a sense in which this has always been true. But most people’s constructed worlds overlapped quite significantly up until now; it was necessary for survival.

I think that condition is weakening. Just look at how wealth affects people: if 99% of your interlocutors are sycophants, you lose your grip on reality. There is a very clear and dangerous attractor state where everyone gets this, thinking it's what they want.


Then they gossip about that, then they gossip about the gossip. It's a feedback loop.


Well, it's a social network.


If you take a deep dive into it, there aren't really any truths or falsehoods; it mostly comes down to what can be reproduced, and what is practical or pragmatic for the situation.


I might be wrong, but most disk controllers report the file as written when it isn't actually written to the drive.


But then it's in the controller's cache, which is battery-backed and survives power losses. It's the same concept as a write-ahead log, just one layer lower in the stack.
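A minimal sketch of the durability layers this thread is describing, using standard POSIX calls via Python's `os` module. `write()` only hands data to the OS page cache; `fsync()` asks the kernel to push it to the device, and even then the drive controller may acknowledge from its own (possibly battery-backed) cache:

```python
import os
import tempfile

# Hypothetical filename; the calls are the standard POSIX ones.
path = os.path.join(tempfile.mkdtemp(), "wal.log")

fd = os.open(path, os.O_WRONLY | os.O_CREAT)
os.write(fd, b"record-1\n")  # lands in the OS page cache, not yet durable
os.fsync(fd)                 # kernel flushes to the device (or its cache)
os.close(fd)

# After fsync, the record survives an OS crash; whether it survives a
# power cut depends on the drive cache -- hence battery backing, or a
# write-ahead log one layer up that can replay lost writes.
with open(path, "rb") as f:
    print(f.read())  # b'record-1\n'
```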


That is because all of the leading "science" is being done by private pharmaceutical companies and businesses with strict NDAs and security.


Kind of ironic that DeepSeek is more open than ChatGPT.


They do it for their own reasons, but OpenAI are straight up liars and they are neither open nor give a fuck about humanity.


It would be hilarious if this scenario played out.

OpenAI starts as a nonprofit, aiming to benefit all humanity. Eventually, they discover a path to AGI and engage in intense internal debates: Should they abandon their original mission and chase profit, knowing it could bring generational wealth? They ultimately decide, "To hell with humanity—let’s go for the money."

As they pivot to prioritizing profit, DeepSeek emerges. Staying true to OpenAI’s original vision, DeepSeek open-sources everything, benefiting humanity and earning global admiration. Unintentionally, this move tanks OpenAI’s valuation. In the end, OpenAI fails to become the hero or secure the massive profits they chased. Instead, they leave behind a legacy rebranded as "ClosedAI".


Admittedly I'm a sideline observer, but it feels like the first half of your scenario is already happening (sans the AGI).


"I don't want to live in a world where someone else is making the world a better place better than we are"

- Silicon Valley Season 2


OpenAyyyyI swear babe, I’m gonna open it up any day. Yeah, for that greater good or whatever it is you keep yappin' about.


Well, they do give us a great free tool to use, but that's where it ends, and it probably has some agenda behind it.


> Kind of ironic that DeepSeek is more Open than ChatGPT

Not ironic at all.

You've simply been lied to by OpenAI.

Nothing ironic about being naive.


Now. It’s amazing to me that everyone is like "fuck OpenAI, DeepSeek is the savior," when OpenAI’s papers and code jump-started an AI revolution just a few years ago. Let’s wait the same number of years and see what DeepSeek does.


I thought the papers that jump-started the revolution came from Google?


Indeed. And the papers were about doing better translation of character sequences; essentially the tech emerged as a linguistics improvement for language. Then someone realised the parrot learns enough ZIP and JPEG alongside and can spit back hazy memories of it all.

The one still genuinely useful thing OpenAI ever released must’ve been Whisper. But they could’ve been much more open, for sure.


Hinton. And if you asked him, he'd probably say Schmidhuber.

