
How incredibly cool to see young people who are interested in and capable of building things! I have a couple of rhetorical questions.

How do you expect a language model to see through propaganda and other large-scale misinformation pushed by power and money with a megaphone?

How do you expect a computer program that can't reliably determine what letter a word starts with to determine objective truth?

I appreciate that you have a passion for the subject, but this tool is fundamentally unable to do what you wish it to do. If your goal is to make money -- keep going forward. Big promises built on lies have made many tech billionaires. If your goal is to combat misinformation you'd be better served by doing it in a different way than relying on a machine.

If you're building this at sixteen you have no limits. Don't take this as discouragement from building things -- take it as a warning against cybernetic totalism. Make the world a better place not through technology that tells humans how to be or how things are; make the world a better place by building technology that adapts itself to human needs. Maybe even build technology that needs humans more than the humans need the technology.


Thank you! We are working on our own LLM that is based on a multitude of data, and we will also double-check or even triple-check all the information against our DB and the internet. We are working to make it as reliable as possible.
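(As a concrete illustration of what "double or triple check" could mean in practice, here is a minimal sketch: ask several independent sources the same question and only report an answer when a majority agrees. Every lookup_* function below is a hypothetical placeholder, not a real service or the OP's actual pipeline.)

    # Minimal majority-vote cross-check sketch. All lookups are stubs.
    from collections import Counter

    def lookup_own_db(question: str) -> str:
        return "324 m"   # placeholder: query a curated database

    def lookup_web(question: str) -> str:
        return "324 m"   # placeholder: query a web retrieval pipeline

    def lookup_llm(question: str) -> str:
        return "330 m"   # placeholder: ask the language model

    def cross_check(question: str):
        answers = [lookup_own_db(question), lookup_web(question), lookup_llm(question)]
        best, votes = Counter(answers).most_common(1)[0]
        # require at least two independent sources to agree before answering
        return best if votes >= 2 else None

    print(cross_check("How tall is the Eiffel Tower?"))  # -> "324 m"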


Your enthusiasm is great! People don't want to quash your enthusiasm, and I'm in the same boat.

But while enthusiasm is great, delusion is not. Since you're striving to be a founder and not a hobbyist, you have to be realistic about what you're trying to build.

What you're describing is fundamentally not possible to provide assurances on without some kind of legitimate AGI, which you lack the resources to build yourself.

Many better-resourced companies are trying to provide grounded, factually accurate information, so it seems like far too broad an area of effort to ever succeed in.

I would suggest a pivot into demonstrating legitimacy in a very narrow niche before attempting to be a generalist know-it-all. Providing fine-tuning as a service to the point of assured factual grounding is itself a hard enough open challenge in AI.


This is the only wise response in the entire thread. OP, please listen to this criticism; it is extremely valid. Misinformation in general is not a solvable problem, nor do I believe you could ever approach a good solution.

You are tackling an extremely broad, nuanced, unsolvable problem.

You and your friends are obviously incredibly bright; pivot to something more narrowly focused. Maybe you can fact-check some sub-genre of information that is solvable?

Think sports scores, building heights, and structural engineering. Hard, concrete facts.

As soon as you get into anything with any degree of subjectivity, misinformation is impossible to solve.

I honestly thought Hacker News of all places would have given you better advice, in line with the above commenter, but what's actually happening is that people are filling you with false hope because you are young.

I was in a similar position as you when I was younger, and as I’ve gotten older and had some successes I’ve learnt to listen for valid criticisms.

Block out the noise, both positive and negative. Listen to the wise ones.


And yet GPT-4 still can't reliably tell me if a word contains any given letter.
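(For what it's worth, this is less about intelligence than about tokenization: the model operates on subword tokens, not characters. A minimal sketch using the tiktoken library, assuming it is installed and using an arbitrary example word, shows how a word reaches the model already chopped into multi-character chunks.)

    # Rough illustration of why letter-level questions trip up LLMs:
    # the model never sees characters, only subword tokens.
    # Assumes tiktoken is installed (pip install tiktoken).
    import tiktoken

    enc = tiktoken.encoding_for_model("gpt-4")
    word = "strawberry"
    tokens = enc.encode(word)
    pieces = [enc.decode_single_token_bytes(t).decode("utf-8") for t in tokens]
    print(tokens)   # a short list of integer token ids
    print(pieces)   # the word split into subword chunks, not individual letters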


Simple ID scans are already on their way out.

"Liveness checks" where we have to turn on our webcam and let some stranger make a full biometric model of our head to use basic internet infrastructure is the dystopia we deserve, and it's the one we're gonna get.

I hope the "AI" was worth it. Let's see if you can fix this problem you created.


Already happening at the IRS. There's a reason the government was so reluctant to regulate facial recognition in any meaningful way: the government database of everyone's faces, purchased and cobbled together from private partners, isn't complete enough yet.

This has nothing to do with AI, but an out-of-control executive branch and intelligence agencies. AI is just another tool that will make it cheaper.

