Humane execs leave company to found AI fact-checking startup (techcrunch.com)
52 points by iancmceachern 7 days ago | 47 comments

Their track record at Humane does not exactly inspire confidence in an AI search you can trust. Also, what a pivot from hardware to fact-checking.

Fact-checking is not objective; you just need to convince someone that your facts jibe with their perception.

There are better and worse ways to do fact checking. Appealing to evidence is more objective than appealing to vibes or appealing to baseless authority.

Once solid evidence is in sight, there are usually some clean objective takes to be had, along with some fairly obvious or likely subjective takes that a reasonable viewer might have.

The real problems happen when one side objects to what counts as evidence at all, or when one side puts forward low-quality evidence and claims it is beyond reproach.


> Appealing to evidence is more objective

Assuming one's evidence is objective, and one's judgment of it, and some other things.

> than appealing to vibes or appealing to baseless authority.

"Baseful" authorities are not necessarily objective. In fact, we have massive amounts of evidence that they are very often not.


Of course you are correct on both points. But subjective doesn't mean "wrong" or "open to every interpretation"; it just means more than one interpretation, and often narrowing down the possibilities makes the truth easy to see.

Different people disagree for different reasons. But only the most deranged would claim that what they can do and observe for themselves is worse evidence than some authority's word. So things like "trying it for yourself" have worked consistently as evidence for most of history. Today "it" can be a bunch of different categories of experiments and experiences.

Even then, lots of experiments are out of reach for most people, sometimes financially, sometimes intellectually. Some people think essential oils helped their cold go away in 3-5 days, when most colds go away on their own in 3-5 days anyway. So there needs to be some delegation of authority. This is where a lot of problems leak in. Some people grant authority for silly reasons (like an a priori assumption that someone can do no wrong), others for good reasons (like lots of experience working in a relevant field, with many published papers).

For many people, exploring the different levels of subjective evidence can get two people to understand why the other believes something goofy. This takes time and isn't guaranteed, but it can be done at least some of the time, to show where evidence and many beliefs fall on a gradient of quality. It fails when people hold some belief as firmer than reality, like conspiracy theorists or religious zealots do.


It's complicated, no disagreement here. But I'm not sure what I should take away from this with respect to my comments... any suggestions?

I was trying not to cast aspersions, but I thought you might have been treating this as a binary. I was just trying to place this stuff on a gradient.

It can be, and they want to focus only on objective questions:

> To use an example nearer to my heart, say you want to compare how many Apple and Samsung devices were sold in the past five years. The service would locate and collate that information.

My question would be to what extent this is really a problem. It sounds like they're going to be competing with Google and Bloomberg, which is a tough sell. Finding data quickly is their bread and butter.


[flagged]


This is a pretty hot take, which is likely why you're getting downvoted, but I can at least attest to this sort of stuff happening all the time. I recently joined a global company and became acquainted with a Serbian whose family line happened to be in the business of preserving history. Our first few non-work conversations were pretty heated debates, because we were talking about the same historical events from completely different viewpoints (mainly around the actions of NATO in that area).

Anyway, I've since learned of some non-Western outlets to see the "other" side of things, and it's very interesting to compare and contrast the differences.


How's work going at the Internet Research Agency :)

You broke the site guidelines badly with this comment and unfortunately have been posting unsubstantive and/or flamebait comments in other places, too:

https://news.ycombinator.com/item?id=40908934

https://news.ycombinator.com/item?id=40793361

https://news.ycombinator.com/item?id=40793252

https://news.ycombinator.com/item?id=40770909

This is the sort of thing we have to ban accounts for in the end, and I don't want to ban you, because you've also posted good things—so it would be good if you'd review https://news.ycombinator.com/newsguidelines.html, take the intended spirit of the site to heart, and stick to the rules in the future. We'd appreciate it.


Well noted.

How is the weather in Englin today? Also, if you don't want to stick out like a sore thumb on here, think about making comments that actually use technical jargon sometimes.

Pretty funny that you mention the IRA, since I was threatened with open-brain surgical torture by one of them a few months back. Like you, he assumed I was part of an opposing three-letter agency. Do you guys never stop and consider that maybe, just maybe, the inevitable fate of conspiracy theorists is to become conspirators? It always boils down to dialectics. Always.

For the wider audience running across this comment:

https://www.reddit.com/r/Blackout2015/comments/4ylml3/reddit...

https://www.youtube.com/watch?v=VA4e0NqyYMw

https://en.wikipedia.org/wiki/Internet_Research_Agency


Please don't respond to a bad comment by breaking the site guidelines yourself. That only makes things worse.

Also, please don't post unsubstantive and/or flamebait comments, and please don't use HN primarily for political/ideological/nationalistic battle. Your account has already been doing a lot of those things and that's the sort of account we end up having to ban.

https://news.ycombinator.com/newsguidelines.html


>a lot

There isn't a single comment by this dude that isn't related to politics. Not one.

He's probably on a payroll.

https://news.ycombinator.com/item?id=40568392

Meanwhile I provided a link to an obscure proof assistant.

What is HN doing against shills, i.e. people who are tactful enough to steer opinion towards certain goals while letting you think they "also posted good things"? Is preventing confrontations over this problem a good tradeoff with the current state of the guidelines, considering that political topics are a lot more present on HN than 10 years ago? Or could fostering a culture of shill-spotting be beneficial?

Logging off. Password status: not saved, as always.


Two reasons commonly cited:

- The perception that founders with big tech "credentials" stand a better chance of success than those without. The founders of Humane were ex-Apple.

- The idea that founders will generally be more successful on their second try, even if their first ended up being a complete trash fire - see Adam Neumann's new venture. This one sounds more reasonable on the surface, but I've got no data to back it up.


Ex-Apple doesn't really confer much credibility either. For a hardware startup, yes, but for an AI fact-checking startup? Unsure.

Shh, don't tell the investors.

That was their foot in the door to the startup scene. Now they are venerable founders with a proven track record of AI-driven solutions /s

Execs like these fail upwards by chasing funding that they can incinerate in bonfires of "innovation" and "disruption."

For whatever reason, the failure of their tenure at past companies (or the outright failure of their past companies) is usually not held against them.

But wait, these execs need insane compensation because, conveniently, "they have the special skills needed to be an exec" [see my first sentence].

SV and the tech world have this idea that many execs and founders are something like Steve Jobs, and they love to promote that image. Actually, in my experience, they're more like Elizabeth Holmes or your random MBA/MBB alum psychopath failing upwards.


There's also the clear conflict of interest where an exec at company A is on the board of company B, whose execs are on the board of company A, so it turns out to be really, really easy to get them all to vote for more exec compensation against shareholder interests.

Because they aren't shareholders, they're insiders who also own shares.


I am not sure that ex-Apple is really worth anything here. Apple is a kingdom. Unless you are Tim or one of his close team, your opinion on product does not matter; you will do what you are told. So, unless you are one of those few people, "ex-Apple" means that you are very good at executing on things you were clearly told to do. I would gladly hire any ex-Apple engineer and be sure to get a great hire! But hiring ex-Apple product people...

> This one sounds more reasonable

yeah, for example, purchasing a magazine about surfing


I don't know why the title is written the way it is. It's slightly different from the title on the article -- "Humane execs leave company to found AI fact-checking startup".

The current link title "AI Humane execs leave company to found AI fact-checking startup" reads like an AI running the company Humane left to start its own AI powered fact-checking startup. It's very Kafkaesque and funny but maybe a bit inaccurate.


I find the title of the article to be confusing/ambiguous. If you're not familiar with the company, "humane" could easily be read as an adjective here.

How do egregiously failed execs continue to get funding and high-profile gigs? It's not just Humane. I've seen this with the Better.com founder, his second failure after UNCLE, and with execs from MSFT Xbox. I can't for the life of me understand how they continue to get opportunities.

And how do I get in on it?


The skills required to become an exec are a lot of pitching yourself and networking to the right person to get your next job. Those skills are only slightly correlated with how good you actually are as an exec.

Add to this that most companies don't like to take risks and will only hire an exec who has already been an exec somewhere else. To some extent, once you make it through the exec glass ceiling, you are almost guaranteed to always be an exec somewhere.

You see this all over the place, by the way. I have seen the same thing with Directors, Principal Engineers, etc. Those people were not always good enough at their jobs to justify their positions. But they sure knew how to market themselves and interview well.


>The skills required to become an exec are a lot of pitching yourself and networking to the right person to get your next job. Those skills are only slightly correlated with how good you actually are as an exec.

source?

Large corporations harnessing and deploying massive resources have given us air travel, skyscrapers, cell phones, MRI machines, medicines, the green revolution, etc., all things that require undertaking risk, but of course managing it effectively. Your claim is that skills don't matter for attaining leadership positions in such organizations, only self-promotion. OK, let's say that's true: the system sure seems to be working. And of course, people who think they have a better way are free to pursue that avenue... maybe some of the most successful companies in the world have pursued just such avenues.


Almost every economic system in history has had some impressive wins, but that doesn't mean that they are all perfectly efficient. Modern corporate capitalism isn't the worst system ever but that doesn't mean there aren't massive flaws and blindspots.

Becoming a large company CEO is highly correlated with who a person has spent time around. If it were a true meritocracy you would expect a roughly even distribution across regions and backgrounds, which is very much not the case.

If you ever work in a large corporation I can almost guarantee that you will encounter managers who are much better at networking and marketing themselves than they are at doing productive work.


Hypothesis: most VCs have a hive mentality and would prefer betting on low risk founders through their network of "highly accomplished" individuals: usually white males - false aura of achievement, survivorship and in-group bias, Matthew effect.

Sounds about right to me.

In-group membership tends to be sticky. People can stay in a tribe even when the same qualifications in someone outside the tribe would be insufficient to let them in.

I imagine that's partially driven by a feeling that one's previous qualifications lend some predictive power that they will earn their place again in the future, even if right now they're foundering. Also, I suspect a lot of it is that people don't want to be in groups where it's too easy to get kicked out, because it makes their own membership feel too tenuous. So the kinds of in-groups that stick around are the ones that are a little more generous to their members once they get in.


They haven't raised much funding yet:

> Infactory has thus raised a pre-seed, though its founders declined to confirm the amount or investors. Seed funding will be a focus for the next “six to 18 months,” per Hartley Moy.

A pre-seed round with anonymous investors doesn't mean much. It can be as simple as friends and family, or even the founders' own money.


People are looking to hire someone with experience, and the CEO club is relatively small.

I'm curious: what's their strategy for determining what's true? Is it an Orwellian setup where certain media organizations (e.g. NYT, Reuters, Wikipedia) are deemed to be an authoritative ministry of truth?

Does it matter what their strategy is? This is vaporware. People consume information through search engines and platforms. If there's a market for fact-checking AI, it'll probably be developed in-house, since all the big companies have in-house tech. The most they can hope for is to grab as large of a bag as they can before this bubble pops, or hope they'll get bought out in a few years.

The article explains that:

> Infactory will pull information directly from trusted resources

Of course it means assuming anything the NYT says is true. Biden is sharp as a tack, it's a fact!

Nah, actually the article says they'll avoid politics. They want to be a better Bloomberg Terminal apparently, and only focus on quantitative data for business purposes. Basically OurWorldInData + LLMs.

In theory you can actually do a reasonable job of this sans Orwell. You train a model on a really wide selection of sources and then get it to spit out the knowledge that doesn't seem to have any disagreement within the dataset. The assumption is that no disagreement = fact. This heuristic isn't bad, but it's been tried before and is prone to errors in a few well-known cases. Google tried it years ago, predating LLMs. It worked well for things like "how high is the Eiffel Tower" but, unsurprisingly, one place it worked poorly was political ideology and terminology. Different political tribes often have their own ways of using language.

Example: "is George Bush a war criminal"? Turns out that the internet is full of documents asserting this to be true, and not many asserting it to be false. This isn't because it's a widely agreed fact. It's because the left believe the concept of war criminal makes sense and can be applied liberally, but the right doesn't. Presented with this statement, the right tend to ask: a criminal according to which court and which government? The left say according to international law, which is, again, a concept the right doesn't really recognize as legitimate to begin with, because they think that law inherently flows from the concept of a nation state or empire, not a group of NGOs.
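To make the heuristic concrete, here's a minimal sketch of it and its failure mode (my own toy code, not anything from Google or the article; a real pipeline would extract claims and stances from raw text with a model rather than take them pre-labeled):

    # Toy sketch of the "no disagreement = fact" heuristic (hypothetical code).
    from collections import defaultdict

    def consensus_facts(claims, min_sources=3):
        # claims: iterable of (claim_text, asserts) pairs, where asserts is
        # True if the source asserts the claim, False if it disputes it.
        support = defaultdict(int)
        dispute = defaultdict(int)
        for claim, asserts in claims:
            if asserts:
                support[claim] += 1
            else:
                dispute[claim] += 1
        # "Fact" = asserted widely enough, with zero recorded disagreement.
        return [c for c, n in support.items()
                if n >= min_sources and dispute[c] == 0]

    # Toy corpus: both claims pass, because the corpus contains almost no
    # explicit dissent for the contested one -- exactly the failure mode above.
    corpus = [
        ("The Eiffel Tower is about 330 m tall", True),
        ("The Eiffel Tower is about 330 m tall", True),
        ("The Eiffel Tower is about 330 m tall", True),
        ("George Bush is a war criminal", True),
        ("George Bush is a war criminal", True),
        ("George Bush is a war criminal", True),
    ]
    print(consensus_facts(corpus))  # both claims come back as "facts"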

At heart the "problem", if you want to call it that, is that the left is generally more passionate about politics and power, so if they believe in a concept they do things like take poorly paid journalism jobs and write lots of articles that take the legitimacy of their ideological precepts for granted. The right do things like go work in banking or agriculture or oil, or indeed the tech industry, and don't end up with much time to spend arguing with them. So these concepts filter into the dataset without pushback. Deploy them in real-world debate, though, and suddenly you get that pushback.

(This seems to be one of the reasons that a naively trained LLM ends up super woke: the internet is just left-biased due to the greater output of words from that tribe.)

Fortunately there's a limited number of cases like this. The set of such cases does grow over time, but at a relatively slow rate. In theory, if you had people really and truly committed to neutrality and the pursuit of truth, you could use LLMs to find claims that are both lacking in disagreement and not dependent on ideologically disputed concepts. LLMs are actually pretty good at the sort of vagueness and nuance that understanding requires.
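As a rough sketch of what that extra filter might look like (the keyword predicate here is a hypothetical stand-in for an LLM judgment call; nothing like this is described in the article):

    # Hypothetical second pass: drop consensus claims that hinge on
    # ideologically disputed concepts. In a real system this predicate would
    # be an LLM prompt; here it's a toy keyword check, purely illustrative.
    DISPUTED_CONCEPTS = {"war criminal", "international law"}

    def depends_on_disputed_concept(claim):
        return any(term in claim.lower() for term in DISPUTED_CONCEPTS)

    consensus_claims = [
        "The Eiffel Tower is about 330 m tall",
        "George Bush is a war criminal",
    ]
    print([c for c in consensus_claims if not depends_on_disputed_concept(c)])
    # -> only the Eiffel Tower claim survives the filter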

The problem is that such a program would be very boring and not commercially useful. Ironically, the very concept of fact-checking is itself an "is George Bush a war criminal" type problem. The right take it for granted that reality is complex and depends on perspective; the left take it for granted that reality is simple and can be painted in black/white, correct/incorrect. So the right doesn't spend much time on "fact checking" as a concept, because they see it as a quasi-illegitimate endeavor to begin with. From their POV there isn't actually much disagreement on things that are genuinely empirical facts, like the speed of light or the price of USD:GBP yesterday at noon, so what fact-checkers end up spending time on is in reality a sort of political censorship/propaganda operation aimed at shutting down any viewpoint they don't like.

So an AI dedicated to genuine checking of empirical facts would probably find that there isn't much to do. People are pretty good at agreeing on facts already, there are not that many errors to fix (and the errors that do sneak through are rarely important). An AI dedicated to the sort of fact checking that Snopes engages in would be very busy indeed, but there are plenty of people willing to work for peanuts to engage in ideological warfare against the right so where's their commercial edge? Seems like another business failure waiting to happen.


Ken Kocienda is known for writing the iPhone software keyboard, an experience he wrote a book about; it's well worth a read.

These guys have way too much money…

Correction: These _investors_ have too much money

Actually it’s a step further: these _LPs_ have too much money


In terms of AI making us smarter: so far, AI-such-as-we-know-it provides a greater advantage (labor saving) for traditionally smart people (who are less likely to be led astray by bullshit) than it does as a substitute for actually being smart.

I love using it to write code or to answer questions better than simple web searches can, but man, it produces some nonsense, just as web searches do.


> For another, it’s next to impossible to launch a startup in 2024 without some upfront AI pitch

Nitpicking on one specific sentence, but reading that feels so dumb...


If you remember their original teaser videos years ago and the writing around their product when they were in Stealth Mode, it had nothing to do with AI. The sales pitch was that it was the next iteration of an iDevice-like device with a unique form factor and method of interaction.

Only at the very end did they blast "AI AI AI AI" all over the marketing and rename it to "AI Pin" and have everything AI in AI their AI announcements AI talk AI about AI. It was pretty transparent what they were up to.


This was true like 8 months ago but investors are getting wise to it.

It's what got us that awful pin.

They went full waterfall. Never go full waterfall. /jk

I'm aware that the original waterfall paper actually calls for frequent iteration. It reads a lot like the Agile Manifesto actually.


Yep. Waterfall is everyone's favorite kneejerk whipping boy.

Big fat eye roll. The service will be redundant before it is even launched.


