raywatcher's comments | Hacker News

You’ll see results as soon as your first visitors arrive. Our agent evaluates every unique visitor, researches and qualifies B2B prospects based on your custom criteria, and starts showing you qualified leads in real time.


We’re not externalizing costs onto users because our demand inference uses only qualified, high‑precision intelligence to eliminate massive spam and irrelevant outreach that wastes everyone’s time and erodes trust. We conducted user interviews upfront to understand these pain points before we started building. By focusing solely on B2B, we reduce noise, protect privacy, and respect people rather than treating them like targets.


> our demand inference uses only qualified, high‑precision intelligence to eliminate massive spam

I recommend taking the Voight-Kampff test yourself.


Profit alone isn’t the motivation. Unlike a factory polluting a river, we aren’t trying to externalize costs onto society or the environment. We see a real problem: B2B teams waste massive time and budgets chasing unqualified leads, spamming prospects, and burning through acquisition dollars. By delivering truly high‑precision demand intelligence, we help businesses focus only on the buyers who actually want to talk, cutting down on unwanted outreach and wasted resources.

In other words, our goal is to create efficiency and reduce friction for buyers, sellers, and the planet rather than offloading harm onto someone else.


I don't think anyone is questioning the value for business. Marketing precision and accuracy come at a cost. That cost is the degradation of privacy and, by second-order effect, human freedom, dignity, and autonomy. That the byproduct of this phenomenon is increased efficiency is irrelevant and uninteresting to the principal discussion.


So, basically:

“Call-for-pricing”-as-a-service.


One day you'll look back on how this technology is being used for actual real evil, and have to deal with the fact that your early naivety about the "value" you were bringing helped get us to that point.


Naïveté, and, IMO, a necessary deliberate desire to look the other way.

“Hey, I’d love to see the ahem ‘qualified leads’ looking at this crisis pregnancy center website, or at the Border Patrol website, or at the IRS tax deduction FAQ website.”

It’s so incredibly easy to imagine the ways this will be abused.


Thanks for bringing this up. We don’t collect personal data or process any PII. We only enrich qualified, high‑intent buyer intelligence, and we do so solely for B2B. Some GTM vendors have experimented in this space, but we focus on AI‑native demand inference with far greater precision and efficiency, while always following strict safety and privacy practices.


At first glance:

If I visit one of your customer’s websites, have I given them permission to collect and process my personal information? Do you have a way to avoid collecting this PII for people in GDPR countries? If I were to send you a CCPA request, do you have a mechanism to completely avoid storing my information?

I think this product is inherently incompatible with personal privacy, although I’d be interested in hearing why I’m wrong.


> We don’t invade personal privacy. We only enrich qualified, high‑intent buyer intelligence,

This is some fascinating mental gymnastics.


As the context grows, the output (usually a structured function call) remains relatively short. This makes the ratio between prefilling and decoding highly skewed in agents compared to chatbots. The problem is that context engineering is still an emerging science, even though it's already essential for agent systems. This is overall a really interesting post and makes me think of Chroma’s technical report on context rot: https://research.trychroma.com/context-rot
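A toy illustration of that skew, where every token count is a made-up assumption (and prefix caching, which changes the cost but not the token ratio, is ignored):

    # Hypothetical numbers: why agent workloads skew toward prefilling.
    # The context grows every turn; the decoded output stays short.
    SYSTEM_PROMPT = 2_000   # tokens (assumed)
    TOOL_RESULT   = 1_500   # tokens appended to context per turn (assumed)
    FUNCTION_CALL = 50      # tokens decoded per turn (assumed)

    context = SYSTEM_PROMPT
    prefill_total = decode_total = 0

    for turn in range(20):
        prefill_total += context          # the whole context is processed
        decode_total  += FUNCTION_CALL    # short structured output
        context += FUNCTION_CALL + TOOL_RESULT

    print(f"prefill:decode ratio = {prefill_total / decode_total:.0f}:1")

With these assumptions the ratio lands in the hundreds, versus roughly 1:1 for a chatbot whose replies are about as long as its prompts.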


The fish-trapping network fed the growth of early Maya centers


Malta's first settlers arrived from mainland Europe 1,000 years earlier than thought


Without the model you are nothing. Boris said it himself, but then for a few weeks it looked like that insight had been forgotten. Now everything is back in place. This could also indicate that Cursor is in a bad spot: no defensibility.


For all the discussions about the slopification of the internet, the human toll on open source maintainers isn’t really talked about. It's one thing to get flooded with bad reports; it's another to have to mentally filter AI-generated submissions designed to "sound correct" but offer no real value. Totally agree with the author mentioning the emotional toll it takes to deal with these mind-numbing stupidities.


The most notable thing about this article, in my opinion, is the increase in human-generated slop.

Everyone is talking about AI in the comments, but the article estimates only 20% of their submissions are AI slop.

The rest are from people who want a curl contribution or bug report for their resume. With all the talk about open source contributions as a way to boost your career or get a job, an open source contribution has become a checklist item for many juniors looking for an edge. They don’t have the experience to know which contributions are valuable or correct; they just want something to put on their resume.


Reminds me of those "I updated your dependencies/build system version" and "I reformatted your code" kinds of PRs I got several times for my projects. Yeah, okay, you did this very trivial thing. But didn't you stop to think about the fact that if it's so trivial, there must be a reason I haven't done it myself? "It already works as is" is a valid reason too.


I often update README files or documentation comments and submit PRs when I find incorrect documentation.

I’ve had mixed results. Most maintainers are happy to receive a well-formatted update to their documentation. Some get angry at me for submitting non-code updates. It’s weird.


There's nothing wrong with fixing actual mistakes. It's obviously in everyone's best interest for documentation to be correct.

But updating dependencies and such is totally unproductive. It's contributing for the sake of having contributed in its purest form. The only thing that's worse is opening a PR to add a political banner to someone else's readme, and then getting very pissed off when they respectfully close it.


It's weird, because both of those are (or can be) fully automated nowadays, which IMO is a great litmus test for "is this merge request just karma farming?"


The human toll is everywhere. AI used for peer review effectively forces researchers to implement its suggestions between revisions; AI used by managers suggests bad solutions that engineers are forced to implement; and so on. The number of person-hours spent following whatever AI models suggest is increasing rapidly. Some of it might make sense, but uncomfortably many hours are burned in vain. There is a real productivity cost to the economy when chains of command aren't ready to filter out slop.


Instead of trying to detect LLMs, maybe a better strategy would be to detect inconsistent or self-contradictory reports? The reports we see here seem to unravel at some point: either leaving out crucial information, such as the code location or steps to reproduce, while insisting the information is present, or straight-up claiming things about a code location that are not there.

Take the buffer length check in [1]: the report hallucinated an incorrect length calculation and even quoted the line, then completely ignored that the quoted line did not match what the report was claiming and was in fact correct.

So essentially, can we put up a gaslighting filter?

It seems like those kinds of inconsistencies could be found, ironically, by an LLM.

[1] https://news.ycombinator.com/item?id=44561058
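As a very rough sketch of what such a filter could look like (the model name, prompt, and the assumption that this works reliably are all illustrative, not a tested pipeline):

    # Hypothetical "gaslighting filter": before a human reads a report,
    # ask a model to check internal consistency only, not validity.
    from openai import OpenAI

    client = OpenAI()  # assumes an OpenAI-style API; any LLM endpoint would do

    TRIAGE_PROMPT = (
        "You are a triage assistant. Do NOT judge whether the bug is real. "
        "Check only internal consistency: does the report quote lines that "
        "actually appear in the attached source? Does it give a concrete "
        "code location and reproduction steps? Answer CONSISTENT or "
        "INCONSISTENT, with a one-line reason."
    )

    def consistency_check(report: str, source: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[
                {"role": "system", "content": TRIAGE_PROMPT},
                {"role": "user",
                 "content": f"REPORT:\n{report}\n\nSOURCE:\n{source}"},
            ],
        )
        return resp.choices[0].message.content

An "INCONSISTENT" verdict shouldn't auto-close anything; at most it pushes the report to the back of the queue, so a human still makes the final call.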


This type of social moderation has existed for well over a decade, and FB had thousands of people hired for it. They were filtering LiveLeak-level or even worse content for years, with humans manually watching or flagging it. So nothing new.


> hired

Do remember "we're" (hi, interjecting) talking about open source maintainers here; we didn't all make curl or Facebook.


My gut tells me that deciding the soundness of a vulnerability report is not in the same complexity class as deciding whether a video shows ISIS torture footage.


> but offer no real value

They could offer value, but just rarely, at least with the LLM/model/context they used.

> toll it takes to deal with these mind-numbing stupidities.

Could have a special area for submitting these where AI does the rejection letter and banning.


I think looking at one example is useful: https://hackerone.com/reports/2823554

What they did was:

1) Prompt an LLM for a generic description of potential buffer overflows in strcpy() and generic demonstration code for a buffer overflow. (With no connection to curl or even OpenSSL at all.)

2) Present some stack traces and grep results that show usage of strcpy() in curl and OpenSSL.

3) Simply claim that the strcpy() usages from 2) somehow indicate a buffer overflow, with no additional evidence.

4) When called out, just pretend that the demonstration code from 1) was the evidence, even though it's obviously a textbook example that doesn't call any code from curl.

It's not that they found some potentially dangerous code in curl and didn't go all the way to prove an overflow, which could have at least some value.

The entire thing is just bullshit made to look like a vulnerability report. There is nothing behind it at all.

Edit: Oh, cherry on top: the demonstrator doesn't even use strcpy() - nor any other kind of buffer overflow. It tries to construct some shellcode in a buffer, then gives up and literally calls execve("/bin/sh")...


> The problem is in strcpy in the src files of curl.. have you seen the exploit code ??????

The worst part is that once the poor maintainers ask for clarification, they go on the offensive and become aggressive. Imagine the nerve of some people: using LLMs to try to gaslight an actual expert into believing they made a mistake, then acting annoyed or angry when the expert asks normal questions.


Yep.

My guess is that the aggression is part of the ruse. Trying to start drama or intimidate the other party when your bluff is being called is the oldest strategy...

(You could see a similar pattern in the xz backdoor scheme, where they were deliberately causing distress for the maintainer to lower their guard.)

Or maybe the guy here hoped that the reviewers would blindly run the demo and then somehow believe it was real? Because it prints some scary messages and then does open a shell, even if that's the only thing it does...


>They could offer value, but just rarely, at least with the LLM/model/context they used.

Eating human excrement can also offer value in the form of undigested pieces of corn and other seeds. Are you interested?


Funnily enough, fecal microbiota transplants (FMT) are a thing, used to help treat a range of diseases. They’re even being investigated as a treatment for depression.

So…


Oh, certainly. I know that if I was the test subject, no matter what else happened it wouldn't be the worst thing done to me that day :)


I'm sure it does. But would you like one every other week, like the LLM slop?


Honestly, regarding the whole "LLM slop" thing, I don’t care. I get why others do, but I just don’t.

I don’t care how that sausage is made. Heck, sometimes gen AI even allows people who otherwise wouldn’t have had the time or skills to come up with funny things.

What annoys me is all the spammy, SEO-gamed websites with low information density, drowning the answer I’m actually looking for in pages of empty sentences.

When they haven’t just gamed their way to the top of search results without actually containing any answer.

And that didn’t need LLMs to exist. Just greed and actors with interests unaligned with mine. Such as Google’s former head of ads, apparently. [0][1]

[0]: https://www.wheresyoured.at/the-men-who-killed-google/

[1]: https://www.wheresyoured.at/requiem-for-raghavan/


> They could offer value, but just rarely, at least with the LLM/model/context they used.

Still a net negative overall, given that you have to spend a lot of effort separating the wheat from the chaff.

> Could have a special area for submitting these where AI does the rejection letter and banning.

So we'll just have one AI talking to another AI with an indeterminate outcome and nobody learns anything of value. Truly we live in the future!


It can be better: on slop detection, shadowban the offender and have them discuss with two AI "maintainers"; after 30 messages, reveal the ruse. Then ban.


Just days before the FTC’s “Click-to-Cancel” rule was set to take effect in July 2025, the U.S. Court of Appeals for the Eighth Circuit struck it down. The court ruled that the FTC violated federal procedural requirements by failing to conduct a preliminary regulatory analysis, which is mandatory for any rule expected to have an economic impact exceeding $100 million annually.

Although the FTC initially claimed the rule would fall below that threshold, an administrative law judge later found that compliance costs would exceed it—unless every business somehow managed to implement the rule using fewer than 23 hours of professional services at the lowest possible rate. The court concluded that the FTC’s failure to issue a separate preliminary analysis for public review deprived stakeholders of a meaningful opportunity to challenge or shape the rule, rendering it procedurally invalid.

So no, the unanimous ruling by the Eighth Circuit didn’t kill the Biden-era regulation because the judges were Republican appointees. It was struck down because bureaucratic procedures weren't followed. It’s a shame, because I believe canceling subscriptions should be easy, and it often is not in the U.S. (I’m looking at you, Adobe); this really undermines consumer interests :(
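For a sense of where that 23-hour figure comes from, here's a back-of-the-envelope version of the threshold test, with purely illustrative numbers (not figures from the opinion): at roughly 100,000 affected businesses and a lowest professional rate near $43/hour, 100,000 × 23 hours × $43/hour ≈ $98.9M, just under the $100M trigger, and even one additional hour per business pushes the total past it.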


> unless every business somehow managed to implement the rule using fewer than 23 hours of professional services at the lowest possible rate.

I mean that's one way to comply with "click to cancel". You could also make signing up more difficult and I doubt that would take 23 hours.

