I stumbled across this a few weeks ago via Google or Kagi. One problem I have with a lot of AI tooling today is that I can't tell if it's a legit product, or just a toy that somebody vibe-coded in a day. AI can easily crap out a website that looks like this.
This is all just my 2 cents, so take it with a grain of salt, but I see trust and authenticity as a huge issue (made worse by LLMs), and it's doubly so with AI-based companies because the space attracts flies.
One question I have: it's not clear to me that this doesn't all just boil down to plain old SEO. Does your platform generate recommended actions for improving your ChatGPT ranking? (And how is that different from just improving your PageRank?)
Thanks for the feedback.
To answer your first question:
1) Yes, you still need good content, just as if you were ranking for SEO. However, trad SEO (shorthanding because I'm lazy) requires you to be in the top 5 results, and that's been made worse by Google AIO, zero-click answers, and a bunch of other things. For AI SEO you need to be in the top 20-30. Easier? Yes, but fundamentals like writing good content are still needed.
2) The difference between trad SEO and AI SEO is that the content you have to create to rank well has changed; it often takes multiple pieces to rank well. For instance, for the prompt "Which AI company has the safest AI models?", the Google results are mostly listicles ranking companies from safest to least safe, almost promotional content. But if you look at what ChatGPT is searching for and retrieving, it's a lot more safety indexes, reports, and technical information.
There are SEO experts who believe trad SEO and AI SEO are the same, and that's fair. We're of the group that believes they're pretty much the same today, although we're coming to an inflection point and that will change in the future.
To answer your second question: we don't have a feature that gives recommended actions. I think that's a mistake we made during planning: we saw "AI recommended actions" elsewhere and questioned their validity. For instance, it doesn't make sense to recommend editing Wikipedia, because that's incredibly hard, although a few platforms do recommend it.
The idea was that users would improve their keyword research and blog content strategy, and we'd just assist in figuring out what to write, which we still think is the best approach; however, it seems that wasn't very clear. It's different from improving PageRank because the strategy has shifted: instead of optimizing for the 1st and 2nd result, you're optimizing for topics as a whole, and you can even appear on the second page of Google. If by improving your PageRank you mean making good content that's unique and high quality, then I fully agree.
So many LLM submissions here are self-promotion spam from accounts with no other activity, which makes them seem low effort and drags down the trust such submissions have earned or deserve.
If this is a business, which it sure seems like it is, then this is such a messed up idea. Exploiting whistleblowers and the whistleblowing system for profit. And they're trying to incentivize whistleblowers with money too.
Whistleblowers take all of the risk here, and only get 20% of the proceeds. Seems like a pretty shit deal, besides being confoundingly greedy.
There already are people you can trust, who aren't anonymous, who are professionals bound by ethics, and who aren't out to sue for profit: Journalists. investigations@icij.org
They've been teaching C in universities for like 40 years to every Computer Science and Engineering student. The number of professionally trained developers who know C compared to Rust is not even close. (And a lot of us are writing Python because it's easy and productive, not because we don't know other languages.)
I think you'll find that C (and C++) are rapidly disappearing from computer science curriculums. Maybe you'll encounter one or both in Operating Systems, or an elective, but you'll be hard pressed to find recent graduates actually looking for work in those languages.
It's absolutely abhorrent that you think combating the effects of racism in the healthcare system is political. Do you also believe that about sexism in the healthcare system?
Not the same poster, but the first "D" in "DDoS" is why rate limiting doesn't work: attackers these days usually have a _huge_ pool (tens of thousands) of residential IPv4 addresses to work with.
I work on a "pretty large" site (it was in the Alexa top 10k sites, back when that was a thing), and we see about 1500 requests per second. That's well over 10k concurrent users.
Adding 10k requests per second would almost certainly require a human to respond in some fashion.
Each IP making one request per second is low enough that if we banned IPs which exceeded it, we'd be blocking home users who opened a couple of tabs at once. However, since universities, hospitals, and big corporations typically use a single egress IP for an entire facility, we actually need the thresholds to be more like 100 requests per second to avoid blocking real users.
10k IP addresses making 100 requests per second (1 million req/s) would overwhelm all but the highest-scale systems.
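To make the threshold bind concrete, here's a minimal per-IP token-bucket sketch in Python (the class and the 100 req/s default are illustrative assumptions on my part, not anyone's production code):

    import time
    from collections import defaultdict

    # Toy per-IP token bucket: each IP accrues `rate` tokens/second up to
    # a burst of `capacity`; each request spends one token. A real setup
    # would also need eviction and state shared across servers.
    class PerIpLimiter:
        def __init__(self, rate=100.0, capacity=200.0):
            self.rate = rate          # sustained req/s allowed per IP
            self.capacity = capacity  # burst headroom (several tabs at once)
            self.buckets = defaultdict(lambda: (capacity, time.monotonic()))

        def allow(self, ip: str) -> bool:
            tokens, last = self.buckets[ip]
            now = time.monotonic()
            tokens = min(self.capacity, tokens + (now - last) * self.rate)
            if tokens < 1.0:
                self.buckets[ip] = (tokens, now)
                return False  # over the per-IP limit: block or challenge
            self.buckets[ip] = (tokens - 1.0, now)
            return True

The problem is visible in the parameters themselves: rate=100 is lenient enough for a university NAT but waves through the 1 million req/s botnet above, while rate=1 stops the botnet and bans the NAT.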
We had rate limiting with Istio/Envoy, but Envoy was using 4-8x its normal memory processing that much traffic, and it kept crashing.
The attacker was using residential proxies and making about 8 requests before cycling to a new IP.
Challenges work much better: they use cookies or other metadata to establish that a client is trusted, then let its requests pass. This stops bad clients at the first request, but you need something more sophisticated than a web server with basic rate limiting.
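As a sketch of how the cookie side of a challenge flow can work (an HMAC-signed clearance token; the fingerprint, TTL, and helper names here are hypothetical, not any specific vendor's scheme):

    import hashlib, hmac, time

    SECRET = b"rotate-me-regularly"  # hypothetical server-side key

    # After a client passes the challenge (JS check, proof-of-work, CAPTCHA),
    # issue it a signed clearance token; verify the token on later requests.
    def issue_token(client_fingerprint: str, ttl: int = 3600) -> str:
        expiry = str(int(time.time()) + ttl)
        msg = f"{client_fingerprint}:{expiry}".encode()
        sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
        return f"{expiry}:{sig}"

    def verify_token(client_fingerprint: str, token: str) -> bool:
        try:
            expiry, sig = token.split(":")
            expiry_ts = int(expiry)
        except ValueError:
            return False  # malformed token: treat as untrusted
        if expiry_ts < time.time():
            return False  # expired: send the client back to the challenge
        msg = f"{client_fingerprint}:{expiry}".encode()
        expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
        return hmac.compare_digest(sig, expected)

This flips the economics on a rotating proxy pool: an attacker who cycles IPs every ~8 requests pays the challenge cost on every rotation, while a legitimate client solves it once and coasts on the cookie.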