
> What we might need to break this is ...

Create a core protocol at the same level as DNS etc., that web servers can use to offer an index of everything they serve/relay. A multitude of user-side apps may then query that protocol, with each app using different algorithms, heuristics and offering different options.
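
For concreteness, here's a rough sketch of what one record in such a server-provided index might look like, and how a user-side app might pull it. All of the names, fields, and the well-known path are invented for illustration; nothing here is an existing standard.

  // Hypothetical shape of a per-server index record exposed by the protocol.
  interface ServedResource {
    url: string;           // canonical URL of the page or object served/relayed
    title: string;
    keywords: string[];    // server-declared index terms
    contentHash: string;   // lets clients detect stale or tampered entries
    lastModified: string;  // ISO 8601 timestamp
  }

  interface ServerIndex {
    origin: string;        // e.g. "https://example.org"
    generatedAt: string;
    resources: ServedResource[];
  }

  // A user-side app could fetch this and apply its own ranking,
  // filtering, and heuristics locally.
  async function fetchServerIndex(origin: string): Promise<ServerIndex> {
    const res = await fetch(`${origin}/.well-known/site-index.json`); // assumed path
    return res.json() as Promise<ServerIndex>;
  }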




I've been thinking along similar lines for a year or so now.

There are several puzzling omissions from Web standards, particularly given that keyword-based search was part of the original CERN WWW discussion:

http://info.cern.ch/hypertext/WWW/Addressing/Search.html

IF we had a distributable search protocol, index, and infrastructure ... the entire online landscape might look rather different.

Note that you'd likely need some level of client support for this. And the world's leading client developer has a strongly motivated incentive NOT to provide this functionality integrally.

A distributed, self-provided search would also have numerous issues -- false or misleading results (keyword stuffing, etc.) would be harder to vet than they are at present. Which suggests that some form of vetting / verifying of provided indices would be required.

Even a provided-index model would still require a reputational (ranking) mechanism. Arguably, Google's biggest innovation wasn't spidering, but ranking. The problem now is that Google's ranking ... both doesn't work, and incentivises behaviours strongly opposed to user interests. Penalising abusive practices has to be built into the system, with those penalties being rapid, effective, and for repeat offenders, highly durable.
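
Purely as an illustration of the kind of penalty mechanism I mean (not how any existing engine works), a penalty could cut ranking weight immediately and last far longer with each repeat offence:

  // Illustrative only: escalating, time-limited ranking penalties.
  interface SiteReputation {
    score: number;        // base ranking weight, 0..1
    offences: number;     // confirmed abusive practices to date
    penaltyUntil?: Date;  // ranking suppressed until this time
  }

  function applyPenalty(rep: SiteReputation, now: Date): SiteReputation {
    const offences = rep.offences + 1;
    // 30 days for a first offence, quadrupling each time: 30, 120, 480, ... days
    const days = 30 * Math.pow(4, offences - 1);
    const penaltyUntil = new Date(now.getTime() + days * 24 * 60 * 60 * 1000);
    return { score: rep.score * 0.1, offences, penaltyUntil };
  }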

The potential for third-party malfeasance -- e.g., behaviour that appears to favour a site but is actually intended to harm that site's reputation by triggering black-hat SEO penalties -- also has to be considered.

As a user, the one thing I'd most like to be able to do is specify blacklists of sites / domains I never want to have appear in my search results. Without having to log in to a search provider and leave a "personalised" record of what those sites are.

(Some form of truly anonymised aggregation of such blocklists would, of course, be of some use, and facilitating this is an interesting challenge.)
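
As a sketch of the client-side piece: the blocklist could live entirely locally and be applied to whatever results come back, so the search provider never sees it. The domains and result shape below are invented for illustration.

  // Sketch: a purely client-side blocklist applied after results arrive,
  // so no "personalised" record ever reaches the search provider.
  const blockedDomains = new Set(["content-farm.example", "spammy.example"]);

  interface SearchResult {
    url: string;
    title: string;
  }

  function domainOf(url: string): string {
    return new URL(url).hostname.replace(/^www\./, "");
  }

  function filterResults(results: SearchResult[]): SearchResult[] {
    return results.filter(r => !blockedDomains.has(domainOf(r.url)));
  }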


I too have been thinking about these things for a long time, and I also believe a better future is going to include some "aggregation of such blocklists"; facilitating this is indeed an interesting challenge.

I've decided it's time for us to have a bouncer-bots portal (or several). This would help not only with search results, but could also help people on Twitter and similar services -- good for both the decentralized and the centralized web.

My initial thinking was that these would be 'pull' bots, but I think they would be just as useful, and more widely used, as active browser extensions.

This way people can choose which type of censoring they want, rather than relying on a few others to choose for them.

I imagine portals for these, similar to ad-block lists -- people could choose to use Pete'sTooManyAds bouncer and/or Sam'sItsTooSexyForWork bouncer.

Ultimately I think the better bots will have switches where you can turn certain aspects of them on and off and re-run the search, or pull the latest Twitter/Mastodon posts.

I can think of many types of blockers that people would want, and some that people would only want part of -- so either varying degrees of blocking for sexual content, or different bots for different types of things. Maybe some would have sliders instead of switches (a rough configuration sketch follows at the end of this comment).

Make them easy to create and comment on, and publish that information to the world.

I'd really like to get this project started; I'm not sure what the tooling should be, or what the fallback would be if it started out as a browser extension but then got booted from the Chrome store or whatever.

Should this / could this be a good browser extension? What languages / skills would be required to build it? It's definitely on my future to-do list.
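
For anyone thinking about building this, here's a rough sketch of the kind of configuration such a bouncer extension might store locally. All list names and categories are invented examples.

  // Hypothetical local configuration for a bouncer browser extension.
  interface BouncerConfig {
    subscribedLists: string[];                  // URLs of shared bouncer lists
    switches: { [category: string]: boolean };  // hard on/off per category
    sliders: { [category: string]: number };    // 0 = allow all, 1 = block all
  }

  const myConfig: BouncerConfig = {
    subscribedLists: [
      "https://bouncers.example/petes-too-many-ads.json",
      "https://bouncers.example/sams-too-sexy-for-work.json",
    ],
    switches: { cryptoSpam: true, outrageBait: false },
    sliders: { adultContent: 0.7, advertisingDensity: 0.4 },
  };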


There are some ... "interesting" ... edge cases around shared blocklists, most especially where those:

1. Become large.

2. Are shared.

3. Are not particularly closely scrutinised by users.

4. Are circulated via very highly followed / celebrity accounts.

There are some vaguely similar cases of this occurring on Twitter, though some mechanics differ. Celebs / high-profile users attract a lot of flak and take to using shared blocklists. Those lists get shared not only among celeb accounts but also among their followers, and because celebs themselves are a major amplifying factor on the platform, being listed effectively means disappearing from it. That's particularly critical for those who depend on Twitter reach (some artists, small businesses, and others).

Names may be added to lists through error or malice.

This blew up in the summer of 2018 and carried over to other networks.

Some of the mechanics differ, but a similar situation playing out over informally shared Web / search-engine blocklists could have similar effects.


> Create a core protocol at the same level as DNS etc., that web servers can use to offer an index of everything they serve/relay

Isn't that pretty much a site map?

https://en.wikipedia.org/wiki/Sitemaps


A sitemap simply tells you what pages exist, not what's on those pages.

Systems such as lunr.js are closer in spirit to a site-oriented search index, though that's not how they're presently positioned; instead, they offer JS-based, client-implemented site search for otherwise static websites.

https://lunrjs.com
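
For example, something in this spirit (a minimal sketch following lunr's documented usage; the documents are made up and would normally be generated from the site's actual pages):

  import lunr from "lunr";  // or load lunr via a <script> tag on a static site

  // Unlike a sitemap, this indexes what is *on* the pages, not just that they exist.
  const documents = [
    { href: "/posts/search-protocols", title: "Search protocols", body: "A core protocol for server-provided indexes..." },
    { href: "/posts/ranking", title: "Ranking", body: "Reputation and penalties for abusive practices..." },
  ];

  const idx = lunr(function () {
    this.ref("href");
    this.field("title");
    this.field("body");
    documents.forEach(doc => this.add(doc));
  });

  console.log(idx.search("ranking")); // returns the refs (hrefs) of matching documents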


How would this help anything? It would make black-hat SEO even easier, if anything.


The results could be audited.

Fail an audit, lose your reputation (ranking).

The basic principle of auditing is to randomly sample results. Black-hat SEO tends to rely on volume in ways that would be very difficult to hide from even modest sample sizes.
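
Roughly, an audit of a server-provided index could look like the sketch below: sample a handful of claimed entries, fetch the pages, and check that the declared keywords actually appear. The entry format and the idea of feeding the failure rate back into ranking are assumptions, not part of any existing protocol.

  // Sketch: randomly sample a server's claimed index entries and check that
  // the declared keywords actually appear in the served page.
  interface IndexEntry {
    url: string;
    keywords: string[];
  }

  async function auditSample(entries: IndexEntry[], sampleSize: number): Promise<number> {
    // Crude shuffle, fine for an illustration.
    const sample = [...entries].sort(() => Math.random() - 0.5).slice(0, sampleSize);
    let failures = 0;
    for (const entry of sample) {
      const body = (await (await fetch(entry.url)).text()).toLowerCase();
      const supported = entry.keywords.some(k => body.includes(k.toLowerCase()));
      if (!supported) failures++;
    }
    return failures / sample.length; // failure rate; above a threshold -> lose reputation
  }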


How do you stop the server lying?

If a good site is on shared hosting, will it always be dismissed because of the signal from the other [bad] sites on that same host? (You did say at the DNS level, not the domain level.)


> Create a core protocol at the same level as DNS etc., that web servers can use to offer an index of everything they serve/relay.

So, back to gopher? That might actually work!



