
Ask HN: Question about search engine feasibility - rossdavidh
So, many have noticed in recent years the bloated/intrusive/annoying drift of many pages on the web. One could add browser features to block much of this, but then most web pages become unusable.

However, we don't need to use all of the web. There are more web pages out there than we could ever visit in our lifetimes, even if we restrict ourselves to just those in our own language. What we need is a way to find usable web pages that aren't bloated/buggy/intrusive/etc. For this, I wonder if the browser is not the tool that needs to change, but rather the search engine.

So, if one made a search engine that only displayed results whose total content was below some user-set limit, or that had some other configurable filter (no JavaScript, no cross-origin content, etc.), is there enough out there still to have a usable amount of content? I have the idea that there is a nice, text-based, non-intrusive, non-bloated web still out there; it's just buried under a pile of bloated crapware. If we had a search engine that could sift the wheat from the chaff, would there be enough wheat left to stay informed? (My apologies to gluten-intolerant readers for the wheat metaphor.)

Thing is, I am not an expert in this field. But perhaps on Hacker News there are people who have some experience with what it takes to build a search engine. Does this seem feasible? Especially let me know if this has already been tried and failed.
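The filter the post describes could be sketched roughly as follows. This is a minimal illustration, not a real search-engine component: the function names, the 100 kB threshold, and the crude `<script>` substring check are all illustrative assumptions, and a production version would parse the HTML properly and also inspect cross-origin references.

```python
# Sketch of a per-page quality filter: keep a result only if the page
# is under a user-set size limit and (optionally) contains no scripts.
import urllib.request

MAX_BYTES = 100_000  # illustrative user-set limit, not from the post


def passes_filter(body, max_bytes=MAX_BYTES, allow_javascript=False):
    """Return True if a fetched page body satisfies the size and
    no-JavaScript constraints."""
    if len(body) > max_bytes:
        return False  # page too large
    if not allow_javascript and b"<script" in body.lower():
        return False  # page uses JavaScript
    return True


def fetch(url, max_bytes=MAX_BYTES):
    """Fetch at most max_bytes + 1 bytes, so an oversized page can be
    detected without downloading it in full."""
    with urllib.request.urlopen(url) as resp:
        return resp.read(max_bytes + 1)
```

A crawler applying `passes_filter` at index time could mark each page as "lean" or not, so the constraint costs nothing at query time.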
======
itronitron
I think this would be a great feature. It could either be a
selectable/tunable filter that removes results, or it could simply show the
page size of each item in the search results. People already do some amount
of mental filtering when scanning result lists, so this should feel familiar
to them; in other words, they will understand the filter.

I think you should email the people at duckduckgo.com and let them know that
you think it would be a useful and valued feature.

------
elevation
In experimenting with IPv6 at home, I set up a wifi AP with only IPv6 access
(no IPv4 translation mechanism) so I could try an IPv6-only connection from my
phone. In this configuration, I can visit google.com, facebook.com, and a
handful of blogs that I know about, but the world wide web is basically
unusable without IPv4 because most of the links are IPv4 only.

Since it's hard to find IPv6-connected web pages when you only have an IPv6
connection, some people have reverted to manually building directories of
interesting websites that are accessible over IPv6. However, these are full of
stale links; manual curation of an internet-wide directory doesn't scale well.

I know that "ipv6-only" is an odd use case, but it sure feels like the early
web. We could really use a little search engine to be the front page of the
ipv6 web.
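The stale-directory problem above could be reduced by checking reachability automatically rather than by hand. A minimal sketch, assuming only the Python standard library: a host is considered part of the "IPv6 web" if it publishes at least one AAAA record (the function name is my own, and a real crawler would also verify that the address actually serves HTTP).

```python
# Check whether a hostname resolves to any IPv6 address, so that a
# directory or crawler can automatically skip IPv4-only sites.
import socket


def has_ipv6_address(hostname):
    """Return True if the hostname publishes at least one AAAA record."""
    try:
        infos = socket.getaddrinfo(hostname, 80, socket.AF_INET6)
    except socket.gaierror:
        return False  # no AAAA record, or the name does not resolve
    return len(infos) > 0
```

Run periodically over a link directory, this would prune dead or IPv4-only entries without manual curation, which is exactly where hand-built lists fall down.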

------
rajacombinator
Sounds like a potentially good idea to me, and not that hard to implement.
However, whether it would ultimately be useful is hard to tell without
building it first. So give it a try!

