Google search drops cache link from search results (seroundtable.com)
501 points by croes 10 months ago | 172 comments



The cache: qualifier does still work for me, but I fear it too is deprecated. Another incredible Google blunder.

https://webcache.googleusercontent.com/search?q=cache%3Ahttp...


I have a feeling this has to do with AI and all the lawsuits regarding content and copyright. Storing and displaying the contents of a third-party website to the end user might not be the best idea right now (I know the Wayback Machine exists, but they're not a for-profit company like Google).


I can picture future HN conversations now.

"It's a good thing copyright law exists to prevent AI from doing anything useful because I'm fearful of AI."

"Hey, does anyone know why the web has become so boring and useless???"

Who knows – maybe Google Search won't even show page descriptions in the coming years.


"You can always count on Americans to do the right thing - after they've tried everything else." -Winston Churchill

A paradox of politics is that the desire not to talk about politics favors the status quo, which makes talking about politics necessary.

It's obvious to me that AI is being rolled out in exactly the wrong way at multiple levels, creating misaligned incentives and a perversion of original goals. But the knee-jerk reactions to it that drive the creation of draconian policies were predicted almost a century ago.

It seems that everything we hold sacred is under attack in these times. The smart bet seems to be on ensh@ttification, because the organizations that were formerly stewards of online freedom have abdicated their responsibility.

I find it helpful to remember that every misstep by established players creates an opportunity for newcomers to compete.


This is an important comment.

Instead of "AI" (be real: LLMs and Stable Diffusion) being developed openly and studied without a profit motive, it's been thrust into the cheap world of VCs looking to extract any profit whatsoever from it, nearly immediately.

This has caused these "AI" tools to be used to steal and rehash artists' work (cheapening artists, because tech bros and MBAs resent art and the time it takes to perfect) and to cheapen human labor (why pay a person to understand anything when you can have an LLM do 20% of the job, and worse?).

These AI tools were thrown in the deep end with a singular purpose: Cheapen what was thought to be protected from computers, while not providing any real value to the layperson.

The average person can maybe get a funny joke, a few bad lines of code, or an ugly bespoke AI image for their Medium article, but the true winners are the ones cutting jobs en masse before the tech has even matured, so both the employee and the customer get a worse product while the MBAs show a solid quarterly report after running a knife across their workforce's neck.

Those with power and money have continued to show they will not use technology for any positive societal purpose until they are forced to with regulation. So we're forced to neuter the technology before it can really develop. It's like one child playing violently with a toy, forcing the teacher to take the toy away from everyone else.


There should be no surprise that "those with power and money" will not willingly apply "technology" for social good. It should be expected, rather. Your comment brings to mind Edward R. Murrow's 1958 speech about the direction of television (certainly would have been "high tech" back then!). He and others envisioned education, information, and enlightenment from this technology. Well, you know what we got instead. Here is a summary of the speech, and you can hear the whole thing on YouTube if you like; it's still absolutely relevant.

https://www.poynter.org/reporting-editing/2014/today-in-medi... https://www.youtube.com/watch?v=AIhy0T7Q48Y

We can see the same thing with other, newer media; video games, for example. I don't have solutions per se, though I wish this pattern could be different. The profit motive and appeals to base instinct always seem to "win".


I'm aware it should be expected.

The target of my comment is specifically the "well profit=good" crowd on here.

It's to point out the innate contradiction in how we speak about technology compared to the guaranteed outcome in our system.

We're doomed as a species if we keep believing in the magical market as a primary mover for said species. We'll stall in a circlejerk of ads and stock buybacks and never accomplish anything, because any good use for technology is locked behind a gate due to the lack of profit.

Any improvement in medicine is behind lock and key because the pharmaceutical companies "need to make back their investment" despite massive gov funding.

We're killing ourselves here with the spectacle that this system is either working or worth saving.


True points, and I apologize for missing your actual point and wandering off a bit. We probably need another social model for technological development specifically, but I am not sure what it might be. If there is anything useful to salvage from my previous comment, it's that this issue has been with us for a long while.


I think amazing gains could be made if we ripped the bandaid off, took all the weapons manufacturers and other government contractors, and used that brain power entirely for DARPA-style public works projects.

Our current military-industrial project is just a self-fulfilling fantasy. Stop the contracts for war; create contracts and research for moving humanity forward without the need to strap it to a missile 20 years before it hits shelves. Keep IP rights publicly owned, license them to anyone, and use the licensing fees to fund new research.


One could make a similar argument about every other major technological advance in history.

I'm not sure why people seem to believe AI is different or special, or what leads them to believe you can stop it any more than the automated loom or the combine harvester.


Given the invention of the cotton gin, the slave owners should have been clubbed to death, and the technology used not to make slavery more enticing but to improve productivity, reducing the level of effort and providing more free time.

This can be translated to modern day.


> "Hey, does anyone know why the web has become so boring and useless???"

Yes, I know. Some companies have abused the social contracts of the web that had been in place since its inception, while telling us that they are doing something so amazing that they need no permission to do what they're doing.

So the web has responded the way it should.


Boring and useless? More like A DARK FOREST with disinformation and gaslighting everywhere, and fake “participants” in your forums. Oh, hi human :)


I won't be surprised if Archive.org ends up getting screwed at the end of this fight.


They already are, with content platforms clamping down on public access and more and more content locked away behind exorbitantly expensive APIs (mostly thinking of Reddit and Twitter here).


Yes and no. The deep web has long been much larger than the observable one.

What I'm worried about is a new set of legal precedents that will make it impossible for Archive to legally rehost the content they have scraped.


We need a new web!

MaidSAFE

Freenet

Hypercore


MaidSAFE afaict is just another IPFS which you can pay to get your data hosted on.

HyperCore apparently got acquired, and they are now a company selling solutions to businesses.

Freenet 2023 is a FOSS project. I've been watching their Matrix server for a while; Ian says they're launching the network in two weeks. It is a decentralized data store plus runtime. So while the original Freenet was analogous to a disk, Freenet 2023 is analogous to an entire computer. See https://freenet.org/


MaidSAFE is far beyond Freenet, Hypercore, or IPFS.

You should read their primer: https://primer.safenetwork.org/

It's completely uncensorable and unstoppable.

They encrypt every chunk of data, using "self-encryption." They don't require a manual market for hosting (like IPFS does), so people can't be intimidated into not hosting something.

They even have their own implementation of the DHT which removes IP addresses after the first hop, so you can't discover the whole network and DDOS it / block it (which is not true of HyperCore, IPFS, etc.)


From their website[1]:

>> You're likely to want to store data on the Network. Why? Because in return for a very small one-time payment, your data will then be stored forever, encrypted and accessible anywhere in the world and only by you—unless you choose to share it.

No different than IPFS.

>> Safe Network Tokens are the incentive mechanism that encourages individuals to provide the computing resources that the Network requires: storage, broadband, and CPU resources. ... Individuals who choose to supply the resources that the Network requires have the opportunity to be rewarded with Safe Network Tokens. This work ensures that the Network rewards those who provide it with valuable resources.

> They encrypt every chunk of data, using "self-encryption."

>> Next, let's talk encryption. Imagine you want to store a photo. That data is protected by a number of layers of encryption. Your photo starts by being broken into pieces which are then encrypted with the other parts of that same file. This 'Self-Encryption' happens before the data ever hits the Network. So, unless you choose to override it, none of your data touches the Network unless it is encrypted. And it’s designed so that you're the only one that ever holds the key.

If you hold the key, why bother encrypting the data with itself? TBH the entire thing reads like a new crypto.

> They even have their own implementation of the DHT which removes IP addresses after the first hop, so you can't discover the whole network and DDOS it / block it (which is not true of HyperCore, IPFS, etc.)

This also makes large parts of the network unreachable. Freenet achieves DDOS protection without this extreme measure[2]. It also allows development of all sorts of apps, it's not only for data storage.

1: https://safenetwork.tech/how-it-works/#where-is-data-stored

2: https://docs.freenet.org/examples/antiflood-tokens.html


> No different than IPFS.

You're wrong. SAFE is autonomous, meaning no one has to agree to host your thing, and no one has to agree to pay you for hosting; you just spin up a node and earn coin. With IPFS there are manual deals, and it's not clear how to actually get paid for your file space. They even opened up a new tier that pays 10x Filecoin block rewards for hosting data they consider "important", but the hosts for that data don't charge any fees, and despite being quite tech-savvy, I couldn't figure out how to start earning Filecoin for hosting using their program at all! Do I just pin the IDs of data which might earn me 10x block rewards, or do I need to advertise and get someone to whitelist me for a "deal"? (Did anyone here do it successfully?)

> This also makes large parts of the network unreachable. Freenet achieves DDOS protection without this extreme measure[2]. It also allows development of all sorts of apps, it's not only for data storage.

Not that kind of DDOS. If I know the IP addresses of all the nodes on the network, I can flood them with traffic via the DHT leaking their IP.

Freenet does antiflood tokens on the level of their protocol, but I can DDOS their ports and subnets without conforming to the protocol. SAFE doesn’t let you even send a message to the end-computer’s IP because there is NO ADDRESS on any protocol (including BGP) that would route it there. That makes the network harder to block.

Also, I fail to see how this makes parts of the network unreachable. It's just that I need the neighbors of a node to agree to forward info to that node. Unless you mean eclipse attacks?

> If you hold the key, why bother?

That's a very good question… this is the only patent their team ever made… the data is encrypted using its own contents instead of relying on an external key, to aid in things like deduplication as multiple people store and encrypt the data, while each has their own key to access it: https://youtu.be/Jnvwv4z17b4?si=oHM96fBjCVGaGmSN

It’s not a “new crypto”. That video is 9 years old, and SAFE network started before Bitcoin! Just a bit after Bittorrent. They started in 2006 and are still working on it.
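
To make the deduplication point concrete, here is a toy sketch of content-derived ("convergent") encryption. This is my own illustration of the general idea, not MaidSafe's actual patented self-encryption scheme (which chains the chunks of a file together):

  // Node.js sketch: the key is derived from the data itself.
  const crypto = require('crypto');

  function convergentEncrypt(chunk) {
    // Hash of the plaintext serves as the AES key...
    const key = crypto.createHash('sha256').update(chunk).digest();
    // ...and a fixed IV, so identical plaintext always yields identical
    // ciphertext -- the network can deduplicate without reading the data.
    const iv = Buffer.alloc(16, 0);
    const cipher = crypto.createCipheriv('aes-256-cbc', key, iv);
    const ciphertext = Buffer.concat([cipher.update(chunk), cipher.final()]);
    return { key, ciphertext }; // each uploader keeps `key` to decrypt later
  }

  // Two people storing the same photo produce byte-identical ciphertext:
  const a = convergentEncrypt(Buffer.from('same photo bytes'));
  const b = convergentEncrypt(Buffer.from('same photo bytes'));
  console.log(a.ciphertext.equals(b.ciphertext)); // true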


Can you link the proof for the actual claims?


Can you be more specific about which proofs you want to see?

Self-encryption patent? Or the functioning of the network in general?

I'm new here so I'm unsure of the policy on links, but please search for safenetform. Folks there will be delighted to answer your queries. This project is very different from most, and it can take a little effort to grasp it. That effort is very much worth it.


> It's completely uncensorable and unstoppable.

This claim is what needs something to back it up. Otherwise it's hard to believe.


Wouldn't this work in Google's favor?

"AI is doing a thing which we have already been doing for most of the internet's existence; this thing is central to the internet as we know it works today."

Versus

"AI is doing a thing that we stopped doing proactively because we thought it might be illegal, which also means it's clearly not that important to the web, please let us keep doing it for AI training."


“So as you can see esteemed jury, we’re not just copyright infringing _now_, we’ve been doing it for years! So you must acquit!”


Probably more to do with Google hating to maintain stuff; they also probably lose money showing you a stale website with stale ads instead of a live one with live ads.


It could also be security related. I know occasionally companies accidentally make things public they didn’t intend to be public due to misconfiguration. Once this happens, those pages are available in the Google cache even when they’re no longer accessible. You can request the cached results be removed, but this takes time.


That's really too bad. I have a search keyword for that, because I've blocked this site in my hosts file at work so I can't browse it, and only look at specific links via the cache.


Have we come to expect anything less from Google? Ugh.


I don't even know what the point of using Google Search is anymore...


Appending site:Reddit.com to everything, because Reddit search sucks.


Reddit search sucking is a feature; it's trying to protect you from the Reddit content, which is what really sucks. There was a brief moment when it had some good advice on products and services, until SEO jerks got wind of it and astroturfed over the whole thing.


Yeah, Reddit is the most astroturfed place on the internet. There's a thriving market in "high karma" accounts that are bought and sold by PR firms, SEO companies, and guerrilla marketers -- and a huge fraction, if not an outright majority, of product-related posts are engineered from such sources with undisclosed interests. These days, it's one of the cheapest, and one of the "best," ways to advertise.

I don't think that Reddit was ever good. (The upvote/downvote system stifles real discussion, and even normal people treat it like a game -- exaggerating and making up stories for upvotes.) But today it's unambiguously terrible.


Finding genuine product reviews is borderline impossible these days. Search results are usually just worthless affiliate link spam, even from supposed "reputable" outlets. Are there any other places on the internet that try to solve this problem in a way that can't be easily gamed by marketers?


At risk of throwing the baby out with the bathwater, think back to how you would've browsed the internet before Google. You probably would've grown your own web-directory of bookmarks, right? That's also how you explore the non-SEO-paved parts of the internet now.

I haven't learned to find product reviews yet, but I imagine that a trick would be to know which forums to go to for certain kinds of products, e.g. maybe https://xdaforums.com/ for Android devices.

There is also the search engine https://boardreader.com/ which ONLY searches proper forums--sometimes you find neat threads, sometimes not.


Forums are a good one, but they're surprisingly hard to find these days. It really feels like we're back to word of mouth again.

This boardreader search engine sounds promising though! I'll give that a try next time I'm shopping in an unfamiliar market.


I used AltaVista.


Forums, surprisingly, are still quite good resources for product recommendations. I suspect it's because forums tend to be _slightly_ nerdier and more focused than more mainstream resources like Reddit. I do a lot of car restoration, and forums are goldmines of product recommendations and information.


I thought it was pretty good and relatively unexploited before Digg's collapse, but I surely have rose-colored nostalgia glasses.


Didn't Digg collapse in 2012 or 2011? The internet was a different place back then. Forums, in general, have gone downhill since then, mostly on account of low-effort phoneposters. (A trend heavily promoted by Reddit.)


Yeah, but they enshittified Reddit too; so many links are to "unreviewed" subreddits that force you to log in.

If it was not for Kagi putting some sanity back into search, Google and Reddit going to shit would have had me seriously reconsider this career and spending so much time online.

I think it is abhorrent, borderline criminal, to place yourself at the centre of the Internet experience and then one day decide to make it shittier for everybody because of short-sighted lust for money and inept PMs.


Yeah, exactly. Using reddit as a qualifier works only until everyone starts doing it. Thankfully, for now we have old.reddit.com, until they deprecate that with some bullshit excuse in PR English (who knows what will happen after the IPO). Then there's Teddit, Libreddit, etc., but they only work as long as Reddit doesn't make a change. Feels like everything is a cat-and-mouse game these days.


Most people in tech aren't even using old.reddit.com

Ever notice people posting completely broken markdown? That's because they use new reddit, which disagrees about markdown newlines. I would guess >70% of programmers/tech people are using new reddit.


teddit/libreddit are basically as good as dead after the API changes last summer. It's all gone to the dogs.


> I think it is abhorrent, borderline criminal, to place yourself at the centre of the Internet experience and then one day decide to make it shittier for everybody because of short-sighted lust for money and inept PMs.

Putting into words the thought I’ve been having recently.

I think Google is winding down on search. I wonder how important it is to them these days. There’s no way being this crap is not intentional.


I use old reddit with an extension that forces it and don't even know what unreviewed subreddits are.


I more often want the opposite, because reddit sucks for answering the sort of questions I usually search for.


You don't need Google for that; it works with DuckDuckGo as well.


> Appending site:Reddit.com

Or, with DDG, append !r


DDG supports both, but they do different things. 'site:google.com' is a DDG search restricted to results from google.com; '!g' is a Google search that returns any results.

Similarly, !r uses Reddit's own search.


I personally am seeing issues with this too. Any time I search for something niche, and hence want some real-world feedback, adding reddit to the search returns bot postings. Half the time you'll read comments that are fake reviews, the equivalent of Amazon bot reviews in Reddit-comment form.

I long for the internet of old. Everything is enshittified.


Brave Search usually puts Reddit posts on top. Furthermore, the Reddit answers are now visible right in the search engine, so there's no need for the Reddit tracking. The only issue is that you can only see the OP's title and the answers, but not the OP's full question.


Could you explain how you generated this link? Whenever I try the old cache:<url> address I always get a 404. Thanks!


I went to the Google search page and put in the URL with the cache: prefix, e.g. "cache:http://archive.org/". This is now broken. Existing cache entries still exist, but unfortunately, unless you already know the URL, they are inaccessible.



Thankfully, if a page is cached you can still use the 'cache:' command to retrieve the cached version - e.g., cache:https://apod.nasa.gov/apod/astropix.html


For how long?


Yes that works for me in Chrome / Android.


Just noticed it missing recently, frustratingly enough. So many times I'd search for something very specific and get a hit on a comment on a page or forum, often with tens if not hundreds of pages. Using the cached link always worked.


If confirmed, what a shame. Cache was a great resource. But one thing that already bothered me was that it didn't work on mobile, at least for me; only in a desktop browser.


I've had to use Google's cache maybe twice in the last year: the first time I was very surprised that they had hidden it one or two clicks deeper than it used to be; the second time was earlier today, and this time I couldn't find the button anywhere. I guess this confirms it's gone for real!


Heh.

First: "We should move the cache link one menu click deeper, we don't have room here"

(No one can easily find it now)

Later: "Wow, no one uses cache, guess we should remove the link!"


“But the plans were on display…”

“On display? I eventually had to go down to the cellar to find them.”

“That’s the display department.”

“With a flashlight.”

“Ah, well, the lights had probably gone.”

“So had the stairs.”

“But look, you found the notice, didn’t you?”

“Yes,” said Arthur, “yes I did. It was on display in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying ‘Beware of the Leopard.'”

― Douglas Adams, The Hitchhiker's Guide to the Galaxy


Aren't they doing the same to reviews? And to Google Maps?


Wild guess: Some PM got promoted for discovering that the Ads product could make more display revenue when people don't go to the cached page result.


Also, the money saved from not serving cached pages goes into the $1-$10m retention bonuses paid to Brain and DeepMind software engineers: https://www.theinformation.com/articles/googles-defense-agai...


"Our telemetry shows that many people don't use the brakes often in an electric car, so we've moved them to a sub-menu of the dashboard. We've removed the brake pedal, saving $32 per car."


I used to use it on some sites that would show different results to Google vs. people. LinkedIn's cache was disabled years ago; I guess it was just a matter of time until the rest would be. On a similar topic, does anyone know how to view LinkedIn without an account?


Do you have any examples? I'd actually find this helpful for probably the same reasons lol


You could always make a fake account for LinkedIn.


I tried this. Account was blocked fairly quickly. I think they pattern match on activity and block such accounts automatically


Bummer to see Google actually killing a good feature.

Please take me back to the Google of 10 years ago, when they actually had working products instead of experimenting with AI.


Yes, instead of making improvements to their product, like removing spam sites such as Pinterest, they are actually killing the good features which give Google an edge over ChatGPT. Not sure what the game plan is here.


Right now, the focus seems to be on margin expansion.

Give them a few years at this rate, and they'll move to financialization.

A decade after that, they'll divest most of their assets and switch to providing services, like IBM, except they'll probably try doing it without adding a service team, which should make for some fun satire from The Register.


More than a few times, cached links were what saved me because the target site/page was long gone or down.

I guess this feature was too useful for their users...


web.archive.org.


I'm surprised I had to scroll down this far to see the obvious.

If you need cached versions of websites, just use the Internet Archive, and make sure to donate.


It also should be obvious that only a small portion of all pages on the Internet are in the Internet Archive.


Yes, it is. I admit it's not a solution for today, but relying on a free service from a major corporation that won't accept direct payments for their search service is not a long-term solution.


Argh!

It was so useful for when a client deleted or changed a page in their CMS and had no recovery position or backup!

And by Client I mean Me most of the time LOL


The Google of 20 years ago was a vastly superior product


In the late 90s I switched from MetaGer to Google, as it was just significantly better at locating what you were searching for. I'm now back with MetaGer, as it is nowadays significantly better at locating what I'm searching for.


I remember when Chrome first launched. It was such a beautiful product. The first major competitor to IE in years. Now, it's become the king it killed.


> The first major competitor to IE in years.

You mean Firefox was.


Chrome was beyond TERRIBLE compared to Firefox. Firefox: 1) Right-click image, 2) Save as.

Chrome: 1) Right-click image, 2) Can't save as, because Google decided so, just like Apple's mentality.

Imagine actually using a browser where the maker decides you should not have access to basic features.


I just tried this on Chrome from 2010 and current Chrome, and both had "save image" when right-clicking an image?


The person is talking about a CSS trick: you could prevent "save as" from coming up by making the image the CSS background of an empty square.

However, Firefox was smart enough not to be tricked by that.

That's the right move if you side with the consumer, and the wrong one if you side with the producer. Google is fundamentally highly elitist, and that's a pretty easy framework to use if you want to guess their actions.
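
For reference, a minimal sketch of the trick (illustrative markup only; 'photo.jpg' is a placeholder):

  <!-- The picture is painted as the CSS background of an empty div, so
       Chrome's context menu has no <img> to offer "Save image as" for.
       Older Firefox countered with a "View Background Image" menu item. -->
  <div style="width: 400px; height: 300px;
              background: url('photo.jpg') center / cover no-repeat;">
  </div>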


Perhaps the web was also easier to index 20 years ago, though?


I'd guess their point was about the time before Google continuously removed useful features and optimized for ad revenue.


I still miss Google Code Search. Does anyone know an alternative?


GitHub search, the downside being that you need to have an account.


Not sure if it's only me, but GitHub search is absolutely unusable: it does not find a variable name in the file I have open in the other pane, where I can see it exists.



Sourcegraph search.


The Internet of 10 years ago was vastly superior, period.


Google sucks so hard now; literally everything they do is getting worse. No idea why anyone would want to work there, or why they think they're cool paying hundreds of thousands to incompetent people.


Most of the people getting 100k are not the people making the dumb decisions. Besides, lots of cool research is still happening inside Google.


No :( Now what will I do when a website is down or broken or just randomly not working? Why does Google kill every good feature? What a terrible company...



And please, donate.


It looks like nothing more than a cost-cutting measure.

Cache is needed for machine translation of PDFs. This change has made it difficult to read PDFs written in other languages.


I don't buy it; translation is still accessible from translate.google.com, the Translate button in Chrome, etc.

Probably just a ham-fisted 'change for the sake of change' thing. """Reimagining""" search often gets boiled down to the least-common agreeable set of things across 3000 people. This is "simplification".


Those don't work on PDFs.


Yes they do. Check again.


Google's cache is a wonderful way to circumvent many paywalls. I have a bookmark that I use as a button to redirect the current URL to its cached version.

You can try it out, just add a new bookmark and paste this:

"javascript:var url = new URL(location.href); location.href = 'https://webcache.googleusercontent.com/search?q=cache:%27 + url;"


Can confirm this works great, thank you :)


I circumvent paywalls with my back button and blocking the domain


I don't know why they're keeping the "so-and-so results in 0.XY seconds" line.


To make you squeal in agony knowing that they probably have the result to your query but there is no way to reach it due to the first 10 pages being junk.


It's literally impossible to look at more than a few pages (10 maybe? that's optimistic at this point) of those results, if they even do exist.


So this is why I couldn't find it...bummer. Which means another downgrade.


Maybe it just wasn't used enough to justify its placement in the UI. I always used the URL cache: prefix. The few times I tried through the UI, I couldn't find it. I'm fine with the prefix, just continue supporting it. Though these days, I use other cache services more than I use Google cache.



Matter of time before “powered by Bing” appears at the bottom.


Samy Kamkar's "Just Bing it!" joke will recurse on that day.


Google continues to innovate!


Maybe Google is getting us ready for a post-Search AI world? Why use Search? You might find something there which is going to make you politically active. Use the neural network instead; it will help you obey.


The cache: directive is very useful in many circumstances, especially for information-archiving purposes; what a shame that Google wants to cancel it.


For archival purposes, use the Internet Archive; Google's cache couldn't be counted on to remain present and unchanged, even before this.

(The Internet Archive can't be counted on to remain, either, but at least it will either stay unchanged or disappear, never silently change. I wish archive.org didn't auto-nuke entire sites based on robots.txt, because when domains disappear, the squatters often seem to serve robots.txt files for some reason.)


IA have increasingly ignored robots.txt since the mid-2010s, going on at least eight years now:

<https://blog.archive.org/2017/04/17/robots-txt-meant-for-sea...>

<https://blog.archive.org/2016/12/17/robots-txt-gov-mil-websi...>


That's excellent news, thank you!


This is a terrible shame.

I wrote a browser extension back in 2005 or so called Commoncache to help the user view a page when Slashdot hugged a site to death. It used a fallback mechanic: it would try Google cache, then the Wayback Machine, and finally Coral Cache.

It was minorly popular and was even included on a CD in an issue of Macworld magazine.

I have absolutely no idea what's happening with Google's project managers these days, or whoever in the company is making product decisions. Thousands of highly intelligent and highly paid staff just keep making their core product and associated features increasingly user-hostile.

YouTube search is an absolute travesty. Google search just floods the first page with various cards that take up previously useful space and add no value to the simple need to find answers to questions.

I firmly believe that relying only on A/B testing for feature launches reached peak usefulness years ago. It's like everyone forgot to check whether new features benefit users at a human level, simply because 51% of people click more on B, while A is the better experience for everyone.


Darn, I'd completely forgotten about Coral Cache, from the /. times. I remember it being slow but super useful.


Noticed this months ago. But what is the purpose of removing it?

It was a slightly useful thing, especially for older links: you could see even a bare-bones, text-only version of what might have been on a site without having to go to the IA Wayback Machine, etc.


> But what is the purpose of removing it?

The cache links had a near-zero click-through rate, and that ruined the engagement metrics of a project manager?


The cached pages don't increase AdSense impressions. I think they're getting more desperate in finding ways to increase revenue.


Well, Google is making me more convinced each day that my $10 Kagi subscription is a good investment. Kagi's native support for the Wayback Machine is one of its most useful features for me.


Can you explain the UX? Is it just one click to go to the latest Wayback Machine version of the site, or something like that? (It was on my to-do list to either find or make an extension that does this, so Kagi could be quite useful to me also.)


There are three dots right next to the result; from that menu you can open the result on archive.org in another tab. [1]

But you can probably move the link to a more prominent position, as Kagi supports custom CSS that is served with your search results. [2]

1 https://help.kagi.com/kagi/features/website-info-personalize...

2 https://help.kagi.com/kagi/features/custom-css.html#customiz...


Sounds as straightforward as hoped. Ty!


Kagi is rad. I signed up about two months ago. I'm not comfortable using it at work, so I search on my personal phone. It works like Google should (or did). It's hard to quantify, but yeah: write your own engine (which isn't crazy) or pay someone to do it for you (my solution).

Yeah, it's worth a few bucks.


I believe the reason Kagi "feels right" for ex-Google users is that they are actually using Google to power the search [1]. They have other sources as well, but since it feels so natural after 25 years of daily Googling, I assume Google is quite an important source for generic search.

[1] https://help.kagi.com/kagi/search-details/search-sources.htm...


"But most importantly, we are known for our unique results, coming from our web index (internal name - Teclis) and news index (internal name - TinyGem). Kagi's indexes provide unique results that help you discover non-commercial websites and "small web" discussions"

They say that what makes them usable, and their results most topical, is their own index.

They also list Google as an ancillary source, as well as many others.

But the "feel" comes more from the UI and the relevance of the results, both of which Google completely fails on now.


This makes me want to roll my own. But it's such a huge undertaking (maybe?).

I didn't realize it was a rebalancing. But seriously, I have no regrets. If nothing else, the incentive alignment matches my values better.


Reminds me of the time I recovered the complete content of an SME's website from the Google cache. They had the site hosted by the company that was also their ISP, and had been paying that company for years for site backups that turned out not to exist.


Another thing that they removed was the built-in "car loan calculator".

It's odd, because they still have the mortgage calculator. I can only imagine it was some weird agreement between Google and car companies to remove it.


I'd bet that it was unmaintained and underutilized, and someone filed a bug, and it was easier to justify removing than to fix it.


It was very simple and I know many people who used it. The only thread I could find on it was this: https://support.google.com/websearch/thread/156391283/auto-l...

Can't imagine that takes much maintenance.


I'm getting sick and tired of seeing these "AI"-generated shit images on 20 different articles daily. I can't be the only one.


Wow. A large chunk of the web just died. As if the echoes of millions of past websites cried out in terror and were suddenly silenced. I fear something terrible has happened.


The [indexed] web died before that, because Google is already "forgetting" things in the index that were always there. I bet Google/Alphabet already knows these are the last shots. I don't think it is a coincidence that the first rows of search results are full of ads and it is very difficult to spot the first organic result.


One of the things I've come to really enjoy in Brave is its "Show Wayback Machine prompt on 404 pages" feature. I know this is tangential to your point, but I felt it still worth noting.


I'm finding it harder and harder to search for things from recent history.

They killed reverse image search for Lens, which is borderline useless.

Image search no longer has its date range filter. You can still use the undocumented keyword, but who knows when they'll take that away.

Search results are increasingly irrelevant. Yesterday, I was searching for news articles about the late-00s capture of an Al Qaeda leader who was tracked down in an unusual way. Amongst the results was Visa's careers page. (As a side note, I then asked ChatGPT about the event, and it hallucinated in all three of my attempts.)


You can also use the bang !wayback, and create a custom one for archive.ph ("https://archive.ph/submit/?url=%s"); I have one for LibGen/Sci-Hub too.


Keep in mind that although the Wayback Machine (Archive.org / Internet Archive) will accept archival requests through a simple URL submission[1], Archive.Today does not, and AFAIU requires at least some manual interaction to complete an archive request. Archive.Today also rate-limits submissions and will throw up CAPTCHA tests if you exceed the rates (though so long as you're retaining cookies, those seem to stay good for a while).

I've archived 100s to low 1,000s of my own personal contributions on a few online services to both sites.

For Archive Today, it is possible to expedite the archival process by generating the initial submission URL, though you'll have to complete another two steps manually after that point, as I recall. If you're archiving a large set of sites, you can compile a list, generate a Web page off of it, and work through it at a pretty good clip.

________________________________

Notes:

1. The URL format being

  https://web.archive.org/save/<URL-to-save>
You can submit that via a script using any HTTP request generator, e.g., curl, wget, w3m, lynx, etc.
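
Or, borrowing the bookmarklet pattern from elsewhere in this thread, a minimal sketch for one-off saves from the browser (untested against the Wayback Machine's rate limits):

  javascript:location.href = 'https://web.archive.org/save/' + location.href;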


There is still archive.org


Archive.org is great, but it's always better to have more options. This is another nail in the coffin of the Google that was cool.


Bing Cache still exists, btw. But it's indeed sad that Google is killing theirs...


Noticed this the other day; wasn't sure if it was a specific domain or just me. Still sucks, though.


Hey, if anyone from Google is here reading this, what gives?

Could you tell your decision makers to stop making such stupid, user-hostile decisions?


Not from google but I can tell you a probable reason. I'd wager they did this purposefully, knowing that it's user-hostile. Google cache shows you what a page looks like to Google, and that often has things such as paywalls (edit - or login gates) disabled to improve indexability of the page. People could use this to get around paywalls, which is not great for profitability. Cached pages also don't show you the latest active ads.

Edit: also, easy access to non-paywalled content gives you a massive trove of training data for machine learning models. Even if these aren't the main reasons for this feature disappearing, they're pretty convenient side effects.


It saddens me when companies wilfully slap their users across the face. It's eventually self-harming, as each regression of their product's quality is an opportunity someone else will use to out-compete them (over the long term).

I remember being in an interview with a Googler where they posed some contrived problem which, in retrospect, I realized they intended me to solve using URL rewrites (so all result clicks run through Google rather than going direct to the desired site). This was before that was the norm. It was appalling; there's no way I would have entertained such an approach, due to the way it breaks the user's expectations about how links work (not to mention degrading their privacy).

Today I can't copy a 'bare' news site link without extra steps or properly rely on the back button, and I wish I could find that guy and slap them across the head for making my Internet a shittier place. </rant>


> they intended me to solve using URL rewrites (so all result clicks run through Google rather than going direct to the desired site).

Interesting; well, we know how that ended, with Google AMP. It's good that we have people who think like you. Sadly, there's always someone else willing to just take the money and implement it. I'm grateful for the community and the hobbyists who build workarounds and alternatives (e.g. SearXNG), and I contribute where I can. I think that's the only real solution at the moment.


Extremely sad, but not surprising. I used webcache all of the time. Good thing the Internet Archive still exists, but who knows how much longer that'll stay up.

Ruth Porat joining was the day that old Google died.


Damn - noticed this the other day... what a frustrating shame this is.


This is actually a monumental loss for the internet


Not much longer until Search joins https://killedbygoogle.com/


At this point it seems like they're just trying to make search worse.


The real reason for the change, in my opinion, is that the cache feature is used by many to bypass paywalls


I was looking for this link in a recent search. So infuriating!


The Google enshittification continues; more news at 11.


I'm excited for the day when Google finally bites the bullet and makes their biggest UI "upgrade" to date, removing the search bar and keeping pesky user input from interfering with targeted ad revenue.


The other day I did an AI-related search and had zero organic results above the fold. It has begun.


I'm probably in the university library most weekends scanning books into my fine-tuned LLM. Y'all can enjoy the trash, I've had enough.


What does that mean? Are you creating your own LLM, fine-tuned on the books from your library, or something else? It's hard to pick up on what you're saying.


If you're not saying it sarcastically, that's an awesome project.


They're replacing sales with some AI contraption, so it makes perfect sense for them to replace their users with AI, too.


The New Google Search: Just browse our page of endless ads until you find the result you're looking for!


"Hear me out. Let's remove all of the relevant results so they keep browsing and viewing ads forever!"


> removing the search bar and keeping pesky user input from interfering with targeted ad revenue.

But how will they know what you are shopping for?


The Chrome intent-integration engine, coupled with recent 'Bard' conversations as well as whatever Google 'Home and Meet' hardware has overheard you saying, removes the need for any user interaction. All that's left is to tell you what to buy, and where.



