The !wa bang command is great for weird conversions. (Ex: sometimes while watching Mad Men I'll plug in "!wa 100 dollars 1960" and out pops an inflation-adjusted figure.)
!fake can check a given Amazon product for fake reviews.
Stuff like Wikipedia (!w), Reddit (!r), etc.
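For the curious, bangs are basically just a prefix-to-URL-template lookup. A minimal sketch in Python (not DDG's actual implementation; the templates are simply each site's public search URL):

  from urllib.parse import quote_plus

  # Map each bang prefix to a site search URL template.
  BANGS = {
      "!wa": "https://www.wolframalpha.com/input/?i={}",
      "!w":  "https://en.wikipedia.org/wiki/Special:Search?search={}",
      "!r":  "https://www.reddit.com/search/?q={}",
      "!g":  "https://www.google.com/search?q={}",
  }

  def resolve(query: str) -> str:
      bang, _, rest = query.partition(" ")
      template = BANGS.get(bang)
      if template:
          return template.format(quote_plus(rest))
      # No bang: fall through to a normal search.
      return "https://duckduckgo.com/?q=" + quote_plus(query)

  print(resolve("!wa 100 dollars 1960"))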
Some people complain the results aren't as good as Google's, but I think they have either forgotten or never learned how to search. Since the engine has no PII on you, it can't handle ambiguity well. A good heuristic: if your query would give you a disambiguation page on Wikipedia, add a few more terms.
I've been using it exclusively for a while, and the only time I really go back to Google is for their maps.
A huge chunk of what people do on search engines is essentially using them to query databases. If I know which database my answer is in, why add an extra step?
It's actually a point in DDG's favor that they forgo a chance at ad impressions and let the user jump directly to what they want.
(Though to be fair, I suspect a big chunk of their income comes from Amazon referrals, which earn them money whether you search on the site or use a bang command.)
Back to your bang feature point, I really appreciate that DDG makes it easy to search Google instead by simply using !g. Tech companies don't always make it easy to use a competitor, and there's something to be said for DDG not creating barriers to using other search engines.
Or you can use !s for StartPage, which seeds its own results from Google's and is more privacy-respecting.
Google basically enhances that in three ways:
* It takes away the choice/effort of selecting a source.
* It combines multiple sources reasonably well.
* It gives decent results for things that aren't really just shortcut queries.
(I sometimes use !gi for Google Images.)
I wouldn't call my searches exotic, merely specific.
I think I'm just used to the nuances of DDG where others (who currently use Google) are used to the nuances of Google. You get used to crafting search queries in a specific way, and having to change it is noticeable.
Nowadays my favorites are:
!wbm : Wayback Machine (buggy but still useful)
!cache : Google cache (as a super-crude, text-only view)
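When !wbm flakes out, the Wayback Machine's availability endpoint (documented at https://archive.org/help/wayback_api.php) can be queried directly; a quick sketch:

  import json
  from urllib.request import urlopen

  def latest_snapshot(url: str):
      # Ask archive.org for the closest archived copy of the page.
      with urlopen("https://archive.org/wayback/available?url=" + url) as resp:
          data = json.load(resp)
      closest = data.get("archived_snapshots", {}).get("closest")
      return closest["url"] if closest else None

  print(latest_snapshot("example.com"))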
PS: How do you get a monospaced font on HN?
Thanks for the tip, friend :)
I've copy-pasted this to HN previously, but it's worth posting again -- explanation from their About page:
"Search engines like Google are indispensable, able to find answers to all of your technical questions; but along the way, the fun of web surfing was lost. In the early days of the web, pages were made primarily by hobbyists, academics, and computer savvy people about subjects they were interested in. Later on, the web became saturated with commercial pages that overcrowded everything else. All the personalized websites are hidden among a pile of commercial pages. Google isn't great at finding those gems, its focus is on finding answers to technical questions, and it works well; but finding things you didn't know you wanted to know, which was the real joy of web surfing, no longer happens. In addition, many pages today are created using bloated scripts that add slick cosmetic features in order to mask the lack of content available on them. Those pages contribute to the blandness of today's web.
The Wiby search engine is building a web of pages as it was in the earlier days of the internet. In addition, Wiby helps vintage computers to continue browsing the web, as page results are more suitable for their performance."
EDIT: Apparently Wiby also has !g and !b for Google and Bing redirections.
My test search on all of these engines is "atari", and all I get here is Atari fan pages and blogs about retrocomputing, restoring old hardware, and the like: no atari.com, no Wikipedia, no Steam page selling Atari collections.
See the results and decide.
It's a search engine that returns results minus the most popular sites: you can skip anywhere from the top 100 to the top million results, in logarithmic steps. It's a great way to step outside the normal popularity filter bubble and find new things.
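The mental model is just slicing the head off the popularity ranking. A purely illustrative sketch, not how the engine is actually built:

  # The cutoff is selectable in logarithmic steps.
  SKIP_STEPS = [100, 1_000, 10_000, 100_000, 1_000_000]

  def unpopular_results(ranked_results, skip=100):
      # ranked_results is assumed sorted by site popularity, most popular first.
      assert skip in SKIP_STEPS
      return ranked_results[skip:]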
It uses other search engines, but so do at least DDG and Ecosia, which are also listed there.
There are actually several Searx instances you can choose from: https://github.com/asciimoo/searx/wiki/Searx-instances
I also have it externally facing for use outside of my LAN, with forced SSL and HTTP basic auth in front to prevent unauthorized access/0-day exploits, and every browser I've tried works fine with basic auth in front of the default search. Even iPhone browsers are fine.
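If you go this route, the instance stays scriptable through the auth gate. A sketch with placeholder hostname and credentials, assuming the JSON output format is enabled in the instance's settings:

  import requests

  resp = requests.get(
      "https://searx.example.com/search",
      params={"q": "atari", "format": "json"},
      auth=("user", "password"),  # the basic-auth gate in front
      timeout=10,
  )
  resp.raise_for_status()
  for result in resp.json()["results"][:5]:
      print(result["title"], "->", result["url"])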
I'm not sure how that's an improvement over DDG.
It queries Google, Wikipedia, and the like. It's a metasearch engine, so it doesn't crawl the internet on its own.
It's an improvement over DDG in that it's free software: You can't host DDG yourself.
1) An engine that removes all results from corporate entities, so essentially all pages would come from independent creators. Obviously this would not be a primary search engine in and of itself, since it restricts a lot of content that could be useful, but at times I'd rather see what people are creating instead of heavily SEO'd corporation #64728321. The description of 'Yippy' seemed promising here, but a search for 'space' quickly lowered my optimism.
2) A truly semantic search engine. It's amazing that Google was founded over 20 years ago, and it was a major step forward in that searches for 'Abraham Lincoln' would no longer return hardcore porn. But since then we haven't really improved much beyond that. Imagine if a search for 'pages updated within the past 30 days about the launch of the Crew Dragon, excluding large media and all social media results' actually returned what I'm looking for. Wolfram Alpha is a very good proof of concept here, where the entire internet could effectively be a subset of all results.
 - https://www.wolframalpha.com/
If they want to go the extra step and identify sister corps, that works too!
We quietly opened to the public about a month ago, and we are currently putting the final touches on the site and story while onboarding more charities. Right now you can support charities working for the climate, animals, and children.
Since we have a lot of DuckDuckGo fans on here, I think I should mention that Givero:
* Has DuckDuckGo compatible bangs
* Has Instant Answers (just launched the first 3, more to come)
* Is privacy-centric.
* Is Bing-based, like DDG.
* Is hosted in Europe.
So a good mix of DuckDuckGo and Ecosia, with the key difference being that we donate 50% of our entire revenue to charities you choose.
Full disclosure: I am the founder. We're a small team based in Denmark, formerly working on Findx (privacy search engine with own index), which shut down last year.
We do not use third-party analytics tools, so your IP is not shared that way.
Your IP is anonymized immediately in our analytics tool (self-hosted Matomo), and we don't store your queries there.
We anonymize our raw weblogs, which are only used for debugging purposes, after 5 days.
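For reference, "anonymized immediately" means the usual trick of zeroing the low bytes of the address before anything is stored, similar in spirit to Matomo's IP anonymization setting (mask widths are configurable; the ones below are illustrative):

  import ipaddress

  def anonymize_ip(addr: str) -> str:
      ip = ipaddress.ip_address(addr)
      if ip.version == 4:
          # Keep the /24, zero the last octet.
          return str(ipaddress.ip_network(addr + "/24", strict=False).network_address)
      # IPv6: keep the /48, zero the rest.
      return str(ipaddress.ip_network(addr + "/48", strict=False).network_address)

  print(anonymize_ip("203.0.113.42"))  # -> 203.0.113.0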
We do have to pass on your IP to Microsoft at the moment, but you have the option to turn off personalized results (the "filter bubble", basically) on the search results page. We are working on getting permission to tighten the privacy options further here, but Bing requires a certain volume (several million searches/month) before they're willing to discuss it, as we learned from their VP of Search Partnerships in Europe. So we're on par with Ecosia right now, but working to be on par with DDG.
How should I go forward with this plan? Are there any APIs available? Or should I put on a biz dev hat and talk to Bing guys?
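For what it's worth, Bing does sell a Web Search API through Azure Cognitive Services; a minimal query looks roughly like this (the key is a placeholder you get from the Azure portal, and pricing/terms are where the biz-dev hat comes in):

  import requests

  resp = requests.get(
      "https://api.cognitive.microsoft.com/bing/v7.0/search",
      headers={"Ocp-Apim-Subscription-Key": "YOUR_AZURE_KEY"},
      params={"q": "duckduckgo", "count": 10},
      timeout=10,
  )
  resp.raise_for_status()
  for page in resp.json()["webPages"]["value"]:
      print(page["name"], "->", page["url"])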
Only 1 kg? Roughly half of a tree's "dry" mass is carbon, it gets that carbon from the atmosphere, and most trees weigh far more than 1 kg...
Having more forest area keeps more carbon locked up for as long as the forest lives. However, if you want to actually get rid of the carbon permanently, you'll have to bury it.
Maybe 1 kg per tree reflects some kind of average of these two?
But I'm not an expert in this topic (not even very knowledgeable) so someone please confirm or correct me!
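A rough back-of-envelope with assumed figures (dry wood is about half carbon by mass, and each kg of carbon corresponds to 44/12, roughly 3.67 kg of CO2, the molar-mass ratio):

  dry_mass_kg = 100         # a modest mature tree's dry mass (assumed)
  carbon_fraction = 0.5     # roughly half of dry biomass is carbon
  co2_per_carbon = 44 / 12  # kg of CO2 per kg of carbon

  carbon_kg = dry_mass_kg * carbon_fraction
  print(carbon_kg, "kg C =", round(carbon_kg * co2_per_carbon), "kg CO2")
  # -> 50.0 kg C = 183 kg CO2, so 1 kg per tree does look very low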
Also, I've heard that the issue with AI in some specific domains is substantially slower responses than traditional, non-AI approaches. How would this be reflected in a search engine? Would query results be updated less often?
It could also actually understand websites and create meaningful summaries. And all sorts of other things.
As for speed, I don't know; in many applications AI seems way faster than traditional approaches. And things can be parallelized: DeepMind's models trained for many years of computing time, but the training was parallelized, so it didn't actually take years of wall-clock time.
That's only 7,000 kWh, or about 100 full charges of a Tesla. Seems more like a gimmick than anything impactful.
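Checking the arithmetic (pack size assumed; Tesla batteries vary by model):

  total_kwh = 7_000
  pack_kwh = 75  # roughly a long-range Model 3 pack
  print(total_kwh / pack_kwh)  # ~93, so "about 100 full charges" checks out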
The engines behind all of the general-purpose search products mentioned in the article are still Google or Bing.
I would love to see a competitor.