This is something that's been going on for a while - Google killing small web apps: converters, calculators, movie listings, IP finders, weather stats, stocks. It's not all low-hanging fruit. I'm not saying they shouldn't be doing this, nor that it's intentional. Their goal is to be the best search engine, which means connecting searchers with answers as quickly as they can. But even so, it sucks for the web apps that get made redundant by Google.
Also, it's interesting to compare this with, say, Yahoo's approach. Yahoo would have put an "IP" widget and a "weather" widget on their portal homepage. Google waits until the user searches for the info before giving it to them, which keeps their homepage clean and, more importantly, keeps their message strong: "we do search well", while Yahoo's always seemed to be "we do a whole bunch of stuff, some of which you may need". I know Google/Yahoo comparisons aren't really du jour, but it's still interesting.
This is why I mentioned that it's not all low-hanging fruit. Movie listings? That requires feed integration, handling a lot of data, and non-trivial presentation. Same with weather: getting a good weather app is not 10 seconds of coding. Some, like IP address, are simple things, but even so, whatismyip.com built a huge range of products around that one simple service.
Pretty sure they got that after Google. For years, wunderground was the best of a bunch of terrible web sites. They're still about 40% ads by pixel, though, and have a hugely cluttered UI. I'll use them as a second step (Google's "detailed forecast" link) after typing "weather" into the Chrome address bar. But broadly, they still suck compared to Google.
Actually, the only way for a "my IP" application to make some money is probably to be indexed as the first result on Google. At least, I never remember the domain of any of those 200 trivial apps; I just search on Google. So it makes a lot of sense that they show you the result when you search for "ip" or "my ip".
It still disadvantages the people who want to use that service while there is nothing left to perform the job. Take Google recently killing Code Search: http://googleblog.blogspot.com/2011/10/fall-sweep.html. One of the commenters had been building their own code search platform before Google did. Google entering the market meant that this was no longer viable, though: http://news.ycombinator.com/item?id=3112444. A new code search platform will no doubt pop up, but it will take a while to get something that works well.
This is the problem with a giant company stepping into new areas: it often leaves destruction in its wake.
Remember: if the answer is delivered over HTTP, the reported IP may be the IP of your ISP's transparent proxy server. If you want the IP of your NAT box, you need a what-is-my-IP service where the response is delivered over HTTPS.
This seems like a case of some things being features, not applications. Entire web sites built just to report your IP back were probably going to be replaced by one thing or another eventually.
Both Google and Apple (and most other companies) are smart enough to see that if a simple feature is heavily used and the experience of using it can be improved for their users, they may want to make it a "native" part of their products. Let's face it, this is a better experience for that search, and you can still go to the indie sites if what you need isn't covered by it.
If your site is so sparse that Google can ruin you just by handling a search query, your business model was broken or non-existent. There must be something they can do with all that traffic data to differentiate. Where are the aggregate statistics?
Any of the bigger ones could spring off into an ISP review site.
How would a modify-headers add-on be able to make the server think that you are coming from a different address? It's not as if the browser sends the origin address as an HTTP header.
It doesn't have to. It's part of the IP packet, which contains the TCP segment, which contains the request headers.
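You can see this with a loopback socket (a minimal Python sketch; the server learns the peer's address from the accepted TCP connection itself, never from anything in the request body or headers):

```python
import socket

# Minimal sketch: the server gets the client's address from the
# accepted TCP connection (the IP/TCP layers), not from HTTP headers.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # pick any free port on loopback
server.listen(1)

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())

conn, peer = server.accept()    # peer is (ip, port), taken from the packets
print(peer[0])                  # 127.0.0.1 on loopback

conn.close()
client.close()
server.close()
```

No header the browser (or an add-on) sets can change what `accept()` reports, which is why a modify-headers extension can't fake your address to a properly written service.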
Are you sure that you are not using a proxy server at the address Wolfram gives you?
Edit: On second thought, you could try to fool server-side detection by setting a non-standard X-Forwarded-For header, but a what-is-my-IP service shouldn't trust that and should just report the real remote address.
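That rule can be sketched in a few lines (hypothetical helper and proxy address, just to illustrate the idea; X-Forwarded-For is client-controlled, so it should only be honored when the direct peer is a proxy you operate):

```python
# Hypothetical helper: decide which IP a what-is-my-IP service should report.
# X-Forwarded-For is trivially spoofable by the client, so only honor it
# when the direct TCP peer is a proxy we trust (example address below).
TRUSTED_PROXIES = {"10.0.0.1"}  # e.g. our own reverse proxy

def reported_ip(remote_addr, headers):
    xff = headers.get("X-Forwarded-For")
    if xff and remote_addr in TRUSTED_PROXIES:
        # Left-most entry is the original client as our proxy saw it.
        return xff.split(",")[0].strip()
    return remote_addr  # ignore the spoofable header

# A spoofed header from an untrusted peer is ignored:
print(reported_ip("203.0.113.7", {"X-Forwarded-For": "1.2.3.4"}))  # 203.0.113.7
# The same header coming from our own proxy is honored:
print(reported_ip("10.0.0.1", {"X-Forwarded-For": "1.2.3.4"}))     # 1.2.3.4
```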
I recently noticed that searching for dictionary words, using the old define:something trick, or queries like "ubuntu release day" and "evanescence genre" returns related information or a 'best guess'. Nifty.
Malware usually opens a reverse (TCP) connection and thus does not require the remote IP of the infected machine. It only needs to know the IP or domain name of the server it wants to communicate with.
Regularly reversing malware samples, we still see a lot of malware getting its remote IP from remote services. This is even used by some recent malware to update the bootstrap DHT with its own IP... In such cases, it doesn't even need to contact the C&C directly.
I don't get why my previous comment is down-voted ;-)