How I Took an API Side Project to 250M Daily Requests (ipinfo.io)
494 points by coderholic on July 1, 2017 | 160 comments

I'd like to point out that the things he says he's doing instead of marketing are, in fact, marketing. It's "guerilla" marketing, and it's being paid for with the writer's time. Nothing wrong with that, just don't confuse "marketing" with "advertising".

The author does not say how much maintaining the service costs or what the long-term plan is. As others have already noted, a similar service existed before and ended for a simple reason – no point in maintaining it at a constant loss with no clear revenue plan.


Oh it generates revenue, and we have a lot of big customers! See https://ipinfo.io/about https://ipinfo.io/customers and https://ipinfo.io/pricing

Yes but what's your revenue? Please consider going transparent (i.e. http://www.transparentstartups.com/) and doing an interview with one of my fav websites, indiehackers.com

I'm confused why you think you're entitled to this information?

Providing revenue and financial metrics really gets eyeballs, as this info is rarely available for early-stage companies. Balsamiq used this very well: early on they released monthly financial status blog posts, almost all of which reached the front page/top of HN. So take it as another guerilla marketing channel.

Though the difference between Balsamiq and an IP address lookup service is that HNers don't think they can make Balsamiq in a weekend.

Why do you hear "Please consider" as "I demand/expect"?

"Yes but what's your revenue?" sounds rather demanding.

I think the most common business model for a website such as this is to offer premium services like:

1) more accurate details

2) fraud protection, if the ip is known for fraud or spamming

3) increased rate limits, etc

Just off the top of my head; I'm sure you can think of many more.

Actually, in that post the OP only ended the free, unlimited version of their API.

Since the OP in this case only has a limited free version, I don't think it's going to be an issue.

Good strategy! That's also what I did to get my side project (Cookie Inspector, a Google Chrome cookie editor) to 80,000 daily users.


I marketed it solely on Stack Overflow, where it was getting upvotes, and that was all my marketing.


Good reviews are also a big factor. When users like your project/product, they will market it for you.

HN users clicking through: take a look at reviews before installing. Appears there may be an issue with adware.

westoque - might want to check your code base, if the adware is unintentional.

I never trust Chrome extensions - too often the business model is to build up a user base and then sell the spot in the store.

The only way to safely install extensions is to pack them yourself from versions you trust, then drag + drop them into the extensions window.

Smart man.

I'm on mobile currently and apparently cannot look at reviews for Chrome extensions. Cookie Inspector is open source[0] according to the extension page, so you could try finding any adware-related code yourself, couldn't you?

[0]: https://github.com/westoque/cookie_inspector

Chrome does nothing to guarantee that an extension is built from the source that it claims.

You can verify that source, enable external sources in Chrome ('developer mode' or something), and then install it, but that says nothing about what's on the Chrome store.

Thankfully, all Chrome extensions can at least have the source viewed.

The .crx file type for Chrome extensions is actually a .zip file, so the code can be inspected. If it's obfuscated this doesn't mean much though.

If it's there, I can't find it. I highly recommend https://chrome.google.com/webstore/detail/chrome-extension-s... to let you download and view the source of extensions.

But how do you know if that extension isn't compromised?

Install it in a VM, unpack it from the filesystem.

Thanks. Yes, it has been an issue and the adware is definitely unintentional.

Will fix immediately :)

Adding to my answer above: users are more likely to install your product too if you provide good customer support.

Post-mortem would be appreciated. As someone who has Chrome extensions in the store, how did you get malware-infected?

How do you accidentally write code to make your extension adware?

How do you make money?

I'm not sure why people are proud to do things without spending money on marketing.

What if spending money on marketing had made you grow twice larger? Twice faster?

When people say "I didn't spend money on marketing", the only translation is "I knowingly overlooked massive growth opportunities."

I wouldn't be so quick to make that translation.

I think a more useful way to think about it is "Know your users, and your channel." There are some user populations - and developers are usually one of them - that are virulently anti-advertising. Any paid channel usually earns instant distrust from them. Putting money into marketing spending can have a negative ROI for them, because it has a signaling effect on the brand that says "Our product quality isn't good enough for us to get users without paying for them."

Then there are other user populations - most e-commerce is like this - where there is no such signaling effect, and they are happy to check out new products, regardless of how they hear about the product. For these markets, it's silly to ignore paid channels; you're just leaving money on the table.

I think Apple's paid marketing efforts are a counter example to your argument. Ineffective advertising is ineffective. As a tautology this statement is pretty unremarkable.

I first found out about ipinfo from a Google search looking to solve this problem. Haven't pulled the trigger on using it, but it's been on the back of my mind for awhile now.

For me, their marketing happened to be their placement on Google. I don't really know how you'd pursue paid channels on this one (outside of SEM and SEO).

It's a solution to a known problem. Unlike, say, the Apple Watch, which first has to convince you that you have a problem... (This coming from a guy who's very interested in buying a watch as I train for a marathon.)

For developer tools? Apple certainly spends a ton on brand advertising for consumers, but most developers I know develop for Apple because they want to reach Apple consumers. (I spent a year doing smartwatch apps before giving up on the platform... the only reason I was developing for Apple Watch is because that's where the users were.)

As a single example that immediately came to mind: WWDC. If that isn't a massive marketing channel directly targeted at developers I don't know what is.

They're anti-advertising on paper. Are you telling me no one's ever run a successful ad campaign targeted at developers? I don't believe that for a minute.

I wouldn't go so far as "never", but historically, developer tools with lasting growth tend to spread via word of mouth & community recommendations rather than ad campaigns. See e.g. Ruby/Rails, Python and supporting ecosystem, jQuery, Node.js, Express, Postgres, Vue.js, GitHub before they took VC, Parse before they were acquired by Facebook, etc. Even Google - they had distribution deals starting early on, but they were very reluctant to spend money on advertising up until about 2010 (I remember watching the first Google Super Bowl commercials at TGIF, and hearing a lot of questions about why we were spending money on advertising and what it meant for organic growth).

When ad campaigns are run, they usually work by targeting programmers' bosses, encouraging the organization to purchase & standardize on one solution and then forcing developers to adopt it via management fiat. That's the route taken by Java, MongoDB, Oracle, .NET, many of the companies in the Hadoop ecosystem, etc. It certainly works - these are big companies - but it requires a consultative enterprise sales team that can work to get the solution deployed across the whole enterprise. The author obviously doesn't have the resources for that.

The middle ground, where you provide a developer tool with $100-200/year CLTV and hope to get it distributed via paid advertising, is a very difficult place to occupy. That's the area occupied by FogBugz and RethinkDB and Sandstorm and many analytics providers. Most of them go the community-building route, and it's still very difficult. Only company I can think of that's thrived here is GitHub, and they went the word-of-mouth, community-building route first. Companies like Jetbrains, Atlassian, Rational, etc. thrive as well, but they've got large enterprise sales teams that standardize a whole organization on their products.

But you're looking at a very narrow set of products used by developers. What about, say, Aeron chairs?

Sold to the boss, not the developer. The best devs I know actually get very suspicious when they see a startup full of Aerons, because it means that management is more focused on appearances than frugality. They look for chairs that are comfy but cheap, like something you'd get used off Craigslist, or if you do have Aerons, they'd like to hear that you picked them up from a bankruptcy auction of another startup.

That was just an example. Besides, I'm not sure what you can infer about developers from the beliefs of the "best devs", considering they're only a fraction of the whole.

The point is that developers don't just buy software. They're marketed stuff constantly, like everybody else. To suggest they're immune to it seems a little naive to me.

Or maybe it's like a profitable established company with Aerons.

What about them? I just learned they exist, from you. Do they have a history of running ads targeting developers? Why is a chair worth $500?

>Why is a chair worth $500?

Same reason a good mattress is worth $1k+.

You spend a lot of time in it, and quality product greatly increases your quality of life (back pain etc).

Note: I have no idea if Aeron chairs are good or not.

Not if they had no money to spend on marketing in the first place. Saying no money was spent on marketing advertises strategies that might work for someone with low capital who is trying to bootstrap a company.

That's a fair and charitable interpretation. If we relax the "no money" assumption a little, one might then question if it's the wisest capital allocation (why not spend, say, $200 on marketing and see if that helps?).

I read from this

> I built the API in a few hours, posted the answer, and forgot about it — until a few months later I got an email saying my server usage was off the charts. I’d been getting millions of requests per day.

that they basically just set it up as a side-project and answered a couple questions on SO about it only to be confronted with its explosive growth after a few months.

One could then interpret the author in that situation as thinking "don't need marketing, who's gonna use this little thing I made anyway. [a couple months later] Oh shit, this blew up and I didn't even need to spend anything on marketing it besides plugging it on Stackoverflow occasionally".

The takeaway isn't that he could make more money from marketing directly to developers, but that sites like StackOverflow give him direct access to developers right at the _very_ moment their pain point is at maximum velocity.

Someone searching for "IP API" on StackOverflow and having his site rank in the top result(s) is the same as searching for "dresses" on Google, but without having to pay for search placement and getting 100% targeted traffic.

They may just be specifying the amount of marketing.

If the headline simply reads "Taking $foo to 250M API req/day" and then it turns out that all of the traffic came from a $4MM ad spend, it's a lot less interesting to those without $4MM to spend.


We saw a 500,000% gain in sales by marketing, but to be fair we were small and had no idea what we were doing.

I was as skeptical as they come, but a good marketing team basically launched us from nobodies into LEO. Yeah we had a great core product but still... gotta know how to sell it.

The post title is misleading. The author has clearly invested substantial amounts of time, and some money (because the brochureware landing pages presumably aren't hosted at zero cost, they're on EC2). What they seem to be suggesting is they didn't buy ads. So it's a short and incomplete list of basic tips for identifying a need & creating a buzz via online communities; don't get hung up on the title.

It's prideful because using ads is the easy way out, and it's often not very cost-efficient. Doing it the hard way is cool.

Yes, that seems to be the sentiment. And that sentiment is puzzling to me.

Why? People hate ads. If your service grows with no spending on ads, it means you're getting traffic from word-of-mouth. And that indicates that you've built a great service that people are happy to promote for you. It just shows your product has virtue and it's naturally taken as a compliment.

I'd compare it to having a kid. If you have money you can probably buy your kid into a good school and hook them up with a good job when they graduate. Or you can set them up with the right foundation and watch them succeed on their own.

I did the same as the OP with a project of mine. I never spent a penny on ads and it's grown into a very popular charting/trading tool among cryptocurrency traders. I also take pride in the fact that I grew it organically.

What you're addressing here seems orthogonal to me. You can use "marketing" for a crappy product, and you can use it for a good product. All combinations exist.

What's puzzling is that you would deprive yourself from making your great product known to more people simply because you want to do it the "hard way."

Forgive me, but I'd call it "the dumb way."

Because marketing invariably reaches people who are uninterested and just leads to wasted bandwidth.

Also, it adds the risk of building something people don't want, where users only engaged because the marketing drowned out finding what they really wanted.

That resonates with Peter Thiel in Zero to One, where he says Silicon Valley developer-entrepreneurs believe good products sell themselves with no sales and marketing effort.

>>When people say "I didn't spend money on marketing", the only translation is "I knowingly overlooked massive growth opportunities."

That's not the only translation. Another one is "I knowingly avoided pouring lots of money down the drain."

After all, just because you spend money on marketing doesn't mean you will see any returns.

I would question whether you can know the ROI of all marketing channels before you've tested them.

>I'm not sure why people are proud to do things without spending money on marketing.

Being successful without marketing probably means you were successful due to word of mouth. It's hard to be successful due to word of mouth when your product sucks.

Some people don't want to market. Surely this would be a victory for them, even if it means leaving higher profits on the table.

Because continuous growth is not the sanest decision.

This is super cool. I use a similar API to provide a client-side service called IP Request Mapper (https://chrome.google.com/webstore/detail/ip-request-mapper/...). Coming soon to a Show HN near you.

What it does is show where every asset on a web page is loaded from. It allows you to visualize how many different requests go into building just one web page. While it's gotten much better, the Houston Chronicle (https://chron.com) used to make about 500 individual requests to build its home page. It's down to about 125.

It's best to run it across two different monitors, with IP Request Mapper on one monitor and your "normal" browser window on another. Then enter any URL and watch the map start populating based on geolocating every request made by the page.

But it's projects like ipinfo.io that make these other things possible. Standing on the shoulders of giants and all that...kudos to you, coderholic.

That's the same motivation that started me to build https://urlscan.io. I wanted an easy way for everyone to visualise the amount, size and destinations of the various HTTP requests that a single page-load triggers. Incidentally I also created a tool that is being used by a lot of Security / Phishing researchers. If you ever want some inspiration for additional IP / domain annotation sources, check it out. I should really do a "Show HN" soon ;)

Meanwhile, this is a scan for a particularly noisy German newspaper website (faz.net): https://urlscan.io/result/f23e2e7e-e1eb-4591-9794-92f97957dd...

This website contacted 35 IPs in 7 countries across 24 domains to perform 302 HTTP transactions. Of those, 51 were HTTPS (17%) and 35% were IPv6. The main IP is located in the European Union and belongs to AKAMAI-ASN1. In total, 4 MB of data was transferred, which is 9 MB uncompressed. It took 2.51 seconds to load this page. 16 cookies were set, and 42 messages were logged to the console.

Any domain I enter, I just get this:

invalid csrf token Error code 403

Very cool tool. Thanks for posting this!

Agreed. heipei, love what you did here.

How is it different from the dev tools network feature that is available in every browser?

Not sure if you are asking me or heipei. For my project, it shows the requests on a map, where the requests in dev tools aren't geolocated and mapped. Same basic principle, just a different view.

Okay, why is it helpful? Or is it just for entertainment?

I'd call it more interesting than helpful (not that it's not helpful). It gives insight into how a web page is actually built -- while I'm sure most readers here understand it, that's not true for the general population.

It might make developers think twice about how they build sites if they could see how overly complex they get. As I mentioned, the Houston Chronicle site used to require about 4x the number of requests as it does now so someone did some optimization.

Reducing the number of resources loaded by a web page is basic performance optimization.

Happy user here. My GF came up to me and asked if I could somehow get country names for the IP addresses she had from her survey respondents. I Googled and found this neat little API. True, I could have downloaded the raw databases from elsewhere and worried about the SQL I'd need, and whether the data is recent or ancient or even correct. I decided that was overkill for my need, and just used this API in a throttled (1 req/s) mode and left it overnight. If I have this IP-to-location need again, I'd happily pay for this API.

I'm baffled why anyone would use this, when you can import the data into a database and run it on your own server.

I mean, you might spend 20 minutes more to set it up, but you are safe from having to rely on a 3rd-party service.

Anyway, kudos to coderholic for creating this and sharing the story.

- Database needs to be purchased

- Database needs to be distributed to your servers

- Database can become out of date easily

- Database lookup requires going to local disk and having a relatively fast access path/cache for lookups

- In general, a local database requires a large amount of effort compared to just running a curl in your PHP code.

If you are actually going to use a database, the proper solution does not look like "put it on your webservers" anyway, it looks like "put it on a separate service with a fast caching layer" etc etc. So in other words, the proper solution to decouple yourself from a 3rd party API is to... build a 1st party API.

In other words, not a 20 minute job. For small shops, a quick curl during the page load is a 20 minute job.
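For scale, that "quick curl" really is a one-liner plus JSON parsing. The curl call below is the basic ipinfo.io usage; the jq step is shown against a canned response so it runs offline (field values here are illustrative):

```shell
# Live call (commented out to stay offline):
# curl -s ipinfo.io/8.8.8.8 | jq -r .country

# The same parsing step against a canned response:
echo '{"ip": "8.8.8.8", "city": "Mountain View", "country": "US"}' | jq -r .country
```

Compare that with purchasing, distributing, and refreshing a database across your servers, and the appeal for small shops is obvious.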

There are many ways to quickly get and use a GeoIP DB like MaxMind's, most requiring way less than 20 minutes. This is just one example of something I started using recently:

    docker run -p 8080:8080 -d fiorix/freegeoip
    curl localhost:8080/json/

(Repo: https://github.com/fiorix/freegeoip)

Where are you going to host it, how will you cycle the container to keep it up to date, and will you need to load balance it?

Running a Docker container in production is not a 20 minute job.

Just put HAProxy in front of it. It’s not rocket science.


Setting up HAProxy in a production environment with discovery for the docker container may not be rocket science but it’s certainly not 20 minutes.

Yes, Docker is great but learning it can definitely require more than 20 minutes. A `curl ipinfo.io` is still simpler.

This is the tax you’re paying for that every time:

  Connected to
  HTTP/1.1 200 OK
  Server: nginx/1.8.1
  Access-Control-Allow-Origin: *
  Content-Type: application/json; charset=utf-8
  Date: Sun, 02 Jul 2017 08:54:40 GMT
  X-Content-Type-Options: nosniff
  Connection: keep-alive
  Body discarded
    DNS Lookup   TCP Connection   TLS Handshake   Server Processing   Content Transfer
  [      4ms  |         244ms  |        518ms  |            525ms  |             0ms  ]
              |                |               |                   |                  |
     namelookup:4ms            |               |                   |                  |
                         connect:248ms         |                   |                  |
                                     pretransfer:767ms             |                  |
                                                       starttransfer:1292ms           |

The OP was "baffled" why anyone would ever use this API. Clearly, the API's success shows that sometimes this tax you highlight is well worth it. (That was my point as well.) Nobody is claiming the tax doesn't exist, just that it shouldn't be baffling that a rational actor would choose to pay the tax in exchange for the corresponding benefits.

(Not to mention with very minimal effort you can usually avoid the majority of the specific tax of latency you mention, by doing things like parallelizing the request with other work or doing it asynchronously to the user's interface.)

What tool (or arg passed to curl maybe) allowed you to generate that response? The graph is great


It’s great for checking something quickly, otherwise you have the same thing in browser’s dev tools.

It is kinda a 20 minute job. On production, where I work, we just bake the maxmind DB into our app server images (it updates when we redeploy - which is a couple times a week) and use the maxmind client to read from that DB file.

Was just an extra step in our build pipeline.

You're misinterpreting "database".

In this case the geoIP database is something like a 1 MB CSV file, provided for free to the entire world by MaxMind (a major geolocation data provider).

You can give that file to many servers like nginx/Apache; out of the box, they will start adding a header with the country and city of the client.

What this service is doing is effectively looking up the IP block in that CSV file and returning the result as JSON.
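A minimal sketch of that lookup, assuming a legacy-style CSV of (range_start, range_end, country) with the range bounds stored as 32-bit integers (file name and layout hypothetical):

```shell
# Tiny sample database: start_int,end_int,country_code
cat > geoip.csv <<'EOF'
16777216,16777471,AU
134744064,134744319,US
EOF

# Convert a dotted-quad IP to its integer form
ip2int() { local IFS=.; set -- $1; echo $(( ($1<<24) + ($2<<16) + ($3<<8) + $4 )); }

# Scan for the range containing the IP (8.8.8.8 -> 134744072)
awk -F, -v ip="$(ip2int 8.8.8.8)" '$1 <= ip && ip <= $2 { print $3 }' geoip.csv
```

This prints US for 8.8.8.8 with the sample file above. A real service would preconvert the CSV into a sorted structure and binary-search it rather than scanning linearly, but the principle is the same.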

In environments where you don't control the whole stack, such as a plugin or library code.

Where is this data available for download?

You can also try IP2Location LITE



Good question!

Where did you get the IP DB from? My understanding is that for most of them you can't resell access?

Do you think free IP DB providers insert "fictitious entries" [1] to identify breach of TOS like this, similar to what happened between Google/Bing a few years ago [2]?

1: https://en.wikipedia.org/wiki/Fictitious_entry

2: http://www.cnn.com/2011/TECH/web/02/02/google.bing.sting/ind...

Do you really think he's going to answer that question?

Yeah, especially because "[they] built the API in a few hours".

Having hosers abuse your free geoip service listed as the first hit on Google is nice, but the data being provided can't just be "hacked together" :P

Great overview here on how you solved a problem and built a business around that.

I read that you use Elastic Beanstalk for your server config, but I wanted to ask:

1. What programming language did you use?

2. What, if any, configuration did you have to do to the Elastic Beanstalk config to deal with network spikes and autoscaling?


I was going to ask why AWS Lambda was not used instead. Or any other serverless offering from the major cloud providers.

Yeah, that would be totally hands-off. But I believe you'd have to ensure that your requests don't time out (3 seconds in Lambda), and with the 10ms response times in this example I couldn't see any issue there. If you're into Python, check out Chalice; it's being built as a Flask-like interface on top of AWS Lambda. https://github.com/awslabs/chalice

That's great. Question: does it make money? The words 'profit', 'money', 'income' and 'revenue' do not appear in the article.

Yes, it's profitable. See https://ipinfo.io/pricing for details of our paid plans.

Neat, congratulations! I know a few people that were active in that space and none of them managed to make it profitable and they all faded out again. It's important that services like these exist and even more important that they are viable businesses otherwise you are building on quicksand.

Well, he said it is his full-time job, so you can safely assume it generates revenue and also you can see they have a Pricing section on their site.

He could have saved up money and be living off of his savings while working full-time, and the amount of money that he is being paid by his customers might be less than the sum of his business and/or personal expenses. Not saying that such is the case here but just pointing out that it's possible.

Ipinfo seems to have the exact same logo as Podio (https://podio.com), a service owned by Citrix.

That's interesting. It's almost exactly the same. I did a trademark search, and although "Podio" is registered as a word mark, the logo design is not registered. So IPInfo is probably in the clear, but they may want to consider a new logo.

That's essentially identical, except for the connecting bar, and a slightly darker shade of blue. They are even both SVG files.

I see the author is posting the same thing every 20 days or so, so much for the "zero marketing"...

I use ipinfo.io mostly to see my own public facing IP address and for me, it's 2 reasons:

- I somehow can remember that domain. I don't have to google "my ip" and dig through weird domains that change all the time

- The design is clean and simple. Not too much information, no ads, loads fast.

I prefer (and operate) https://wtfismyip.com/

Thank you for operating this one!! It's the only one I use :)

I even think I embedded it in a PoC software I made in a previous company :)

I love your site. /text is great too.

For someone looking for a simple CLI option to get your remote IPv4, you can use the following:

    dig +short myip.opendns.com @resolver1.opendns.com
In my case, I have an ip_remote.fish file in my $HOME/.config/fish/functions folder which defines an ip_remote function that executes the line above.

As an added bonus, you can get all local IPv4 with:

    ifconfig | sed -En 's/127.0.0.1//;s/.*inet (addr:)?(([0-9]*\.){3}[0-9]*).*/\2/p'

Curl ipinfo.io seems simpler, and provides a lot more information! If you do just want the ip then curl ipinfo.io/ip

When I google "my ip", I get a onebox showing me my IP. Does that not appear in your search results?

Like this [1]? These custom results are usually limited to some countries. Like when you search for "color picker" [2] and "bubble level" [3] on mobile. Google is known for testing these features in their services in specific locations so I wouldn't be surprised if people from other countries cannot see them.

[1] http://i.imgur.com/gVFAMhz.png

[2] http://i.imgur.com/JGVNDMN.png

[3] http://i.imgur.com/As7Dp2p.png

Ok have a look at canihazip.com then! Love it!

Or the super easy to remember http://ip4.me (ip6.me is a thing too)

Google gives you your IPv6, but ipinfo only supports IPv4 – which can be useful sometimes.

German Google, no. Also not when writing it in German.

Strange, both google.com and google.de show an info box with my IP for the query "my ip" for me (from Germany). It does not work with "meine IP", but that is more to type anyway.

Mobile or desktop?

I often use:

  curl curlmyip.org

As a Linux user, I mostly go for 'curl ifconfig.co' ;)

Curl ipinfo.io gives you a lot more information!

Crowded space. A quick Google search for any of the keywords "ip address location api", "ip lookup API", "geolocation API by IP", etc. shows:

- https://db-ip.com/api
- https://ipapi.co
- https://freegeoip.net
- ipinfodb.com
- https://www.iplocation.net
- http://neutrinoapi.com
- http://www.ip2location.com
- https://www.telize.com

and a few dozen more. I wonder if collectively they are serving over a few billion requests per day. Microservices & API culture FTW!

I've been running https://jsonip.com for years. Been serving millions of requests a day for most of that time. Doesn't really show up in searches well because it's just an API.

How do you pay for it?

I run a few small services, nothing of this scale, but one thing to bear in mind is that it's easy to pay for a lot of little side projects if they have virtually no costs.

The one cited simply echoes back your IP. That's it. How cheaply could you do that and how many requests per second could you handle on one small VPS?

Example, I recently ran https://www.tactical2017.com/ which is a tactical voting website for the UK general election. The cost for serving that whole website to 2.6 million people over 5 weeks, and 650k people in the last day and a half, was $20.

Just push costs down.

I use Linode. Been a loyal customer for years. Basic service is $40 a month, with $10 for automated backups. Fits within the bandwidth constraints. I've also spent a lot of time optimizing the hell out of the server.

Related, some other adventures while running an API to retrieve IP addresses.


I'm employing a similar strategy for my library https://github.com/joelgriffith/navalia as I couldn't find any solution to manage headless chrome (plus the API for that protocol is pretty heavy).

Building for what folks want, even developers, is so obvious that I think we often forget about it. It's also not as glamorous as self driving cars or rockets, so gets discredited easily.

Sound points though

Kudos to you guys for building this. There is always a lot of scepticism from people on "why would anyone pay for this". The reality is that not everyone has the time or resources to build their own kit. There are literally thousands of businesses on the internet that are in the business of selling "time", or timesavers, and removing the risk of maintenance and ongoing support.

Keep improving this and with the rise of web personalization, the demand will continue to grow.

Does anyone know how ipinfo compares to running your own instance of https://freegeoip.net?

I was wondering about this too.

The SO question https://stackoverflow.com/q/409999/325521 that OP refers to (in his blog post) also has another answer using freegeoip.net => https://stackoverflow.com/a/16589641/325521

From the comments on this answer (not OP's answer linking to his API), it seems like freegeoip is not all that reliable (i.e. it's down a lot).

Funnily enough, one of the comments on this answer links to another free service, called "freegeoip2", which seems to work just fine as of right now.

That was my experience with freegeoip.net actually, which is why I host my own instance of it.

Congrats. I'm not sure, but ipinfo could be very interesting to startups and programmers. So a good idea could be writing attractive articles and posting them on HN, Reddit programming, and some other subreddits. That would bring more customers with zero marketing.

See also: https://news.ycombinator.com/from?site=ipinfo.io

Pretty cool man, use your site all the time for ASN lookups, although I find your carrier information wildly conflicts with digital element's DB.

Such stories don't let me stay focused on my freelance work.

I got inspired and started researching and building. (Failing so far, btw.)

This has worked well for me, too. I saw an influx of "How to offer a time-based trial version on Android" on SO and developed a trial SDK as an answer: https://www.trialy.io/

Very neat. How successful have you been with this?

Given the fast lookup time, it would be useful if you could provide a JS API for synchronous loading.

Essentially, a blocking script in the DOM (<script src="...api.js"></script>) that prepopulates the window object. With clever error handling, this could improve perceived performance significantly.
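A minimal sketch of the idea. The endpoint, the `window.ipinfo` global, and the record shape are all assumptions for illustration, not ipinfo's actual API; the point is just that a blocking script can assign a global before the rest of the page runs, and the page reads it with a safe fallback:

```javascript
// Hypothetical: <script src="https://ipinfo.io/api.js"></script> in <head>
// would (in this sketch) run synchronously and set window.ipinfo.
// This helper reads the prepopulated global, falling back gracefully
// if the script was blocked or failed.
function getPrepopulatedInfo(globalObj) {
  const fallback = { country: null, city: null };
  return (globalObj && globalObj.ipinfo) || fallback;
}

// Simulated environment, as if the blocking script had already run:
const fakeWindow = { ipinfo: { country: 'US', city: 'Mountain View' } };
console.log(getPrepopulatedInfo(fakeWindow).country); // 'US'
console.log(getPrepopulatedInfo({}).country);         // null (fallback)
```

The error-handling part matters: without the fallback, an ad blocker or a slow network would break every consumer of the global.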

A few questions:

1. What differentiates you from ip-api.com and other providers?

2. Do you use MaxMind?

3. Is there an option for no throttling? Hundreds of simultaneous requests?

I aggregate multiple IP databases for my SaaS (https://www.geoscreenshot.com) and I need highly performant, reliable IP lookups.

Why would you _want_ a synchronous, blocking script in the first place?

For conditionally loading other blocking assets in JS.

For example, if the IP is in China, use a local fallback for the Google CDN, as it will fail.
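As a sketch, the decision the commenter describes is a one-branch function. The self-hosted path here is a placeholder; only the Google-hosted-libraries URL is real:

```javascript
// Pick a script URL based on the visitor's country code (as returned
// by a geo-IP lookup). Google's CDN is unreachable from mainland China,
// so serve a self-hosted copy there.
function pickCdn(countryCode) {
  if (countryCode === 'CN') {
    return '/assets/jquery.min.js'; // hypothetical self-hosted fallback
  }
  return 'https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js';
}

console.log(pickCdn('CN')); // '/assets/jquery.min.js'
```

This is why the commenter wants the lookup synchronously: the choice has to be made before the dependent script tag is written.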

You can also do all of these things asynchronously, without blocking the whole page.

Yes, but like all async ops, not without some cost to the UX.

>90% of our 250 million daily requests get handled in less than 10 milliseconds.

Minor nit, but with that level of traffic I'd expect you to be bragging about P99.99 latency, not P90.

> Our servers use latency based DNS routing to serve over 90% of all requests handled in less than 10ms.

What does that actually mean, though?

Does it mean that processing time at your server is 10ms, or 10ms to time to first byte, or something else?

Giving it a quick test, I generally get the actual JSON result in around 400ms. The lowest I got was 200ms, the highest around 1000ms. It didn't seem to make any difference if I used the HTTP endpoint instead of the HTTPS one.

I see it's still a thing. Back in high school, some ten+ years ago, I coded up an 'ip2country' website. Not sure why, there were dozens of those. I guess I had a free domain and a lot of time on my hands. I put some Google AdSense on it and let it go. I checked my AdSense account some six months later and found out I was cashing $20/month. Easiest money I've ever made.

So what's your stack? Still running PHP?

Well, currently my location is basically totally wrong. Looking at https://www.iplocation.net/, I've only seen one service that gets my location 100% correct (the right village); all the others are 200 km or more from my real location.

It just depends on the database they are using. Anyway, nowadays it's getting harder and harder to detect accurate location from the IP address (many users are on 4G or behind a proxy).

Yeah, and features like Data Saver on mobile Chrome will quietly proxy your requests through Google servers. Not sure if Google forwards the proper X-Forwarded-For headers (or if all IP detection services read them).

Yeah, Google does, and we take the first public IP from X-Forwarded-For, so we'll show your actual details in that case.
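A sketch of the technique described above (not ipinfo's actual code): walk the comma-separated X-Forwarded-For value left to right and return the first address outside the RFC 1918 private ranges and loopback. Real implementations also need IPv6 and the other reserved ranges; this covers only the common IPv4 cases:

```javascript
// True if the dotted-quad IPv4 address is private, loopback, or malformed.
function isPrivateIpv4(ip) {
  const parts = ip.split('.').map(Number);
  if (parts.length !== 4 || parts.some(n => Number.isNaN(n))) return true;
  const [a, b] = parts;
  return (
    a === 10 ||                          // 10.0.0.0/8
    (a === 172 && b >= 16 && b <= 31) || // 172.16.0.0/12
    (a === 192 && b === 168) ||          // 192.168.0.0/16
    a === 127                            // 127.0.0.0/8 loopback
  );
}

// First public IP in an X-Forwarded-For header, or null if none.
function firstPublicIp(xffHeader) {
  const candidates = xffHeader.split(',').map(s => s.trim());
  return candidates.find(ip => !isPrivateIpv4(ip)) || null;
}

console.log(firstPublicIp('10.0.0.5, 203.0.113.7, 198.51.100.1'));
// '203.0.113.7'
```

One caveat worth noting: X-Forwarded-For is client-suppliable, so trusting the leftmost public entry is only safe when the proxy chain is known.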

Yup, a lot of fixed broadband is moving to CG-NAT too.

I wonder if having an IPv6 version would work a lot better?

The author says "I took" even though this was pure luck and coincidence. Attribution bias is strong in this one.

However, it is important to acknowledge that he did put himself in a position to get lucky (i.e. he built the API and linked to it).

Checked one of our static IPs: the country is correct, but the city is 500 miles off.

It's crazy they have details like lat/long. What about proxies, and where does that data even come from? The ISP? Or do you take the time to build it out, i.e. research? At any rate, cool.

Are you just using the MaxMind data?

What percentage of those 250,000,000 requests is from paid plans? Even if it's only 20%, you'd be doing $xx,xxx per day. Is that in the ballpark?

How much money do you make per api req?

See https://ipinfo.io/pricing

Although I must admit, I'm a bit surprised as to why anyone would pay for this, as several HN readers have listed at least five other free (and some self-hosted) alternatives here...

Several HN readers have also pointed out that some of the free solutions have shut down for lack of funding. When money is on the table, users may expect the provider to be "more" answerable than a free service.

Thanks for sharing.

I am ready to launch a startup and currently trying to figure out what to focus on (so many ideas!).

I posted an "Ask HN" earlier today. Wondering if anyone might have some thoughts or advice on this:


Just some rough calculations. Assuming the worst case for revenue (everyone on the highest tier, which is the cheapest per request), 250M daily requests means he makes

400 * 250M / 320K = $312,500 per month.

Or $3.75M per year.

Not counting the expenses.
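The back-of-envelope math above, spelled out. The $400/month tier and 320K requests/day quota are taken from the comment, not from ipinfo's actual pricing, and the assumption that every request is paid is the comment's stated worst case:

```javascript
const tierPrice = 400;          // $/month for the assumed top tier
const tierQuota = 320000;       // requests/day covered by that tier
const dailyRequests = 250000000;

const customers = dailyRequests / tierQuota; // ~781 customers
const monthly = customers * tierPrice;       // $312,500/month
console.log(monthly, monthly * 12);          // 312500 3750000
```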

The cheapest per request is $0. There is a free tier where, presumably, the long tail lives.

You are great! My karma goes down, please!
