
Ask HN: Which sites/platforms do you wish had an API? - Jefro118
One can build cool projects on top of APIs like those of Twitter, Facebook, and so on. Which sites do you wish had an API?
======
AznHisoka
Here are the data-related APIs I wish existed:

1. Indeed.com - analytics on how many job ads mentioned a specific brand or
keyword.

2. Yelp - analytics on how many check-ins a restaurant or chain got every
month.

3. Apple App Store - analytics on how many downloads an app got every month.

4. Amazon Reviews - an API to retrieve reviews for a product.

5. Google Trends - an API to retrieve historical trends for a keyword.

6. Google Search - an API to get search results for a keyword.

7. LinkedIn Company Pages - an API to get the feed for any company page.

8. Instagram - an API to get the feed for any Instagram user or search
results for any keyword.

Here are the non-data APIs I wish existed (which could be potential low-
hanging-fruit startup ideas):

....None

~~~
james2doyle
You can query Google search using the Firefox search bar endpoint:

[https://suggestqueries.google.com/complete/search?client=fir...](https://suggestqueries.google.com/complete/search?client=firefox&q=a+query)

This returns an array of 10 matches for the term in `q`. This is how the
AnyComplete command for Hammerspoon works:
[https://github.com/nathancahill/Anycomplete](https://github.com/nathancahill/Anycomplete)
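To illustrate, here is a minimal sketch of how that endpoint could be used from Python. The URL and `client=firefox` parameter come from the link above; the helper names and the sample response are mine, and the response shape (`[query, [suggestion, ...]]`) is what the endpoint returned at the time, so treat it as an assumption:

```python
from urllib.parse import urlencode
import json

SUGGEST_URL = "https://suggestqueries.google.com/complete/search"

def build_suggest_url(query: str) -> str:
    """Build the Firefox-client suggest URL for a search term."""
    return SUGGEST_URL + "?" + urlencode({"client": "firefox", "q": query})

def parse_suggestions(body: str) -> list:
    """The endpoint returns JSON shaped like [query, [suggestion, ...]];
    return just the list of suggestions."""
    payload = json.loads(body)
    return payload[1]

# Fetching is omitted here; you would pass build_suggest_url("a query") to
# e.g. urllib.request.urlopen and feed the body to parse_suggestions.
# A response might look like this:
sample = '["python", ["python", "python download", "python tutorial"]]'
print(parse_suggestions(sample))
```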

~~~
AznHisoka
This is an autocomplete API. How can I get the actual search results for the
query?

------
ken
Is it too snarky to say I wish there were a _usable_ API for Google Sheets? I
spent a week reading documentation, downloading sample code, searching
StackOverflow, digging through mailing list archives, and trying to debug what
was happening on my test account, but I simply could not get their OAuth
workflow to work at all. None of the {documentation, setup screens, sample
code, observed behavior} match with any of the others.

I mentioned this on HN once before and got a "I thought it was just me!"
response.

------
billconan
I wish some major stock-trading website had APIs for:

1. historical data

2. real-time data

3. trading

Also: credit card usage/history data (only for myself) across all credit
card brands.

~~~
raquo
I'm pretty sure APIs #1/2/3 do exist, although access costs thousands of
dollars, making hobbyist usage impractical.

~~~
lkowalcz
I think IEX has a free API with historical and real-time stock info:
[https://iextrading.com/developer/docs/](https://iextrading.com/developer/docs/)

------
MoBattah
All my banks, credit cards, investment accounts, etc.

~~~
nulagrithom
I'm eagerly awaiting a US bank that lets me review transactions and the like
programmatically. Even better would be webhooks for transactions, or hooks in
front of transactions for custom verification.

Oh the things I would build...

------
kqr
Can I say all of them?

Actually, something that may be even more important is free and open access to
APIs. I'm okay with registration procedures for larger volumes, but not for
hobby use; it seems an unnecessary obstacle. If the hit rates are similar to
what a user with a web browser would produce, why is my script forced to
register when the web user is not?

~~~
setr
The problem is that you can usually get around the limiter pretty easily,
isn't it? One site I was scraping did rate limiting by IP; I just spawned
5 AWS boxes to do the scraping. And with the AWS pricing model, there was no
real reason to stop at only 5 boxes, since each request was independent.

If I were planning to make a profit on that data, I might well have spawned
1000 boxes to speed things up (it took 2 weeks of 24/7 scraping on 5).

~~~
MoBattah
Can you elaborate?

~~~
setr
If you do non-registration rate limiting on an API, you still need some
identifier to calculate API usage; it's not hard to find an identifier for a
single machine (i.e. an IP address, which isn't strictly one-to-one with a
machine, but close enough).

But without registration, there's no way I can think of to associate multiple
machines with a single person.

And presumably, if you're rate-limiting individual users of the API (and not
the total usage across all users), you're trying to maximize the number of
users accessing the API simultaneously (i.e. you don't want one user maxing
out resources and denying all other users).

But with cloud computing, it's trivial for a single user to have an absurd
number of machines (legally). AWS charges per compute-hour, not per machine:
running 1 instance for 50 hours and 50 instances for 1 hour costs the same.

For the server, though, that difference is significant; you apply rate
limiting _because_ you don't want to be hammered for a short duration followed
by nothing, at least not from one user.

Registration can also be bypassed somewhat trivially (temporary emails and
aliases), but it's a good deal more work than bypassing non-registration rate
limiting, and it takes more specific tooling. And presumably most people who
want to scrape a dataset large enough to bother bypassing the limiter are, by
the nature of their job, aware of what I've described.

So, tl;dr:

Rate limiting by IP affects 1 machine, but not necessarily 1 user.

Rate limiting by registration affects n machines belonging to 1 user.

Before cloud VPSes, 1 machine basically corresponded to 1 user. Now it's
trivial for n machines to correspond to 1 user.
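The per-identifier limiting described above can be sketched as a token bucket keyed by whatever identifier the server has (IP or API key). This is a minimal illustration, not any particular site's implementation; the class name and parameters are hypothetical:

```python
import time
from collections import defaultdict

class PerClientRateLimiter:
    """Token-bucket limiter keyed by a client identifier (IP or API key).

    As noted above, keying on IP only bounds one *machine*: a user running
    n cloud boxes gets n buckets. Keying on a registered API key bounds
    the *user*, however many machines they spawn.
    """

    def __init__(self, rate: float, burst: int):
        self.rate = rate    # tokens refilled per second
        self.burst = burst  # maximum bucket size
        # Each new client starts with a full bucket.
        self.buckets = defaultdict(lambda: (burst, time.monotonic()))

    def allow(self, client_id: str) -> bool:
        tokens, last = self.buckets[client_id]
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at the burst size.
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens >= 1:
            self.buckets[client_id] = (tokens - 1, now)
            return True
        self.buckets[client_id] = (tokens, now)
        return False

limiter = PerClientRateLimiter(rate=1.0, burst=3)
# One client burns through its burst (3 allowed, then denied)...
print([limiter.allow("1.2.3.4") for _ in range(4)])
# ...while a second identifier is unaffected — which is exactly the loophole:
# each fresh IP gets a fresh bucket.
print(limiter.allow("5.6.7.8"))
```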

------
psoots
I would like to see software producers provide an API to download the latest
release and a list of previous releases. You'd be surprised how difficult it
is to automate the installation of (primarily desktop) software (Slack,
IntelliJ, etc.)

~~~
RunningDroid
I think a Metalink* file would work for this as long as the software can
figure out the versioning scheme in use.

*[https://en.wikipedia.org/wiki/Metalink](https://en.wikipedia.org/wiki/Metalink)
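For reference, a minimal Metalink 4.0 (RFC 5854) file might look like the sketch below; the file name, version, hash value, and mirror URLs are all placeholders:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<metalink xmlns="urn:ietf:params:xml:ns:metalink">
  <file name="example-app-1.2.3.tar.gz">
    <version>1.2.3</version>
    <hash type="sha-256">0000000000000000000000000000000000000000000000000000000000000000</hash>
    <url>https://downloads.example.com/example-app-1.2.3.tar.gz</url>
    <url>https://mirror.example.org/example-app-1.2.3.tar.gz</url>
  </file>
</metalink>
```

A publisher could keep such a file at a stable URL and update it on each release, which would give downstream tooling the machine-readable "latest release" endpoint the parent comment asks for.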

------
vinylkey
Netflix would be nice. I'd love to be able to create playlists, or even play
random things. Trying to watch through Arrow / Flash / Legends of Tomorrow /
Supergirl in chronological order is a real pain in the butt.

~~~
fenwick67
Even better - if all the streaming services had an API that I could query to
see who has what shows / movies.

I'm sick of searching 3 catalogs (Hulu, Netflix, Amazon) and finding out none
of them have what I'm looking for.

~~~
kaniskode
You can use [https://www.justwatch.com](https://www.justwatch.com) for that.

It's really useful for finding shows on services you use.

------
jacquesc
Google Keep. We have been asking them for years, and Google hasn't done
anything.

Probably never should have started using it in the first place.

------
fgandiya
Some of my college’s websites (dining services, course catalog). With an API,
my senior project would be so much easier. Unfortunately, I had to scrape the
information I needed, which isn’t ideal.

------
ezekg
Amazon. I'd love to be able to place orders without using a third party that
needs to know my CC/login credentials. So much automation could happen.

~~~
noamrubin
We do this! You can even do it without opening an Amazon account.
[https://zincapi.com/](https://zincapi.com/)

Disclaimer: I work at Zinc.

------
66d8kk
Any online based computer game. GTA Online and Rocket League are two that
spring to mind. The stuff we could create with that data!

~~~
nasso
Eve Online has tons of APIs, and the entire game is very data-driven, with an
open market, industry, and more.

It's kind of a developer's wet dream if you like that kind of stuff. :)

------
ken
iWork files. There have been at least two completely different (and completely
undocumented) formats so far, with not even a proprietary library for
accessing them.

I really wish I could provide integration with Numbers.app, but as a one-man
shop I can't afford to get distracted with maintaining a reverse-engineered
file format for one use case.

------
djbeadle
GaiaGPS - I wish there was an easy way to query my GPS tracks!

~~~
pedalpete
What does it mean to "query my GPS tracks"? Do you mean query points within
the track? Or search for tracks? Or something else?

~~~
djbeadle
I would really like to programmatically download my tracks. I guess I could
make all my tracks public and then scrape my feed page, but a more elegant
solution would be nice.

------
MonkoftheFunk
GasBuddy for displaying on my magic mirror

------
giza182
The Google Popular Times API.

------
rmprescott
Mint

------
travmatt
Amazon wishlist api

------
i_am_nobody
XBox

------
m_samuel_l
PFSense

