- https://www.opendatanetwork.com -- what I would call the "Google, for Socrata datasets"
- https://public.enigma.com/ -- One of the best collections of U.S. federal data, with good taxonomy and lots of useful options for refining a search, such as filtering by dataset size.
- https://www.data.gov/ -- Not as useful as most people would want -- e.g., unlike Enigma and Socrata, it's a directory of data sources self-submitted by government agencies, not one in which the data is stored/provided in a standardized way. But it's a pretty good listing, though I'm not sure it's much better than just using Google.
- https://data.gov.uk/ -- Better than the U.S. version in terms of usability and taxonomy.
The Federal Data Strategy will also be opening up for comments again in October - https://strategy.data.gov/feedback/
Data.gov and Federal agencies use the same metadata standard (DCAT) that Google Dataset Search is using so much of our metadata is also being syndicated there.
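For the curious, a DCAT-style catalog entry (the kind harvested from agency metadata files) looks roughly like this. Field names follow the DCAT vocabulary; all values below are invented placeholders:

```python
import json

# A minimal DCAT-style catalog entry of the kind data.gov harvests and
# Google Dataset Search can syndicate. Field names follow DCAT; every
# value here is an invented placeholder, not a real dataset.
entry = {
    "title": "Example Rainfall Observations",
    "description": "Daily rainfall measurements (placeholder dataset).",
    "modified": "2018-09-01",
    "accrualPeriodicity": "R/P1D",  # ISO 8601 repeating interval: daily
    "distribution": [
        {"mediaType": "text/csv",
         "downloadURL": "https://example.gov/rainfall.csv"}
    ],
}

print(json.dumps(entry, indent=2))
```

Note that the standard even has a field for update frequency (`accrualPeriodicity`), which speaks to the request below about surfacing cadence, not just last-updated.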
If you, or anyone else who aggregates these datasets could make it EASY to find the FREQUENCY of updates, rather than just the LAST UPDATED timestamp, it'd incentivize people to consume APIs more.
I realize having a snapshot from 2014 is better than what was publicly available before. But I feel no one's really talked about why they would or wouldn't use particular data.
The value of increasing the cadence of updates should also not be underestimated! A lot of public datasets report on annual frequencies with more than a quarter of delay... Although this is a different issue altogether that has more to do with the processes of the reporting agency.
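To surface frequency rather than just a last-updated timestamp, a portal could infer cadence from a dataset's update history. A rough sketch, with invented dates:

```python
from datetime import date
from statistics import median

def infer_cadence(update_dates):
    """Estimate a dataset's update frequency from its history of
    'last updated' timestamps. Returns the median gap between
    consecutive updates, in days, or None if there is no history."""
    ds = sorted(update_dates)
    gaps = [(b - a).days for a, b in zip(ds, ds[1:])]
    return median(gaps) if gaps else None

# Hypothetical update history for one dataset (roughly quarterly):
history = [date(2018, 1, 1), date(2018, 4, 2),
           date(2018, 7, 1), date(2018, 10, 1)]
print(infer_cadence(history), "days between updates")  # → 91 days
```

A consumer could then decide whether polling an API is worth it before writing any integration code.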
At Open Knowledge we built a really early one called opendatasearch.org in 2011/2012 - now defunct - and were involved in the first version of the pan EU open data portal. We also had the original https://ckan.net/ (and subsites) which is now https://datahub.io/ and has become much more focused on quality data and data deployment. [Disclosure: I was/am involved in many of these projects]
The challenge, as others have mentioned, is that data quality is very variable and searching for datasets is complicated (think of software as an analogy - searching for good code libraries is a bit of an art).
I imagine Google are trying this out before making datasets another "special type" of search result -- after all, you can already search Google for datasets. In addition, Google are already Google, so including datasets will have a level of comprehensiveness and exposure you'd struggle to match elsewhere (part of the power of monopoly, in a sense!).
PS: for those looking for data gov sites https://dataportals.org/ has most of these.
(Disclosure, I work on this)
https://www.dati.gov.it/ (note, Italy redirects data. to dati.)
https://data.gov.za/ (South Africa's has a cert problem)
(website is being terribly slow for me right now)
(edit to add something more productive: the site is littered -- to the tune of at least 25%, maybe even a third -- with junk "data", all obviously added to get the number of records as high as possible, with no regard to whether that data is useful to anybody, is machine-readable in any way at all, or -- in the example above -- even qualifies as "data". Data.gov.ie would be moderately interesting if all the junk in it were removed.)
The biggest numbers bump recently was ca 1600 Met Eireann rainfall records datasets from all around the country, some of them daily rainfall dating back 60 years. (Spoiler, there’s a lot of rain)
This is specifically a catalog of datasets; it doesn't host the data except for previews, and even doing that well is surprisingly complicated.
Then kindly explain these
(I'd show only the PDF-only ones but your search doesn't work.)
Oh and look two of these also contain no machine-readable data whatsoever
If you are aware of such a dataset that a public body is hosting, then it would certainly be something to include. Convincing (and helping) the public bodies to publish their data is still a big task.
The handling of the whole Eircode thing makes my blood boil, to be honest.
Another nice resource that I've used in the past is 'toddmotto/public-apis' on GitHub.
In the end I would prefer all public datasets to be available over the DAT protocol instead of being hosted only on government or organization websites. A lot of climate data previously made available by the EPA was taken down, and only saved by the efforts of volunteers.
Sometimes you really want a specific format for a dataset.
* Hosting problems. The first link I tried was already broken.
* Format problems. Also the presented data is in all kinds of formats, some "data sets" even require me to read data off images: https://www.ceicdata.com/en/indicator/germany/gdp-per-capita
And even if it's JSON, this is not particularly great either (Unicode support? Large (64bit) integers?).
* Update problems. Many data-sets change over time (e.g. GDP). How can I subscribe to updates? "git pull" would be nice.
* Provenance problems. I want to know who put which record into the dataset, when and why? "git log" would be nice.
* Presentation problems. (Sometimes this is OK.) I don't necessarily want to download a 5GB file before I've looked into it. The first few rows of the dataset should be presented on the page, with information about it.
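On the presentation point, a minimal stdlib sketch of previewing the first rows without pulling the full dataset (the in-memory sample stands in for a streaming HTTP response body):

```python
import csv
import io
from itertools import islice

def preview(csv_stream, n=5):
    """Return the header and first n rows of a CSV stream without
    reading the whole file -- enough to show on a dataset's page."""
    reader = csv.reader(csv_stream)
    return list(islice(reader, n + 1))  # header + n rows

# Stand-in for a large remote file; in practice you'd pass the
# line iterator of a streaming download instead.
sample = io.StringIO("country,gdp_per_capita\nDE,44469\nFR,38477\nIT,31953\n")
for row in preview(sample, n=2):
    print(row)
```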
Wrote down a few more thoughts a while ago here: https://github.com/HeinrichHartmann/data-sharing#in-the-idea...
Approaches I have seen so far in the wild:
* figshare.com -- Addresses Hosting and Presentation.
* https://quiltdata.com/ -- (!) looks great. Still exploring.
* github.com -- works fine for small datasets (<1GB)
* packaging (yum, pkg, pip) -- (?) Not sure if that works, but at least they solve: Hosting, Update, Provenance.
This seems to be a wide open problem to me.
The ideas there are now getting realised in Frictionless Data https://frictionlessdata.io/
This is an initiative providing a simple way of "packaging" data like software, plus an ecosystem of tools including a package manager etc. - https://frictionlessdata.io/data-packages/.
It aims to be minimal, easy to adopt, etc. (e.g. based on CSV). It has gotten significant traction, with integration and adoption into Pandas, OpenRefine, etc.
https://datahub.io/ itself is entirely rebuilt around Data Packages and includes a package manager tool "data".
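For anyone who hasn't seen one, a minimal `datapackage.json` descriptor looks roughly like this (the name, path, and fields below are invented for illustration):

```python
import json

# Minimal descriptor for a Frictionless Tabular Data Package: a JSON
# file listing resources, each pointing at a CSV with a typed schema.
# All names and paths here are illustrative placeholders.
descriptor = {
    "name": "example-rainfall",
    "resources": [
        {
            "name": "rainfall",
            "path": "rainfall.csv",
            "profile": "tabular-data-resource",
            "schema": {
                "fields": [
                    {"name": "station", "type": "string"},
                    {"name": "rainfall_mm", "type": "number"},
                ]
            },
        }
    ],
}
print(json.dumps(descriptor, indent=2))
```

The point is that the data itself stays as plain CSV; the package just adds enough metadata for tools to load it with types intact.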
If you're interested to talk more please come chat on http://gitter.im/datahubio/chat
I would like to create a layman-oriented central repository for all public spending data of the world.
* Hosting problems - I make my own copy of the data.
* Format problems - cleaning and formatting data from different sources is a real pain. Once it is on my website, I offer CSV download or copy/paste tabulated data.
* Update problems - no versioning or public API yet.
* Provenance problems - there is a link to the source of the data.
* Presentation problems - tailored to displaying budgets. Not cross-browser or full mobile support yet.
I haven't seen many other platforms offer the same kind of functionality.
e.g. This one is a dataset of CCTVs across Leicester. You can easily see all of the columns, sort the data, display a chart of camera types, see their locations on a map, etc.
A few portals powered by them:
- Paris https://opendata.paris.fr/explore/
- Mannheim https://mannheim.opendatasoft.com/page/home/
- Durham NC https://opendurham.nc.gov/pages/home/
disclaimer: I was an intern there 4 years ago
For example, someone else mentioned enigma.com. I would have no idea that it's related to data sources/sets unless I already knew what it was.
Certainly wish you the best of luck though and will keep an eye on Qri! Cool project!
A web interface would be nice so that I don't have to install a tool to browse the content. Going to give it a shot nonetheless.
You would be doing us a huge solid, working through these use cases irl is beyond helpful.
It's one of those... I know I should just do it kind of things, even just to get it out there, but I haven't found the momentum.
Seeing things like dat, quiltdata, public data sets, etc. made me think what I wanted to do was unnecessary, but I also agree with your comment.
I think a core problem is data democracy / control / the politics of data. Too often we still act siloed instead of benefiting from massive data sharing, for a multitude of reasons (especially, but not limited to, $$$).
Here is the Show HN for it: https://news.ycombinator.com/item?id=17789119
I already have my presentation, but I can also provide it as an .xls, .csv, SQL, or HTML table.
What would be best to help programmers/data scientists use my data?
2) CSV, JSON works fine if your dataset is just a few numbers and strings. GitHub will preview csv files as HTML tables.
3) If you need the efficiency of binary data and more robust data containers I would look into - Parquet https://parquet.apache.org/documentation/latest/ and
- Avro http://avro.apache.org/ http://blog.cloudera.com/blog/2009/11/avro-a-new-format-for-...
4) DataScientists work with R/Pandas "DataFrames". If you are familiar with either one, import the data into a data frame and use an export method to do the serialization for you: https://pandas.pydata.org/pandas-docs/stable/api.html#id12
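As a sketch of that workflow (assuming pandas is installed; the data below is made up):

```python
import pandas as pd

# Illustrative data; in practice you'd load your dataset with
# pd.read_excel / pd.read_csv / pd.read_sql and let pandas serialize it.
df = pd.DataFrame({"city": ["Dublin", "Cork"], "rainfall_mm": [750, 1200]})

csv_text = df.to_csv(index=False)         # plain CSV
json_text = df.to_json(orient="records")  # list of row objects
# df.to_parquet("data.parquet")           # binary; needs pyarrow or fastparquet

print(csv_text)
print(json_text)
```

Whatever the data scientist's stack, one of these serializations will load cleanly on their end.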
It's easy for the consumer of your data to convert a CSV to whatever format they need.
- spreadsheet, for personal analysis
- SQL database, for industrial-strength analysis
- HTML, for pretty output to their users
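Those conversions are all straightforward starting from CSV; a stdlib-only sketch with invented budget figures:

```python
import csv
import io
import sqlite3

# Starting from CSV as the interchange format, each consumer converts
# to what they need: SQLite for SQL analysis, HTML for pretty output.
# The figures below are invented.
csv_text = "year,budget\n2017,100\n2018,120\n"
rows = list(csv.reader(io.StringIO(csv_text)))
header, data = rows[0], rows[1:]

# SQL database, for analysis
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE spending (year INTEGER, budget INTEGER)")
conn.executemany("INSERT INTO spending VALUES (?, ?)", data)
total = conn.execute("SELECT SUM(budget) FROM spending").fetchone()[0]

# HTML table, for display
html = "<table>" + "".join(
    "<tr>" + "".join(f"<td>{cell}</td>" for cell in row) + "</tr>"
    for row in rows
) + "</table>"
print(total)
```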
There's a wide set of tooling for Tabular Data Packages, plus underlying data is CSV which anyone can use.
If you want this done automatically you can just publish your CSV to https://datahub.io/ and your Tabular Data Package is made automatically for you.
It's one of those areas Google has made long-running attempts at involvement in - e.g. Google Public Data Explorer, which never quite reached its potential, and Freebase, which although flawed was good, and was shut down after Google acquired it.
I like that this is search based! The web is still the best place to publish data - in fact in my view normal Google search is still by far the best way to find datasets, even though it isn't directly designed for that.
There's a link from the about page of Google Dataset Search to this help for webmasters on how to mark up content for it - although it is a bit odd, mainly showing how to mark up a dataset with a DOI (so good for academics, certainly!):
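For reference, the markup in question is schema.org/Dataset JSON-LD embedded in the dataset's landing page (inside a `<script type="application/ld+json">` tag). A minimal sketch, where every name, URL, and DOI is a placeholder:

```python
import json

# Sketch of schema.org/Dataset JSON-LD of the kind dataset search
# engines crawl. All values here are invented placeholders; the DOI
# identifier reflects the academic focus of the help page above.
markup = {
    "@context": "https://schema.org/",
    "@type": "Dataset",
    "name": "Example CCTV Locations",
    "description": "Locations and types of CCTV cameras (placeholder).",
    "url": "https://example.org/datasets/cctv",
    "identifier": "https://doi.org/10.0000/placeholder",
    "distribution": [
        {
            "@type": "DataDownload",
            "encodingFormat": "text/csv",
            "contentUrl": "https://example.org/datasets/cctv.csv",
        }
    ],
}
print(json.dumps(markup, indent=2))
```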
Just metadata about data feels like a very niche thing to search, to me - I'm still not convinced anyone will maintain the metadata well enough to help. It will possibly work in particular domains.
Does Dataset Search have some way to search column headings, types or content (of CSV, Excel, JSON etc)? I can imagine a load of operators that would make that really powerful for finding badly meta-marked up datasets deep in the web. Would seem like the obvious extra thing a dataset search would do.
Also previews please!!! Just nicely render the first ten rows of common formats - CSV and Excel to begin with.
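The column-heading search suggested above could be sketched as a simple inverted index over CSV headers (the file names and contents here are hypothetical):

```python
import csv
import io
from collections import defaultdict

# Hypothetical sketch of a "search by column" operator: index the
# header row of each CSV so datasets can be found by the fields they
# contain, even when their metadata markup is poor or missing.
files = {
    "rainfall.csv": "station,date,rainfall_mm\n",
    "cctv.csv": "camera_id,type,lat,lon\n",
}

index = defaultdict(set)
for name, text in files.items():
    header = next(csv.reader(io.StringIO(text)))
    for column in header:
        index[column.lower()].add(name)

print(sorted(index["station"]))  # which datasets have a 'station' column
```

A real crawler would only need the first line of each file to build this, so it's cheap even at web scale.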
What part of Google is doing this?
It goes further by bringing this kind of data together into a single API, converting/cleaning into a similar schema where possible.
A small write-up can be found on GitHub. Any feedback/ideas would be appreciated!
 https://www.datalibrary.com (not online currently)
(disclosure - I work there)
I would love to see some cool data we might be able to use.
This should have found ImageNet data in LMDB format, which is available somewhere, but it returned no results.
Is it just me?