> Do we even need structured data in the post-AI age?
When we get to the post-AI age, we can worry about that. In the early LLM age, where context space is fairly limited, structured data can be selectively retrieved more easily, making better use of context space.
edit: I tried asking ChatGPT to write SPARQL queries, but the Q123 notation used by Wikidata seems to confuse it. I asked for winners of the Man Booker Prize and it gave me code that used the Q id for the band Slayer instead of the Booker Prize.
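For reference, a hand-written version looks something like the rough Python sketch below, hitting the public SPARQL endpoint directly. Q160082 and P166 are my assumed IDs for the Booker Prize and "award received"; verify both on wikidata.org before trusting the results, since guessing Q-ids is exactly where the model tripped up.

```python
import requests

# Minimal sketch: query the Wikidata SPARQL endpoint for prize winners.
# ASSUMPTIONS to verify: Q160082 = Booker Prize item, P166 = "award received".
ENDPOINT = "https://query.wikidata.org/sparql"
QUERY = """
SELECT ?winner ?winnerLabel WHERE {
  ?winner wdt:P166 wd:Q160082 .    # award received = Booker Prize (assumed Q-id)
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
"""

resp = requests.get(
    ENDPOINT,
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "wikidata-example/0.1"},
)
resp.raise_for_status()
for row in resp.json()["results"]["bindings"]:
    print(row["winnerLabel"]["value"])
```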
I use wikidata a lot for movie stuff. Ideally, I imagine the Wikimedia Foundation itself will be looking into using LLMs to help parse its own data and convert it into wikidata content (or confirm it, keep it up to date, etc.)
Wikidata is incredibly useful for the kind of information I consider valuable (e.g. the TMDb link for a movie) but which, due to the curation imposed on Wikipedia itself, isn't typically available on very many Wikipedia pages. An LLM won't help with that, but another bit of information, like where a film is set, would be a perfect candidate for an LLM to try to determine and fill in automatically, with a flag for manual confirmation.
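A sketch of the kind of query that would surface those gaps, i.e. films with a TMDb link but no setting recorded yet. The IDs are assumptions worth double-checking: Q11424 = film, P4947 = TMDb movie ID, P840 = narrative location.

```python
import requests

# Rough sketch: find films that have a TMDb ID but no "narrative location"
# statement -- the sort of gap an LLM could propose a value for, pending
# manual confirmation. Verify the assumed Q/P ids before relying on this.
ENDPOINT = "https://query.wikidata.org/sparql"
QUERY = """
SELECT ?film ?filmLabel ?tmdb WHERE {
  ?film wdt:P31 wd:Q11424 ;      # instance of: film (assumed Q-id)
        wdt:P4947 ?tmdb .        # TMDb movie ID (assumed P-id)
  FILTER NOT EXISTS { ?film wdt:P840 ?anywhere . }   # no narrative location yet
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 50
"""

resp = requests.get(
    ENDPOINT,
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "wikidata-example/0.1"},
)
resp.raise_for_status()
for row in resp.json()["results"]["bindings"]:
    print(row["filmLabel"]["value"], "->", row["tmdb"]["value"])
```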
I used that when building a database of Japanese names, but found that even wikidata is inconsistent in the format/structure of its data, as it's contributed by a variety of automated and human sources!
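Part of that inconsistency is simply that a "name" can live in several places in the entity JSON, each with its own structure. A rough sketch of poking at one entity (P1559 is assumed to be "name in native language", and Q42 is just a placeholder id):

```python
import requests

# Sketch: fetch one entity's JSON and show the different places a name can
# appear (label, aliases, or a statement). ASSUMPTION: P1559 = "name in
# native language"; Q42 is a placeholder -- substitute the entity you need.
ENTITY = "Q42"
url = f"https://www.wikidata.org/wiki/Special:EntityData/{ENTITY}.json"
entity = requests.get(url).json()["entities"][ENTITY]

print("label (ja):", entity["labels"].get("ja", {}).get("value"))
print("aliases (ja):", [a["value"] for a in entity.get("aliases", {}).get("ja", [])])
for claim in entity.get("claims", {}).get("P1559", []):
    value = claim["mainsnak"].get("datavalue", {}).get("value", {})
    print("native-name statement:", value.get("text"), f"({value.get('language')})")
```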
Basically every wikipedia page (across languages) is linked to wikidata, and some infoboxes are generated directly from wikidata, so they're separate but overlapping, and increasingly so.
I agree there is strong overlap between entities, and also infobox values, but both wikidata and wikipedia have many more disjoint datapoints: many tables and factual statements in wikipedia are not in wikidata, and many statements in wikidata are not in wikipedia.
> do we even need structured data in the post-AI age?
Even humans benefit quite a bit from structured data, I don't see why AIs would be any different, even if the AIs take over some of the generation of structured data.
FWIW, that's been my use case: when I saw the author post his initial examples pulling data from Wikipedia pages, I dropped my cobbled-together scripts and started using the tool via the CLI and jq.
MediaWiki is notorious for being hard to parse (a small illustration follows the links below):
* https://github.com/spencermountain/wtf_wikipedia#ok-first- - why it's hard
* https://techblog.wikimedia.org/2022/04/26/what-it-takes-to-p... - an entire article about parsing page TITLES
* https://osr.cs.fau.de/wp-content/uploads/2017/09/wikitext-pa... - a paper published about a wikitext parser
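As a small illustration of why regex-level parsing breaks down: templates nest arbitrarily and infobox values can themselves contain templates and links. This sketch uses mwparserfromhell, one of several third-party wikitext parsers (not one of the tools linked above):

```python
import mwparserfromhell  # pip install mwparserfromhell

# Example wikitext: an infobox whose parameter values contain a nested
# template and wiki links, plus an inline maintenance template.
text = """
{{Infobox film
| name     = Example Film
| runtime  = {{duration|m=113}}
| starring = [[Some Actor]]<br />[[Another Actor]]
}}
Plot text with a {{citation needed|date=May 2023}} tag.
"""

code = mwparserfromhell.parse(text)

# filter_templates() is recursive, so it also finds the nested {{duration}}.
for template in code.filter_templates():
    print("template:", str(template.name).strip())

infobox = code.filter_templates(matches="Infobox film")[0]
print("runtime param (still wikitext):", str(infobox.get("runtime").value).strip())
```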