

Is anyone working on making Wikipedia computable a la Wolfram Alpha? - amichail

A first step would be to have public domain source code associated with articles where it would make sense. Moreover, you would be able to execute this code via the web on Wikipedia data/examples, possibly even with an animation explaining what is going on.

A second step would be to allow you to easily combine computations via the web, though not necessarily through a natural language interface.

UPDATE: The emphasis would be on the open source code, animations to understand it, and ways to combine this code to perform more complex computations in a practical and simple way (not a natural language interface). People could get the data from more trusted sources if necessary.
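To make the idea concrete, here is a minimal sketch of "computable articles": each article carries a small public-domain function, and a pipeline chains them together. Everything here (the `ARTICLE_CODE` registry, the `combine` helper, the article names used as keys) is hypothetical illustration, not any real Wikipedia API.

```python
# Hypothetical registry: code a contributor might attach to two articles.
# The registry and names are made up for illustration only.
ARTICLE_CODE = {
    # Naive primality filter standing in for the article's algorithm.
    "Sieve of Eratosthenes": lambda n: [
        p for p in range(2, n + 1)
        if all(p % d for d in range(2, int(p ** 0.5) + 1))
    ],
    "Arithmetic mean": lambda xs: sum(xs) / len(xs),
}

def combine(articles, seed):
    """Thread a value through each article's computation in turn."""
    value = seed
    for name in articles:
        value = ARTICLE_CODE[name](value)
    return value

# "Primes up to 30, then their mean" -- two articles' code composed
# without any natural language interface:
result = combine(["Sieve of Eratosthenes", "Arithmetic mean"], 30)
print(result)  # -> 12.9
```

The point of the sketch is the composition step: once each article exposes a callable with a known input/output shape, chaining them is trivial, and the hard problems shift to sandboxing and data provenance.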
======
phatbyte
That would be awesome, except for one thing: Wikipedia is editable by everyone,
so the data results couldn't be 100% accurate.

~~~
cema
Few things are 100% accurate. More specifically for Wikipedia, the results
will not be 100% repeatable. Not necessarily a problem, but something to take
into account.

------
paulgb
There is an open source project extracting data, but I forget what it is
called. Freebase is doing it commercially (they maintain a separate database,
but I believe they import data from Wikipedia too). They provide an API
and (IIRC) a data dump, but with a more restrictive license than Wikipedia
itself has (again, IIRC).

~~~
babyshake
Freebase is mostly focused on creating and curating a very clean programmatic
version of Wikipedia, but it is definitely not computational in a way
comparable to Wolfram Alpha.

~~~
paulgb
How so? I always assumed Wolfram Alpha was an NLP and data presentation layer
on top of a Freebase-like database. Freebase doesn't have that extra layer,
but structured data is surely part of the battle.
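As a toy illustration of that structured-data half, here is a minimal Freebase-like triple store with pattern-matching queries. The triples, predicate names, and `query` function are invented for this sketch and don't reflect Freebase's actual schema or API.

```python
# Toy triple store: (subject, predicate, object) facts.
# Schema and predicate names are made up for illustration.
TRIPLES = [
    ("Paris", "capital_of", "France"),
    ("Berlin", "capital_of", "Germany"),
    ("France", "population", 67_000_000),
]

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the pattern; None acts as a wildcard."""
    return [
        t for t in TRIPLES
        if (subject is None or t[0] == subject)
        and (predicate is None or t[1] == predicate)
        and (obj is None or t[2] == obj)
    ]

# "What is the capital of France?" answered structurally, no NLP layer:
print(query(predicate="capital_of", obj="France"))
# -> [('Paris', 'capital_of', 'France')]
```

The NLP and presentation layer Wolfram Alpha adds would sit on top of exactly this kind of pattern query, which is why clean structured data is a prerequisite either way.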

------
fleitz
Powerset, a.k.a. Bing.

