I'm a big fan of the BBC podcast In Our Time -- and (like most people) I've been playing with the OpenAI APIs.
In Our Time has almost 1,000 episodes on everything from Cleopatra to the evolution of teeth to plasma physics, all still available, so it's my starting point to learn about most topics. But it's not well organised.
So here are the episodes sorted by library code. It's fun to explore.
Web scraping is usually pretty tedious, but I found that I could send the minimised HTML to GPT-3 and get (almost) perfect JSON back: the prompt includes the TypeScript definition of the data I want. In the same prompt I also asked for a Dewey classification... and it worked. So I replaced a few days of fiddly work with 3 cents per inference and an overnight data run.
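For the curious, here's roughly what that call can look like. Treat it as a simplified sketch: the Episode fields, prompt wording, and model name are illustrative stand-ins, not the exact ones from the data run.

```ts
// Simplified sketch of the extraction call (illustrative fields and prompt).
// Requires Node 18+ for the built-in fetch.

interface Episode {
  title: string;
  broadcastDate: string;
  description: string;
  dewey: string; // Dewey Decimal classification, e.g. "932" for ancient Egypt
}

async function extractEpisode(minifiedHtml: string): Promise<Episode> {
  const prompt = `Here is the minified HTML of a podcast episode page:

${minifiedHtml}

Return a JSON object matching this TypeScript definition, filling in the
dewey field with a Dewey Decimal classification for the episode topic:

interface Episode {
  title: string;
  broadcastDate: string;
  description: string;
  dewey: string;
}

JSON:`;

  const res = await fetch("https://api.openai.com/v1/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "text-davinci-003", // a GPT-3-era completions model
      prompt,
      temperature: 0, // same HTML in, (almost) the same JSON out
      max_tokens: 512,
    }),
  });

  const data = await res.json();
  return JSON.parse(data.choices[0].text.trim()) as Episode;
}
```

Putting the type definition in the prompt does double duty: it tells the model exactly what shape to produce, and it gives the calling code something to parse against.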
My takeaway is that I'll be using LLMs as function calls way more in the future. This isn't "generative" AI, more "programmatic" AI perhaps?
So I'm interested in what temperature=0 LLM usage looks like at scale (you want the output to be pretty deterministic), and in what a programming language that treats that as a first-class concept might look like.
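To sketch what I mean: imagine declaring an LLM-backed function once -- an instruction, an expected return type, temperature pinned to 0 -- and then calling it like any ordinary function. The names below are invented purely to illustrate the idea, not any real library.

```ts
// Hypothetical sketch: an LLM call wrapped up as an ordinary typed function,
// with temperature pinned to 0. defineLlmFunction is an invented name.

async function complete(prompt: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "text-davinci-003",
      prompt,
      temperature: 0, // determinism is the point: same input, same answer
      max_tokens: 256,
    }),
  });
  const data = await res.json();
  return data.choices[0].text;
}

// "Declare" a function by describing it; call it like any other async function.
function defineLlmFunction<T>(instruction: string): (input: string) => Promise<T> {
  return async (input) =>
    JSON.parse(await complete(`${instruction}\n\nInput:\n${input}\n\nJSON output:`)) as T;
}

// Usage: classification becomes a plain function call.
const classifyDewey = defineLlmFunction<{ dewey: string }>(
  'Classify the episode description with a Dewey Decimal code. Reply with JSON like {"dewey": "932"}.'
);
// const { dewey } = await classifyDewey("The reign of Cleopatra and the end of Ptolemaic Egypt...");
```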
Seems like everyone has been getting excited about the search or code-generation use cases ... or simply trying to make it say naughty things (boring, not interested, wake up in a few more years), but this is eye-opening.
The idea of this as a "universal coupler" is fascinating, and I think I agree with the author that we are probably standing at an early-90s-web moment with LLMs as a function call (the technology is kinda-there and mostly-works, and people are trying out a lot of ideas ... some work, some don't).
My mind is racing. Thanks for the epiphany moment.