Last year I wrote a VS Code plugin [1] that lets you chat with an LLM in a Jupyter notebook on your own machine to do Code Interpreter-like things. It uses Simon Willison's llm command, which you can configure to use whatever LLM you like.
I think running the code that way is better since the notebook is free and under your own control, but a weakness is that there’s no sandbox. Perhaps these approaches could be combined?
I’m not working on it anymore, but perhaps someone wants to pick it up.
I was distracted by other things and stopped working on anything to do with notebooks. For my current project, I'm just using GitHub Copilot for autocomplete, along with GPT-4 for asking questions about TypeScript.
But notebooks are pretty cool for writing tutorials, so maybe when I next want to write one, I'll get back to it?
I also thought my usage pattern of ChatGPT resembled Jupyter notebooks. I tried to make a Chrome extension [0] that adds a code interpreter to ChatGPT, but implemented it in the most naive way, with just Pyodide.
I recently learned about Jupyter kernels and realized that was probably what I should've been using to build this out.
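For anyone curious, a minimal sketch of the kernel-based approach might look like this, using the jupyter_client API. The function name and flow here are my own guess at a starting point, not anything from JPT:

```python
# Hedged sketch: drive a real Jupyter kernel from Python instead of
# embedding Pyodide. Requires jupyter_client and ipykernel installed.
from queue import Empty
from jupyter_client.manager import start_new_kernel

def run_in_kernel(code: str, timeout: float = 30.0) -> str:
    """Execute `code` in a fresh Python kernel and return captured stdout."""
    km, kc = start_new_kernel(kernel_name="python3")
    try:
        msg_id = kc.execute(code)
        out = []
        while True:
            try:
                msg = kc.get_iopub_msg(timeout=timeout)
            except Empty:
                break
            # Ignore messages that belong to other requests (e.g. startup).
            if msg.get("parent_header", {}).get("msg_id") != msg_id:
                continue
            if msg["msg_type"] == "stream":
                out.append(msg["content"]["text"])
            elif (msg["msg_type"] == "status"
                  and msg["content"]["execution_state"] == "idle"):
                break  # kernel finished processing this request
        return "".join(out)
    finally:
        kc.stop_channels()
        km.shutdown_kernel(now=True)

if __name__ == "__main__":
    print(run_in_kernel("print(2 + 2)"))
```

The upside over Pyodide is that the code runs in a real Python process with access to installed packages; the downside, as noted above, is that there's no sandbox.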
I named the extension JPT, combining Jupyter with GPT. I was very proud of myself for that one lol :)
[1] Bot Typist https://marketplace.visualstudio.com/items?itemName=skybrian...