Show HN: Adding Mistral Codestral and GPT-4o to Jupyter Notebooks (github.com/pretzelai)
269 points by prasoonds 5 months ago | hide | past | favorite | 74 comments
Hey HN! We’ve forked Jupyter Lab and added AI code generation features that feel native and have all the context about your notebook. You can see a demo video (2 min) here: https://www.tella.tv/video/clxt7ei4v00rr09i5gt1laop6/view

Try a hosted version here: https://pretzelai.app

Jupyter is by far the most used Data Science tool. Despite its popularity, it still lacks good code-generation extensions. The flagship AI extension jupyter-ai lags far behind in features and UX compared to modern AI code generation and understanding tools (like https://www.continue.dev and https://www.cursor.com). Also, GitHub Copilot still isn’t supported in Jupyter, more than 2 years after its launch. We’re solving this with Pretzel.

Pretzel is a free and open-source fork of Jupyter. You can install it locally with “pip install pretzelai” and launch it with “pretzel lab”. We recommend creating a new python environment if you already have jupyter lab installed. Our GitHub README has more information: https://github.com/pretzelai/pretzelai

For our first iteration, we’ve shipped 3 features:

1. Inline Tab autocomplete: This works similarly to GitHub Copilot. You can choose between Mistral Codestral or GPT-4o in the settings

2. Cell level code generation: Click Ask AI or press Cmd+K / Ctrl+K to instruct AI to generate code in the active Jupyter Cell. We provide relevant context from the current notebook to the LLM with RAG. You can refer to existing variables in the notebook using the @variable syntax (for dataframes, it will pass the column names to the LLM)

3. Sidebar chat: Clicking the blue Pretzel Icon on the right sidebar opens this chat (Ctrl+Cmd+B / Ctrl+Alt+B). This chat always has context of your current cell or any selected text. Here too, we use RAG to send any relevant context from the current notebook to the LLM
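For illustration, here's the kind of context an @variable reference could inject into the prompt for a dataframe. This is a hypothetical sketch of the idea, not our exact implementation:

```python
import pandas as pd

# Hypothetical sketch: turn an @variable reference into LLM context.
# For dataframes we pass column names rather than the data itself.
def variable_context(name, value):
    if isinstance(value, pd.DataFrame):
        return f"@{name} is a DataFrame with columns: {sorted(value.columns)}"
    return f"@{name} = {value!r}"

df = pd.DataFrame({"region": ["EU", "US"], "churn_rate": [0.12, 0.09]})
print(variable_context("df", df))
# -> @df is a DataFrame with columns: ['churn_rate', 'region']
```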

All of these features work out-of-the-box via our “AI Server” but you have the option of using your own OpenAI API Key. This can be configured in the settings (Menu Bar > Settings > Settings Editor > Search for Pretzel). If you use your own OpenAI API Key but don’t have a Mistral API key, be sure to select OpenAI as the inline code completion model in the settings.

These features are just a start. We're building a modern version of Jupyter. Our roadmap includes frictionless, realtime collaboration (think pair-programming, comments, version history), full-fledged SQL support (both in code cells and as a standalone SQL IDE), a visual analysis builder, a VSCode-like coding experience powered by Monaco, and 1-click dashboard creation and sharing straight from your notebooks.

We’d love for you to try Pretzel and send us any feedback, no matter how minor (see my bio for contact info, or file a GitHub issue here: https://github.com/pretzelai/pretzelai/issues)




There are many other Jupyter notebooks with extensive AI integration. These are less (or not at all) open source, but more mature in some ways, having been iterated on for over a year:

- https://noteable.io/ -- pretty good, but then they got acqui-hired out of existence

- https://deepnote.com -- also extensive AI integration and realtime collaboration

- https://github.com/jupyterlab/jupyter-ai -- a very nice standard open source extension for gen AI in Jupyter, from Amazon. JupyterLab of course also has fairly mature realtime collaboration now.

- https://colab.google/ -- has great AI integration but of course only with Google-hosted models

- https://cocalc.com -- very extensive AI integration everywhere with all the main hosted models, mostly free or pay as you go; also has realtime collaboration. (Disclaimer: I co-authored this.)

- VS Code has a great builtin Jupyter notebook, as other people have mentioned.

Am I missing any?


Thank you for the list - I think I've come across all of these in my research! I'll try to highlight the differences for each.

- https://noteable.io/ - as you say, it doesn't exist anymore

- https://deepnote.com - Deepnote is closed source sadly - you can't run it locally, you can't tweak it, and you need to learn a new interface and switch to it

- https://github.com/jupyterlab/jupyter-ai - I actually mentioned this in the post, but in my experience, the UX and features are far behind what we've built already. I'd love for anyone who's tried jupyter-ai to give us a shot and let me know what we're missing! The plus side of jupyter-ai, of course, is that it supports way more models and the codebase is a lot more hackable than what we've built.

- https://colab.google/ - closed-source, with similar challenges as Deepnote. Another big challenge is that if you want to use Colab as a company, AFAICT, you need to use their enterprise version (so that you can have native data connectors, support guarantees etc.) and that only works with GCP, so if you're an AWS shop, this might be a deal-breaker.

- https://cocalc.com - I hadn't used it before, but congrats on a great project! Will check it out. I didn't look in detail, but first impressions make it look like a fairly different interface from Jupyter. One of our goals was to go where the users already are - that meant Jupyter. So that's definitely a major difference.

- VSCode - as I've mentioned elsewhere, we're targeting more of an analytics use case with the features we're building. VSCode has AI features, of course! But we'll look quite different once we build more items on the roadmap :)


> I didn't look in detail, but first impressions make it look like a fairly different interface from Jupyter.

That is correct, in the sense that it is a completely different implementation. Unlike Deepnote and Colab, we try to maintain the same keyboard shortcuts and other semantics as JupyterLab as much as we can.

If you don't already, we would love it if you came to the JupyterLab weekly dev meeting and demoed pretzelai: https://hackmd.io/Y7fBMQPSQ1C08SDGI-fwtg?view People from Colab, VS Code, etc. regularly come to the meeting and demo their JupyterLab-related notebook work, and it's really good for the community.


Oh cool! I'll definitely try to make it to one of the meetings :)


https://www.cursor.com/ - an AI-first VS Code clone

VS Code (and Cursor) has such nice Jupyter support that I find it much better to use for my workflow, rather than any dedicated solution for Jupyter Notebooks only.


Agree with Daksh in the sibling comment. I think it's like you said - different people have different workflows, and some might prefer using VSCode. IME though, most data scientists (and all data analysts) I've worked with preferred using the company-hosted internal Jupyter instance for their work.

Also, as we build more features, we're definitely going in the direction of more analytics workloads (live collaboration, leaving comments, Google Docs-style versioning, fully AI-driven analyses similar to OpenAI's Code Interpreter mode, etc.), and with these features, I think there will be a clear divergence in feature set between VSCode/PyCharm and Pretzel.

If I may ask, are you more on the engineering side (MLE) or more on the data side (Data Analyst)? EDIT: Just saw your other comment!


I am speaking from my experience, and I am an enthusiast for new things. I used the Jupyter Notebook before it was mainstream, and, say, PyTorch back when it was obvious that TensorFlow was the default option.

However, in general, I believe that any approach that works is good. And I don't think there is any reason to think we need to settle for the current data science programming UIs. That said, some attempts met with mixed success; e.g., ObservableHQ never overtook Jupyter.

In my view, PyCharm and VS Code are not that close to each other. PyCharm is a traditional IDE, while VS Code is more like an ecosystem of extensions. In particular, there is one (surprisingly good) for Jupyter Notebooks.

When developing any new way of interacting with code, there is a question of which ecosystem to use. Having it as a VS Code extension (or clone) has benefits and limitations. So does having it as a Jupyter extension or (the way you went) a clone.

If you want to talk more, happy to move it to emails.


IMO data scientists are often used to the Jupyter form factor instead of the editor form factor, so I see why they would prefer this.


I am a data scientist myself, one who moved from academia - see https://p.migdal.pl/blog/2016/03/data-science-intro-for-math....

I used Jupyter Notebook before it was popular and back when I was a PhD student. I pushed in a few places for the unorthodox way of exploring data in a browser. Now, I am back - but only thanks to wonderful code editors and their good support of Jupyter Notebooks. I recommend VSC (or, this year, Cursor) as the default environment for data sci.


That's a cool blog post! I'm mostly using Cursor now (just waiting until someone makes a kick-ass Emacs package so I can switch back!), so I can definitely see your perspective.

I'd be curious to hear a bit more about the kind of work you do that made you switch. Also if there's anything you miss in VSC/Cursor vs Jupyter. If you don't mind a small email exchange, let me know and I'll drop you a message :)


Thank you! If it is of public interest, I am happy to discuss it here. If you want to shoot me an email, that would be great.

My path is in "What I do or: science to data science" (https://p.migdal.pl/blog/2015/12/sci-to-data-sci)... and well, a bit more recent one in "Embodiment for nerds" (https://p.migdal.pl/blog/2021/09/embodiment-for-nerds).

Regarding the shortcomings of the Jupyter Notebook, I made a few notes in "How I learned to stop worrying and love the types & tests" (https://p.migdal.pl/blog/2020/03/types-tests-typescript).


marimo is very good - I've been using it for a few months now and have switched over to it for most of my notebook-related tasks (it ships with Copilot support)

https://github.com/marimo-team/marimo


Not sure if https://hex.tech fits here?


> Am I missing any?

DataSpell: https://www.jetbrains.com/dataspell


[flagged]


I'm not sure if you're referring to us or to other projects, but in case it's us: I'm sorry to hear you came away with that impression.

Not sure what more we could have said to make it absolutely clear that Pretzel is a fork of Jupyter but here's the first line of our Show HN post today:

> We’ve forked Jupyter Lab

Here's the first line of our README

> Pretzel is a fork of Jupyter with the goal to improve Jupyter's capabilities

Here's the first line of our LICENCE file

> Pretzel is a fork of Project Jupyter, which is licensed under the terms of the Modified BSD License.

I'm happy to add this to more places if you think it would make it clearer for people.


I think they were referring to stealing or license violations in AI-generated code, not to forking OSS projects.


This is a great implementation by your team + contributors. Simple but effective. And nice to see you've kept it open source, unlike some other Show HN submissions where they take open source work, make it closed, change a few things, and claim they've created something great.

I'm curious to see if you continue building out some other features. While these are great features (copilot, chat, etc.), I'd think most users would expect their IDE to have them out of the box (or with an extension) these days


Thanks for the kind words. Keeping Pretzel open source was important to us, partly for trust reasons. Most people who use Jupyter do so with sensitive data, so a closed-source tool simply wouldn't work. I wouldn't have trusted a closed-source Jupyter alternative with my company's data unless the counterparty was huge and well-known.

To your second point - completely agree that most users would expect these features from their IDEs today. But only two IDEs support Jupyter Notebooks: VSCode and PyCharm. You can certainly use them for notebook work, but most AI extensions written for VSCode aren't optimized for notebooks (e.g., GH Copilot apparently has difficulty completing code across different cells, per a friend). Secondly, to your point, this is just a start - we're going to be building a lot more data-analysis-specific features that don't exist in any IDE. I think there's a decent space for a tool like this.


> And nice to see you’ve kept it open source instead of some other Show HN submissions where they take open source work, make is closed, change a few things, and claim they’ve created something great.

They seem to have taken a "BSD-3-Clause" licensed project and changed it to an AGPLv3 licensed one. That's not the same thing, but it's similar to what you're concerned about.


This is true. All newly added code is licensed under AGPLv3. But I fail to see how it's the same as modifying an open-source tool and re-selling it as closed source. This is what AGPL gives us - anyone can use Pretzel however they want. They can even re-package and re-sell it if they want, so long as they too open-source their modifications and improvements.

Selecting the license for an open-source tool backed by a company is tricky. You want your code to be open source for its benefits (e.g., for us, one benefit is building trust with people working with sensitive data). But the history of open source is full of tools that another company just started reselling without doing any of the work (Sentry, MongoDB, etc.). So you need to find a balance. AGPLv3 strikes the right balance for us.


> ... I fail to see how it's the same thing ...

You're right that it is not the same thing, which is why I wrote "That's not the same thing [...]" in the comment you're responding to. I have done the same as you guys (building an AGPLv3 or 'worse' product on top of BSD licensed code) many, many times! Anyway, what you're doing is really exciting!


That's fair - I was mostly responding to "but it's similar to what you're concerned about" because I didn't think the concerns were the same (but I can see your perspective on it too! While we are "giving away" the code, we're definitely imposing some limitations).

Thanks for the kind words, we're excited to be building this :)


I would also be happy to video chat with you guys anytime, since we've built similar things over the years (wstein at sagemath.com).


How does that even work, copying in and relicensing someone's bsd3 code as your own AGPLv3?


You're right - that's not how it works! If you look at our license, all the Jupyter code stays BSD3. If we modify any BSD3 code, the modified code stays BSD3. All the new code we write - in separate files - is AGPLv3, and it's clearly delineated in the repo via file headers.


Ramon here, the other cofounder of Pretzel! Quick update: Based on some early feedback, we're already working on adding support for local LLMs and Claude Sonnet 3.5. Happy to answer any questions!


braintrust-proxy: https://github.com/braintrustdata/braintrust-proxy

LocalAI: https://github.com/mudler/LocalAI

E.g., promptfoo and ChainForge have multi-LLM workflows.

Promptfoo has a YAML configuration for prompts, providers, and more: https://www.promptfoo.dev/docs/configuration/guide/

What is the system prompt, and how does a system prompt also bias an analysis?

/? "system prompt" https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...


Thank you for the links! We'll take a look.

At the moment, the system prompt is hardcoded. I don't think it should bias analyses, because our goal is usually returning code that does something specific (e.g., "calculate normalized rates for column X grouped by column Y") as opposed to something generic ("why is our churn rate going up"). So the idea is that the operator is responsible for asking the right questions - the AI merely acts as a facilitator to generate and understand code.

Also, tangentially: we do want to allow users some kind of local prompt management with easy access, potentially via slash commands. In our testing so far, the existing prompt largely works (it took us a fair bit of trial and error to find something that mostly works for common data use cases), but we're hitting the limitations of a fixed system prompt, so these links will be useful.
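To sketch the slash-command idea, here's a toy version of what local prompt management could look like. The PROMPTS table, command names, and resolve() helper are all illustrative assumptions, not Pretzel's actual API:

```python
# Hypothetical sketch: map slash commands to locally managed system prompts.
DEFAULT_SYSTEM_PROMPT = "You write correct, runnable code for data analysis tasks."

PROMPTS = {
    "/plot": "Generate plotting code for the user's request. Return only code.",
    "/sql": "Translate the user's request into a SQL query.",
}

def resolve(message):
    """Split off a leading slash command and pick the matching system prompt."""
    cmd, _, rest = message.partition(" ")
    if cmd in PROMPTS:
        return PROMPTS[cmd], rest
    return DEFAULT_SYSTEM_PROMPT, message

system, user = resolve("/sql monthly churn rate by region")
print(user)  # -> monthly churn rate by region
```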


Big fan of LiteLLM Proxy and LiteLLM Python SDK to connect to various local models. Might be helpful here as well


Thanks Marc! We'll check them out. So far, with our limited experience with local models, we'd simply been planning to use Ollama, so thanks for the heads up!


Adding some love for local model support. A lot of security use cases demand local models. Would love to see local support soon, as this looks interesting.

I agree - starting out with Ollama probably hits 80% of what people would be looking for out of the box.


GitHub Copilot is the most useful tool I've found in a long time, and having that in Jupyter Notebooks is just awesome. I've been missing it for quite some time. Great work guys!


You can also open Jupyter notebook files in VS Code, which would be another way to get AI autocomplete. I’m not enough of a Jupyter user to know whether it would make sense to use VS Code all the time.


Yeah, this is definitely a good way to access AI code completion (inline or otherwise) in Jupyter notebooks. In fact, I know some data folks who'd been using Jupyter from day 1 switch to VSCode simply because their company buys a Copilot license for everyone, and they really missed it in their Jupyter workflow.


Or in JetBrains/PyCharm


Agree. We actually tried getting GitHub Copilot to work with Jupyter, but GH doesn't have an official API. We took some time to reverse-engineer an implementation from the Neovim GH Copilot extension [1] and from Zed [2], but found it too flaky and too much trouble in the end.

Meanwhile, we also found a better speed/quality tradeoff with Codestral (since it has a fill-in-the-middle version, unlike a general LLM), so we decided to go with it. This was inspired by continue.dev using Codestral for tab completion :)

[1] https://github.com/github/copilot.vim [2] https://zed.dev/blog/copilot
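For readers unfamiliar with fill-in-the-middle: instead of only continuing text, the model is given both the code before and after the cursor and generates what goes between. A minimal sketch of the request body for Mistral's FIM endpoint (POST /v1/fim/completions, which takes `prompt` and `suffix` fields per their public API docs; the exact parameters here are assumptions, check the docs before relying on them):

```python
import json

# Sketch: build a fill-in-the-middle request for a Codestral-style model.
# `prompt` is the text before the cursor, `suffix` the text after it;
# the model generates the code in between.
def fim_request_body(prefix, suffix, model="codestral-latest", max_tokens=64):
    return {
        "model": model,
        "prompt": prefix,
        "suffix": suffix,
        "max_tokens": max_tokens,
        "temperature": 0,
    }

body = fim_request_body(
    prefix="df = pd.read_csv(",
    suffix=")\ndf.head()",
)
print(json.dumps(body, indent=2))
```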


Curious about the limitations that made you fork it instead of making an extension.


So, we discuss this briefly in our FAQ, but let me try to expand on it.

Our goal is to make a modern literate programming tool. On a surface level, a tool like that would end up looking very similar to Jupyter, though with better features. We've mentioned some things we'd like to have in this final tool in our README and also in the post above.

Our first thought was to build a tool from scratch. The challenge is that it's very hard to get people to switch, so we had to go where people already are - that meant Jupyter.

We could've made this one feature an extension, with some difficulty (in fact, in our early experiments, we started by making an extension). It would have some downsides - we wouldn't have granular control over certain core Jupyter behaviours like we do right now (e.g., we wanted to allow creating hidden folders to store some files). But we probably could have made a 95%-working version of Pretzel as a Jupyter extension.

The bigger reason we chose to fork is that, down the line, we want to completely change the code execution model to be DAG-based, to allow for reproducible notebooks (similar to https://plutojl.org/, for example). Similarly, we want to completely replace CodeMirror with Monaco (the core editor engine in VSCode) to provide a more IDE-like experience in Jupyter. These things simply couldn't have been done as extensions.
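To make the DAG idea concrete, here's a toy sketch (entirely hypothetical, not Pretzel's design) of cells that declare which variables they read and write, executed in dependency order the way Pluto.jl does it. In a real system, re-running one cell would trigger re-running only its dependents:

```python
from graphlib import TopologicalSorter

# Toy notebook: each "cell" declares the variables it reads and writes.
cells = {
    "c1": {"writes": {"x"}, "reads": set(), "code": "x = 1"},
    "c2": {"writes": {"y"}, "reads": {"x"}, "code": "y = x + 1"},
    "c3": {"writes": {"z"}, "reads": {"y"}, "code": "z = y * 2"},
}

# A cell depends on whichever cell writes the variables it reads.
writer = {var: cid for cid, c in cells.items() for var in c["writes"]}
deps = {cid: {writer[v] for v in c["reads"]} for cid, c in cells.items()}

# Execute cells in topological order into a shared namespace.
ns = {}
for cid in TopologicalSorter(deps).static_order():
    exec(cells[cid]["code"], ns)

print(ns["z"])  # -> 4
```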


> GitHub Copilot still isn’t supported in Jupyter

What do you mean by this? I've been using Copilot in VS Code .ipynb files for over a year now.


It's as others say - VSCode supports Copilot, but most data scientists (and especially analysts) who don't spend most of their day in a text editor still use Jupyter Lab (or Notebook - I mean the software, not the file format). And with Codestral, we've found similarly good completions (sometimes better than Copilot's) at a much better speed and cost.


I assume via Jupyter Notebook or Lab (not VS Code running it)


They likely mean Jupyter Notebook the application, not the notebook file format.


These editors all focus on programming; does anybody have a recommendation for more general note-taking?

I'd like to do things like organizing very rough notes, having them reformatted according to a general template, apply changes according to a prompt, maybe ask questions that refer to a collection of notes, ...


I don't really get the appeal of this - I'd just use VSCode with Jupyter if I really wanted "AI" integration, since I can then access the whole ecosystem of extensions. The idea isn't that bad, but it lacks purpose.


Some of us either like Jupyter or must use it.


Your work is basically a Jupyter extension (https://github.com/pretzelai/pretzelai/tree/main/packages/pr...). Why not just create an extension like the others here (https://github.com/jupyterlab-contrib) instead of hard-forking JupyterLab?


Hey, that's a fair question. I've responded to another question about why fork here: https://news.ycombinator.com/item?id=40857807

In short: for now, yes, we might have been able to implement this functionality as an extension (though there are several other changes we've made to core Jupyter behavior as well). But our roadmap calls for much more sweeping changes to the code execution model itself, and that simply can't be an extension.


At this point I'm almost afraid to ask, but my attempts to figure it out have failed: what is a Jupyter notebook? Where is the code running? On your computer? On someone else's computer?


Ah, sorry about that. This Show HN was targeted at folks with a passing familiarity with Jupyter Notebooks, but I should have explained a bit more.

Jupyter is a web application that lets a user execute Python code in either a local or remote "kernel" - which is just a Python process with a communication layer.

You can then do literate programming (run some code and immediately see results, including plots and tables) within this web interface - basically a much fancier REPL. The code runs in the aforementioned Python "kernel", either locally on your machine or remotely.
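A toy sketch of the "send source to a Python process, read back the output" loop at the heart of a notebook, using only the stdlib. A real Jupyter kernel is a separate process speaking a ZMQ message protocol and returning rich display data, but the basic shape is this:

```python
import code
import contextlib
import io

# A stand-in for the kernel: an interpreter we feed source strings to.
interp = code.InteractiveInterpreter()
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    interp.runsource("x = 21 * 2")   # "cell 1": define a variable
    interp.runsource("print(x)")     # "cell 2": inspect it later

print(buf.getvalue().strip())  # -> 42
```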

Here's a quick introduction to Notebooks that I found on YouTube: https://www.youtube.com/watch?v=jZ952vChhuI


It's usually run as a local web application, with the browser on the same machine as the backend, though it's possible to bind it to non-localhost interfaces.


I just use PyCharm and Copilot plugin. Works like a charm.


Yeah, PyCharm and VSCode are definitely great options (though PyCharm is paid, and VSCode AI extensions aren't notebook-tailored). If you ever get a chance, I'd love your feedback on Pretzel - I think Codestral is a better and faster inline completion model than GH Copilot's GPT-4-class model, plus I think we handle context-relevant questions better :)


I'll give it a crack over the holiday. My primary mechanism is that I talk inline to the program:

    import pandas as pd

    # read a csv in with a pipe delimiter
    pd.read... # AI fills this in
I saw your demo and I get the "Enter AI mode" thing, but I like this flow, especially since I use ideavim. Perhaps the only improvement would be another editor mode for AI that's keyboard accessible.
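For what it's worth, the sort of completion I'd hope the AI fills in for that comment, written out with in-memory data so it's self-contained (the completion and the data here are hypothetical, of course):

```python
import io

import pandas as pd

# Stand-in for a pipe-delimited file on disk.
csv_text = "name|score\nada|10\ngrace|9\n"

# read a csv in with a pipe delimiter
df = pd.read_csv(io.StringIO(csv_text), sep="|")  # the line the AI would fill in

print(df.shape)  # -> (2, 2)
```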


Interesting! We have an inline autocomplete as well (that's the one that uses Codestral). So in your example, you'd see an autocompletion prompt after you type pd.read, and hitting tab will autocomplete the line (or many lines, if there's enough context to generate that much text).

The AI mode/box with Cmd + K is for more complex prompts, multi-line code chunks and such. We've tried to make everything completely keyboard accessible btw, including the sidebar so that you never have to use the mouse :)


Have y'all seen Jupyter AI? Seems to do the same but better (more features, more mature codebase, better UI/UX/DX) while being a JupyterLab extension https://github.com/jupyterlab/jupyter-ai


I tried it last month, and once I was out of the base credit usage, it broke and has stayed broken. Even the sample questions they list on the start page break when I try to run them. The last update didn't fix it either.


Well, you should probably look into adding credits for the LLM you're using, then. I'd guess you're using one of OpenAI's LLMs via an API key. OpenAI used to not require pre-purchasing credits; now they do: https://community.openai.com/t/oai-api-switching-to-pre-paid.... So I'd imagine you just haven't added credits to your OpenAI account.


It's their built-in base plan; it's supposed to reset, but doesn't.


Hey, we actually mentioned jupyter-ai in the original post above. In my experience, the UX we've created is much better than jupyter-ai's, and AFAICT jupyter-ai doesn't have an inline code completion feature like GitHub Copilot's (which we support via Codestral).

If you've used jupyter-ai, I'd really appreciate it if you could try out Pretzel and tell us if you're missing anything. We're definitely behind on the number of integrations and, as you say, the jupyter-ai codebase is more mature, but I really think we've made a much more usable tool.


Codeium (https://codeium.com/) already supports this, along with VSCode Jupyter notebook extensions. It has 400k downloads on the VSCode extension store. I don't really see the point of this when Codeium already exists.


It's true - the VSC family of editors (VSCode, Codeium, and Cursor) all let you use AI autocomplete, question answering, etc. with your notebook code through various extensions. However, lots of data scientists and analysts prefer using Jupyter Lab, Colab, or notebook-like interfaces in general. Plus, this is just a start - we're going to be adding way more features that will make the differentiation clearer (see my other comment from earlier today: https://news.ycombinator.com/item?id=40858509)

As of now though - there are plenty of people who do use Jupyter and we hope Pretzel - as it stands today - can already be of help to them.


Curious on why you went with Codestral for autocomplete, does it outperform other local models? How is the performance compared to GPT or Claude for autocomplete?

Any plans to finetune Codestral for this specific usecase?


So, we were tipped off to Codestral being really good by continue.dev - for reference, that's a VSCode extension that gives you features similar to Cursor's. After we trialled it head-to-head against GPT-4o for fill-in-the-middle completion, in my experience (purely vibe-based), it produced better completions maybe twice as fast as GPT-4o.

We haven't tried it against Sonnet 3.5 yet - my hunch is that in the speed/quality/cost space, Sonnet will end up doing better than Codestral for some folks.

Against general-purpose local models (taking Llama-70B as a high-water mark), Codestral does far better on code-related tasks while being less than a third the size (22B!). That said, I'm definitely excited to try out DeepSeek Coder V2 - by all reports, it's an amazing model for code completion and will likely also beat out Codestral.

I don't think we're planning to fine-tune Codestral, though (or any model, for that matter). The latest models keep getting faster, better, and cheaper, AND they already work quite well. My thinking at this time is that waiting it out and letting a big AI lab make a more capable general model is the better strategy.


Have you seen Livebook? Best Jupyter Notebook ever!! https://livebook.dev/


Hey Mathias! Was fun chatting about Livebook the other day, and yes, I'm definitely looking at it for inspiration! Alas, it's an Elixir-only notebook, and as far as I know, there are very few data folks using Elixir, so it might be a hard sell.


I installed it into my Jupyter env and it runs fine on port 8889, but on the default port 8888, it just sits waiting for the AI to generate a response.


Hey Mike, just filed a github issue for this: https://github.com/pretzelai/pretzelai/issues/111

If you wouldn't mind, could you please comment on that issue so we can ask a few follow-up questions to help debug it?


Are there technical reasons for the fork or could Pretzel have been implemented as a set of extensions?


Yes - I've replied here: https://news.ycombinator.com/item?id=40857807

I'll also copy what I said in another comment: in short, for now, yes, we might have been able to implement this functionality as an extension (though there are several other changes we've made to core Jupyter behavior as well). But our roadmap calls for much more sweeping changes to the code execution model itself, and that simply can't be an extension.


Are the file formats the same? Are there any Pretzel-specific extensions?


We're just a fork of Jupyter, so everything - notebook files, keybindings, extensions, settings - should just work.

We pull all of your config from the ~/.jupyter folder, so you should be able to switch between Jupyter and Pretzel from different Python environments (though you might see some warnings).


Seems like the problem I'm experiencing right now is that I'm overwhelmed by the sheer number of tools and choices - it's frankly exhausting.

There's a feeling that I can do anything and everything with AI, but in reality I can't do anything, because I can't prioritize and choose anymore due to choice fatigue.


That's a fair concern, and one I've been through myself. What we've tried to do is honestly a bit the opposite - hundreds of thousands of people already use Jupyter for data work, and we started with the idea of going where the users are, precisely so they don't have to switch tools.

By making a fork that can be installed with one line of code, we're hoping we don't put Jupyter users through decision fatigue over yet another dev tool. Instead, the idea is to simply make their existing tool better.


Give this a try - I installed it and so far so good. I like that they use RAG on the notebook, so it tailors the responses to what I'm doing. Really great way to dip your toes in.



