Hacker News
JetBrains AI (jetbrains.com)
122 points by ddadon10 6 months ago | 108 comments



Just like many others are saying here (and I hope JetBrains hears and sees the feedback): this should 100% be included in the All Products Pack.

We are already paying for products we don’t use, incentivize us not to downgrade.

An extra subscription on top of gpt-4 is rough.

I’ve been a loyal customer for years and have convinced many others to switch and pay for your products. Now it’s going to effectively double in price.

I’d also take at-cost, where I give you my GPT-4 API key.

cursor.so is getting more and more attractive as an option.


If you use this at any scale using an API key directly will cost a lot more than $16/mo. I’m convinced that these products are purely loss leaders at this point having seen how crazy expensive GPT-4 is when used direct.


No kidding, I'm spending $10-20 a day on API calls just between summarization and development using aider.


I think it makes sense not to include it in the license. Such solutions incur additional cost just from the needed computing power, which the customer needs to pay for, so the cost of the JetBrains license would need to include that in the long term.

If it were bundled, anyone who would rather use another AI tool like Copilot would still be paying for JetBrains AI.

In a lot of environments Coding AI usage is not allowed or even possible (no internet access), and would just be a cost driver without any benefit.


"A lot" are not allowed? Most places that disallow AI assistants will also happily pay exorbitant rates to consultants who will publish your whole code base publicly on git...

And how many offline coders are out there? My guess is less than 1%?


It’s not just about offline coders. A lot of companies have a policy that code can’t leave their internal systems. No GitHub, no OpenAI, just internal tools.

Far more than 1% of developers are affected by that.

But let’s take it from another perspective: would you be happier if JetBrains increased the price by $100 per year with AI included, instead of having an optional $100 payment?


Come work in finance. No external services.


They have their own models, they don’t use GPT-4. (Update: apparently, old news, they use both).


What's so useful about cursor apart from inline prompts?


I use Cursor and it's amazing.


I was a bit skeptical of Cursor, but I have fully migrated over and have had no issues.


I will only want to use this if I'm able to host the LLM locally.

Currently using CodeGPT and running CodeBooga 34B @ 4-bit on my work laptop. This is good enough for me, but I'd love some more advanced functionality, such as code completion.

CodeGPT

- Repo: https://github.com/carlrobertoh/CodeGPT

- Plugin: https://plugins.jetbrains.com/plugin/21056-codegpt


Been using CodeGPT for a while myself.

Great tool; I sent an e-mail to the author, who seems like a good person.

I actually hook it up to my Windows gaming rig. The 4090 gets the job done when I'm not gaming. (I just start the server when I go to work, and then stop it when I want to game. It isn't that big a deal. I haven't hit the use-case where I want to use the AI to develop game mods yet, but I'll cross that bridge when I get there. ;) )


Which model are you using? I tried the deepseek 34b or whatever, but it goes nuts at times. Also, what params do you have for Prompt Context Size and Max Completion Tokens?


Currently Phind_Phind-CodeLlama-34B-v2, gonna try CodeBooga because, I can... that's the point. Use what ya like.

4k/32k, if I remember right. An interesting thing to do is actually try to use a true coding model in a chat. I had one model puking out its stack overflow training data, down to including user IDs.

Oops.


Isn't that going to be extremely slow? I can only realistically run 7B 5-bit models on my RTX 3060, anything more and it offloads to the CPU. My responses go from almost-instantaneous to 3mins+.


It seems like it's running at speeds comparable to GPT-4 prior to Turbo. I could be wrong, but what I'm trying to say is: it ain't bad at all.


This is where the Mac world shines.


Would a 32GB M2 Max be able to run a 34B model?


> running CodeBooga 34b @ 4bit on my work laptop.

How much RAM or VRAM does that require?


Max RAM is showing 22.72GB. They gave me a laptop with an M2 Max and 64GB of RAM. I'm loving the fact that I can toss any piece of code in there without having to worry about data privacy issues and/or any retention policies of service providers.


In my experience, the vastly oversimplified-beyond-actual-usability calculation is that you take the parameter count (34) and count that as the amount in GB you'd need for an 8-bit model. With a 4-bit quantization that's halved, so your ballpark is 17 GB plus a few GB for context. You can run these in RAM, but it's not very fast: if I had to guess, slightly less than 1 token/sec on most non-Apple consumer devices.
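That rule of thumb can be sketched as a quick calculator. Everything here is a ballpark assumption from the comment above (1 GB per billion parameters at 8-bit, scaled by quantization, plus a few GB for context), not a measurement; real usage varies by runtime, KV-cache size, and file format.

```python
def estimate_vram_gb(params_b: float, bits: int = 4, context_overhead_gb: float = 3.0) -> float:
    """Rough memory estimate for running a quantized LLM.

    params_b: parameter count in billions (e.g. 34 for a 34B model).
    bits: quantization width; 8-bit costs ~1 GB per billion params, so scale by bits/8.
    context_overhead_gb: assumed extra headroom for the context window.
    """
    return params_b * (bits / 8) + context_overhead_gb

# A 34B model at 4-bit: ~17 GB of weights + ~3 GB context headroom = ~20 GB.
print(estimate_vram_gb(34, bits=4))  # → 20.0
```

That 20 GB figure lines up with the ~22.7GB the commenter above reports their loader showing, and explains why a 64GB M2 Max handles it comfortably while a 12GB RTX 3060 has to offload to the CPU.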


It’ll be around 17GB + context. For longer contexts you’ll add a couple GBs.


Give it another year. In theory it's not going to be difficult, but sampling speed/power consumption is prohibitive for normal hardware at the moment.


> sampling speed/power consumption is prohibitive for normal hardware at the moment.

For the top-end of existing consumer hardware for the purpose, that might be in the process of changing quite quickly:

https://huggingface.co/blog/optimum-nvidia


Honestly, with a 3090 costing like $700 used, it's within reach of many on this site quite easily, assuming they have a machine to plug the 3090 into.

People seem to think the hardware for this stuff is way out there... it is getting to be within reach VERY quickly.


Yes, but if you're looking for code completion at the quality level of Codex/Github Copilot, that's still a bit off.


Don't fall into the trap of: it has to be the best or it's useless.

The question for me is: does it help me get my job done? Does it assist me in ways I find useful?


I have been using GitHub Copilot for a while now, but when I first tested these, together with the code AI offering from Amazon, they were... kinda useless. Only after a few updates did it get to a point of being a net positive in my daily work. I still occasionally check the local models, and they're still not at a level in their auto-complete capabilities that I'd like to use, at least as of the last time I checked. But I work with other LLMs, and given the recent advances in speed and the stricter nature of code, soon-ish we should have a good and fast local model; we're just not there yet.


I don't use them for auto-complete; that may be the difference. I use them for various other things: tests, etc. I don't find them 100% accurate, but it beats writing it from scratch.


Okay, so maybe 5% of developers who use a desktop will be willing to fork out on a GPU and go through the process of setting up something that costs $10 otherwise...


Can you show me the metrics for that, please?

Additionally, what's your company's IP policy? Do they let you send their code to a remote company or do they see that as a breach of security?


I'm sure there are all sorts of companies with very strict isolation requirements. But your typical enterprise will run your code editor in a cloud VDI, host on Atlassian, deploy using Atlassian, deploy on some other cloud provider, and have the code written by a doubly outsourced junior developer who's got a warez'd Windows license. I fail to see how adding another party who says they explicitly don't collect your data will make any difference.


Pricing seems to be $10/mo or $100/yr.

Personally I find that a bit steep when I'm already paying $100/yr for the Rider IDE (multi year concurrent discounted rate). Commercial pricing is $250/yr+ for just the IDE.

I know that's about the same as Github Copilot, but the downside to Jetbrains is if I switch to another IDE to get AI features (ie VSCode), then I'll stop paying their IDE subscription as well.

Edit: It also doesn't seem like the free trial is available to an individual subscription holder, only commercial subscriptions.


That's for individual use, VAT-exclusive. For business licenses, that's 240€/yr with VAT. It's really steep.


It's always better to use the integration that works best with your IDE. What's wrong with switching?

I feel like $10/mo is incredibly cheap for something that offers more utility than a coffee.


I bet you can get more utility from $10 of coffee if you're making it at home or the office. That doesn't mean the price isn't good; I just hate the coffee comparison so much I couldn't resist replying.


What? No, I don’t understand your reply at all. I wouldn’t value a one-off coffee greater than a month of AI assistance. In fact, I could compile a long list of things that I pay $10/month for that are less valuable.

You’re a developer: no matter how low your salary is, $10 is incredibly cheap.


> I wouldn’t value a one off coffee greater than a month of AI assistance.

My point is that if you're making coffee, and not buying it ready-made from some expensive place just because of the name, then $10 wouldn't buy only two cups. I bet in many places you might spend about $10 for a whole month of coffee if you're making it yourself.

> You’re a developer - no matter how low your salary is, 10$ is incredibly cheap.

That's a good argument. My point wasn't that the price is expensive, just that the coffee comparison isn't true. One could also talk about people working as programmers outside NA and EU, but they will probably not be using JetBrains IDEs anyway. They are free for students, but I really hoped they would have some education discount.


If you want to make coffee at home, sure the amortized cost could be less than $10/mo. But, I wouldn’t consider that a fair comparison - probably running a code LLM locally could also turn out cheaper.

We’re talking about paying others for the work instead of doing it ourselves. If not a coffee, then eating out at a restaurant or buying a t-shirt: anything can easily blow through $10 in one go.

The problem is that the upfront GPU hardware is expensive, both for training and inference. You could save on electricity by relocating datacenters, but probably wouldn’t make a big difference. Sadly, geo-based pricing might be difficult for LLMs.


The thing is that every service on Earth is marketing itself as "it’s only $10/mo, the price of three coffees!!1!".


The trial being "for commercial" means it's for paying customers, including individuals.


As someone who has been test driving this, I feel like this should just be value-add to the IDE. A reason for the subscription beyond DataGrip and Refactor tooling. It's decent for what it does but an extra $10/mo or $100/yr on top is a hard pill to swallow when VS Code is free and Copilot is $10/mo.


Is it at least as good as using Copilot in the Jetbrains IDE?


I've been futzing with it for the last hour and it's a hard no so far.


It's using the Copilot Chat API in the background -- if you check the Copilot Chat APIs there's even specific flags for Jetbrains IDE end-users, so this is essentially a whitelabel client for Copilot Chat.

THAT SAID, I've been using this for months and it's been performing extremely well.


I used it and then immediately stopped using it. The experience is even worse than just copy pasting code into chatgpt on your own.


I completely agree; they just developed a UI for ChatGPT, that's all.


Would love to see an unbiased direct comparison between GitHub Copilot and Copilot Chat.

How do folks handle competing "helpers"? I've found I need to turn one on at a time. Otherwise, it's distracting.


One thing to note is Copilot Chat does not work in JetBrains IDEs, yet. You have to use a text editor like Visual Studio Code instead.


It's in beta, and from the screens I see on the website it's a very similar look and feel.


I have copilot chat in Jetbrains. To be honest it's not great. For example, it seems like the "simplify this code" button really just removes all the code comments.

I get much better results by copy-pasting into ChatGPT4, or by just tabbing through the regular copilot autocomplete in a code file.


I got approved for the beta yesterday. So far I’m impressed with Copilot Chat in Webstorm but I’ve yet to use it with a complex project.


You can apply to get beta access to Chat. It took about a week to get approval for me (I did it last month).


I think the time it takes to get access varies wildly. I signed up for the waitlist for copilot chat in intellij as soon as it became available, and still haven't gotten access. Just cancelled my subscription and will be moving to jetbrains ai, assuming the trial experience isn't too-too horrible.


I have Copilot Chat in IntelliJ IDEA but I haven't really used it yet.


It's in beta, it turned on for me yesterday.


How did you know it was turned on? I applied weeks ago and haven’t heard anything.


The chat window literally just went from “sign up for the waitlist” to fully functional. I think I got an email too.


You receive an email when you do.


I should add that I pay for GitHub Copilot and I'm generally pretty happy with it.

Related to using AI to help with coding: One thing that's been annoying while trying to use ChatGPT is it tends to use deprecated and/or outdated code if I instruct it to write something using a specific library/framework.

Is JetBrains AI smart enough to suggest code based on the specific library versions of the project?


I've cancelled Copilot due to subscription fatigue and quickly found I can't live without one. Tried Cody and it's buggy af and just not as useful.

There's no going back from Copilot, but keen to try JB's AI. Seems one can't get a trial going using student licence tho.


Here is the blog post with further details: https://blog.jetbrains.com/blog/2023/12/06/introducing-jetbr...


> We encourage you to download the 2023.3 version of your go-to JetBrains IDE, open the AI Assistant tool window, log in with your JetBrains Account, and give the new functionality a try.

Oh I would love to. Upgraded RubyMine to 2023.3, subscribed to AI on their website. Clicking "Get Started" in the login window just closes it and nothing happens :D

I'll be happy to ditch GitHub Copilot if I get this to work. Sending money to JetBrains feels better than to Microsoft.


I had the same thing. Resetting my password on JetBrains' website and in the IDE sorted me out.


Yep, thanks, one of those did the trick. (For people wondering, you log out of the IDE on the initial screen when you don't have any projects open.)


Don't worry, it'll get to OpenAI^H^H^H^H^H^HMicrosoft soon enough via the API.


Kinda lame this isn't included in the "All Products Pack"


> Kinda lame this isn't included in the "All Products Pack"

This is why the ownership model of software is so much better than some SASS offering. Modern SASS solutions are just about incrementally extorting more revenue from customers after locking them into a platform, as they’re driven to chase new “features”… And this is an IDE of all things…


> This is why the ownership model of software is so much better than some SASS offering. Modern SASS solutions are just about incrementally extorting more revenue from customers after locking them into a platform, as they’re driven to chase new “features”… And this is an IDE of all things…

JetBrains’ pricing is both SaaS and ownership: you can continue to pay to get upgrades, or you can stop after the first year and keep the IDE as-is.


Nitpick : I think you mean SAAS and not SASS.


> Nitpick : I think you mean SAAS and not SASS.

You’re correct, thanks for pointing out the typo.


The all products pack becomes less appealing every year somehow...


Agreed! It would be fantastic if there was a discount for folks who already have an "all products pack."


I love the idea of an AI assistant for coding -- the ability to check references on the fly, and not having to sift through Google and StackOverflow, which aren't that useful these days, is great. BUT at this price, we need proof this is better than what we have. A week isn't enough. I'm not asking JetBrains to give it to us for free as part of the package, but I could see 60 days of unlimited use. It needs to be used in a large project, not just played with. It's important to remember that JetBrains already has issues with the cloud-based IDEs now -- not everything is on the desktop. If they want me to keep paying every year, I need a strong reason to pay more than I do now.


From their blog post:

AI Assistant is currently powered by OpenAI and by our own models.

We are also working on integrating Google LLMs, and they will be available very soon.


Who owns the source code if AI is helping you write it? Is this a concern for companies that are copyrighting their code? What if multiple companies use this AI tool to implement an algorithm and then copyright it?


IIRC, (US) copyright would be assigned to the person who used the tool*. Just like how your keyboard doesn't own the copyright to what you write with it.

* Existing contracts with copyright assignment clauses notwithstanding


At what point does a keyboard become aware enough to make the work not copyrightable? If I instructed a monkey to press enter and create a poem with ChatGPT, that work wouldn't be attributed to me or anybody.

How come a monkey makes it ineligible but GenAI doesn't?

https://en.wikipedia.org/wiki/Monkey_selfie_copyright_disput...


I don't believe there's settled case law to answer that question. Anyone's answer is speculation at this point until it actually gets litigated.


In a recent copyright application, the US Copyright Office wouldn't let me copyright my app's translations because I used Google Translate to help. I argued that I had created the original documentation (which they allowed) and that I had used Google Translate as a tool, typically doing several "round-trip" iterations to get the meaning across, but no joy. Here's an article discussing the current state of copyright law with respect to AI in the US: https://builtin.com/artificial-intelligence/ai-copyright To say that the law is unsettled is fair, but the US Copyright Office has a definite point of view, given existing law.


I may not have read the release notes etc. properly, but I'd been using this with the most recent updates of CLion and assumed it was just a new feature being added.

I recently renewed my subscription in part because this looked interesting. I kind of regret that now.

I had assumed that the subscription entitled you to all of CLion and the other IDEs, not that parts of them and other features would be an additional charge.


Also, Jetbrains does not indicate if I can build my own "private AI" with just my materials -- companies are concerned about what the AI knows and doesn't know, and where it gets it -- can Jetbrains AI be constrained to just a defined corpus?


> We take data and code security seriously! Our products do not send more data to the LLMs than needed. Neither we nor our service providers use your data or code for training any generative models. For stricter requirements, we will make it possible for you to use your preferred on-premises models (coming soon) and connect them to the JetBrains AI service and the JetBrains products that your team uses.

Until they provide the "coming soon" features I can't imagine most companies will enable this. This feels like an insufficient guarantee: Neither we nor our N partners, who knows how we keep them accountable, will use your private source code and other data sent as context. Pinky promise.


That's the same as what Copilot does.


I don't use Copilot and know a handful of Fortune 50 companies that won't use it either. Developers do silly things like hard coding credentials when they're testing, as a very basic example, and it's hard to imagine how this is safe.


Sidenote, are there any videos of a real world developer just programming real stuff, and using AI integrations like these sprinkled in? I'd be really curious to see how someone uses this.

I use GPT/etc. (currently through Kagi's Ultimate), so I have long seen value in them, but I've not yet figured out how to bridge the gap of integrating it directly into my code. Too often I feel like I'm writing logic, and I don't trust LLMs to be accurate enough there. Sometimes I'm doing weird refactors, so maybe that is the place?

Alas, my editor of choice (Helix) doesn't support this, so I'm sort of stuck wondering. Would love to see it in real-world action.


I use GitHub Copilot IntelliJ plugin every day and would not consider going back to the pre-LLM state of affairs.

I don't use it to generate entire classes or tests, because it doesn't work well for that, and definitely not for big refactors, but it works well as a "super autocomplete" for 1-3 lines of code at a time.


It would be very impressive if Jetbrains tuned their assistant to generate structural search and replace[1] parameters. It's a very powerful tool, but it takes me way too long to figure out the syntax & API. But it could make big refactors much more practical and less risky than a LLM might be.

[1]: https://www.jetbrains.com/help/idea/structural-search-and-re...


This sounds like a great idea! You can upvote an issue for it here:

LLM-1728 Structural search/replace integration for AI Assistant https://youtrack.jetbrains.com/issue/LLM-1728


Andreas Kling uses copilot when working on SerenityOS, and it does pretty good IMO.

His YT channel: https://youtube.com/c/andreaskling


I joined the beta and have been using this for the past few weeks. Cancelled my co-pilot subscription. If you use Jetbrains stuff, the integration is more seamless. The chat functionality alone is worth the switch.


I've been trying it out on a small project for the last few hours and I am quite enjoying it! I tried copilot and was very disappointed.

I think this needs some fine-tuning (it keeps trying to remove my comments), but other than that it's probably the only tool I've felt comfortable with, other than copy-pasting from ChatGPT, which obviously doesn't include diffs or the context of the project.

Yeah it's a shame it's not included in the all products pack, but for the amount of power I get for my buck, I'm pretty happy with JetBrains overall!


I love the code completion from GitHub's Co-Pilot, but every time I use the chat feature to replace existing code, I feel like the UX is terrible.


Hmm... I was kind of hoping the AI integrations would be generalized in the UI, like how the services/problems/etc. tool windows can be populated by any plugin. Seems like that's not the case; they have to "partner" with you before you can slot into the same AI tool windows. :( Doesn't feel very much like JetBrains.


On mobile this is just a table of contents page.


Any details on IP indemnity?

I have not really used Github Copilot out of concerns with IP; however, they recently added an IP indemnity clause that might have me take the leap. I use Jetbrains products (RubyMine) and would love to give this a shot instead.


"One can block AI features for the project by creating a file with the name '.noai' in the root of the repository."

Is this a standard among vendors yet, or are we veering into more dotfile project directory pollution?
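For what it's worth, the opt-out quoted above is just an empty marker file at the repository root. A minimal sketch (Python for illustration; `touch .noai` in a shell does the same, and the filename is taken from the quote, not independently verified):

```python
from pathlib import Path

# Create the documented opt-out marker: an empty ".noai" file
# at the repository root. exist_ok-style semantics: touch() is a
# no-op if the file already exists.
Path(".noai").touch()

print(Path(".noai").is_file())  # → True
```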


I would have appreciated a local LLM integration as a free feature of the IDE. As it stands, I'm more likely right now to just use GitHub Copilot than to use JetBrains AI if I want a subscription billed remote LLM.


From the FAQ, it states:

> Why is AI Assistant not available in some countries? Access to the JetBrains AI service is currently restricted to the territories where the OpenAI service is available. You can check the full list of territories here.

But earlier they say that they are using Codey and Vertex AI from Google:

> We are excited to partner with JetBrains and provide our advanced coding models for use in JetBrains AI. By integrating with Codey and Vertex AI, JetBrains can significantly improve developer experiences with AI-powered code completion, debugging, and generative explanations to accelerate every stage of the software development lifecycle.

And they also include another quote that mentions using OpenAI models:

> It’s remarkable to see JetBrains integrate the power of OpenAI models into the daily workflow of developers. By Infusing JetBrains’ AI Assistant with our models’ advanced reasoning capabilities, developer productivity can be greatly enhanced across a range of tasks such as code comprehension and authoring.

So are they combining the models, or have they fine-tuned something out of them specifically for their products? Who knows.


On the FAQ you reference, there is one dedicated to exactly that question of which LLM it uses: "We’re aiming to support the models that best suit the needs of our users. Plus, the AI market is evolving at a rapid pace. Our customers don't want to be tied long-term to a single provider. Currently, most AI Assistant features use OpenAI, but we’re preparing to release new code completion models that are trained by JetBrains. There is also an ongoing track with other providers (e.g. Google and others) regarding their models. Still, for the majority of use cases, OpenAI is our current LLM provider. For the on-premises scenario, we will serve the provider included in the cloud platform (AWS Anthropic, Google PaLm 2, or Azure OpenAI)."


Either I missed that badly or it was added after I read the post (unlikely, though), but thanks for clarifying that for me.


"There's something so human about taking something great and ruining it a little so you can have more of it."


    Does AI Assistant send data and code fragments from my IDE?

    When you use AI-powered features, the IDE needs to send your requests and code
    to the LLM provider. In addition to the prompts you type, the IDE may send additional
    details, such as pieces of your code, file types, frameworks used, and other
    information that may be necessary for providing the LLM with sufficient context.
That'll eliminate its use from a lot of environments.


That isn't news, that's how all code-completion LLMs work. Unless you're running the LLM locally, in which case the difference is that you are the "LLM provider".


> Besides yourself, who knows your project best? Your IDE!

This statement is a bit creepy.


Could you elaborate on why? You open the project, and it is scanned and indexed. It seems logical and obvious that the IDE can suggest a relevant completion based on the whole project structure. Thanks!


Of course, knowing is different from indexing, and they are routing your code to an LLM, so data exfiltration is also involved. I can summarize my project in two minutes to a collaborator, and they will have more knowledge of it than any indexing mechanism. The underlying connotation when you use the word "knows" is that you've shared it with actual people, not a vector database. And that is creepy. Not sure why I would have to explain any of this.



