Hacker News
JetBrains AI Launch Event [video] (youtube.com)
58 points by ymolodtsov on Dec 6, 2023 | hide | past | favorite | 21 comments


I've been using it for a while. It's just GPT-4. The integration with the IDE is OK: it reads a combination of your code and the symbol table and can figure out common runtime errors. It's good for explaining things. Code generation is kind of blah - good for putting up a skeleton or implementing against a library that's completely unfamiliar. I try out a lot of things with it that I wouldn't have otherwise. On the other hand, refining code is a pain: the more specific you get about changes, the greater the likelihood that it decides to 'focus' by deleting the rest of your code and leaving you with only the function you were working on.

Once you hit the conversation size limit everything stops and you have to start over with a new chat, meaning you have to reconstruct your prompts over again or have it read more code. Getting it to reveal its system prompt shows that it's just 'an expert [language] programmer working on a project including (~100 common libraries)'.

I like having it available, and I think JetBrains has done a good job with it. But it also feels like it's taking resources away from conventional UI improvements and bug fixes. It will be better once they have other options besides GPT-4. It does not feel multimodal, and like most transformers it suffers from the problem of guessing instead of asking, often ignoring inconvenient or unintuitive instructions.


Skipped to a random timestamp to see how it performed, and at 15:35, the AI explains that "C# does not support array or list initialization using brackets directly," a cleanup which, seconds previously, ReSharper had suggested to the user. The presenters were too busy gushing about what a useful educational tool the AI is to notice the error.
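For reference, the bracket syntax at issue is the C# 12 collection-expression feature; a minimal sketch (variable names are just illustrative):

```csharp
// C# 12 (shipped with .NET 8) collection expressions:
// brackets DO initialize arrays and lists directly.
int[] numbers = [1, 2, 3];
List<string> names = ["Ada", "Grace"];
int[] extended = [..numbers, 4]; // spread element: [1, 2, 3, 4]
```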


This happens distressingly often in LLM demos: significant errors get papered over because people are so enthralled by the "look Ma, no hands!" effect. Just got out of a work meeting where the exact same thing happened with an internal ChatGPT integration.


To be fair, that's a C# 12 feature that came out three weeks ago.


Then the fact that Resharper already supports it, while we have to make excuses for the AI Assistant, is very illustrative.


What does it illustrate? The time that it takes to train a model?


That they're more concerned with flashy demos and selling something that doesn't work as expected than with taking the time to train the models on up-to-date info?

Perhaps it illustrates they have people giving demos that don't actually understand or notice obvious discrepancies? This doesn't inspire confidence.

Yes, this feature's only been 'out' for 3 weeks, but obviously the ReSharper team knew about it ahead of time and had a definition of the feature to develop against. Are we saying the AI can only be trained on 'released' stuff, but other tooling can work against upcoming specs?


Do these models have any ability to learn? If it is wrong about something, can the user correct it or ask it to read some release notes and have it integrate the new information going forward?


Can't speak for this model, but I know from personal experience with GPT-3.5-16k and GPT-4 that if you get outdated code being generated, you can just include up-to-date documentation for the relevant libraries and it gives you good, current code. Another example: telling it that Python now supports match case, and showing it the expected syntax. It can then use it perfectly (though the current GPT-line models know about match case, so this is no longer an issue).

I wouldn't call this learning, any more than my knowing how to use a + screwdriver allows me to use a - screwdriver as well, but others might.


> The presenters were too busy gushing about what a useful educational tool the AI is to notice the error.

Having worked a bit in the education field, I think it's worrying that errors from AI applications aren't taken very seriously. Errors in education are more serious than elsewhere because they compound easily: a person can keep a wrong idea in mind for years and spread it to others. A lie may even become accepted as truth because very few individuals check sources.

We have the entire bullshit field of AI ethics already; it could start being half-useful if the problem of AI applications in education was taken under that umbrella.


Agree it's a bit of a miss, but most of the battle with an LLM is pulling in the right context, so I'd imagine they could include ReSharper's output in the context. Seems like something they could iterate on.


I wonder if they have plans to allow using a locally hosted LLM?


There's an open-source IntelliJ plugin, https://github.com/continuedev/continue, that lets you do this. It supports a couple of different providers and models, e.g. LocalAI with Code Llama.


Yesterday I started exploring CodeGPT. It lets you download and run a local model via llama.cpp, and it's working fine for me so far, at least with the DeepSeek 6.7B model: https://plugins.jetbrains.com/plugin/21056-codegpt


This is the first thing I checked on the announcement page; no mention, though :/ https://www.jetbrains.com/ai/


Question is: does my code end up somewhere it should not?

It's a bit tricky with AI tools when working on a proprietary codebase which is absolutely not allowed to leave the premises.


I'm interested to hear what people who have used both it and Copilot think. I tried it a tiny bit, but it fought with Copilot (unsurprisingly) and I ran into a bug in the EAP version of my IDE (which was required to use the AI), so I wasn't able to give it a fair shake.

I've been very happy with Copilot, but this integrates much more deeply into my IDE of choice, so I'm tempted to switch to it.


I’ve been using Copilot in JetBrains IDEs for a long time and it’s been working fine. I don’t have VS Code to compare but, just suggesting completions, I can’t imagine it would be much better.

I’ve also used Copilot in neovim and emacs (unofficial plugin) and they are a bit worse. The main functionality is the same, they just aren’t integrated super well since this is modular software (specifically, there are a couple bugs, and I suspect that the completions are slightly worse because they lack context). In JetBrains IDEs, Copilot is integrated well.


I started using it to learn ZIO 2 about 1.5 months ago. At first it was nerve-wracking, since it would only respond with ZIO 1 code. They must have caught me swearing at it relentlessly. I'm getting better results now ;)


Does anyone know of an easy-to-set-up text editor integration for locally hosted inline LLM code completion/function calling?




