It's this near-100% exact matching that builds rapid muscle memory: you learn to trust the IDE. Non-deterministic algorithms with, say, a 1% failure rate tend to be annoying.
Right, but because VS IntelliCode does not work purely on the AST, that becomes its differentiating advantage. It's not a better kind of autocomplete, but a different one that works for some scenarios. Because IntelliCode is not deterministic, it is not meant to build muscle memory the way vim, emacs, etc. do.
I also see some comments in this thread wondering how it compares to Resharper so here's how I'd differentiate them:
Most previous autocomplete, "intellisense"-type algorithms work by parsing the AST of the project's source code. The lookup database is therefore scoped to the internal world of that project. IntelliCode parses other people's source code (e.g. top GitHub repos) out in the external world.
What Intellicode does is use crowd-sourced data to create a different set of autocomplete suggestions. With that strategy, when one types the dot character "." to trigger a context menu for autocompletion, it's sort of a miniature "pagerank" of the most likely methods or properties instead of a "dumb" alphabetical order.
A concrete example in C# might be the autocomplete of a DateTime type. There are 2 similar properties: ".Now" and ".UtcNow".
What people probably want in many situations, to avoid DST bugs, is ".UtcNow" instead of ".Now". But in a regular IDE, dumb alphabetical ordering means "N" shows up before "U". Therefore, newbies inadvertently end up with "DateTime.Now" in their code instead of "DateTime.UtcNow".
Something like Intellicode would use machine learning that analyzes the top Github repos and therefore reorder things so ".UtcNow" is higher than ".Now". Of course, this re-ordering could have been manually curated by C# experts into a "best practices database" but Intellicode's idea was to approximate that by using AI applied to others' source code.
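To make the reordering idea concrete, here's a minimal Python sketch of frequency-weighted ranking versus plain alphabetical ordering. The member names mirror the C# DateTime example, but the usage counts are made up for illustration; this is not IntelliCode's actual model, just the shape of the idea.

```python
# Sketch: crowd-ranked completion vs. "dumb" alphabetical completion.
# The usage counts below are invented for illustration; a real system
# would mine them from a large corpus (e.g. popular GitHub repos).

members = ["Now", "UtcNow", "Today", "MaxValue", "MinValue"]

# Hypothetical counts of how often each member appears in mined code.
usage_counts = {"Now": 3200, "UtcNow": 5400, "Today": 900,
                "MaxValue": 150, "MinValue": 120}

def alphabetical(candidates):
    """The 'dumb' ordering: 'Now' sorts before 'UtcNow'."""
    return sorted(candidates)

def crowd_ranked(candidates, counts):
    """Rank by mined popularity, falling back to alphabetical for ties."""
    return sorted(candidates, key=lambda m: (-counts.get(m, 0), m))

print(alphabetical(members))                # ['MaxValue', 'MinValue', 'Now', 'Today', 'UtcNow']
print(crowd_ranked(members, usage_counts))  # ['UtcNow', 'Now', 'Today', 'MaxValue', 'MinValue']
```

The interesting property is that the ranking reflects what the wider world actually writes, not what the alphabet happens to dictate.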
VS2017 screenshot example: https://imgur.com/a/05d9Ij9
Eh, isn't the whole point that intellicode is trained on your own project? Mine is, it suggests things based on our usage.
Intellicode may also be trained on your own project but that's definitely not its whole point.
Here's an excerpt from https://visualstudio.microsoft.com/services/intellicode/:
>"IntelliCode recommendations are based on thousands of open source projects on GitHub each with over 100 stars."
Perhaps you were thinking of "IntelliSense" instead of "IntelliCode"? Yes, the older IntelliSENSE could parse your custom classes and types and provide autocomplete for them. IntelliCODE is a different algorithm. (I think Microsoft's naming for them is not self-descriptive and confusing.)
I've only ever noticed the "starred" recommendations on our own methods, and it's incredible.
`class ComponentName extends React.|`
Reliability and predictability matter a lot in interface design.
Perfectly said. I've always thought the same way. Basically, I'd rather not use a "pretty good" autocomplete, because "pretty good" usually means it does more harm than good (where harm is providing wrong results or doing anything that slows me down overall).
Whoever is promoting this needs to give better examples...
I guess everybody is wondering why this even exists. Sounds like a typical "solution in search of a problem" to me, and it probably only got green-lit because it mentioned "AI" in the pitch deck ;)
Other than that: this looks absolutely horrible. I don't want AI to make suggestions. I know which methods I want, and which names I want to give stuff.
Wouldn’t better suggestions be helpful for those learning a language, any language?
I could learn C# or F# in half the time, for example.
This might not be the tool, but I can imagine AI being extremely useful in software development.
For example, imagine you have an API that Foos Stuff. The initial version could have been FooStuff, but after 10 years of active use it turned out the approach was slow for modern use cases, and the only way to improve it was to change the API to BeginStuffFooing, ProcessStuffFooing, and FinishStuffFooing while keeping the original FooStuff for backwards compatibility.
After those 10 years of use, an AI-driven approach that data-mines existing codebases (especially popular codebases, which are more likely to be mature/old) will be more likely to suggest FooStuff than Begin/Process/FinishStuffFooing, so people will be steered towards FooStuff instead of the better approach.
Something that is driven explicitly by the API designer can start recommending Begin/Process/FinishStuffFooing from the moment it is available and recommend against introducing FooStuff in new code (it could even show a tooltip or whatever with a "here is how to use the new approach if you already know the old approach" link).
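A minimal Python sketch of that designer-driven idea: the API author annotates the old entry point as deprecated, and the completion list demotes it immediately, without waiting for mined usage statistics to catch up. The names are the hypothetical Foo/Stuff ones from the comment above, not any real API.

```python
# Sketch: designer-driven completion ranking. The API author marks
# FooStuff deprecated with a migration hint; the completion list then
# demotes it below the recommended replacements right away.

api_members = {
    "FooStuff": {"deprecated": True,
                 "hint": "prefer Begin/Process/FinishStuffFooing"},
    "BeginStuffFooing": {"deprecated": False},
    "ProcessStuffFooing": {"deprecated": False},
    "FinishStuffFooing": {"deprecated": False},
}

def suggest(members):
    """Non-deprecated names first (alphabetical within each group),
    deprecated ones last, each paired with the author's hint if any."""
    ranked = sorted(members, key=lambda n: (members[n]["deprecated"], n))
    return [(name, members[name].get("hint")) for name in ranked]

for name, hint in suggest(api_members):
    print(name, "--", hint or "ok")
```

The point is that this signal is available on day one of the new API, which pure corpus mining cannot match.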
It mines your own codebase, not public ones.
Yes. It almost serves as a pop-up mini-manual, e.g. for all of the valid methods on a class instance whose name you have just typed. Good for validation but also discovery.
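In Python terms, that "mini-manual" is roughly what introspection with `dir()` gives you on an instance, just surfaced automatically by the IDE. A small sketch with a made-up class:

```python
# The "pop-up mini-manual" idea, approximated in plain Python: list the
# valid public methods on an instance you just typed. Useful both for
# validating a guess and for discovering what's available.

class Invoice:          # hypothetical class for illustration
    def add_line(self, item, price): ...
    def total(self): ...
    def send(self, email): ...

inv = Invoice()
public_methods = [name for name in dir(inv)
                  if not name.startswith("_") and callable(getattr(inv, name))]
print(public_methods)  # ['add_line', 'send', 'total']
```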
I have nothing against content marketing, but please don't ruin the reading experience.
For people who have been trying IntelliCode for a while: does it help much with development? Does it have the ability to suggest larger code snippets, or only word by word?
nope. Who picked this shortcut?
(Also, it's built on "AI" techniques like convolutional neural networks rather than pure usage-statistics databases, so the performance characteristics are also much better than the old Visual Studio usage databases, especially on machines that can hardware-accelerate it, which on Windows 10 supposedly includes most GPUs these days.)
TabNine is actually the only thing that I could see myself switching to.
And it always, always, ALWAYS puts any word it knows/guesses that starts with the characters I type at the front, before making any other suggestion.
And all that at practically instant, interactive speeds. How Visual Studio manages to take ages (i.e. often several seconds) to complete a word I already typed a few moments ago on a 4.2GHz 7700K machine with 32GB of RAM, while Lazarus happily completes words even on a puny Raspberry Pi, I don't know.
Anyone who writes IDEs should work for at least 6 months with Lazarus to understand how to do things right. If their IDE still provides a worse editing experience after that, they are doing something very wrong (I'm not expecting the entire WYSIWYG GUI and object design, as that isn't something everyone cares about, although it'd be nice). I have issues with Lazarus's overall UX being very messy, but the editing experience is the best I've seen.
Could you give an example of a situation where it's completely missing the point?
VS Code's default completion is strictly buffer-based, so I'm not sure what you mean there either.
The screenshot I posted a while back showed me exporting something at the end of the file. I had typed `Det`, at which point it suggested some random functions from random APIs. Then I had `Detail`, at which point no suggestions were offered. At `Detaile` it finally gave me `DetailedProgress` from the same file, about 50 lines higher up.
It's a recurring theme that it takes this long to suggest mere names (~ strings).
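What I'd expect instead is plain prefix-first completion over the words already in the buffer. A rough Python sketch of that behavior (not any real editor's implementation), using the `Det`/`DetailedProgress` scenario from above:

```python
# Sketch: prefix-first buffer-word completion. Collect every
# identifier-like word already in the buffer, then rank exact-prefix
# matches ahead of everything else. Behavior only; not how any
# particular editor actually implements it.
import re

def complete(buffer_text, typed):
    words = sorted(set(re.findall(r"[A-Za-z_]\w+", buffer_text)))
    prefix = [w for w in words if w.startswith(typed) and w != typed]
    others = [w for w in words if not w.startswith(typed)]
    return prefix + others  # prefix matches always come first

buffer_text = "DetailedProgress = compute(); export Details; other_name = 1"
print(complete(buffer_text, "Det"))
# DetailedProgress and Details come before compute/export/other_name
```

With `Det` typed, `DetailedProgress` should be the first suggestion, instantly, because it's already sitting in the buffer.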
I've written a language server extension for VS Code (not for C#), and the completion lists I provide are better than the ones I get for C#/F# while writing the server. It's bad :/