Visual Studio Code for Go (github.com)
468 points by rdudekul on Mar 3, 2016 | 218 comments

We're a 100% Go shop and everyone on my team but me uses Visual Studio Code for Go. It's really amazing. (My mind has just been so corrupted by years of vim usage I'm trapped.)

It's bizarre to think my team writes in a language created by Google in an editor created by Microsoft on System76 laptops running Ubuntu. Never would have been possible in the Gates or Ballmer eras.

VS.Code + the go plugin is quite good. I recommend it to my team members who prefer GUI-based editors. VS.Code is surprisingly much faster than Atom even though they share the same electron editor.

I think vim + vim-go is still the best go editor if you know vim. It just works and it gets updated frequently.

There's no such thing as "the Electron editor." Electron is just a framework for making desktop apps using the client-side layers of the web stack. It's basically little more than a glorified Chromium install, minus the browser chrome.

The editors in VSCode and Atom are completely different implementations.

I use JetBrains' IDEA with a Go plugin and it fits all my needs. Do you know how it compares to VS? I never got into VS myself and I'm hesitant to swap IDEs, though I'd be willing to try if it's a significant improvement.

I've switched from PyCharm (based on the same IntelliJ Platform as IDEA) to VS Code, and it's so much faster to work with. That said, I've found that it's not nearly as good at all the code intelligence stuff as PyCharm is.

I've never used IntelliJ's Go tools, but VS Code is much better at that stuff in Go than Python. (I believe it uses third-party open-source tools in both cases; code intelligence for a statically typed language like Go is probably much simpler which probably helps.)

The best way to look at VS Code is as an IDE minus the 'integrated' bit - instead of having the core editor and UI bundled in with all the code intelligence stuff for a specific set of languages, it's a text editor with a code intelligence and debugger UI, that calls out to out-of-process plugins to do the heavy lifting.

Dynamic languages make good tooling very difficult. It makes it painful to transition from Go for hobbies to Python for work. :(

Most Go-related plugins are reaching parity, in that they mostly use the same background services and utilities to update the editor. My guess is JetBrains' IDE has roughly the same features. FWIW, VS.code starts up much faster than WebStorm on my machine.

Good to know. When I switched from windows to ubuntu I saw WebStorm start-up improve 2-3x. Haven't tried VS.code yet but I've found JetBrains products to run very well on linux machines.

This is my setup of choice. It's absolutely fantastic.

> much faster than Atom

uggh! yes :) I tried checking out atom and installed the facebook extensions to see what the fuss was all about. It was so slow I could only successfully open it 50% of the time.

opens 50% of the time, every time

You can add an extra bit :

> a language created by Google

by guys using Macintosh laptops

> in an editor created by Microsoft

At a plan9 conference when people turned up with Mac laptops, Brucee (who wrote considerable portions of Go's ancestors - Inferno & Limbo) remarked to me : "the interesting thing about the Mac was that it wasn't X86 and it wasn't Unix ... now it's both!".

Does VS Code still have the phone-home feature that can't be turned off? (No, it's not okay to snoop on my data or usage on my computer!)

Only by manually editing internal program files using the command line. Also,

> You will need to apply these changes after every update to disable collection of usage data. These changes do not survive product updates.

The Feb 2016 release has a new setting that (I believe) persists between updates:
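If I remember right, it's the telemetry flags in settings.json, something like this (setting names from memory, so double-check against your release):

```json
{
    // Assumed setting name; disables usage-data reporting.
    "telemetry.enableTelemetry": false,
    // Crash reporting had a separate flag around that time.
    "telemetry.enableCrashReporter": false
}
```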



If you build from source, the OSS build of Code will never connect to those endpoints.

Alternatively, you can just edit them out of the product.json from the release build that you download from MS.

For this extension to work, you need to install several Go tools in your GOPATH. How do you manage your GOPATH across multiple projects? Do you have a single GOPATH for every project, or does each project have its own GOPATH?

I set my GOPATH to my home dir, and only use a single GOPATH for all projects. This way things are where you might expect relative to your home dir, if you're thinking in FHS terms (https://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard). For example, executables end up in ~/bin; everything else is in ~/src.

Never had any issues with this setup.
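For concreteness, that setup amounts to something like this in a shell profile (a sketch, assuming a POSIX shell):

```shell
# Single GOPATH rooted at $HOME, as described above.
export GOPATH="$HOME"

# The standard workspace directories; `go get` and `go install`
# populate src/, pkg/, and bin/ under $GOPATH.
mkdir -p "$HOME/src" "$HOME/pkg" "$HOME/bin"

# Installed binaries land in ~/bin, so put it on PATH once.
export PATH="$PATH:$HOME/bin"
```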

I have a single $GOPATH per project and always recommend doing so. If you vendor code with a tool like godep[0], having one shared $GOPATH is a nightmare. Other tools might work better, but this is basically the standard for the projects I work on. I basically just have a zsh alias[1] that combines z[2] and sets the $GOPATH to $PWD split on "src". I do share a single $GOBIN set to $HOME/bin, though.

[0]: https://github.com/tools/godep

[1]: https://github.com/jzelinskie/dotfiles/blob/04ad026f30782189...

[2]: https://github.com/rupa/z
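The "$GOPATH from $PWD split on src" idea is roughly this (function name is mine, not the actual alias from the dotfiles):

```shell
# Hypothetical sketch: derive a per-project GOPATH from the current
# directory by taking everything before the first "/src/" component.
set_gopath_here() {
  case "$PWD" in
    */src/*)
      export GOPATH="${PWD%%/src/*}"   # /code/myproj/src/a/b -> /code/myproj
      export GOBIN="$HOME/bin"         # shared bin dir, as mentioned above
      ;;
    *)
      echo "not inside a src/ tree" >&2
      return 1
      ;;
  esac
}
```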

Using govendor with my stuff and a shared GOPATH. I haven't noticed an issue, I wonder what makes it a nightmare for you?

I have a similar setup, but I actually alias the go command itself so I don't have to type anything special.

At work we have a monorepo and therefore a single GOPATH (we use a tool called glock for dependency pinning).

A monorepo is a fantastic way to live, but I totally understand that it doesn't work for everyone.

> Do you have a single GOPATH for every project or each project has its own GOPATH?

Yes, a single GOPATH/workspace for all projects, as it is described here: https://golang.org/doc/code.html#Overview

Gaah, I thought this would provide me with some motivation to try writing some Go code, but I can't even get started with installing the analysis tools - cannot find package "golang.org/x/tools/go/types". Anyway, I'll debug that one later, but if anyone has a hint, go ahead.

If the automated install of the analysis tools doesn't work, you can always just manually install them into your GOPATH. See https://github.com/Microsoft/vscode-go#tools.

Thanks, but manual install fails with same error for golint and goreturns.

I ended up git cloning into my %GOPATH%\src\golang.org\x from https://go.googlesource.com/tools and https://go.googlesource.com/net - that seems to have fixed it.

To be fair (and I can't believe I'm saying this), Bill Gates is spending a third of his time punching the clock at Microsoft, advising Nadella. It may not have been technically possible in the Gates era for this to happen, but I certainly agree with you on the Ballmer point. Anyways, I've been wondering how much of this Gates is responsible for.

Which System76 laptops? I'm in the market for a Linux-native laptop which'll compile faster than my MBP.

Not the same person as parent, but I wouldn't recommend System 76 personally. I got caught up in the hype and got a Lemur 14", but the build quality is pretty poor and I had awful keyboard issues (since fixed, I believe). If you want something that runs Linux well, I have stuck with ThinkPads. You really can't beat them in terms of performance, reliability, and Linux support.

We have ~2-year-old Gazelles and the build quality is so bad. The specs are great, but the cases are creaky cheap plastic, the screens scratch/fade easily, the door over the ethernet port breaks easily, and the keyboard and trackpad are subpar.

By the numbers I'd go for System 76 again, but if the chassis isn't any better these days I'd probably go back to Lenovo.

Thinkpenguin makes very good stuff. I don't have any experience with the current models, but in terms of build quality and bang for the buck, I think they blow System 76 away.

Admittedly, I use a Thinkpad t430 and a MacBook Air myself, but if I didn't have more laptops than I knew what to do with, I'd be considering Thinkpenguin (though, I'm of the same mind as Linus Torvalds, http://www.cultofmac.com/162823/linux-creator-linus-torvalds... , so I'd be likely to get a refurb'd MacBook Air).

Edit: link to Thinkpenguin site: https://www.thinkpenguin.com/

I'm still on a T420s. Other than the rather crap panel it is a great machine still. I have been holding out for a decent fanless machine. Something with the same sort of power as a 3rd or 4th gen i5/7, M.2 SSD, 1080p or better 14" IPS, etc.

They are getting very close. I work in very quiet environments and I hate fan noise. Also no moving parts!

Thinkpads ceased to be Thinkpads in all but name, ever since Lenovo acquired the division. Unless you are talking about old IBM Thinkpads?

I kind of agree; they are not as awesome as they used to be, but I don't know if that is totally Lenovo's fault. Back when ThinkPads were an IBM product, laptops were big and chunky; then we had ultrabooks and Lenovo had to compete. I believe IBM would have had to compete in the same way, just like Dell and HP have had to. The Latitude series isn't as great as it was 5+ years ago. As they get thinner and lighter with fewer removable components, they become less end-user friendly.

I recently got a ThinkPad L440. I ended up choosing the bad wifi chipset (I did some research first, but nowhere could I have read about that issue), and that chipset is not supported.

I installed a backported driver, which seems to cut out at times.

I don't think there are decent brands for linux, or it might require deeper pockets. That's the cost of not working with microsoft.

My main machine is still a T420s and it runs every Linux version I throw at it without fail. I haven't heard any bad things about more modern T series (or W series) with Linux; not sure about the L series though, sorry.

I bought a System76 years ago. It was great as long as I was running Ubuntu, but requires proprietary drivers which have been extremely flaky under Debian. Now I recommend zareason to folks who ask. Typing this comment on a zareason system right now, and I'm extremely happy with it.

Yeah it looks like the consensus is that the System 76s aren't worth it. I'll look into Zareason and Thinkpenguin, thank you very much!

I'm using a very old white MacBook (MB 5,2 model) running Ubuntu 14.04 natively, with no OS X on it [0]. Everything runs great, but you need to install the driver for iSight (easy to do).

I'm very happy with the setup.

Something worth considering :)

[0] http://bit.ly/1Quv2uY

Getting 16gb of RAM in a laptop (without having to add any yourself) is fantastic. As I said below, by the numbers System76s are fantastic.

Just make sure their chassis/build quality is better than it was a couple years ago. Otherwise in a year you'll have broken bits and a faded scratched screen.

I have the puri.sm librem 13" and really like it. The only issue I have so far is that I have to build a custom kernel if I want to use the more advanced touchpad features on a stock debian install.

I want to start using Visual Studio Code, but I am waiting for a vim emulator plugin that is "good enough". Have you tried any of them?

While I'm far from a vim pro, I've never found a vim plugin that didn't frustrate me. Hence I've been stuck in vim because my brain just can't get used to non-modal editing.

Just heard from a coworker after I posted this that he's switching back to vim because the vim plugin for VS Code isn't quite there.

Visual Studio != Visual Studio Code, but VsVim for Visual Studio is excellent, and this is from a daily/advanced Vim user.

Same for me, which is why nvim is so exciting.

I am playing around with https://marketplace.visualstudio.com/items?itemName=vscodevi...

It has most of the basic stuff working. I will admit that with the autocomplete and whatnot it isn't quite like my native vim experience. I have actually been thinking about disabling it and just trying to learn the VS Code shortcuts.

Recent issue discussing this - https://github.com/Microsoft/vscode/issues/3600

This is great. It's too bad there isn't a +1 button for comments in GitHub issues, because some of them are just incredibly important to highlight. I just installed VSCode and was impressed overall and would absolutely use it, except I could never switch to an editor that didn't have basic things like this, which were mentioned in the issue thread.

  Numeric arguments (3dd, 2x, ...)
  support for combining commands (ci[, cW, ggVG, ...)
  visual commands
Anyone who uses vim primarily lives by these features.

Good feedback. I'm working on how we can improve the vim experience in VS Code. Now that we have the essentials completed (accessibility, localization), this is an important use case for us.

Disclaimer: I'm on the VS Code team. Ping me on Github @waderyan or Twitter @waderyan_.

amVim is the better of the two, but VSCodeVim is the one that seems to have the most mindshare (almost definitely because of the name and it came first).

Both amVim and VSCodeVim still have problems that mean they're not drop-in replacements for even stock Vim, but most of them come down to limitations in the VSCode extension APIs at this point.


I tried one recently (2 months ago) and no it's not good enough.

The integrated debugger via delve is absolutely great.

Yeah, I am not a Vim shortcut aficionado at all, but not having . be the repeat command was such a deal breaker for me.

Can anyone who uses Visual Studio Code compare its functionality to the native Visual Studio? I forgot about this project because it's seldom mentioned:


I use VSCode daily and I love it.

I've always been a fan of lightweight text editors and always choose speed over piles of functionality. To me Visual Studio feels like trying to run while wearing cement shoes. It takes forever to load and crashes/freezes far too often for my taste.

VSCode is amazingly lightweight and stable, and brings in IntelliSense, basic refactoring tools, and pretty solid git integration.

If you're used to using a full-featured IDE like Visual Studio (etc) you'll probably find VSCode lacking (but blazingly fast). If you're used to using sublime, atom, etc... You'll find VSCode to be feature rich and comparably fast.

(I work at Microsoft, but not on VSCode or Visual Studio. These opinions are my own and don't represent Microsoft)

I feel like these editors that pull in an entire browser runtime and run JavaScript aren't really lightweight or stable.

It takes about 3 seconds for Visual Studio to open on my desktop; not exactly forever.

Loading Visual Studio is (decently) fast. Loading a solution with 1000s of files takes FOREVER compared to VSCode which loads basically instantly because it's folder based.

Yeah. VSCode has infinitely more functionality on my macbook than VS does. :D

But seriously, if you're running Windows and have a VS license, use VS. Otherwise use VSCode.

VSCode is the best IDE for TypeScript. Some will point at atom, they are wrong.

Visual Studio 2015 Community Edition is zero cost and doesn't appear to have any obvious weaknesses (apart from being Windows only):


VS Community Edition has licensing restrictions if you're using it at work:

An unlimited number of users within an organization can use Visual Studio Community for the following scenarios: in a classroom learning environment, for academic research, or for contributing to open source projects.

For all other usage scenarios: In non-enterprise organizations, up to five users can use Visual Studio Community. In enterprise organizations (meaning those with >250 PCs or >$1 Million US Dollars in annual revenue), no use is permitted beyond the open source, academic research, and classroom learning environment scenarios described above.


It's a completely separate project. I'm not sure there's a useful comparison between the two.

How is it as an editor? There are already lots of popular choices: vim, Emacs, Sublime, Atom, etc. How are the plugins? Code completion, multiple cursors, AceJump, etc?

Jesus, just try it dude! No one can confirm your personal preferences for you. It's literally the easiest application in the world to try out, doesn't even require installation.

Code completion is not as good. I believe VSC uses gocode. VS 2015 also has a cool new feature that shows you references to each function right above the function declaration. Clicking on the reference count will open a quick jump contextual menu.

I wouldn't compare the two products. VS has a SQL editor. It manages database connections. There is an integrated merge tool that is actually quite good. You have project templates, which I haven't found with VSC (I haven't really looked).

> VS 2015 also has a cool new feature that shows you references to each function right above the function declaration. Clicking on the reference count will open a quick jump contextual menu.

This feature is actually also available in VS Code[0], though only for C# currently. We should be able to add support for this in Go as well[1].

[0] https://code.visualstudio.com/Docs/editor/editingevolved#_re...

[1] https://github.com/Microsoft/vscode-go/pull/75

Very nice feature. While you're in there tinkering under the hood, how about wiring it up for JavaScript & Python, too? (I'm mostly kidding, because this appears to be exactly the direction you're already heading.)

But PLEASE enable basic drag and drop text editing as soon as you can. Since the 1980s, EVERY text component, text editor, IDE, and word processor Microsoft has ever made has had the basic editing ability to drag a selected bit of text and drop it into a new location, with the maddening exception of VS Code where, when you try to drag your selection, you just lose the selection.

There are multiple requests for this in the feature request section of the VSC website, but unfortunately everybody calls it by a different name. So instead of one higher-priority issue that would be surfaced to most voters as they look over the top of the list, it's recorded as a bunch of low-priority issues lost down in the weeds, each discovered by so few people that it keeps getting recreated as a new issue.

That would be amazing. It's a great way to study an unfamiliar code base. It's a great feature on its own as well.

That cool new feature was actually present in VS2013. VS2015 has even cooler stuff but for sure the reference count existed in VS2013.

Thanks for the correction. Can you tell which version I skipped? ;P

VS Code is a programmer's text editor. It has great support for a lot of the things Atom does well, for example. It isn't an IDE in the way you most likely think of Visual Studio. It is free though, so why not give it a try?

It depends a lot on what language you're writing. VS has a lot more features, but they tend to be for more established languages (namely C++/C#).

The only thing holding me back is the lack of an integrated terminal. This is absolutely essential to my *nix workflow and I can't for the life of me imagine a reason to not include it. Otherwise, I love what I'm seeing here.

> This is absolutely essential to my *nix workflow

Well, most editors and even IDEs don't offer one.

>and I can't for the life of me imagine a reason to not include it.

The reason being that terminal semantics are hell difficult to implement, and doubly so in a web-based stack?

I'm using IntelliJ with the golang plugin at the moment and am spoiled by the integrated terminal. A quick Google search seems to indicate that the atom editor has a terminal plugin, so maybe that would be a good basis for a port. VS Code feels much snappier than IntelliJ, so I look forward to the day when an integrated terminal is available.

Shouldn't attribution of Visual Studio Code also be given to Google since it uses a large portion of code from the Chromium project?

I'm going to add the customary "I love this new MS" line here. About 10 years ago a friend of mine offered to make an introduction with her cousin, who was an exec at MS, to see about a job when I got out of college. I said "no way, they're working on such boring and dull stuff there"

But after having worked in C# for a few years now (loving the language) and seeing all the open source moves they're making, it seems like a really exciting place to work these days.

And I'm sure there's a ton of renewed energy there.

I'm kind of an MS fanboy, but please don't believe their lies:

- VS is still not free; the community edition can only be legally used by very small companies with slim revenue
- the Xamarin acquisition will turn out to be a slap in the face of developers who want to cross-target Android and WP
- the disaster of the Metro App API, which was replaced by UWP after being hailed as the future of application development for Windows (for a time, you could develop Windows 8.1 Metro apps only on a Windows 8.1 machine, not a Windows 8 machine)

Anybody doing any non-desktop application development on Windows is shouldering the full risk of obsolescence within 2 years.

I would really, really like cooperation between Google and Microsoft on the utterly braindead '90s API of Android development by switching to XAML + C# ... ah, a man can dream!

> but please don't believe their lies

That's a somewhat harsh accusation, where are they lying? Their web page clearly explains the usage scenarios and who can use it for "free" [0] (scroll down to "Usage").

[0]: https://www.visualstudio.com/en-us/products/visual-studio-co...

The plugin in question works with visual studio code though, not visual studio community edition. VS Code is free and open source.

"desaster of the Metro App API that has been replaced by UWP after being hailed as the future of application development for Windows "

huh? They're the same thing. UWP is just a new name for the expanded WinRT (Metro) APIs.

> VS is still not free; the community edition can only be legally used by very small companies with slim revenue

The licensing ambiguity of the VS Express Editions that came before was a lot worse. At least us solo devs have a legitimate path to VS now.

> XAML + C#

Xamarin enables that.


Well, I don't know about that. They still try and lock you in to their tooling at every turn. I believe that the driving force behind this cool new Microsoft is Azure. They want you to run on their cloud and they'll follow you to OS X and Linux in order to get you there. Make no mistake, Microsoft is the best there is at finding out how to charge you.

If you think they're the best, you haven't dealt with Oracle.

Oracle has the added benefit of getting vastly different prices from different reps. At least MS client licensing was pretty standard across the board.

Just like any other commercial vendor.

Maintainer of the extension here. Happy to answer any questions about VS Code or the Go support specifically.

Hi there!

I'm giving the Go plugin (latest) for VS Code (latest) a try, but I'm not able to rename code occurrences.

"Cannot rename due to errors: Error: Command failed /foo/go/bin/gorename -offset /Users/foo/go/src/github.com/foo/hello/hello.go:#79 -to helloNurse"

"rename: -offset \"/Users/foo/go/src/github.com/foo/hello/hello.go:#79\": no identifier at this position"

Do I have to do anything specific for it to work, or should it just work?

Found a related issue (https://github.com/Microsoft/vscode-go/issues/165) which led me to (https://github.com/Microsoft/vscode/issues/1580) which is already merged, but I'm still not able to rename symbols.

Code: (tried to rename helloWorld to helloNurse)


    package main

    import (
        "fmt"
    )

    func main() {
        helloWorld()
    }

    func helloWorld() {
        fmt.Printf("hello, world\n")
    }


Just in case anyone else gets stuck on the same issue, I got a reply here: https://github.com/Microsoft/vscode-go/issues/165#issuecomme....

Hi! I made some Go customizations for atom, and there were a few things that I thought worked pretty well, maybe consider adding them to this project:

1) Auto GOPATH detection: since all code is located underneath a canonical GOPATH, if the GOPATH is not set, traverse upward a few directories until you find one with a "src" folder in it. This means that I don't have to launch it from a terminal and deal with environment variables. Also, I use a GOPATH per project, so as long as each window auto-detects its own GOPATH, that works great.

2) Cross-compile all the tools (gocode, etc) for the most common platforms, presumably windows and mac, and include them in the extension so that it doesn't rely on auto-install or whatever and just works out of the box.


#2 is something we definitely want to do with the VS Code extension as well. https://github.com/Microsoft/vscode-go/issues/5

#1 is also a good idea; we currently offer a per-project setting to configure this, but we could also guess better in the case that no GOPATH is set.

Hi! My Go project directory has ~2 million data files under a directory called test. When I open the project directory, VS Code starts going through all of them, consuming a lot of memory and IO bandwidth. Is there a way to make it ignore that directory?

I tried "files.exclude" which hides it from the sidebar, but VS Code still tries to walk the entire directory tree.
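For reference, the exclusion I tried looked roughly like this in settings.json (glob syntax from memory):

```json
{
    "files.exclude": {
        "test/**": true
    }
}
```

As said, it only hides the directory in the sidebar.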

I'd love to get more details on this. Open an issue on GitHub?

Are you employed by Microsoft? I'm asking because the repo is hosted under github.com/Microsoft.


Is there a plan to release a Python plugin as well?

There is a Python plugin for VS Code available already [0]. I believe the PTVS folks are also working on a plugin for VS Code as well [1].

[0] https://marketplace.visualstudio.com/items?itemName=donjayam... [1] https://news.ycombinator.com/item?id=10589451

Ok :-)

One of my required features is seeing the method signature when my cursor is on the method name; I don't want to have to enter a command to see the signature. In sublime, the bottom bar will passively show me `func Unmarshal(data []byte, v interface{}) error` when I cursor over json.Unmarshal. I'd really like it if it also showed me the type of the variable I was cursored over, but I digress. Is this possible in your extension?

Yes - the vscode-go extension will show you function signatures (and type definitions and other symbol definitions) on hover.

If you hold down `cmd`/`ctrl` during hover you additionally get a preview of the definition source code (signature and implementation).

Thanks for this extension, but I struggle with setting it up. How would one build a Go program from inside the IDE? My Go program is under git control, but the IDE does not seem to notice it.


What's the roadmap for the extension?

Can jump to definition jump to a definition in an open file instead of opening a duplicate file? Is this handled within the extension or vscode itself?

Thanks for the hard work btw, vscode+go has made my team much more productive.

Go to Definition will navigate to the definition in an existing open document or open the document if needed. This works across packages in your GOPATH, and also for core Go APIs in GOROOT.

This is all handled in VS Code itself through its (relatively high-level) extension API[0].

[0] https://code.visualstudio.com/docs/extensionAPI/vscode-api#D...

How does it play with things like glide, gb, and godeps?

I don't think it does. I'm trying to find flags or anything to make it aware of what gb is. In the GOPATH, things just work. In a gb project it just says "no buildable files" because it doesn't know to do `gb build`. I tried some other projects like atocker, but then they don't seem to play nice with docker-machine.

I like VS Code a lot. It is cross-platform and not too heavy. It has a lot of the modern features and look/feel. You don't have to load 50 million plugins to get something reasonable working. I've pretty much stopped using vim/emacs/notepad++ and numerous other editors, though occasionally I use vim because I'm on an ssh connection. To me it seems the right balance between complexity and simplicity.

What is it, though? "Build and debug modern web and cloud applications." Is it like Atom (built on top of a browser)?

Very similar to Atom. Based on the same framework I think. You really can't tell it is running on a browser. It is more tuned for javascript and other web technologies but with the right plugin, it works fine for Go, Python, C, etc.

It's based on Electron, which hosts Chrome in a desktop app. But the actual editor code is different.

VS Code aims to be a modern style of editor - lighter weight and more code-focused than an IDE, but with many of the richer capabilities of an IDE offered in context inside the editor. Atom is similar in spirit.

Both Code and Atom are built on Electron[0] which hosts libchromiumcontent for rendering cross-platform desktop UI.

[0] http://electron.atom.io/

I have been working for some time with Visual Studio Code and with this Go extension. I used to use Sublime Text 3 for Go development, but with this extension I have noticed that I use vscode more often than Sublime for Go development.

It is also a big plus that vscode works very well with TypeScript, so you can work seamlessly with TypeScript frontend and Go backend code.

One nice thing is that you can navigate code easily: see which places call a function, or jump directly to the function in question, like foo.New(), by pressing F12 or Shift+F12. Using Sublime, navigating to foo.New() would probably reveal quite a few functions across your workspace path.

The Go extension also automatically imports packages that you use in your code. Renaming types or functions also works nicely, provided your code compiles.

There are certainly things to be improved, like not being able to conveniently use directories outside your vscode project, e.g. common packages shared across different projects. But overall the developer experience is really nice.

Note that YouCompleteMe supports Jump To Definition for Go. (YCM has a sublime plugin, among other editors)

Even if you don't like MS or distrust them, seriously...

VS Code is a very good spiritual successor to TextMate. It has better performance characteristics for me than Atom, and I find extending it much less intimidating than Atom. I'm still more comfortable with Emacs keys, but I found it easy to add the emacs keys I miss.

Its license is such that even if MS abandons it I suspect the community will keep it going. So give it a try.

A bit off topic, but I usually tend to avoid browser-based UIs for desktop applications; the quality of the current Rust support in Visual Studio Code, though, made me make an exception just to use it.

Huh, maybe I should try it. I'm a Sublime+Vim person, and I'm pretty comfortable with my setup (and have found browser based UIs to be too slow for usage as an editor so I avoid them).

I really love VSCODE. Like really love it.

In all fairness, I am aware that I can also be a bit of a Microsoft 'fanboy' at times... so I held off on trumpeting around the office how truly great I think it is.

Of course if it came up in passing or anything like "should I open that in sublime?" came up, I'd make a sweeping grandiose statement like "anything but VSCODE is for chumps" (and we'd all laugh then move on about our day)

Anyways, a developer from another team recently saw over my shoulder and said "oh yeah, I'm really liking VSCODE... I've pretty much switched to it full time now" and that was the moment I thought maybe it's not me just being a fanboy... this particular developer was pretty much known as our resident "sublime expert" so much so that he'd given multiple lunchtime presentations and talks around the office on subjects like "how to turn it up to 11 with your sublime text editing" and "snippets for sublime - making you a bajillion times more productive"

Anyways, I know how we used to have the religious holy wars about VI/M vs. EMACS vs. whatever, so people really tend to fall in love with "their" text editor and are not really quick to switch BUT VSCODE has really got something special going on.

I've never programmed in Go before. Coming from a C# background. Can someone tell me, how does Go feel? Is it pleasant to work in, or is it tricky like C?

It's very pleasant to work in. It has its quirks and language design rigidity and can get tricky, though not quite "like C" unless you're using cgo directly.

It's a very simplistic language. Coming from C#, there is no inheritance, only composition. Types are super strict too, which can be a pain in the ass (especially because you can't add things like an int32 and an int64 together). It's nice once you are used to it though.
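The strictness about mixed integer types is easy to see in a couple of lines; a minimal sketch:

```go
package main

import "fmt"

func main() {
	var a int32 = 5
	var b int64 = 10

	// fmt.Println(a + b) // compile error: mismatched types int32 and int64

	// Go requires an explicit conversion before the addition.
	fmt.Println(int64(a) + b) // 15
}
```

There is no implicit widening even when it would be lossless; the conversion always has to be spelled out.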

> especially cuz you cant add things like an int32 and an int64 together

Which in some ways makes a lot of sense, in particular if you want to avoid "undefined behaviour" or just behaviour which is subtle.

For example: If I add a uint32 and a uint64, what is the format of the result? What about uint64 and int32? Which of those is "larger?"

And that's just the least bad example; with two float formats you can lose precision in strange ways.

So I think go forcing the programmer to be specific is painful in the short term but helpful in the medium to long term. Bugs should go down and maintainability up.

I've been dealing with this same issue in Rust, and I think there is definitely a middle ground to be had.

In the first example, it's trivial to add uint32 and uint64, and store the result as a uint64. No information is lost at all in that operation. This generalizes to any combination of integral types, where [u]int{x} + [u]int{y} = [u]int{max(x, y)}. It's inexplicable to me why a language wouldn't allow these loss-less operations to be implicit.

The other two operations don't have a clear answer, so it makes perfect sense to require an explicit cast in those cases.

> It's inexplicable to me why a language wouldn't allow these loss-less operations to be implicit.

Because in some sense implicit behaviour is bad behaviour. It makes it unclear what the code does.

I guess that's exactly what I don't understand. If I add a 32-bit integer and a 64-bit integer, what other possible result could I be expecting besides a 64 bit integer?

> If I add a 32-bit integer and a 64-bit integer, what other possible result could I be expecting besides a 64 bit integer?

A 128-bit integer (if you are adding a 32-bit integer and a 64-bit integer, the smallest power-of-2-bits representation guaranteed not to have an overflow is 128 bits, so it's the safest result. Though, I'd agree, not the most likely thing most programmers would intend.)

Good point. As painfully explicit as Rust is at times, I'm actually slightly surprised they didn't go that route. (At least for integer sizes less than 32 bits.)

An exception, a 32-bit integer, an underflow, or an overflow.

That's a problem with addition, period, not type promotion. Adding a 64-bit and 32-bit integer and promoting the 32-bit integer to 64-bit doesn't produce any problems that you won't have adding two 32-bit integers together or two 64-bit integers.

I know it's bad; I am just lazy and explaining the most annoying thing I found using Go as a C++ dev.

Hi. I know C# in amateur fashion and Go professionally. I think a lot of people here will give you a very positive review of Go. Since they've got that covered, let me tell you what you're giving up from C# or F# and why I won't use Go. Maybe you can form a balanced conclusion from the aggregates, because I find Go to be very polarizing.


Async: Go uses channels for all concurrency. Period. This mechanism is sort of like half of the Erlang Actor philosophy, but even more lightweight. Channels and goroutines are constantly coming into and out of existence and you feel no real shame doing it. Even simple tasks often need them because Go's I/O libraries tend to be async-first.

Compared to C# and F#'s async, I think you will find this to be very different, but not particularly better in terms of performance. F# offers a very similar abstraction with very similar performance characteristics, and C# async methods use a very similar mechanism under the covers to provide closure over computation chains in what amounts to a kind of effectful state monad. Don't sell your home team short on this front; MS's work there is cutting edge.

Error Handling

In daily programming, Go's weakest story is error handling. While many people rightly criticize try-catch error handling as a primitive and error-prone mechanism, the Go solution is to say, "We all hate C, but actually C error handling was fine so long as you have multiple return values at the syntactic level." So you often return a success value (which is nullable) and an error value (which is nullable) and then ask the caller to check on the null.

This is basically error code checking. People will say it isn't, but really it is. It really is. And unlike some other languages, Go provides no facilities for "chaining" these operations. So you end up writing if err != nil { ... } over and over.

In the case of chained I/O operations, this is really tiresome. It also often leads to repeated code or somewhat convoluted dispatch logic.
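The repetition looks like this in practice. The helper names (openConn, sendReq, parse) are hypothetical stand-ins for any sequence of fallible I/O steps:

```go
package main

import (
	"errors"
	"fmt"
)

// Hypothetical steps in a chained operation; each can fail.
func openConn() (string, error) { return "conn", nil }

func sendReq(c string) (string, error) {
	if c == "" {
		return "", errors.New("no connection")
	}
	return "resp", nil
}

func parse(r string) (int, error) { return len(r), nil }

// Every call is followed by the same check; Go offers no
// built-in way to chain these, so the pattern repeats.
func doRequest() (int, error) {
	c, err := openConn()
	if err != nil {
		return 0, err
	}
	r, err := sendReq(c)
	if err != nil {
		return 0, err
	}
	n, err := parse(r)
	if err != nil {
		return 0, err
	}
	return n, nil
}

func main() {
	n, err := doRequest()
	fmt.Println(n, err) // 4 <nil>
}
```

Three calls, three identical checks; a longer pipeline scales linearly in boilerplate.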

Go error values also suffer from Go's other issue: it doesn't have an extensible type system. Instead it has "interfaces". In practice, what this means is that it's very difficult to expose new error types or give your clients good ways to dispatch on them. While this means error handling code is lightweight, it also often means it has to do silly things like regex an error message string to find out what a specific failure was.

Some people value that approach. If you're writing an executable it's actually good, because it's probably better to fail fast in a recognizable way. But if you're writing a library and offering OTHER people that facility, you can't support them well (and you will not be well supported by Go libraries).
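For reference, the dispatch mechanism Go does offer, when a library bothers to export a concrete error type, is a type assertion rather than string matching. A hedged sketch with hypothetical names (NotFoundError, lookup):

```go
package main

import "fmt"

// A library can export a concrete error type...
type NotFoundError struct{ Key string }

func (e *NotFoundError) Error() string {
	return fmt.Sprintf("key %q not found", e.Key)
}

func lookup(key string) (string, error) {
	return "", &NotFoundError{Key: key}
}

func main() {
	_, err := lookup("user:42")
	// ...and callers dispatch with a type assertion instead of
	// regexing the error message string.
	if nf, ok := err.(*NotFoundError); ok {
		fmt.Println("missing:", nf.Key) // missing: user:42
	} else if err != nil {
		fmt.Println("other error:", err)
	}
}
```

The complaint above stands, though: many libraries only return opaque errors built from strings, in which case this option isn't available to the caller.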

Build Tooling

Go's general toolchain is solid and its compiler is wicked fast. But its build story is still really, really bad. Go originally had this mountain of filesystem structure around every project that was tricky and error-prone to share across projects.

With recent releases, they've moved to something that resembles Ruby on Rails's "vendor" approach, where a subdirectory contains a whole checkout of each dependency's code. This is actually a pretty major improvement (in part because it works better with GitHub, which is Go's primary distribution mechanism). But even with this change, managing a codebase over time is error prone. Unlike Maven and NuGet, there is no enforced concept of version releases (nor discipline around snapshotting) on GitHub. So if the maintainer of the library has poor discipline (or if there is code poorly tagged during a maintainer change), it can be difficult to get the exact version of a library you want with the exact bugfixes you need.

Google's response to this is, "We don't have this problem because everyone at Google always keeps /master clean and we basically never make breaking API changes." But if you talk to them internally, the reality is more what you'd expect. Sometimes a lot of time is lost fixing that.

Everything else I wanted to say (aka "Conclusions")

On balance, Go is a good environment for making executables. But a lot of why people like it stems from negative experiences they've had with scripting languages and their poor packaging story, and with Java and its problems keeping up with other managed language runtimes (and oh god, its packaging process is just silly and antiquated).

You've already got cutting-edge concurrency, static builds, a lightweight cross-platform runtime with CoreCLR, and pretty fast cross-compilation. For you, what you might find refreshing is how very clean and unified the Go language is. It is many things, but one thing it excels at is appealing to the pythonic there-is-one-way-get-in-line crowd. It is small, purpose-built, and singularly uncomplicated. C# has a "history" and "legacy feature support": things like delegates that have fallen out of fashion but are still lurking in the codebase or backing other, more modern features.

If you want to try the concurrency model but don't know if you wanna commit to a whole new runtime, do try F# if you haven't yet. You can get great performance and the channel based concurrency out of it, and I think most people would agree its error handling is light years ahead of what Go offers.

If you'd like to try a totally new language with really cool concurrency semantics on a purpose-built runtime, can I recommend Nim (nim-lang.org)? Nim is amazing. It's got one of the most ambitiously cool ideas I've seen for micro-optimized concurrency code since reading Marlow's paper on Haxl for Facebook.

I agree with all of the above. If Go is going to improve the quality of your programming experience, then this is a great reason to use it, but if it is going to make things harder for you then stick with what you know.

If you are coming from C#, then you probably wouldn't enjoy checking the results of each and every function call. You know how and when to use exceptions and they will save you many lines of code over using Go.

If you have used generics then you will probably feel like you are back to .NET 1.0 when using Go. You will end up generating code for types or copying and pasting class definitions.

That said, I am all for learning a new language that will teach me a new paradigm. In that case, the loss of productivity is worth it because it will add to my toolset. If the paradigm is the same, but the productivity is lower, well then what is the point? Performance? Perhaps.

BTW, I completely agree that Nim is one very cool language. It hasn't introduced new paradigms for me (yet), but it improves my productivity over C++ when working on a small opengl game.

> Go uses channels for all concurrency.

Go uses goroutines for concurrency, not channels. Goroutines are like threads, but they are managed by the Go runtime instead of the OS, and they use fewer system resources (a new goroutine uses just a few kilobytes, and its stack is resized on demand if necessary).

Go uses channels as a synchronisation mechanism. But other synchronisation mechanisms can be used (e.g. mutexes or atomic operations).
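A minimal sketch of the mutex alternative, with goroutines synchronizing on sync.Mutex instead of a channel (the increment helper is a hypothetical name):

```go
package main

import (
	"fmt"
	"sync"
)

// increment spawns n goroutines that each bump a shared
// counter, synchronized with a mutex rather than a channel.
func increment(n int) int {
	var mu sync.Mutex
	var wg sync.WaitGroup
	count := 0
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock()
			count++
			mu.Unlock()
		}()
	}
	wg.Wait() // block until all goroutines finish
	return count
}

func main() {
	fmt.Println(increment(10)) // 10
}
```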

> Even simple tasks often need them because Go's I/O libraries tend to be async-first.

That's quite the contrary. Most libraries are synchronous. You don't need callbacks (like in Node.js) or async/await (like in Python 3). This is made possible by goroutines and that's a big advantage of Go.
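A sketch of that synchronous style: blocking calls run in goroutines, with results collected over a channel, and the code reads top to bottom with no callbacks. The fetch/fetchAll names are hypothetical stand-ins for real blocking I/O:

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

// fetch stands in for any blocking call (network read, DB query...).
// It blocks only its own goroutine, not the whole program.
func fetch(name string) string {
	time.Sleep(10 * time.Millisecond)
	return name + " done"
}

// fetchAll runs the blocking calls concurrently and collects
// the results over a channel -- no callbacks, no async/await.
func fetchAll(names []string) []string {
	results := make(chan string)
	for _, name := range names {
		go func(n string) { results <- fetch(n) }(name)
	}
	var out []string
	for range names {
		out = append(out, <-results)
	}
	sort.Strings(out) // completion order is nondeterministic
	return out
}

func main() {
	fmt.Println(fetchAll([]string{"a", "b", "c"}))
}
```

Each goroutine writes in ordinary synchronous style; the runtime multiplexes them onto OS threads behind the scenes.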

I very much like the async/await mechanism offered by C# or Python because it makes side-effects explicit. But its drawback is its "virality": if you introduce an async operation in a function, you have to "propagate" the change by converting all callers to async/await.

About the error handling, I'm still on the fence. Your comment summarizes very well the drawbacks of Go on this topic. But what we gain in terms of simplicity and making error handling very explicit is probably worth it. I think that error handling is still a subject of tension in every language and the problem is not fully solved (even in Rust, Haskell or Erlang). Time will tell.

> But its build story is still really, really bad.

I think you meant "packaging story" instead of "build story". It's true that at this moment, the Go project has not rallied around a single and universal packaging tool (like npm in Node.js). But in practice, if you vendor your dependencies, I think it's easy to manage a large codebase without errors.

> do try F# if you haven't yet. [...] its error handling is light years ahead of what Go offers

I'd be curious to read a short example.

> Marlow's paper on Haxl for facebook

I agree about this paper.

> I very much like the async/await mechanism offered by C# or Python because it makes side-effects explicit.

It's also potentially faster and more efficient than goroutines, because it packs the state to be shared on context switches into what is typically a very tight structure instead of saving the entire stack.

> I think that error handling is still a subject of tension in every language and the problem is not fully solved (even in Rust, Haskell or Erlang). Time will tell.

Can you elaborate as to what the problems you see with error handling in those languages are?

> It's also potentially faster and more efficient than goroutines, because it packs the state to be shared on context switches into what is typically a very tight structure instead of saving the entire stack.

I'm sure you already know this, but for anyone new to design in concurrency backends, not only does async/await (CPS) have the potential to really trounce goroutine-style concurrency in modern systems, but if you have a really smart compiler and some OS support, it can really fly. See Joe Duffy's blog post about asynchrony in Midori for more info[1].

[1] http://joeduffyblog.com/2015/11/19/asynchronous-everything/

Well, it's not like the Either monad (Haskell) or Erlang's {error, Reason} pattern is much better. Go's just bad at sharing details about new errors, in that interfaces are a pretty poor tool for that sort of work. At least Either has an Applicative and Monadic form, which is really nice for decoupling flow from handling.

Error handling is tricky because it requires richness both in how you talk about types and how you handle control flow.

But using GADTs and, to a lesser extent, F#'s discriminated unions for errors does have nice static checking properties. That's definitely an improvement over Go's "I just regex'd the string, please send help" approach.

> It's also potentially faster and more efficient than goroutines, because it packs the state to be shared on context switches into what is typically a very tight structure instead of saving the entire stack.

A context switch to another goroutine doesn't need to "save the entire stack". The stack is already there. The runtime just needs to keep a pointer to the stack of the suspended goroutine. And the stack is usually just a few kilobytes.

Moreover, keeping the stack is useful for debugging because you can print a nice stack trace.

You have to keep the stack around in memory, as opposed to having a fixed structure. And fixed structures really pull away when you think about allocation: you can use a segregated fit/free list structure, whereas you can't with a variable-sized thing like a stack. If the stack starts small and grows, you're paying the costs of copying and reallocation whenever it does: another loss. Allocation in a segregated fit scheme is an order of magnitude faster than traditional malloc or allocation in a nursery and tenuring.

For these reasons and others, nginx could never be as fast as it is with a goroutine model.

I agree that a fixed structure is more efficient than a growable stack, especially in terms of memory allocation. But I don't understand how you apply this to an evented server. Asynchronous callbacks are often associated with closures. But closures can vary in size, which makes it hard to store them in a fixed-size structure. What am I missing?

I haven't read nginx source code, but I guess they are able to store continuations in a fixed structure because they don't use closures and they know in advance what information must be kept. I don't see how this approach can be used as a general purpose concurrency mechanism for a programming language. But I'd like to learn something :-)

"Why Events Are A Bad Idea (for high-concurrency servers)" is an interesting counter:


This is really interesting, but I feel like Erlang is a counterexample. Not sure if that's a good comparison, so I decided to ask and risk sounding stupid.

Rust error handling can be concise thanks to the try! macro, but macros bring their own problems (like making it more difficult to write refactoring and static analysis tools).

Haskell error handling can be concise thanks to monads, but they need higher kinded types which bring their own share of complexity.

The conversation on the "RFC: Stabilize catch_panic", found on Rust's issue tracker, illustrates some unsettled questions I had in mind (https://github.com/rust-lang/rfcs/pull/1236).

For example, kentonv wrote:

All code can fail, because all code can have bugs. Obviously, we don't want every single function everywhere to return Result<T, E> as a way to signal arbitrary "I had a bug" failures. This is what panic is for.

graydon wrote:

Currently you've adopted a somewhat-clunky error-type with manual (macro-assisted) propagation. Some rust code uses that correctly; but much I see in the wild simply calls unwrap() and accepts that a failure there is fatal.

ArtemGr wrote:

The only way to maintain both the safety and the no-panic invariants is to remove the panics from the language whatsoever. Explicit errors on bounds check. No assertions (you should make the assertion errors a part of the function interface instead, e.g. Result). Out of memory errors returned explicitly from every heap and stack allocation.

If you'd like to keep the assertions, the smooth allocations and other goodies then you either need a way to catch the panics or end up making programs that are less reliable than C. No modern language crashes the entire program on an out-of-memory or an integer overflow, but Rust will.

The libraries we have, they do panic, it's a matter of fact. Within the practical constraints and without some way of catching panics, you can't make a reliable program that uses external crates freely.

BurntSushi wrote:

If something like catch_panic is not stabilized, what alternative would you propose? (Option<T> and Result<T, E> are insufficient.)

On the same topic, there is this post about introducing a `?` operator or a `do` notation (inspired by Haskell) to streamline error handling:


And there is RFC 243, about "First-class error handling with `?` and `catch`":


But I'm sure you're quite aware of these discussions :-)

My general feeling is that, whatever programming language you consider (Python, JavaScript/Node, Go, Rust, Haskell, Erlang, etc.), the right way to handle errors is still an open question.

> Rust error handling can be concise thanks to the try! macro, but macros bring their own problems (like making it more difficult to write refactoring and static analysis tools).

No, it's not more difficult to write static analysis tools. You use libsyntax as a library. Refactoring tools, maybe, but it's a lot better than refactoring with code generation :)

> For example, kentonv wrote:

How does that describe an unsolved problem? It illustrates that Rust's bifurcation of errors into Result and panics works.

> graydon wrote:

I think it's a relatively minor issue that would be solved with "?" or something like what Swift does. Switching to Go's system would make it worse; Graydon's criticism applies even more so to Go than to Rust.

> ArtemGr wrote:

Catching panics is important, yes. No argument there. It doesn't change the overall structure of Rust's error handling story, though.

> No, it's not more difficult to write static analysis tools.

I agree it's solvable, but I'd argue it's a bit more difficult to write static analysis tools when macros are involved. But maybe I'm missing something.

Here is an example:

The subsystem types in the sdl2 crate starting in 0.8.0 are generated by a macro, so racer has issues evaluating the type of video(). A workaround is to explicitly declare the type of renderer. (source: https://github.com/phildawes/racer/issues/337)

But my real concern is how to write refactoring tools when macros are involved. It seems a lot harder than writing static analysis tools, because the refactoring tool wants to examine the source code with macros expanded, but has to modify the source code with macros unexpanded. In other words, the tool has to map from source with expanded macros back to source with unexpanded macros. How do you solve that?

As a sidenote, I agree that refactoring generated code doesn't sound fun either :-)

> How does that describe an unsolved problem? It illustrates that Rust's bifurcation of errors into Result and panics works.

I quoted kentonv here because it shows that Rust and Go have converged towards structurally similar solutions to error handling, by using two complementary mechanisms: explicit error checking on one hand (using Result<T,E> in Rust and multiple return values in Go) and panic/recover on the other hand.

The big difference is that Rust has sum types (instead of Go's multiple return values) and macros (try! instead of repeating `if err != nil { return err }` in Go).

> Catching panics is important, yes. No argument there. It doesn't change the overall structure of Rust's error handling story, though.

You're right.

> a bit more difficult to write static analysis tools when macros are involved. But maybe I'm missing something.

Most Rust static analysis tools hook into the compiler and get this for free.

Racer has that problem because racer implements a rudimentary minicompiler that's much faster than the Rust compiler. When you want autocompletion, it needs to be fast. Running a full type check is a non-starter here. So you implement your own "type searcher" which is able to perform some level of inference and search for items. Being deliberately incomplete, it doesn't handle some cases; it looks like macros are one of them. Since racer uses syntex, handling macros would not be much harder (just run the macro visitor first; three lines of code!), but I assume it doesn't for performance reasons or something.

(disclaimer: I only have a rough idea of racer's architecture; ICBW)

> But my real concern is how to write refactoring tools when macros are involved

This is a problem with refactoring whether or not you're using tools. And like you mention, there's exactly the same problem with generated code. If anything, Rust macros being hygienic are nicer here, since you can trace back where the generated code comes from and _attempt_ to refactor the source.

And macros like try! do not affect refactoring tools at all, being self-contained. It's user-defined macros that mess things up.

> Most Rust static analysis tools hook into the compiler and get this for free. Racer has that problem because racer implements a rudimentary minicompiler that's much faster than Rust.

I didn't know that. Understood. Thank you for the explanation.

> And macros like try! do not affect refactoring tools at all, being self-contained. It's user-defined macros that mess things up.

What do you mean by "self-contained"? How is it different from user-defined macros?

> What do you mean by "self-contained"?

It doesn't introduce any new identifiers or anything. As far as refactoring is concerned, it's just another block with nothing interesting inside it. This is sort of highlighted by the fact that we can and do plan to add syntax sugar for try!() -- if it was a language feature it wouldn't cause refactoring issues, so why is that the case here?

User defined macros (there may be some exported library macros that do this too, but try is not one of them) may define functions or implement traits or something, which might need to be modified by your refactor, which might need fiddly modification of the macro internals.

(Also, note that due to Rust's macro hygiene, all variables defined within a macro are inaccessible in the call region, unless the identifier was passed in at the call region. This helps too)

Thanks for the very clear answer.

> like making it more difficult to write refactoring and static analysis tools

As one of the people behind a lot of the out-of-tree static analysis in Rust (clippy, tenacious, Servo's lints) I'd disagree. Performing static analysis across macro boundaries is easy.

The only problem Clippy has with macros is that the UX of the linting tool is muddled up at times. Clippy checks for many style issues, but sometimes the style issue is internal to the macro.

For example, if Clippy has a lint that checks for `let foo = [expression that evaluates to ()]`, it's quite possible that, due to the generic nature of macros, a particular macro invocation will contain a let statement that assigns a unit value. Now, this isn't bad, since the style violation is inside the macro and not something the user should worry about. So we do some checking to ensure that the user is indeed responsible for the macro before emitting the lint. Note that this isn't much work either; the only hard part is remembering to insert this check on new lints if it's relevant.

But anyway, the UX of clippy is orthogonal to the static analyses provided.

(I also don't recall us ever having issues with `try!`)

> The conversation on the "RFC: Stabilize catch_panic",

FWIW most of the points are fixed with the catch and ? sugar that you mention later.

> My general feeling is that, whatever programming language you consider (Python, JavaScript/Node, Go, Rust, Haskell, Erlang, etc.), the right way to handle errors is still an open question.

Sure, however this isn't a very useful statement when comparing languages. The OP was making a relative statement; compared to C#. Saying that "all languages have problems with error handling" doesn't add much, since the question being discussed was whether Go's error handling is nicer than C#.

I replied to pcwalton in a sibling comment:


> Go uses goroutines for concurrency, not channels.

This is more of a practical piece of advice than a strictly correct one. In practice, Go concurrency means typing "go" but thinking in terms of channel groups and one-off channels. I just don't find that distinction very productive when someone asks how it "feels" or what it is "like" to program in Go as opposed to asking for an explanation of Go's concurrency model.

> That's quite the contrary. Most libraries are synchronous. You don't need callbacks (like in Node.js) or async/await (like in Python 3). This is made possible by goroutines and that's a big advantage of Go.

Within the context of a single goroutine, you're right. I didn't express this very well. Goroutines are often how you process network I/O. My bad for not explaining this well, I was in a bit of a hurry when I wrote that part and the whole thing got too long so my proofreading was a bit sloppy. Thanks.

> I think that error handling is still a subject of tension in every language and the problem is not fully solved (even in Rust, Haskell or Erlang). Time will tell.

I really think Common Lisp had a fantastic solution here and I miss the evolution over try-catch they spec'd. I wish more people understood it. It was such an excellent idea.

> I think you meant "packaging story" instead of "build story". It's true that at this moment, the Go project has not rallied around a single and universal packaging tool (like npm in Node.js).

For Go, is there a difference? You either have static builds or you don't, and artifacts don't exist outside of this, last time I checked. So any packaging solution is de facto part of the build story and vice versa. This is in sharp contrast to Maven or others, where build artifacts can exist entirely outside of the expectation of use in the build chain (it's possible to launch a jar directly).

The actual compilation of the final build is fast, and the cross-compiler is definitely nice to have while we're all forced into a weird world where there is no good laptop OS for developers, who often end up in open-plan buildings where we're expected to be migratory (or simply working without an office at all). I wouldn't dream of underselling that. Of course, I'd prefer a good interactive development pattern.

There's a great gif of Daffy Duck dressed as a highwayman constantly swinging down from a rope into a tree over and over. My friends associate this with compiling Go, and I feel like that's a very good metaphor.

But just getting the libraries you need to build is a hassle. Plain and simple.

> I really think Common Lisp had a fantastic solution here and I miss the evolution over try-catch they spec'd.

Are you thinking of Common Lisp's conditions/handlers/restarts? I've never programmed in Common Lisp but have always been intrigued by this idea.

> This is in sharp contrast to Maven or others where build artifacts can exist entirely outside of the expectation of use in the build chain

Ok, I think I understand now. So you'd like to be able to use "prebuilt" dependencies in Go (delivered like a .so in C++ or a .jar in Java)?

Honestly, the compilation is so quick, and the advantages of a statically linked executable are so great, that I have trouble imagining why I would want that. In Go, instead of saving the dependency as a .jar file (for example), I just save the dependency source in the `vendor` directory. For the record, this is exactly what big projects like Chromium do in C++.

On Lisp, yes conditions and restarts. The only downside is the feature does inhibit some types of optimizations.

On Go, I honestly don't care if they're prebuilt. The fact the package and build story are linked is not the problem. It's the strange split brain assumption that versioning libraries is bad and taking from a git repo tip is reasonable.

Which is doubly weird because of the human contempt evident in Go. As Pike said, evidently Google employees cannot be trusted to do much of anything. Except, mysteriously, for the rather hard task of never breaking a git master and communicating breaking changes without any signing mechanism.

Vendoring is something you do to aid build times and to improve the simplicity of source distribution. It is misapplied as a substitute for actual versions and developer-focused source and binary packaging.

Vendoring means if I check a repo out again after 2 months of having it run in production, I can probably still build it. Probably. But updating it with the latest security or bugfixes? That is no easier.

Why do you use the word "story" here? Do you mean "feature"? Does story come from a marketing frame of mind and not a developer frame of mind?

To me a story is a series of features working together to explain how the developer will actually interact with things. It's not unlike Agile's definition, but maybe more practical? Languages and developer environments are a product, so considering them as the sum of their parts from a UX perspective is very healthy.

So when I say "you use channels for concurrency," this is not strictly true (technically the concurrency primitive is goroutines, as someone corrected above). But since it's a practical consideration that you need to use channels (and the race detector will flip out if you don't), I say "you use channels." It's a useful fiction.

I used to call these Wittgenstein's Ladder because I'm a huge nerd, but no one ever understood the nature of the joke and so I started speaking relatable English again. :|

I've always associated "story" with Agile, where "story" does basically equate to "feature".

You're going to miss generics for a while. Then you're going to discover interface composition and type coercion, and it's not going to be so bad.

The reflection APIs are not easy but work.

Go is dynamic code generation away from being amazing.

A lot of people complain about Go because they expect a better Java/C++/C#. If you expect a "better Java", then you are going to be disappointed.

For me, Go is more like C with garbage collection and some features of a scripting language -- and that's exactly how it feels programming in Go for me. At the moment it's my language of choice for all side projects: web applications, microservices, shell "scripts", ...

There's an online playground/tutorial that can show you the basics. Even if you never write another Go program, it's worth trying it out just to see what else is out there: https://tour.golang.org/welcome/1

There is no type hierarchy, no generics, no extension methods, no inheritance, which means that polymorphism is limited to implicit interfaces.

The concurrency model is good though (communicating sequential processes), but async/await in C# is also good.

It's fun and awesome. Interfaces are _great_ overall, but I particularly love them for testing. You can use them for dependency injection.

VSCode's API is great for extensions. Markdown Viewer is incredibly useful in VSCode. I also really like the way VSCode does its Git integration.

I've gotta say though, I'm still getting used to the non-tab layout...

I hate to be that guy, but I guess I will, because aesthetics are important to me when choosing a tool I'll be using for several hours a day.

I love how Atom looks. Alongside the nicer extension system, the aesthetics of the default theme are what lured me away from Sublime Text.

I sadly can't say the same for Visual Studio Code, which I hear so many great things about, but can't bring myself to use for more than a few minutes. From what I've been able to tell, the only visual customisation available is choosing the syntax colour theme.

There is support for full color schemes for the entire UI not just the text colors. There is also support for loading TextMate and Sublime themes.


I personally use a nice dark theme, similar to the one shown in that screenshot. There are also a large range of extensions and VS Code is really fast.

What about changing the appearance of the fairly unpleasant left-most bar (with the code/search/version/debugging switcher), the super in-your-face status bar, etc?

You are right. But when you see how much memory Atom takes and compare that to how lightweight Sublime is, you might want to reconsider.

I had a good experience overall with this plugin (once it was set up, which wasn't exactly trivial), but it keeps auto-completing in comments so when I hit [return] it inserts some random word rather than going to the next line. Every time I want to go to the next line I have to press [esc] then [return].

This is great. I love how definitions are displayed when hovering over any variable or function, the go-to-definition feature, and the split window feature, which makes it sweet and easy to keep reference code bits around while coding... I think with the definition display on hover, it even beats vim.

Not related to the Go plugin, but in VSCode I hate the tabs in the left panel; it feels very different and I could not adjust to it even after 6 months.

However, the Go plugin is really nice and works perfectly; even the debugger works /most of the time/. It is fast and handy. Recommended.

As a long time Sublime user, having side-bar tabs in editors like Brackets or VS Code is definitely quite annoying. I wish they offered the option to place tabs on the top as an alternative.

I agree. Please vote for this:


This is the sole blocker for me. With 20 years of muscle memory, tabs and tab hot-keys (i.e. command+1, command+2, etc.) are simply not optional.

Years ago, when MS was thick in the anti-trust contentions, pops and I agreed that were they to break up, it would be great to have one segment be "tools." MS has long had and offered some great programming support -- compilers, development environments, etc.

However, with monolithic MS, those too often seem tied to and influenced by the larger corporation's goals. You know, world dominance, crushing the opposition, and all that.

I hope that this new push by MS is genuine and does not morph into another embrace, extend, and -- purposefully or simply inevitably -- extinguish effort.

I'm not in the thick of it. This is probably an outdated and way far outside observation. Nonetheless, MS support still leaves me looking for the strings attached.

I find it funny that the README refers to syntax highlighting as "colorization".

Looks like it's an old naming convention of theirs:


At least as far back as Visual Studio 2005.

I've used it on a MacBook for a short time for node/javascript development, but found it buggy and had to switch back to Sublime. The undo (cmd+z) would occasionally get in a weird state where the undo would happen partially (not all lines or columns?) and the whole history would be screwed, or it would outright stop doing anything. A few times I had to close the file to get the last saved version. Perhaps it's something I was doing wrong, but it was enough not to use the product, which was great otherwise! Will try again when I hear of new versions coming out.

That was fixed for me with 0.10.0 on Yosemite. Can't find it from my phone, but there was an issue on GitHub I was tracking.

I wonder why this was featured today; I have been using vscode for Go for a few months now. It is totally amazing, especially the ctrl+P option! It isn't highlighted, but it is a little gem.

Delve integration is better than I anticipated. Breakpoints, step in, and step over work as expected. Step out isn't functional, but that's a given since Delve doesn't support it either. The call stack is implemented. The Variables pane doesn't seem to populate automatically, but adding a var to Watch is ok.

With the debug tools, linter, and navigate in/out of definitions, this looks like a pretty efficient workflow.

Caveats: I can't compare to Atom, or anything other than vanilla Vim (haven't configured either of them with any of the go integrations).

Edit: after a restart I can now see Locals in the Variables window! Seems to be related to breakpoint location - if I break at main.main entry I can see variables.

Wait, does the debugger work now? The last time I picked up Visual Studio Code with a C# core project I couldn't get the debugger to work with dnx web.

The VS Code debugger supports a variety of languages. There is built in support for Node.js debugging which works really nicely. And there are extensions[0] providing support for: Go, PHP, Chrome, Unity and more.

[0] https://marketplace.visualstudio.com/VSCode

The debugger for C# works on Windows only (for now, since the Linux/OS X debuggers will need to use gdb or lldb). Debuggers for other languages are supported by extensions. For instance, there's a JavaScript and TypeScript debugger built in.

I code a lot in Go. I use Sublime Text mostly and this feels like an upgrade. Integration with Git, better plugins/support I am loving it.

Seems like Acme got some serious competition... :)

Visual Studio Code is my favorite editor. I've mentioned it before but I wanted to add that I'm working on a PHP project and by using XDebug on a local server and VSCode as an editor/debugger I have a really nice lightweight debugging solution. The PHP extension to VSCode is solid.

Is that all fully/smoothly integrated into VSCode? Along the lines of setting a breakpoint in PHP Storm and waiting for it to light up when it's hit? If so, that's sweet.

Yes, yes it is sweet. Set a breakpoint in VSCode editor, run, inspect some variables in VSCode editor debug panel, stop debugging, fix code, run with debugging again -- everything you would expect from smooth debugging experience.

Thanks, I'll revisit VSCode again.

What does Visual Studio Go offer that I don't get from vim-go? Does it have gofmt hooks on file save?

Debug support, a syntax highlighter, and it looks very nice. Having trouble getting go build on save working, though.
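For what it's worth, the format/build-on-save behavior is controlled through user settings rather than being on by default. The key names below are from memory and may differ across extension versions, so treat this as a sketch:

```json
// settings.json -- setting names may vary by vscode-go version
{
    "go.formatOnSave": true,
    "go.buildOnSave": true
}
```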

WTF!? I already posted this exact link 4 days ago and it got almost no attention :D Sometimes HN is weird. https://news.ycombinator.com/item?id=11193028

Try again at HN's most crowded time; it may get better exposure.

I've been a .net dev for years now and it's such a joy to see stuff like this come out of MS. I'm a heavy VS user and haven't had the need to switch to VSC yet. I might check it out for angular/js apps.

VS Code seems a bit of overkill to use as an editor for golang editing. As someone already mentioned in the comments, my mind is also corrupted with Vim awesomeness, and vim with vim-go works, shockingly, every time.

I've found myself always coming back to VSCode for my (stupid simple, nothing crazy like Docker or anything) Go projects. They've really done a fine job with the editor and extensions. Bravo Microsoft!

If you write Go and haven't tried LiteIDE I'd recommend it strongly.

Only if you want to suffer from their old and battered GUI. It's not the nineties anymore; how hard can it be to use a decent GUI lib?

I happen to like the UI, on OS X at least.

Anyone used Atom with the Go plugin, can you compare?

If it has source-navigator capability (generating call graphs etc., like what Source Navigator does, or similar to ctags), that would be awesome!

What's the difference between this and lukehoban's Go extension?

This is the same thing since he works for Microsoft.

It looks like it is his extension; he's committing to this repo if you look at the log. Looks like it just moved to the Microsoft org (if it wasn't there already).

It seems as if this is his Go extension, based on the VS Marketplace link and commit history. Microsoft probably just took over ownership to make it "official"

Would love to have support like this for Python.

I use it daily, this one is very good.

JFC, that's how VS looks nowadays?

VSCode. Totally different product. It's more of a lightweight code editor and less of an IDE.

Interesting to see the language support for VSC: https://code.visualstudio.com/docs/languages/overview

I like VSCode, especially for JS, CSS, and HTML5 stuff. The problem I have with it is that it starts to lag when my codebase gets larger, e.g. above 600 lines. That's the reason why I switched to Atom. I think Atom is the best free text editor.

I have an HTML file with almost 34k lines of code, over 6 MB (yes, there is a specific purpose for such a large HTML file for particular processing), and it works flawlessly in VS Code. I remember searching for a text editor that would handle the file nicely and allow working with it. Opening that file in the first VS Code release was a pleasant surprise; the performance was good.

If I remember correctly I tried Atom before VSCode as I thought: well, that's a nice looking editor. But oh, it was just unusable for such a large file.

How is a 6 MB file, with only 34,000 lines, considered "large" these days?

Smartphones from a couple of years ago already had multi-core, multi-GHz processors, and 2 GB or more of RAM. A 6 MB file would fit into memory well over a hundred times, even assuming lots of overhead.

Any moderately reasonable text editor should be able to handle a file of that size with total ease, at least for the basic operations.

I agree that it is not, and I was surprised that I had to search for a text editor that would handle that file nicely, provide syntax highlighting, and allow me to do some regex search/replace.

You may want to take a look at the linters you have installed and in particular the "lint as you type" setting.

Is that 600 lines a typo?

No, it's not. 600 lines of Go code.

That is an extremely tiny amount of code for an editor to start choking on :(
