The Monaco Code Editor (microsoft.github.io)
724 points by algorithmsRcool on June 20, 2016 | 170 comments



Does anyone know on a technical level why the Monaco editor feels so much faster than the Atom editor? Is there any mechanism Microsoft is employing that Atom could adopt, or are the two editors that fundamentally different?


Hi,

I've been working on the editor for almost 5 years now. Phew, time flies.

There is no silver bullet, we mostly try to keep all computations limited to the viewport size (if you have 20 lines visible, then typing, colorizing, painting a frame, etc. should all end up being computed with loops covering those 20 lines and not the entire buffer size).
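As an illustration, here is a minimal sketch of viewport-bounded computation (the function name and parameters are mine, not Monaco's): the work per frame depends only on how many lines fit on screen.

```javascript
// Hypothetical sketch: derive the visible line range from scroll state so
// that per-frame work is O(visible lines), not O(buffer size).
function visibleLineRange(scrollTop, viewportHeight, lineHeight, lineCount) {
  const first = Math.max(0, Math.floor(scrollTop / lineHeight));
  const last = Math.min(
    lineCount - 1,
    Math.ceil((scrollTop + viewportHeight) / lineHeight) - 1
  );
  return { first, last }; // only lines in [first, last] get touched this frame
}
```

Typing, colorizing and painting would then loop from `first` to `last` only.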

We also make extensive use of profilers, and most recently (last month) I learned about this great tool called IR Hydra[1]. The gains from eliminating bailouts in the hot code paths are probably too small to notice (5-10% per render), but I like to think that everything adds up.

We use translate3d for scrolling (except in Firefox which has a known bug[2]) and that brings down the browser painting times considerably when scrolling.

I've also found insertAdjacentHTML to be the fastest way to create dom nodes (from big fat strings) across all browsers.
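A hedged sketch of that idea (class name and helper are made up, not Monaco's actual markup): concatenate one big string for the whole batch, then hand it to the parser in a single call rather than creating nodes one by one.

```javascript
// Build one big HTML string for a batch of lines (hypothetical markup).
function linesToHTML(lines) {
  let html = '';
  for (const line of lines) {
    // Escape the text so it is safe to inject as HTML.
    const escaped = line
      .replace(/&/g, '&amp;')
      .replace(/</g, '&lt;')
      .replace(/>/g, '&gt;');
    html += '<div class="view-line"><span>' + escaped + '</span></div>';
  }
  return html;
}

// In the browser, a single call then parses the whole batch at once:
// container.insertAdjacentHTML('beforeend', linesToHTML(visibleLines));
```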

Sort of silly to mention, but we use binary search a lot :).
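One common editor use for it is mapping a buffer offset to a line number over precomputed line-start offsets; a sketch (my code, not Monaco's):

```javascript
// lineStarts[i] is the buffer offset where line i begins (lineStarts[0] === 0).
// Returns the line containing `offset` in O(log n).
function offsetToLine(lineStarts, offset) {
  let lo = 0, hi = lineStarts.length - 1;
  while (lo < hi) {
    const mid = (lo + hi + 1) >> 1;
    if (lineStarts[mid] <= offset) lo = mid;
    else hi = mid - 1;
  }
  return lo; // index of the last line start <= offset
}
```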

[1] http://mrale.ph/irhydra/2/

[2] https://bugzilla.mozilla.org/show_bug.cgi?id=1083132


Forgot to mention a funny fact I found: minify everything [1]

[1] https://top.fse.guru/nodejs-a-quick-optimization-advice-7353...


> v8 optimizer (crankshaft) inlines the functions whose body length, including the comments, is less than 600 characters.

In what kind of world is that a sensible metric to decide if a function can be inlined?


In the world where you have to make these decisions quickly, and don't have time to parse the function.


I don't buy it: that would only make sense if you could copy/paste the source or something, and even JS respects function scoping so you can't. You'd have to wait until later anyway, so why not count AST nodes or something instead?


I can't make useful guesses about the V8 developers' reasoning, but I'd assume they considered several options and chose this one for a reason, considering how much performance is at stake. I don't know about Crankshaft's inner workings, though, so I can't make qualified comments about it. Maybe something like http://jayconrod.com/posts/54/a-tour-of-v8-crankshaft-the-op... can answer some of your questions?


Thanks. Quick CTRL+F for "inlining" produces:

> Graph generation: Crankshaft builds the Hydrogen control flow graph using the AST, scope info, and type feedback data extracted from the full-compiled code. Inlining also happens at this stage. Hydrogen is Crankshaft's high-level architecture-independent intermediate representation.

> Inlining: Performed during the graph generation phase. The inlining heuristic is very simple: basically, if the function being called is known, and it is safe to inline, the function will be inlined. Large functions (>600 source characters including whitespace or 196 AST nodes) will not be inlined. Cumulatively, a total of 196 AST nodes can be inlined in a single function.

So they do use AST nodes as a heuristic; I don't really understand why they would also use "source characters including whitespace" though.

Guess I better keep my comments outside the function body from now on when possible.


In the "I need to do this quickly to finish my sprint" world.


Wasn't V8 written in a cottage in Denmark? http://www.ft.com/cms/s/0/03775904-177c-11de-8c9d-0000779fd2... I cannot imagine that they would take unreasonable shortcuts.


Well, unreasonable now was probably perfectly reasonable at the time of implementation. Perhaps they even have or had a plan to improve upon it, but given other tasks (ES6, etc.) it hasn't been given much priority.

I mean, why would they fix it with high priority if the engine already has reasonable performance and the benchmarks are happy as well?


Even then, why include comments? Is it to support stuff like multi-line strings as comments hack[1]?

[1]: https://github.com/sindresorhus/multiline


Some more questions on performance:

It looks like you're adding and removing line divs, did you run benchmarks on trying to reuse the line divs and change the contents of them?

By using translate3D you're not using the browser's native scrolling correct? Are you using an open source library to replace the scrollbar? What was the actual performance benefit of using translate3D vs native scrolling?


I love these questions!

I've actually never tried to compute a diff and apply it inside a line; maybe I'll try it tomorrow :). A line is basically a list of spans, each having a certain class name and a certain text content. I've just always assumed that iterating over the previous spans, adjusting their class names and text content, appending new spans or removing extra leftovers would be slower than a big fat innerHTML call (given that each dom read/write access leaves the JS VM, and I always thought there's a certain penalty associated with each dom call). But I will definitely try it out!

Here is what we do now - https://github.com/Microsoft/vscode/blob/master/src/vs/edito...

[It might not be the best method, but it was guided by measuring]:

* if there is no overlap between frames (e.g. you jump to a completely different location), the whole thing does a single innerHTML call

* otherwise:

* all the old lines that leave the viewport are removed via multiple domNode.removeChild

* all the new lines that enter the viewport are added via a single domNode.insertAdjacentHTML

* all the old lines that have changed are parsed in one single off-dom innerHTML and then cherry picked via multiple domNode.replaceChild

That is what I could come up with in my attempts to minimize the count of dom calls and not pay for reparsing/repainting the entire viewport on each frame. Maybe there are better ways?
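The plan above can be sketched as a pure decision function (hypothetical names; the real code is linked above and is more involved). Changed lines inside the overlap would additionally go through the off-dom innerHTML + replaceChild path:

```javascript
// prev/next are visible line ranges [start, end). Decide which DOM ops to issue.
function planFrame(prev, next) {
  const overlaps = prev.start < next.end && next.start < prev.end;
  if (!overlaps) {
    // Jump to a completely different location: one innerHTML call.
    return { fullRender: true, removed: [], inserted: [] };
  }
  const removed = [], inserted = [];
  for (let i = prev.start; i < prev.end; i++) {
    if (i < next.start || i >= next.end) removed.push(i); // leaves viewport: removeChild
  }
  for (let i = next.start; i < next.end; i++) {
    if (i < prev.start || i >= prev.end) inserted.push(i); // enters viewport: insertAdjacentHTML
  }
  return { fullRender: false, removed, inserted };
}
```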

If I remember correctly, we ended up not using native browser scrolling for multiple reasons:

* [if we would not set scrollTop ourselves] the browser would just happily jump to a certain scrollTop, painting nothing (white), then we'd get an `onscroll` and we'd be able to paint the lines. But you'd always get this white flash.

* if we would set scrollTop ourselves:

* AFAIK setting the scrollTop causes a stop-the-world sort of synchronous layout - I don't know why

* We wanted to have an overview ruler that sits inside the scrollbar and highlights things (find matches, diffs, etc.)

* IE has (or had) a limit around 2.5M px. That meant we would have had to do something special anyway around 80,000 lines @ 19px line height

The scrollbars are custom implemented (https://github.com/Microsoft/vscode/tree/master/src/vs/base/...). Quick tip: do not implement custom scrollbars.

PS:

Some anecdotal evidence I got that making fewer calls with larger chunks of data might be better was when I was investigating why creating an editor buffer was slow for very large files (100k+ lines). One of the first things the model (buffer) code did was to split the text into an array of lines.

I implemented this as any sane person would, with a nice for loop, iterating over the string, grabbing the character code at each offset and checking if it was \r or \n or a \r followed by a \n. I would then remember the last cut off index and do a simple substring to extract each line, building an array of lines. I thought that must be the best way one could possibly do this (I don't know a better way than a for loop even in C++).

If I remember correctly, that was taking 50ms in some browser for a pretty large string. I replaced that simple for loop with a lame split with a regex! - /\r\n|\r|\n/ - and the time dropped to 3ms. I can only think that looping in C++ must be a lot better than looping in JS [here's the code today - https://github.com/Microsoft/vscode/blob/master/src/vs/edito...]
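For reference, the two approaches side by side (a sketch; the linked file is the authoritative version):

```javascript
// The hand-written scan described above: grab each char code, handle \r, \n, \r\n.
function splitLinesLoop(text) {
  const lines = [];
  let start = 0;
  for (let i = 0; i < text.length; i++) {
    const ch = text.charCodeAt(i);
    if (ch === 13 /* \r */) {
      lines.push(text.substring(start, i));
      if (text.charCodeAt(i + 1) === 10 /* \n */) i++; // swallow the \n of \r\n
      start = i + 1;
    } else if (ch === 10 /* \n */) {
      lines.push(text.substring(start, i));
      start = i + 1;
    }
  }
  lines.push(text.substring(start));
  return lines;
}

// The "lame" regex split that turned out dramatically faster in the
// author's measurements, since the scan runs in the engine's native code.
function splitLines(text) {
  return text.split(/\r\n|\r|\n/);
}
```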


Thanks for the detailed answers!

I'm maintaining a web spreadsheet that supports tens of thousands of rows and have spent a lot of time on optimization. We have to handle some richer content (e.g. contact pictures), so simple spans aren't always sufficient.

The way we're doing the rendering, each cell is its own div, and we reuse divs as they scroll out of the viewport (changing their top position). Previously I was using innerHTML on each div, but I found that constructing the dom nodes manually (document.createTextNode, etc.) and then doing a dom.appendChild turned out to be slightly faster (full "wipe" = 16% lower render time). I then cached those prebuilt DOM nodes, after which a full wipe ended up being 3x faster.

So there was a small speedup on initial scroll and then when you're scrolling around and seeing rows/cells that you've seen before there's a large speedup. Not sure if that's helpful, but maybe worth investigating.
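The recycling scheme can be sketched as a pure layout function (names entirely hypothetical): a fixed pool of row divs is retargeted as content scrolls, so nodes are never created or destroyed during scrolling.

```javascript
// Assign each visible line a pooled div slot and a top offset.
function layoutRows(firstVisibleLine, visibleCount, lineHeight) {
  const poolSize = visibleCount + 1; // one spare row for partial overlap at the edges
  const rows = [];
  for (let line = firstVisibleLine; line < firstVisibleLine + poolSize; line++) {
    rows.push({
      slot: ((line % poolSize) + poolSize) % poolSize, // which pooled div shows this line
      top: line * lineHeight,                          // where to position it
      line,
    });
  }
  return rows;
}
```

Because `line % poolSize` is stable, a div keeps its line for as long as that line stays visible; only rows entering the viewport need new content.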

And yes, I know what you mean about scroll events not getting called synchronously. There seems to be a difference in how some browsers handle scrolling vs painting. I actually filed a bug with Chrome (https://bugs.chromium.org/p/chromium/issues/detail?id=619796...) as they introduced an issue in Jan 2015 that I just discovered.

And yes, I really don't want to implement custom scrollbars so I'm hoping to get optimized enough to not need them... we'll see though.


You don't ever need to add or remove divs when you're not resizing the window. You just need one more than your window is able to display and move the divs up and down with a tiny offset. The other times you can just change their content.


Why? Can you elaborate?

In our framework, we have an option to yank complex components out of the DOM and replace them with empty containers. And when they scroll back into view, we put them back (and activate them if they are newly rendered).


Tangential remark: I remember the days when the drivers for the first mice with scroll wheels were simulating clicking on the scroll bar arrows.

And now we're painting our own scroll bars and simulate mouse wheel interactions (which then probably still invoke some legacy scroll bar based facility internally in the OS):

https://github.com/Microsoft/vscode/blob/master/src/vs/base/...


You definitely need to do some sort of talk. These are some great learnings. Is React-like vdom diffing used anywhere in Monaco / VSCode?


I know atom used to be based on react, but it turned out to be a bad idea: https://github.com/atom/atom/pull/5624


> Quick tip: do not implement custom scrollbars.

Wasn't this part of the motivation for Java? I seem to recall reading something about the heinousness of debugging a half dozen different platform-specific scrollbar implementations...

Thanks for all the info, this is awesome. :)


FWIW, if you don't have to deal with '\r' only line breaks (do those still exist?), successive calls to strchr or memchr can be much faster than a for loop over characters. The reason is that strchr and memchr are typically already optimized to check a machine word at a time or even using SIMD instructions. If you do have to deal with '\r' only line breaks, it might be worth open-coding the strchr/memchr optimizations, but I never tried.

If you're curious to see the difference, compare the running time of `wc` vs. `wc -l` on large / many files (you might have to force the locale to C for wc -l to hit the fast path).
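A possible JS analogue of the same idea (not benchmarked here, just the shape of it): let the engine's native `indexOf` do the scanning instead of a per-character loop.

```javascript
// Count '\n' occurrences by repeatedly delegating the scan to indexOf,
// which engines implement in optimized native code.
function countNewlines(text) {
  let count = 0, i = -1;
  while ((i = text.indexOf('\n', i + 1)) !== -1) count++;
  return count;
}
```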


Yeah, but the overview ruler is awesome :)


Thanks for this. :) Learning the type of tricks needed to get DOM performance are hard-won, I imagine, so hearing it straight from a developer is extremely valuable to me.


Awesome work, I love the speed. What it needs is a usable vim plugin; otherwise it would be too difficult to use regularly, or for any real work. BTW, I also appreciate that it's cross-platform and runs on Linux.


Contributor to the VSC Vim extension here. I've been working on it frantically over the last few weeks and I'd strongly recommend giving it another try. I believe it's made some big strides in usability.

And obviously creating issues on the github repo for things you're missing is a big help for me. I try to have a fast turnaround on these things.


Missing from (or aside) this vim plugin is ex mode, similar to this[0]. I read that this is out of scope with the project, but being able to build it as another plugin could be great. This is the approach taken by vim-mode/ex-mode on Atom.

[0]: https://github.com/lloeki/ex-mode


This is very true. I let my own usage patterns and github issues be a guide to what I do next. I don't use ex mode very much, outside from simple substitutions, and I haven't gotten any issues yet either.

I remember reading that the Atom Vim authors believed separating the two was a mistake; they were too intertwined. As much as I'd like to have it be a separate project and not think about it, apparently that doesn't work so well...


Thanks for your interesting reply, and wow, 5 years! (Also, why was it kept in the dark until now?)


From the FAQ. https://github.com/Microsoft/monaco-editor#faq:

"Why all these web workers and why should I care?"

"A: Language services create web workers to compute heavy stuff outside the UI thread. They cost hardly anything in terms of resource overhead and you shouldn't worry too much about them, as long as you get them to work (see above the cross-domain case)."

Maybe that's part of the reason?


I think that's pretty much standard practice. I'm not sure about Atom, but Ace leverages workers for linting, IntelliSense, and things like that.


Educated guess: generator functions with chunked processing on setInterval callbacks would be less CPU consuming for such tasks.
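A sketch of that suggestion (all names made up): a generator yields after each chunk of work, and a timer drives it so no single turn of the event loop blocks for long. Unlike a worker, though, this still runs everything on one core.

```javascript
// Process lines in chunks; yield between chunks so the UI thread can breathe.
function* processInChunks(lines, chunkSize) {
  for (let i = 0; i < lines.length; i += chunkSize) {
    const end = Math.min(i + chunkSize, lines.length);
    for (let j = i; j < end; j++) {
      lines[j] = lines[j].trim(); // stand-in for real per-line work
    }
    yield end; // progress marker; control returns to the caller here
  }
}

// Driver: resume the generator once per timer tick.
function runChunked(gen, onDone, delayMs = 0) {
  const step = () => (gen.next().done ? onDone() : setTimeout(step, delayMs));
  step();
}
```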


Pretty sure setInterval still runs on the same processor core, leaving the others idle.


Does Atom still feel slower? I've used Atom before and it did feel sluggish, and using VSCode for TypeScript really felt light-years ahead of Atom. (Maybe a bit unfair of a comparison since it's most likely nicely integrated for it, but whatever.)

To be fair I haven't caught up with their current versions (I'm a vimmer, I just tried them) but I think I tried Atom after the React move.


Atom has gotten incrementally faster and I have few problems using it as my main editor at work, on a Macbook Pro with 16GB of RAM.

It isn't pleasant to use on my personal laptop (MBA with 4GB RAM) where I frequently switch projects.

While I am a happy Atom user, I am disappointed with performance; I think there's an order-of-magnitude jump in overall speed that the editor could really use.


You are talking about 16 GB of RAM, right? My emacs is very happy with 32 MB of RAM, and my vim with even less.

I understand that Atom is doing a lot, but if you need 16 GB of RAM to run your text editor, it simply means the developers have not been using the right data structures to manage the state of the application. Or maybe the new code editors ship an AI that codes for you, so you can outsource yourself.


It's a new paradigm: bloatier than an IDE, as slow as a web page, but without the advantage of storing or running anything in the cloud.


I was going to say that my emacs uses an awful lot more memory than that, but actually it's got 190 files open in at least 3 languages and it's using 89.8 MB of RAM at the moment.

Atom is at a fundamental disadvantage - it's built on a much more complicated and abstracted stack of technologies. Javascript engines do spectacular things these days but they have a harder job to do than executing compiled lisp, and that's before you look at the whole of the rest of the stack involved.


I've been an avid Sublime user for many years and still am, and this is what puts me off any Electron-based editor or app in general. The requirements for the most trivial things are so high, it is insane. Try running Atom on an AMD E1 laptop. TLDR: not very nice.


And to think people used to say emacs stood for Eight Megabytes And Constantly Swapping.


And let's not forget:

Editor for Middle Aged Computer Scientists

Escape-Meta-Alt-Control-Shift

It's still my favourite, though.


Many people use full IDEs to do their work - JetBrains, VS, etc. - which are dramatically more resource intensive than Atom.


Yes but they are IDEs, while Atom/Vim/Emacs are plain text editors.


> plain text editors

I don't think that's a fair assessment given the what is now minimum expectations around syntax highlighting, syntax/grammar validation, autocomplete, etc. We're no longer dealing with "plain text".


So that's even less of an excuse for Atom's resource hogging. Vim and neovim use so little memory and are probably the most responsive editors I've ever used.


... and Emacs really is an IDE; the distinction between it and GUI IDEs has more to do with user interface than any limitation of Emacs.

e.g. live libclang-based code completion for Emacs: https://github.com/Sarcasm/irony-mode


Atom is essentially the same class of editor as emacs. Both have designs that can accommodate IDE features (so does vim, but it's more of an afterthought in that case), but normal usage is not IDE-like. Atom is essentially emacs built atop javascript+web rather than lisp+unix.


This is more holy war material than anything really. Emacs is just an editor.


>> Many people use full IDE's to do their work - JetBrains, VS, etc - which are dramatically more resource intensive than Atom.

My Visual Studio doesn't need 16 GB of ram... I have enough with 4 on Windows ;)


That's the price you pay for Electron.


The reason we're talking about Atom on this thread is because VSCode, which also uses electron, does not have performance issues.


Which makes me think, Atom is giving Electron a really bad reputation for its speed. Sure, it consumes tons of RAM, but it's not that slow by default.


Last time I seriously tried using it, it seemed unnecessarily CPU-hungry and my battery life suffered as a result. That fact alone was enough to get me to stop using it. I have a weird thing about inefficient software in general, but in this case its inefficiency was actually inconveniencing me. With my good old Vim+tmux workflow, my 12" Macbook claims it'll go for 14-15 hours, though I haven't ever fully tested that claim.


I used Atom for 3 months over New Year's on a 4 GB 2015 MBA.

I had no issues whatsoever.

Interestingly enough, Atom seems to really struggle on my Windows laptop, it's often incredibly choppy. It actually runs better in a Linux VM than natively on Windows.


The only consistent issue that I have had with Atom since I started using it in late 2014 was the handling of large files. It has gotten incrementally better, as others have stated, but trying to open up, for instance, a JSON file larger than 10-20 megabytes in the editor causes the editor to be brought to a grinding halt.


The editor is essentially a virtual list - a sliding buffer of DOM elements - where only the visible lines are represented in the DOM. That's probably the only reasonable way of doing such editors effectively in modern browsers. Drawbacks of this approach: it's hard to integrate the platform scrollbar, and it takes quite a lot of JS code to support the virtualization, etc.

Speaking about syntax/text highlighting in HTML ...

In Sciter I've added an option [1] to mark character runs without creating tons of heavyweight DOM elements (Monaco uses <span>s for that), plus an option to style those run marks in CSS:

  plaintext > text::mark(keyword) { color: blue; }
  plaintext > text::mark(symbol) { color: brown; }
Editor's DOM model in Sciter's case is a flat list of <text> elements representing each line.

  <plaintext>
    <text>first line</text>
    <text>second line</text>
    ...
  </plaintext>
In fact such marks are needed not only for syntax colorizing but for other things like misspelling highlighting, text found highlighting and other cases where you need to highlight text but DOM change is highly non-desirable.

[1] Tokenizer + ::mark() = syntax colorizer : http://sciter.com/tokenizer-mark-syntax-colorizer/


Not an expert here so I might be wrong, but I've been reading about Framework7(http://framework7.io/) lately, which uses both virtual list and native scrolling. So maybe you might not have to implement custom scrolling here?

It could also be that mobile CSS has its own quirks so different implementation can be used, again, not a DOM expert here.


Check this: http://sergimansilla.com/blog/virtual-scrolling - it is an example of a virtual list showing 1,000,000 records with the browser's native scrollbars. So yes, it is doable in principle, but with JS help of course. And only for items of the same height.


After trying it out, I feel like their implementation of scrolling might be an important factor. The editor scrolls significantly faster than my browser normally does.


Sigh. I select C and it gives me C++. That perfectly sums up Microsoft's attitude to C :(


We are definitely cheating there - we use the same colorizer for C and C++. I don't remember why we define two different languages that point to the same colorizer.

Please help us out and contribute a PR. Here[1] is where we register the C language and point to the cpp colorizer. And here[2] is the cpp colorizer. Please go ahead and try the C++ colorizer out at the Monarch playground[3] and edit it to make it colorize only C keywords, etc. Here[4] are the steps for adding a new colorizer.

Looking forward to a contribution on this :).

[1] https://github.com/Microsoft/monaco-languages/blob/master/sr...

[2] https://github.com/Microsoft/monaco-languages/blob/master/sr...

[3] https://microsoft.github.io/monaco-editor/monarch.html

[4] https://github.com/Microsoft/monaco-languages#dev-adding-a-n...


The highlighting isn't an issue, I think most editors cheat in the same way. As C++ is a super-set of C it shouldn't be a problem, at least not except for a few edge cases which I wouldn't worry about IMHO.

My comment was more that the C example code is actually C++.


Except C isn't a subset of C++. There are many valid C keywords that aren't valid in C++, like restrict and _Noreturn.


Well MSVC only just supports C99 so asking for C11 keywords is unlikely to happen.

I had a quick look at their C++ word list [0] and it looks like _Imaginary and _Complex and _Bool are the only missing C99 keywords. Feel free to make a PR if you want them added :)

[0] https://github.com/Microsoft/monaco-languages/blob/master/sr...


restrict is C99. Many C++ compilers accept __restrict__ or some other form of it to mean the same, but restrict doesn't exist in the C++ standard, so the GP is correct.


Good point, it gets tough remembering all the little differences between the two.



Haha... I also like how the first example is a batch script. :-D


Lexicographical sorting at its best :)


Oh, no! No support for Ada? Algol? Apl? ;)


It's pretty nice seeing the work to bring this out of VS Code. IIRC it started as a separate project for Visual Studio Online, but was then embedded/enhanced as part of VS Code.

Either way, it's definitely one of the better performing code editors in JS/Browser usage. For that matter VS Code works surprisingly well compared to Brackets and Atom.


Didn't vscode start life as a way to dogfood Monaco? It's my understanding that is why the vscode repo has been the official Monaco repo as well..


This looks pretty great. I've been frustrated trying to implement ACE in a project for Markdown support; it turned out to not work on iOS and Android at all, and it has a ton of bugs elsewhere, too. I ditched it for CodeMirror, which turned out to be nearly as bad on mobile.

A quick test shows that Monaco does work on iOS, although there's apparently no selection support within the editor. Surprisingly, double space produces "." as it should, but it seems iOS autocompletion doesn't work (not sure if it can be enabled).


Implementing touch selection on a widget that is not native is not trivial. In the editor, when you select something, what you see painted is not really the browser (native) selection. It is simply a bunch of divs painted in such a way that they look like a selection. To add touch selection we would pretty much need to implement it from scratch.

If you have the chance and the time, I would very much appreciate a PR to fix some of the input handling quirks you're seeing on iOS. We are not experts in everything, and I have come to appreciate the amazing world of OSS :). e.g. Recently, we've gotten an amazing PR [1] that fixes a lot of the input handling for CJK languages, which I would have had no chance to fix by myself.

[1] https://github.com/Microsoft/vscode/pull/5615


The JavaScript IntelliSense support looks really solid. Does it use TypeScript under the hood for type inference?

Also, any plans to add intellisense support for other languages?


Yep, the JS inference engine here is just the TypeScript inference system with a few extra rules bolted on. It's also the JS inference used when compiling JS files in TS under the --allowJs flag.


Can you point me to the part of the code that is responsible for that in Monaco? I'd like to use it in my editor locally.



I maintain an open source project for writing SQL queries[0] that currently uses codemirror. I find it a little sluggish to load. This looks like it could be a good option -- the one thing that would make it a codemirror killer for me is the ability to resize the editor window; something text areas obviously natively support, but codemirror does not. Any idea if Monaco does?

[0] https://github.com/groveco/django-sql-explorer



I'm looking for mouse-drag-resizing, like a text area. Not auto-sizing to fit contents. But thanks!


how does this compare to ace editor[0]?

[0]: https://ace.c9.io/


Monaco doesn't have alternate keybinding modes like vim or emacs that ace has.


In their diff example, line 33 on the left and line 35 on the right are shown as unchanged, yet the indentation isn't the same... It seems like they've hardcoded this example incorrectly, unless I'm missing something. Left line 32 removes a bracket at indent level 2, and left line 33 is an unchanged bracket at indent level 1, but there's now one less bracket at level 1 on the right side, even though no bracket at level 1 was removed.


We don't show leading/trailing whitespace diffs unless the diff consists only of leading/trailing whitespace changes.

This is sort of what we do when diffing:

* when we need to compare two buffers, we represent them as two arrays of lines

* we then proceed to trim() each line in both arrays

* we then use a greedy optimization where the first N and the last M lines that are equal (post trimming) in both arrays are dropped from further computation

* we then run a LCS algorithm over the remaining lines to find the diffs

It is important to note that the same two arrays of lines can have multiple equal longest common substrings. This method [1] could get some love and could try to recover in some of these cases.

[1] https://github.com/Microsoft/vscode/blob/5b9ec15ce526decc5dd...
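The trimming and prefix/suffix stripping steps can be sketched like this (hypothetical code, not the linked implementation); the LCS pass would then run only over the shrunken middle:

```javascript
function prepareDiff(originalLines, modifiedLines) {
  // Leading/trailing whitespace is ignored when matching lines.
  const a = originalLines.map(l => l.trim());
  const b = modifiedLines.map(l => l.trim());
  // Greedily drop the common prefix...
  let prefix = 0;
  while (prefix < a.length && prefix < b.length && a[prefix] === b[prefix]) prefix++;
  // ...and the common suffix of what remains.
  let suffix = 0;
  while (
    suffix < a.length - prefix &&
    suffix < b.length - prefix &&
    a[a.length - 1 - suffix] === b[b.length - 1 - suffix]
  ) suffix++;
  return {
    prefix,
    suffix,
    a: a.slice(prefix, a.length - suffix), // the LCS only sees these
    b: b.slice(prefix, b.length - suffix),
  };
}
```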


It's not hardcoded. I just tested it in an actual Code instance and their diff algorithm appears to mark the wrong braces as removed.


Very impressed with the diff-editing feature.

... turns out there's a similar package for Atom: https://atom.io/packages/split-diff


Very confused that side-by-side diff editing is really a new feature in this world. I mean, ediff landed its first versions sometime in the early 90's, I think, and vim certainly had it from very early versions too...


I use vim daily but haven't tried diffing internally yet. What's your preferred method of doing this in vim? This page [1] suggests a few.

1. http://unix.stackexchange.com/questions/1386/comparing-two-f...


I find it's more trouble than it's worth. For some reason it's just not as good at diffing as other tools I've used and has a hard time matching up both files. I find that other tools do a better job.


If you're using split-diff I also recommend MiniMap [0] and the Minimap-Split-Diff plugin [1]

[0] https://atom.io/packages/minimap

[1] https://atom.io/packages/minimap-split-diff


I think it's almost exactly the same as the diff viewer in Team Foundation Server 2015, though I really really don't recommend installing or using that when you have Gitlab at your disposal.


I hope VSCode has fixed the bug where left side cannot be edited when picking two files.


<off-topic> You know, my first thought was that it's weird for the scroll bar to disappear like that. My instinct is that I want to know what part of the document I'm on. But then I remember that I have (scroll-bar-mode -1) in Emacs, so I guess I don't really care that much. </off-topic>


The scroll bar style is an option, anyway: https://microsoft.github.io/monaco-editor/playground.html#cu...


Surprisingly responsive for a web based editor, really well done. I guess it's time to give VS Code a try!


Even Microsoft doesn't use IE. Their GitHub screenshot uses Google Chrome... in incognito.

EDIT: I'm actually getting downvoted. This statement is just made in jest, people; chill :)


As a Microsoft employee who works on conference presentations, hands-on labs, etc., you can't win on this.

If I use Edge exclusively in screenshots, people assume it's not cross-browser, it's not cross-platform, Microsoft is out of touch with other developers, Microsoft's living in a bubble, etc.

If I use other browsers for screenshots, I get comments like this.

As @dangrossman replied, we all use whatever browser we want for day to day browsing.

As a professional web dev, I make a habit of cycling through the modern browsers because it's just a good practice. I was the annoying guy in my dev team (not Microsoft, a financial services company) in the early 2000's that tested everything in Firefox when everyone else just wanted to use IE6 all the time. I'm typing this in Edge now, last week was a Chrome week.

[please don't downvote the parent comment]


A lifetime ago, I used to hawk printers for HP (one of those guys in big box stores who would approach you in the printer aisle to ask if you needed help).

It wasn't just me; there were also Canon and Epson reps there. Since we were all basically coworkers (we had the same jobs), we were very jovial toward each other, but customers would often remark, usually in jest, that we should be more hostile toward each other. Still, it made me realize how little they understood our day-to-day jobs, and how rarely people consider pragmatism in their observations.


People's mental model of the world doesn't include principal-agent problems[1]. People with actual equity in competing businesses tend to be pretty rivalrous (e.g. owners of small independent restaurants on the same street); naively, if you don't see employees as anything more than extensions of those rivalrous owners, you'd expect them to behave the same. But instead, of course, "it's just a job."

[1] https://en.wikipedia.org/wiki/Principal–agent_problem


I do customer-facing sessions too (focusing on Azure and OSS stacks) and quite enjoy the stunned silence when I pop open an SSH session and use it for a demo in their Linux distro of choice ;)

Most people don't realize how things have changed/are changing on a daily basis.


Yep. Last month I was in Moscow for a conference keynote and a workshop. Running through demos the morning before, I spilled coffee on my Lenovo and fried some keys on the keyboard. Thought a bit, pulled out my MacBook Air, and had everything running in time for a keynote rehearsal that evening. Nice stress test of the cross-platform mindset, not quite as nice of a stress test for me though.


I hope you mentioned that at the keynote, or will in upcoming talks. That's a great story.


How about using different browsers for different screenshots?


Not the parent poster, but if he were me: that can easily look inconsistent (and inconsistency is ugly), and it can also be more work (as you need to walk through whatever steps you planned many more times). And of course, if you do show multiple browsers, which ones? E.g. if you want to include both Safari and IE, you need at least two platforms (more work). Firefox and Chrome can be quite different across platforms when it comes to acceleration, so if that's a factor...

Perhaps one token screenshot to get the message across isn't a bad idea, especially if everything else is Chrome - webdevs sometimes seem to think Chrome is good enough for everyone, and if there are compat issues in modern sites, it tends to be due to Chrome-specific design (IME, but I bet if you've got a particularly Windows or Apple focus things may be different).


I do that sometimes, but that's kind of weird too. If you're reading a tutorial that keeps flipping between browsers, it calls attention to the browser rather than what you're learning about. We do sometimes flip between browsers for conference demos - I try not to call attention to it - just use a few different browsers and people who notice that kind of thing will see it.


What next? Different operating systems? :) I suspect the guy just wants to get some work done.

I'm ok with whatever browser works for him, and I will choose whatever browser works for me if IE/Chrome/Firefox does not perform optimally to my own standards.


Just to clarify, I'm not suggesting that everyone should do this. But if you work at Microsoft, and you think your choice of browser is unfairly judged whether you use Chrome or Edge, then I'd do the following: show the first screenshot in two browsers (Chrome and Edge), and then show the remaining screenshots in my browser of choice. Or, show the first screenshot in Edge (or Chrome), and then show the remaining screenshots in the other browser.


Microsoft isn't a person, it's 117,354 different people, who collectively probably use every variety of browser. An incognito window of a different browser than your primary one would be a simple way to get a screenshot free of your private tabs, messy addon buttons, and any pre-release features that shouldn't be made public accidentally.


This. We actually use non-Windows platforms, too. ;)


The screenshot doesn't represent a statement on my part. I use Chrome, Edge and Firefox all the time.

I especially enjoy debugging the editor in Edge's F12 Tools, because Edge's debugger is built with the monaco editor ;).


Glad you like it. Let us know if you have any feedback regarding the tools. I'm one of the devs on the team.


Gonna have to give edge a try just for this


Why would they use IE? They use Edge :)


[flagged]


It sounds like you are attributing this to some sort of surreptitious attempt to "seem relevant", where really it's much simpler. Unless you are working directly on IE/Edge, there is absolutely no "top-down" direction as to which browser you must use.

Truth is, you use what browser you like. On my team (MSFT employee) this is a mixture of Chrome, FF, IE, and Edge, based on personal preference.


I'm not sure if your team is large enough for the following question but: do you find that usage ratios are comparable to general (global) browser share stats?


The product states it's compatible with a bunch of browsers; of course the devs use a mix. That's not interesting.

But like the requisite non-white person in every group photo, Microsoft's choice of browser screenshots most certainly is a conscious decision. For the same reasons. It's not to fake relevancy; it's to signal they're culturally aware.


You can say it's a conscious decision on a corporate-branded product launch, like the Surface. Not on some obscure developer tool's page.


Just downloaded Code. Looks pretty fast. Really like the integrated terminal!


ACE (https://ace.c9.io/) has been the choice for many; how does Monaco compare?


Interesting choice of names - Monaco was the name[1] of their web-based Visual Studio editor that looked very similar to Visual Studio Code, except it came a couple of years earlier. It's pretty clear it's the same code, evolved.

[1] https://dzone.com/articles/first-look-visual-studio


If I had to pick any single example of C code to put the fear of god in young programmers and make sure that they never wanted to touch C in their life, the example on this page would be it. That is the single ugliest piece of C I have ever seen. C can be a beautiful language. This is not that. I take it this is Microsoft's fancy new Checked C?


Anyone got this on a CDN yet? I can't find a GitHub repo that contains a bundled JS file.


Is there some documentation of how to integrate some kind of intellisense for a custom language to it and not only syntax highlighting?

I might need to integrate an editor for a custom DSL into a webapp soon, and Monaco could of course be an interesting alternative to codemirror or ACE.


Here [1] is a working sample showing how to register a completion item provider for a language.

The monaco-typescript [2] plugin shows how a lot of the language API can be used.

[1] https://microsoft.github.io/monaco-editor/playground.html#ex...

[2] https://github.com/Microsoft/monaco-typescript/blob/master/s...
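For context-dependent completions along the lines the question asks about, the provider in [1] boils down to an object with a provideCompletionItems function. Here's a minimal sketch (the DSL name, keywords, and hard-coded item kind are all illustrative assumptions, not part of Monaco itself; see the playground sample for the authoritative shape):

```javascript
// Keywords of a hypothetical DSL -- purely illustrative.
const DSL_KEYWORDS = ['select', 'where', 'limit'];

// Shape expected by monaco.languages.registerCompletionItemProvider.
const completionProvider = {
  provideCompletionItems: function (model, position) {
    // Offer every keyword; a real provider would inspect the text
    // before `position` to filter context-dependent suggestions.
    return DSL_KEYWORDS.map(function (kw) {
      return {
        label: kw,
        // In a real page use monaco.languages.CompletionItemKind.Keyword;
        // hard-coded here so this sketch runs outside the browser.
        kind: 17,
        insertText: kw
      };
    });
  }
};

// In the browser, after the monaco AMD loader has run:
//   monaco.languages.register({ id: 'myDsl' });
//   monaco.languages.registerCompletionItemProvider('myDsl', completionProvider);

// Quick smoke test outside the browser:
const items = completionProvider.provideCompletionItems(null, null);
console.log(items.map(function (i) { return i.label; }).join(','));
```

The monaco-typescript source in [2] shows the same pattern wired up against a real language service instead of a static keyword list.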


Thanks, that should be a good start. [1] does not demonstrate context-dependent completions, but I guess it should be possible, and maybe [2] helps there. It's also nice that in the playground intellisense for the monaco API is enabled.


btw, if anyone has some time (i don't unfortunately, since the env setup is pretty lengthy [1]), it would be great to be able to disable the always-on semantic highlighting [2].

[1] https://github.com/Microsoft/vscode/wiki/How-to-Contribute

[2] https://github.com/Microsoft/vscode/issues/5351


The dropdown language picker is busted (at least in Chrome). It's got an off-by-one (or possibly 2) bug.

i.e. Pick 'javascript', it shows JS below... but says 'less'.



I just tried the editor in Opera and I have a strange issue where the cursor inverts and turns gigantic when I hover over some elements (line numbers for instance)


Having the same on Chrome on Mac. Would assume it's a funny line of CSS.


What is the key to get out of insert mode and into normal mode?


You can tell GitHub is dead when even Microsoft moves faster than they do.

Still waiting for code diffs on their site that are decent by 1970 standards.


Microsoft has had people working on IDEs and editors for a long time now. It's really to be expected that Microsoft is ahead of Github on this one.


I know, I know. Just mocking the consensus here that startups' raison d'être is that they can move much faster than big corps.


I don't think anyone really believes that startups can move faster than large corporations.

Startups can merely fully dedicate themselves to exploring new spaces that open up, because they're not burdened by already doing something else, especially something that's profitable.

So when something new appears like VR/AR a startup can jump right in at first opportunity without having to think too much about it. This has the potential to be an advantage and means that startups are "faster" (really just earlier) to certain things.

GitHub is a profitable business that has nothing to do with editors, so I'd argue that it's not a startup anyway, certainly not in any way that gives them an advantage in developing editors or IDEs.

Quite frankly, I really don't get why they even bother. It doesn't differentiate their actual business in any useful way and it doesn't make money. In the meantime they're facing actual competition from GitLab, a company that's adding features so powerful that people have built entire businesses around them.


> Quite frankly I really don't get why they even bother

I've asked this in the past and all that I could gather was, this is about mindshare.

With regards to GitLab: if they (GitHub) aren't talking about them in weekly meetings, they really should, because last week was the first time that I really thought GitHub may be in serious trouble. With GitLab's recent UI changes, it looks like they are finally finding their UI groove. It's still rough in some areas, but it looks like they have the resources to improve things.

I don't necessarily think GitLab will be the one to usurp GitHub, but what I think GitLab can and probably will do is seriously devalue GitHub's value proposition. That is, turn Git hosting into a commodity product.

LinkedIn says GitHub has 501-1000 employees, and if I've read things correctly, they have 2 people working on their search. This means they are dedicating at most 0.4% of their manpower to maintaining and advancing their search technology. Search is one of the things that enterprises will actually pay for, if it works well.

If history has shown us anything, Microsoft is not shy about creating loss leaders to get people to use complementary products in their portfolio. GitHub's entire business model right now revolves around hosting code. Microsoft and Atlassian have complementary products that can turn Git hosting into a loss leader, that is, a commodity product.

Other than "social programming", I really don't know what else GitHub is focused on. And unless I'm really out of touch with today's programmers, I would say 80% of programmers program because it pays well and not because they are passionate about programming. What the vast majority of programmers want is to be able to leave work on time, and this is where I think GitHub should really be throwing resources. And I'm not sure how a free editor is going to convince people that they need/want to pay for code hosting.


I thought about them doing it for mindshare. However, people can be very opinionated about editors, especially outside of the enterprise, and they're not competing with any IDEs. There's just fundamentally not a lot of mindshare to grab here.

They also already have a ton of it. If they want to grab more mindshare, they should significantly improve their education offering. The free plan is nice but doesn't make anyone (want to) use it. There are a few systems for submitting assignments via SVN; the ones I've used at university all sucked. Create a significantly better solution here and every single computer science student becomes aware of GitHub.

In any case, working on mindshare is an investment in the future: if you're ahead of your competition and can maintain your position relative to them, it makes sense to work on mindshare. This is not the case for GitHub, at least not anymore.


Coca-Cola is a good example of why it's not a bad idea to invest in mindshare, even when you are an incumbent in a particular space. If the Atom editor were blowing people's minds away, I would think it makes perfect sense to keep investing in Atom, but the pattern today is:

- A new version of Atom is released and the top-voted comment is "Why is it so slow compared to vscode?"

- A new version of vscode is released and the top-voted comment is "Why is Atom so slow compared to vscode?"

With Coca-Cola, you are reinforcing an intangible belief, plus your target audience is pretty diverse and suggestible (not the smartest). Mindshare also works for Apple, since they also have a very diverse and suggestible consumer base.

The Atom editor is a power tool, and its job is not to help you watch a movie, satisfy a craving, etc. Its purpose is to help you be more productive, and productivity can be measured; this is where Microsoft really threw a wrench into their plans.

Starting the Atom editor wasn't a bad move, but given Microsoft's current strategy with vscode, I would think it's time to cut your losses and focus on more immediate needs, like fending off GitLab and hardening your core competencies.

I don't know how many of their 500+ employees are working on Atom, but I would have to imagine that using those resources to build an enterprise-grade issue tracker and wiki/content-management system would be a wiser choice.


I know I'm bugging my CTO's staff to change to GitLab or pretty much anything else that's not GitHub Enterprise, because for me, GitHub is a diff viewer, absolutely nothing else (especially here, since I can't use the issues or wiki pages).


What is so bad about the Github diffs?


It often skips syntax highlighting if it times out on their servers, which is almost always. It will outright hide the file if it's over 500 lines or so, which happens to me a lot when changing config file prefixes. They show no syntax highlighting in 99% of cases, etc.


Microsoft's been moving pretty fast and doing some really big moves lately, I'm not sure the implication here is fair.


This makes me wonder how hard it would be to write a browser extension that uses Monaco for displaying diffs on Github.


Interesting choice for the name since that is Apple's monospace font.


Or you know, it's also a country.


It's easier to confuse a screenshot of an editor with a monospaced editor font sample than it is with a country.


I'm not sure how you'd confuse either..


"So you use Monaco, huh?"


And if you were legitimately asking this question, it would be immediately clear from context which one you meant, and my answer would either be "Yes" or "This is not Monaco, it's <something else>", with <something else> being either the name of a text editor or a font, depending on context.


Let me help you:

http://imgur.com/hqHjVw6


And a video game


And a Grand Prix.


and a salt biscuit brand


And the internal codename for a Microsoft-developed music-making program similar to Apple's GarageBand. Apparently never released.[1]

[1] http://www.foxnews.com/story/2006/04/11/microsoft-readies-mo...


and, in Italian, a priest


a monk, not a priest


yep, you're right


and a type of shandy.


Actually, although Monaco still ships with OS X, the default monospace font has been Menlo since 2009.

https://en.wikipedia.org/wiki/Monaco_(typeface)


I wonder if RStudio can switch to this. Probably will never happen :(


Worth noting that we also have rich integrated tools for R developers in the full Visual Studio: https://www.visualstudio.com/en-us/features/rtvs-vs.aspx

And Visual Studio Code includes the R support out of the box: https://code.visualstudio.com/docs/languages/overview


Web based vscode in 10 ... 9 ...


Actually, if you go to https://tryappservice.azure.com and create a temporary (free, login required, no registration, etc.) web app, you can edit it in real time with Monaco.

You can do that in Go, NodeJS, Python... Etc.


VSCode was actually based on a web-based version that was previously launched in Azure and called... Monaco https://dzone.com/articles/first-look-visual-studio


Isn't it mostly HTML/JS based already?


It uses Electron as a shell, so "yes"


How is it different from notepad++?


Among other things, Monaco is a web page component, while Notepad++ is a program you download.

I'm guessing you glanced at the page and thought the editors were screenshots.


No, I actually switched between some dialects (Java/JSON) and it looked similar to CodeMirror. Somehow I compared it to Notepad++, which I use for locally stored files. I guess someone already asked how it's different from CodeMirror.



