Interestingly enough, this feature is the primary reason Atom exists at all. We saw the first internal demo of "Atom" (I believe it was "Thunderhorse" at the time) 6-7 years ago, and the main idea was real-time collaboration on code. That sorta took a backseat for a while as GitHub started to recognize that a collaborative editor was pretty swell in its own right, but glad to see that it's finally come full circle.
Even as the author, I think web-based IDEs have pretty limited utility, but it is a great showcase for the power of our API. We (my co-founder and I) built this in about 10 days.
Edit: Atom's announcement was yesterday. Microsoft's was today. That further fuels my speculative hunch. :)
Which makes me curious: does GitHub know what Microsoft is going to announce at their conference, or did Microsoft know what GitHub was building?
It's not exactly groundbreaking. SubEthaEdit did it years ago.
Just had a déjà vu with Microsoft's NetPC, announced just after Sun Microsystems' Network Computer.
That said, I do like VS Code very much. Gave Atom a try, but it felt sluggish and its add-ons were very fragile.
In my experience nothing beats Emacs at editing text. Vim has better shortcuts, but Emacs is just smart about everything.
My problem with Emacs is that it’s showing its age, it’s hard to configure and you have to learn an old and obscure LISP dialect for it. On the other hand I’ve heard that VS Code plugins are a joy to develop, MS apparently did a good job at that.
But Emacs? Yes please, I want that — plus to be honest, in 20 years from now Emacs will still be around, whereas I have my doubts about these fancy new editors.
As someone who is only glancingly familiar with Emacs (I have written exactly one elisp function, and that with help), this may be a really stupid question, but couldn't Emacs have bindings for Lua or Python or something? That would increase the number of people who can program for it and customize it.
> in 20 years from now Emacs will still be around
I think the real risk for emacs is, over the years, slowly losing the pool of people who care enough to contribute to it -- not just core developers, but also people who write packages, themes, etc. I already see a lot of developers who think Atom / VSCode / Sublime Text is "good enough". You may choose to discount Sublime because it's closed source (I do despite loving it otherwise), but VSCode and Atom are open-source and browser technology is only going to get better.
It can, and here is an example:
And here is a better link I guess: http://diobla.info/blog-archive/modules-tut.html
That layer of C is hardly "tiny." It covers everything from font drivers to process management to a Lisp interpreter to window and buffer code, to overlays, and a lot more.
But yeah, not tiny, sure. I'm looking forward to it being replaced via the Remacs project.
This is only true if we're strictly talking about editing text without any plugin functionality. As soon as you add code completion features, Vim shows its age; the ALE extension for async linting, for example, feels very sluggish on a few only slightly dated laptops I tried, and frequently grills the CPU.
However, Neovim's plugin architecture is a big improvement. I'm running Deoplete (which provides IntelliSense-like functionality) on the same machine, and it is basically instant. There are GIFs at the bottom of the repo:
After adding 10+ plugins to Atom, it not only became slower but started crashing and throwing internal errors.
With VS Code I have 17 plugins installed and it still feels light enough. Personally, I disable most plugins until I need them, and I think most people should do the same, considering how easy it is to disable a plugin.
I have all my language specific plugins disabled until I need to use them.
Does/would a high-end (e.g., Xeon-class) processor and/or a boatload of RAM help keep Code's performance snappy?
FWIW, I run my Code install quite light since my "duties and responsibilities", ahem, only require a few languages/data types, ergo, I am not pushing it hard at all.
FTR, I have a (licensed) install of Sublime, which I confess is snappier than Code, but the difference is so small that I use whichever one is better for the task at hand, and any difference disappears under the pressure of an outage-enforced deadline :-D
There is also a service for hosting fossil repos called Chisel, but it's nowhere near as useful as Github.
Plain text CRDT and OT systems are often mutually adaptable. Even if they aren't directly compatible, with a bit of work it'll probably be possible to adapt from one realtime editing protocol to the other.
Of course, in the medium to long term having a standard for OT/CRDT operations that works in a lot of source code editors would be fantastic.
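To make the "mutually adaptable" point concrete, here's a toy sketch (all names and structures are hypothetical, not any editor's actual protocol). A plain-text CRDT addresses characters by stable IDs, while OT addresses them by numeric index; a bridge can translate one into the other by walking the visible characters:

```python
# Toy model: the document is a list of (char_id, char, visible) triples.
# CRDT ops address characters by stable IDs; OT ops address them by index.
# Hypothetical names -- an illustration of adaptability, not a real protocol.

def crdt_insert_to_ot(doc, after_id, new_char):
    """Translate "insert new_char after the character with after_id"
    (CRDT-style) into an OT-style (op, index, char) operation."""
    index = 0
    for char_id, _, visible in doc:
        if visible:
            index += 1  # deleted (tombstoned) chars don't count toward the index
        if char_id == after_id:
            return ("insert", index, new_char)
    return ("insert", 0, new_char)  # after_id not found: insert at start

doc = [("a1", "H", True), ("a2", "i", True), ("a3", "!", False)]  # "!" deleted
print(crdt_insert_to_ot(doc, "a2", "!"))  # ("insert", 2, "!")
```

The reverse direction (index to stable ID) is the same walk in the other direction, which is why a bridge between two such protocols is plausible even when they aren't directly compatible.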
Both Microsoft and GitHub have commercial products to sell at a later stage. VS Code now has integrations with Azure; I don't know about GitHub integrations in Atom, but the potential is there.
Once a user is using your free tool, you are one step closer to offering something additional that is so convenient it's hard to say no, even if it's paid.
Microsoft has been a platform vendor for a long time. VSCode is part of their argument for why they should continue to be one.
Also be interesting if someone makes a teletype plugin for VSCode :)
I remember giving up on the program because if there was any instability in the network at all it desynched in surprising ways.
Edit: it's still working great even with 15+ people - very slick!
Also, here's the madness in repo form - https://github.com/mrspeaker/teletest
[Re-shared it at 28e6c3b4-754c-44ef-9406-869604db9db5 - it would be good if you could keep a UUID somehow, but I guess that's unfeasible]
Or do you mean targeting web games, much like was discussed in this HN post: https://news.ycombinator.com/item?id=13264952
Floobits has plugins for all major editors, including Vim and IntelliJ, as well as Google Hangouts.
Disclosure: I've used Floobits a few times and think it's awesome. 10/10
Please enlighten us as to how you got it to work at all. I use Neovim.
For me, the editor-agnosticism is the most important feature I would want my live coding experience to have. My team uses a mixture of Vim, Sublime, Emacs, VS Code, and Atom, and we have configurations we are comfortable with. It's too bad that this seems to be happening well within the confines of each editor's ecosystem, and not by some common protocol that all editors could share.
I tried implementing a very similar algorithm (one could say it's the same approach) at the beginning of this year, but had one remaining issue with concurrent overlapping deletions that I couldn't figure out (and the paper I was basing the algorithm on didn't account for it: http://www.sciencedirect.com/science/article/pii/S1474034616...): https://github.com/jure/rgass
Weihai Yu's implementation does account for it, however: https://dl.acm.org/citation.cfm?doid=2660398.2660401 , but his implementation is in Lisp, and I've never had the stamina to work through it for that one edge case.
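For anyone wondering why concurrent overlapping deletions are the hard case: if characters are physically removed, two users deleting overlapping ranges at the same time can leave the replicas disagreeing about what's left. A toy sketch (this is not the RGASS algorithm, just an illustration of the general tombstone idea) of why marking characters deleted instead of removing them makes overlapping deletes commute:

```python
# Toy illustration (not RGASS itself): tombstoning characters instead of
# removing them makes concurrent, overlapping deletions commute --
# applying them in either order converges to the same document.

def delete_range(doc, start, end):
    """Tombstone every character whose ID falls in [start, end]."""
    return [(cid, ch, visible and not (start <= cid <= end))
            for cid, ch, visible in doc]

def render(doc):
    return "".join(ch for _, ch, visible in doc if visible)

doc = [(i, ch, True) for i, ch in enumerate("abcdef")]
a_then_b = delete_range(delete_range(doc, 1, 3), 2, 4)  # user A's delete, then B's
b_then_a = delete_range(delete_range(doc, 2, 4), 1, 3)  # user B's delete, then A's
print(render(a_then_b), render(b_then_a))  # af af
```

Deleting an already-tombstoned character is a no-op, so the overlap between the two ranges causes no divergence; the real papers then deal with garbage-collecting tombstones and splitting runs of characters.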
Kudos to the team at GitHub, I'll be studying this implementation closely.
Some versions of AmigaDOS also had truly relative timestamps, so you might see a file last accessed "Christmas, 1991."
More seriously, though, in the mid-90s I worked at a place where most workstations were Sun SPARCs. One way of "coding together" was for one person to run xhost+ so that a second frame of an Emacs running on another person's machine could be opened on the first person's display. It was used only very rarely, though.
What happens when two different people editing the same document have different settings for the number of spaces per tab character? Whose takes precedence? (Or would there be a possibility of inconsistent spacing depending on who is adding a tab?)
If it is space characters replacing a tab keypress, then the spaces in the document should reflect the conversion ratio of the user that typed the tab.
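A minimal sketch of that "convert at the source" idea: each collaborator expands their own Tab keypress using their local tab-width setting before the keystroke is broadcast, so the shared buffer only ever contains spaces. (The function name and flow here are hypothetical; this is not necessarily how Teletype or Live Share handle it.)

```python
# Each user's Tab keypress is expanded locally, with their own tab width,
# before the resulting insertion is broadcast to collaborators.

def expand_tab(line, column, tab_width):
    """Insert spaces at `column`, padding out to the next tab stop."""
    pad = tab_width - (column % tab_width)
    return line[:column] + " " * pad + line[column:]

# Alice (tab_width=4) and Bob (tab_width=2) each press Tab at column 0:
print(repr(expand_tab("x", 0, 4)))  # '    x' -- 4 spaces
print(repr(expand_tab("x", 0, 2)))  # '  x'   -- 2 spaces
```

The document everyone sees is identical; only the number of spaces each person's Tab produces differs, exactly as the comment above describes.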
Doing this for text is trivial by comparison.
Linden Lab is developing something new, called "Sansar", for virtual reality users. It's closer to a video game than a simulated world; it's a platform for "experiences", which are essentially third-party games. Whether that works out depends on whether VR gets any traction. Second Life also might get a boost when the new Amazon "Snow Crash" series airs. Second Life is the closest thing to the "metaverse" of Snow Crash.
There's a whole world in there that is totally independent of Google and Facebook.
When I'm coding with someone, we're not usually on the same files or piece of code anyway. And I don't get any value from seeing them type away in the code file they're focused on, or knowing which file they're in beyond the name. It's a simple "hey mate, what line are you on?" question.
I get excellent value from talking to and seeing the other devs face to face figuring out where their head's at and what 'page' they're on. That's the stuff of a proper realtime and face to face code sesh IMHO.
For me, if we're not using git I could get the other dev's latest changes by simply doing a file copy of whatever I needed and then get it into my own environment. I work offline with most of my dev mates. We are rarely ever coding at the same time of day given time zone differences and sleeping patterns.
If we're a small team we're probably quite aware of where the other one is in the code and what they're working on. I don't need real time for that.
I think being able to see the other dev type in real time in the same file doesn't solve anything for me and may in fact be distracting and counter productive. I'm not sure 'real time' fits here.
This is to aid pair programming. I work on a remote team as well, but there are times we pair when hashing out a particular issue that either I or one of my teammates is having. It's far quicker to do this than to work disconnected and go back and forth, esp. if it may impact the schedule/sprint. In the past I've worked with devs that had as much as a 10-hour time difference, and we literally scheduled times where our schedules crossed so we could pair on a specific issue. One person drives at a time, but it's common to switch back and forth, so tools like this are helpful.
In VSCode's feature you can even debug together and the remote user can inspect your machine. You aren't sharing a screen, you're just sharing program state and file changes.
I feel like it's a rehash of the emacs vs. vi debate from 30 years ago. Emacs needed many more megabytes than vi, so people wouldn't use it.
As time goes on the resource usage becomes less of a problem. My 4 year old laptop has 16GB of RAM and I don't really worry about it. I'll get 32GB or 64GB in my next computer. I never liked quibbling over memory. My time is much more important to me and I want the best tools. I also want them to swim in RAM.
I wonder why those people even care.
The only time I look at my resource usage is when apps start to behave funny. Or when it's an app that I am developing. Other than that, why should one care?
One argument could be made that they are using memory wastefully. Not sure that's the case. The baseline memory consumption is higher, so what? It's a tradeoff. It's easier to build a better editor using browser-based tools, as much as I like Elisp. Over time Atom and VSCode will close the gap.
Now, if there is a memory leak, or memory increases non-linearly with the workload, then it could be a problem. Vi and Emacs are pretty great with large files (Emacs not so great with long lines); browser-based editors usually do not work as well. But there is no reason they shouldn't, it just takes engineering effort.
Well, because if an application you're using is constantly swapping, it'll slow that application down to a crawl. This was especially true back in the day when disk was slow (and expensive.. as was RAM).
People today are used to being awash in resources. RAM is fast, plentiful, and cheap. Disks are relatively fast and cheap.
You have to imagine what it was like to live in a resource-constrained environment where you actually had to care about how much memory and disk you used, and how you were using it. These decisions had severe, immediately apparent practical consequences.
What I meant to say was that "eight megabytes" sounds silly today. Who cares if an app uses 8MB today? The extrapolation is that it will be the same for Atom or other editors (I'd argue that it already is).
I created my first programs with a computer which had exactly 28815 bytes free when it booted up (out of a possible 64k). If you plugged in a floppy drive, the free memory dropped further.
So, I do understand resource constraints.
Yet vim is still more popular today by a wide margin. :)
It's not just about memory usage; it's about lag. Atom felt laggy to me. I also value my time, and I can't stand waiting for my editor.
But try VSCode. I do not notice any lag on the Mac.
I still prefer Emacs due to the maturity of its packages. But I have used VSCode for a month and I have no complaints about the performance.
That's what I think, and the reason I use SublimeText.
Even now, the highest-end laptops have the same 16GB of RAM. Sad state of affairs :(
And most mid- to high-end laptops will happily accept 32GB, again going back a few years.
(Broadwell removed the density limitation that made 16GB DIMMs a no-go; anything that takes DDR4 should support 16GB SODIMMs, excepting the very low end Atom, Celeron etc.)
The thing is, this was in fact a valid critique of Emacs back in the day, and it cost Emacs users. I know I stopped using it back then partially because of its resource use (and because it was a lot slower than vi and because of its finger-twisting keyboard shortcuts).
If Emacs was as light on resources as vi was back then, it would have more users today.
I've been using Atom since 2015 and it's only getting better on each new version.
I tried going back to Sublime, which has objectively much better performance, but that doesn't make the whole experience better. It's like sitting in a Formula 1 car with no cushion and no AC.
In terms of our actual data structures and algorithms, we're already starting to be in really good shape. We've dropped a number of components of our core TextBuffer to C++, ensured that most of our algorithms scale logarithmically with file size, cursor count, etc, and made use of native threading for important operations.
1. The one remaining structure that we need to drop to C++ is what we call the 'display index' - the thing that stores the locations of things like folds and soft wraps. Once we do that, opening large files (which is already reasonably fast) will be like butter.
2. Our find-and-replace is already pretty good - you can type with almost no lag even when we're storing the locations of millions of search results. But now that we have the ability to easily use background threads, there are some easy optimizations we could do there. The search results could update truly instantaneously; we'd no longer need to wait until you pause typing in the search box.
3. We have in the works a major change to our syntax highlighting system using my incremental parsing library Tree-sitter. Once this lands, it should eliminate any perceived latency in syntax highlighting (as well as enable a host of great syntax-related improvements).
4. Atom still uses synchronous IO (which blocks the UI thread) in some places. This is because it was created before GitHub created Electron, so node APIs were not available from the outset. Many of these have been eliminated, but there are several Git-related code paths that we still have not updated. This probably kills the experience of editing on remote drives like SSHFS for some users. We need to revisit these code paths.
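The "scale logarithmically with file size" idea in point 1 can be sketched with a toy example (Atom's real display index is a far more elaborate native structure; this just shows the shape of the trick). Keep a sorted array of line-start offsets and binary-search it, so mapping a byte offset to a (row, column) position costs O(log n) in the number of lines instead of a linear scan:

```python
# Toy sketch of logarithmic position lookup: binary-search a sorted array
# of line-start offsets instead of scanning the buffer from the top.
import bisect

text = "one\ntwo\nthree\n" * 1000
line_starts = [0] + [i + 1 for i, ch in enumerate(text) if ch == "\n"]

def position_for_offset(offset):
    """Map a byte offset into (row, column), O(log lines) per query."""
    row = bisect.bisect_right(line_starts, offset) - 1
    return row, offset - line_starts[row]

print(position_for_offset(5))  # (1, 1): offset 5 is the "w" in "two"
```

The same principle (a searchable index over positions, updated incrementally as edits arrive) is what keeps cursor math, folds, and soft wraps cheap on large buffers.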
Great work on the Atom editor. I successfully convinced my friends to move from VSC to Atom on macOS.
It'll probably take a while before there's no more CoffeeScript in the entire Atom org. We're gradually converting the code to JS as we come upon it for other reasons.
> Typescript is great at catching bugs
Startup time on Windows was rough, meaning I had to remove it as the default editor almost immediately for most filetypes. Worse, opening large files would grind Atom to a halt and usually lock the editor up.
Beyond the performance, I thought it was a great text editor. The problem is I just have no use for a slow-to-start editor when I can just download VS Code and get the same features with considerably better performance.
The only pro here is that there are no conflicts, so merging is easier. However, live conflicts are still bound to happen if two people want to work on the same section of the code, right?
And the cost of this is the total inability to debug...
I don't get it...
I don't like it. I think small services, incremental changes, strong tests and solid code reviews all work much better. Especially the tests and code reviews. At my last shop the company had an internal Gitlab and we used the Gitlab CI. Reviews + CI really helped keep the coding standards pretty high and helped younger devs not make beginner Scala mistakes coming from a Java background.
I like to just sit there and read through the code for a long time to figure out how it works, maybe throw in a print statement here or there, and reason out loud about it (or mumble about it to myself). Most people like to step through interactively and debug together, and in a group setting this can become a real conflict in my experience.
Edit: can in the future, not now. So we'll see
As an example, if this was a commit message, the trailer adding my attribution would be like this:
`Co-authored-by: FirstName LastName <email@example.com>`
Although if you're doing remote editing, you probably want to use TRAMP.
(Why? How many of you have edited some config/sql query/... in past week?)
I must have missed the announcements for vim and Notepad++.
On-topic, I remember recently Uber and Lyft were working on a similar feature, and both knew about the other but neither knew that the other knew. I wish I could remember what the feature was.
I'd like to explore whether Teletype can be used across editors in a similar manner.
Really cool stuff!