
Debug Adapter Protocol - mmastrac
https://microsoft.github.io/debug-adapter-protocol/
======
breatheoften
How does this relate to and interact with the Chrome DevTools Protocol? The
Chrome DevTools Protocol is really quite amazing to me, as it provides an
extremely thorough and powerful way to “get visibility into and external
control over” a “process”, or what I think could very well become the most
meaningful abstraction of a process for most application development in the
near future.

Obviously CDP only applies to the communication mechanisms relevant to
controlling a browser. I guess what I'm musing about, and want to suss out,
is: is there anything that will be lost (DX feature support, performance) if
one attempts to implement IDE features using the Debug Adapter Protocol to
adapt CDP? Is CDP one of the target integrations that the Debug Adapter
Protocol aims to support?

I think CDP is ridiculously important and likely to become more so ...

------
Normal_gaussian
I'm aware that MS has been implementing all kinds of layers of abstraction
when building features for VSCode, but I'm not aware (through my own
ignorance) of anything else using these abstracted layers, which makes it
hard to work out whether this is fluff or cool.

Has anyone used some of their abstractions elsewhere? How good are these
protocols they are pumping out?

~~~
est31
LSP at least is implemented by lots of languages as well as lots of editors.
There are long tables on the website:
[https://langserver.org/](https://langserver.org/)

Adoption-wise, I'd say it's a success. As for how good the protocol is, I'd
personally have preferred something binary instead of JSON.

~~~
Entalpi
Been using it in Emacs with both Rust (rls) and C/C++ (clangd). It is
glorious. I hope more people rally behind both DAP and LSP.

~~~
lazyguy
Recently switched away from Anaconda to Pyls for Python editing in Spacemacs.

Documentation can be found here:

[http://develop.spacemacs.org/layers/+lang/python/README.html](http://develop.spacemacs.org/layers/+lang/python/README.html)

Seems to work fine. Functionally it seems the same, maybe a bit faster.
Haven't used it a whole lot.

Really tempted to try out the Microsoft PLS just so that I can say that I am
using Microsoft in Emacs. Which is pretty close to being technically correct.

Brave new world.

Nowadays, if I were forced to give up my Linux desktop and choose between
Windows and OS X for work... Windows, 100% of the time. Because I can then use
WSL and get Linux nicely integrated into the base OS. Especially since MacBook
Pros are not that nice anymore.

------
lpcvoid
Good idea, but why JSON? And why HTTP content headers? I much prefer binary
protocols, but I'm probably in the minority on that one.
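
For context, DAP frames its messages the same way LSP does: an HTTP-style
Content-Length header, a blank line, then a JSON body. A rough Python sketch
of that framing (stream handling simplified; the helper names are mine):

    import json

    def write_message(stream, body: dict) -> None:
        # Each message is a JSON body prefixed with a Content-Length header.
        payload = json.dumps(body).encode("utf-8")
        stream.write(b"Content-Length: %d\r\n\r\n" % len(payload) + payload)
        stream.flush()

    def read_message(stream) -> dict:
        # Read header lines until the blank line that ends the header part.
        length = None
        while True:
            line = stream.readline()
            if line in (b"\r\n", b"\n", b""):
                break
            name, _, value = line.partition(b":")
            if name.strip().lower() == b"content-length":
                length = int(value)
        assert length is not None, "missing Content-Length header"
        return json.loads(stream.read(length))

    # On the wire, an initialize request looks like:
    #   Content-Length: 71\r\n
    #   \r\n
    #   {"seq": 1, "type": "request", "command": "initialize", "arguments": {}}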

~~~
naikrovek
I doubt you are in the minority, if only experienced developers are
considered.

JavaScript developers really do hold skewed ideas about what kinds of things
are useful and worth the overhead they incur.

Binary protocols are easier than JSON, to me, and I would say to most people
who have actually implemented a binary protocol or written code to read or
write binary files.

~~~
rmetzler
I don’t get it. First, every language ships a JSON parser, and it’s easy to
debug. You also don’t have any parsing overhead in JavaScript-based editors
like VSCode and Atom. And second, when the protocol is stable and needs to be
faster, you can still switch to Protocol Buffers or something similar.

~~~
naikrovek
I can't believe that people have communally forgotten how easy binary
protocols are.

You don't need protobuf, you don't need json, you don't need any of these
things that just add bloat and cost you performance.

Writing and reading binary is easy, and is FAR easier than JSON if you are
using something like C.

Most languages have convenience methods for strings, yes, and those can be
stacked on top of each other so that JSON libraries can be written, of course.
Those libraries are FAR slower than just reading and writing bytes and
deserializing those bytes into structs (or serializing those structs into a
byte array), and the binary data is much smaller than a JSON document
containing the same data would be.
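
To make the size claim concrete, here is a rough Python sketch (the record
layout is invented for illustration): three small integers pack into 10 bytes
as a fixed-layout struct, while the equivalent JSON takes 34.

    import json
    import struct

    # Hypothetical fixed layout: id (u32), line (u32), column (u16), little-endian.
    FMT = "<IIH"

    def encode(msg_id: int, line: int, column: int) -> bytes:
        return struct.pack(FMT, msg_id, line, column)

    def decode(buf: bytes) -> tuple:
        return struct.unpack(FMT, buf)

    binary = encode(1, 42, 7)                                        # 10 bytes
    text = json.dumps({"id": 1, "line": 42, "column": 7}).encode()   # 34 bytes
    assert decode(binary) == (1, 42, 7)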

People these days are acting like binary protocols are hard, or that they are
suboptimal for one reason or another. I guarantee you that someone trying to
find a bug in a JSON library will spend as much time as, or more than, someone
trying to find a similar bug in a method that reads or writes binary data, and
at the end of it, the person who looked at the methods that handle binary data
will FULLY understand what is happening in that code. The person debugging the
JSON library will almost certainly not have a full understanding of the JSON
library after finding and fixing a bug unless they wrote it to begin with.

It comes down to this: all computer programs do is process data. All UX
problems, all usability problems, all computer problems boil down to problems
transforming data. Transforming data is literally the only thing a computer
can do. Any work spent writing code that does not directly solve the data
transformation problem at hand is wasted time and work on the part of the
developer. Any performance lost using code that does not directly solve the
data transformation problem at hand is time away from the CPU for other tasks,
time away from a consumer of that data, and so on. It adds up, and it adds up
quickly.

~~~
spookthesunset
I’ve worked on many projects, and nothing is more annoying or tiring than
people who have zero vested interest in the project outcome and no
understanding of a project’s requirements, history, schedule, and constraints
chiming in with their worthless opinions.

I call them the peanut gallery, and the best way to deal with them is to pat
them on the head for being so smart, thank them for their opinion, and ignore
everything they said. Failure to mute them will derail any project led by a
junior dev who doesn’t know how to deal with pot shots from the peanut
gallery. The poor junior dev will actually listen to the worthless feedback
and stall the project.

I’m sure the folks who worked on this protocol are rolling their eyes at this
comment thread...

------
xvilka
Hopefully not UTF-16 this time, like LSP. Everywhere[1] but Windows, UTF-8 is
used, and it should be. So many LSP implementations have to recode between
UTF-16 and UTF-8 because LSP counts column offsets in UTF-16 code units.

[1] [http://utf8everywhere.org/](http://utf8everywhere.org/)
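
A sketch of the recoding every non-UTF-16 implementation ends up doing,
mapping an LSP-style UTF-16 column to a native string index (the helper is
mine):

    # LSP's Position.character counts UTF-16 code units; convert such a
    # column to an index into a Python (code point) string.
    def utf16_col_to_index(line: str, utf16_col: int) -> int:
        units = 0
        for i, ch in enumerate(line):
            if units >= utf16_col:
                return i
            # Code points above U+FFFF take two UTF-16 units (a surrogate pair).
            units += 2 if ord(ch) > 0xFFFF else 1
        return len(line)

    line = "a\U0001D54Ab"  # U+1D54A (𝕊) lies outside the BMP
    assert utf16_col_to_index(line, 3) == 2  # UTF-16 column 3 is code point 2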

~~~
wolfgke
> Everywhere[1] but Windows, UTF-8 is used.

Further popular counterexamples besides the NT kernel (cf. [2]):

- Java

- JavaScript

- .NET

- Qt

- Joliet file system (for CD-ROMs)

[1] [http://utf8everywhere.org/](http://utf8everywhere.org/)

[2]
[https://en.wikipedia.org/w/index.php?title=UTF-16&oldid=9094...](https://en.wikipedia.org/w/index.php?title=UTF-16&oldid=909417171#Usage)

~~~
new_realist
All decisions made 20 years ago. The future is UTF-8.

~~~
naikrovek
Honestly, the future is probably UTF-32. Just make all code points 32 bits and
use the 11 bits that Unicode says it will never use (code points stop at
U+10FFFF, which fits in 21 bits) for flags, so things like formatting can be
encoded per character.
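
A toy sketch of that proposal (the flag assignments are invented):

    # Unicode code points stop at U+10FFFF (21 bits), which leaves the top 11
    # bits of a 32-bit unit free for hypothetical per-character flags.
    BOLD = 1 << 21
    ITALIC = 1 << 22

    def tag(cp: int, flags: int) -> int:
        return cp | flags

    def untag(unit: int) -> tuple:
        return unit & 0x1FFFFF, unit >> 21

    assert untag(tag(ord("A"), BOLD | ITALIC)) == (ord("A"), 0b11)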

~~~
targonca
That's... a very bad idea.

First of all, graphemes are inherently not one-to-one with code points, e.g.
Á = A + a combining acute accent. There's simply no Unicode encoding that will
let you safely index into an array without paying attention to the meaning of
the underlying code points. (And no, using NFC won't solve this either,
because there are combinations for which there's no composed equivalent.)

Secondly, general formatting info won't fit into 11 bits (italic, bold,
underline, strikethrough: that's already 4 bits, and we haven't talked about
color, font weights other than bold, etc.), so why bother baking a limited,
intentionally gimped version into your character encoding?
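
A quick Python illustration of the grapheme point:

    import unicodedata

    composed = "\u00C1"      # Á as a single code point
    decomposed = "A\u0301"   # A followed by COMBINING ACUTE ACCENT
    assert len(composed) == 1 and len(decomposed) == 2
    assert unicodedata.normalize("NFC", decomposed) == composed

    # Some combinations have no precomposed form, so NFC cannot collapse them:
    assert len(unicodedata.normalize("NFC", "q\u0301")) == 2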

~~~
naikrovek
It doesn't have to be formatting...

It is not a "very bad idea" it is "an idea you do not like." Those are
different things.

The way you're describing UTF-32, it can't work at all, and it definitely
does.

Trying to save space by using UTF-8 over UTF-32 seems like a very small gain
to me, is all. UTF-32 is simpler, for text created in that encoding.

~~~
targonca
There are tons of resources online about why UTF-32 doesn't make sense. I'm
not gonna repeat them. Do your own research.

[https://news.ycombinator.com/item?id=8195827](https://news.ycombinator.com/item?id=8195827)

[https://softwareengineering.stackexchange.com/questions/2361...](https://softwareengineering.stackexchange.com/questions/236194/does-it-make-sense-to-choose-utf-32-based-on-concern-that-some-basic-rule-will)

[https://en.wikipedia.org/wiki/UTF-32#Analysis](https://en.wikipedia.org/wiki/UTF-32#Analysis)

[http://utf8everywhere.org/#myths](http://utf8everywhere.org/#myths)

------
brandmeyer
How does this compare to The Other Leading Brand: the GDB Server protocol?
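
For anyone unfamiliar, GDB's remote serial protocol uses a much thinner ASCII
framing than DAP's JSON-over-headers; a sketch of the packet format:

    # GDB remote serial protocol packets look like $<payload>#<checksum>,
    # where the checksum is the modulo-256 sum of the payload bytes in hex.
    def rsp_frame(payload: bytes) -> bytes:
        return b"$%s#%02x" % (payload, sum(payload) % 256)

    assert rsp_frame(b"g") == b"$g#67"  # 'g' requests the general registers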

------
NikkiA
And VSCode moves a step closer to being 'Emacs, now written in Javascript'

~~~
0xTJ
I use VS Code as my main IDE for both business and pleasure, but I can't help
mistrusting just how developer-friendly Microsoft has been. VS Code is a
great, extensible, developer-friendly tool. WSL has made it less of a shame
that I'm stuck using Windows for now. Even CMD has been getting updates that
make it behave more like a POSIX shell.

~~~
heavenlyblue
Microsoft has always been developer-friendly.

They just haven't been very friendly towards developers outside their
ecosystem once they've monopolised a given ecosystem.

------
smg
DAP is already improving the debugging experience in other editors. Check out
dap-mode for Emacs:

[https://github.com/emacs-lsp/dap-mode](https://github.com/emacs-lsp/dap-mode)

------
chrsw
For many years now, the acronym "DAP" has been used as the name of the debug
interface (the Debug Access Port) on ARM-based SoCs. Embedded software
development with Visual Studio Code is already a thing and will likely become
more popular, so this can be a little confusing.

