
Why We’re Switching to gRPC - protophason
https://eng.fromatob.com/post/2019/05/why-were-switching-to-grpc/
======
time4tea
Yeah. Easy things are easy with most technologies... It's only after a while
that you start to see the 'problems'.

With grpc... It's designed by Google for Google's use case. How they do things
and the design trade-offs they made are quite specific, and may not make sense
for you.

There are no generated language interfaces, so you cannot mock the methods.
(Except by mocking abstract classes, and nobody sane does that, right?)

That's because grpc allows you to implement whichever methods of a service
interface you like, and require whichever fields you like - all fields are
optional, but not really, right?

Things that you might expect to be invalid are valid. A zero-byte array
deserialised as a protobuf message is a perfectly valid message: all the
strings are "" (not null), the bools false, and the ints 0.
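
A minimal sketch of that behaviour, hand-rolling a toy proto3-style decoder
rather than using the real generated classes (the message fields here are
invented for illustration):

```javascript
// Toy proto3-style decoder: an empty buffer is a valid message.
// Every field falls back to its default: "" for strings, false for
// bools, 0 for ints. (Hand-rolled sketch; varints are simplified to
// single bytes. Real code would use the generated protobuf classes.)
function decodeUser(buf) {
  const msg = { name: '', active: false, age: 0 }; // proto3 defaults
  let i = 0;
  while (i < buf.length) {
    const key = buf[i++];      // key byte: (fieldNumber << 3) | wireType
    const field = key >> 3;
    if (field === 1) {         // name: length-delimited string
      const len = buf[i++];
      msg.name = Buffer.from(buf.slice(i, i + len)).toString('utf8');
      i += len;
    } else if (field === 2) {  // active: varint bool
      msg.active = buf[i++] !== 0;
    } else if (field === 3) {  // age: varint int (single-byte values only)
      msg.age = buf[i++];
    }
  }
  return msg;
}

// A zero-byte array deserialises to a perfectly valid message:
decodeUser(new Uint8Array(0));
// → { name: '', active: false, age: 0 }
```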

Load balancing is done by maintaining multiple connections to all upstreams.

The messages don't work very well with ALB/ELB.

The tooling for web clients was terrible (I understand this may have changed).

The grpc-generated classes are a load of slowly compiling, not very nice code.

Like I say, if your tech and business are like Google's (they probably aren't),
then it's a shoo-in; otherwise it's definitely worth asking whether there is a
match for your needs.

~~~
kjeetgill
> With grpc... It's designed by Google for Google's use case. How they do
> things and the design trade-offs they made are quite specific, and may not
> make sense for you.

Agreed. It's always important to try to pick technologies that 'align' with
your use-cases as well as possible. This is easier said than done and gets
easier the more often you fail to do it well! I do think people will read "for
Google's use case" and hear "only for Google's scale". I actually think the
gRPC Java stack is pretty efficient so it "scales down" pretty well.

I want to skip over some of what you're saying to address this:

> Things that you might expect to be invalid, are valid. A zero byte array
> deserialised as a protobuf message is a perfectly valid message. All the
> strings are "" (not null), the bools false, and the ints 0.

Using a protobuf schema layer is wayyyy nicer than JSON blobs, but I agree that
it is misconstrued as type safety and validation. It's fantastic for efficient
data marshaling and decent for code generation, but it doesn't solve the
"semantic correctness" side of things. You should still be writing validation.
It's a solid step up from JSON, not a panacea.
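
For example, a hypothetical validation layer on top of an already-decoded
message (the field names and rules are invented for illustration; nothing
here comes from the protobuf library itself):

```javascript
// protobuf guarantees shape, not semantics: a decoded message with
// name === '' and age === 0 parses fine. Semantic checks are on you.
// (Hypothetical message fields and rules, for illustration.)
function validateUser(msg) {
  const errors = [];
  if (msg.name === '') errors.push('name must be non-empty');
  if (msg.age <= 0 || msg.age > 150) errors.push('age must be in (0, 150]');
  return errors;
}

validateUser({ name: '', age: 0 });
// → ['name must be non-empty', 'age must be in (0, 150]']
validateUser({ name: 'Ada', age: 36 });
// → []
```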

~~~
alasdair_
JSON has a bunch of schema systems, including Open API which is a repackaging
of Swagger with some extra stuff and is also endorsed by Google.

Do you consider protobuf superior to those alternatives for web-based (rather
than server to server) projects?

~~~
kjeetgill
I spend all my time server to server so I don't feel qualified to give real
advice.

My impression is that if you're going to talk to a browser, that edge stands
to gain much more from conforming to HTTP standards. If your edge is more
"applicationy" and less "webpagy", then maybe a browser-facing gRPC (or
GraphQL?) might be more appealing again.

As to the other JSON schema systems, I kinda wish one of them had won? It
feels like there are a lot of competing standards still. Not really my area of
expertise.

~~~
anderspitman
There are a couple of gRPC implementations for the browser [0] (officially
supported), but they seem to require quite a bit of adaptation, and looked
pretty complicated to set up.

[0] [https://grpc.io/blog/state-of-grpc-web/](https://grpc.io/blog/state-of-
grpc-web/)

------
kjeetgill
This article comes at a good time because I've been exploring OpenAPI vs.
gRPC for a codebase that presently uses neither. Evaluating technology can
feel like a lot of navel-gazing, so it's nice to hear others' experiences even
if their uses don't line all the way up with ours.

Disclaimer: Java fanboy bias. For services internal to a company, I think gRPC
is an all-around win. If you need browser integrations, I don't have as many
opinions.

Personally, I really prefer working at the RPC layer rather than at the HTTP
layer. It's OOP! It's SOA! Pick your favorite acronym! HTTP's use as a server
protocol (as opposed to a browser protocol) is mostly incidental. It works
great, but most of the HTTP spec is entirely inapplicable to services. I like
named exceptions better than 200 vs. 5xx/4xx error codes. Do I _really_ care
about GET, PATCH, PUT, HEAD, POST for most of my services when all of my
KV/NewSQL/API-over-DB services have narrower semantics anyway?

Out of band headers are nice though.

Between protobufs, HTTP/2, and a fresh, active server implementation, we see
pretty solid latency and throughput improvements. It's hard to generalize, but
I suspect many users will too. Performance isn't the only driving factor, but
it's nice to start from a solid base.

I'm sure missing all the tools like curl and friends is an annoyance, but I
like debugging from within my language, and in JVM land at least it's been
easy enough.

~~~
atombender
Have you considered GraphQL? Lots of overlap with gRPC, but much more web-
friendly. Much better support for optional vs. required data, too. And it
comes with server push, replacing the need for WebSockets/SSE.

Only downside I can think of is that there's no mechanism analogous to gRPC
streams; you have to implement your own pagination.

~~~
kjeetgill
I haven't looked into GraphQL much at all, so correct me if I'm mistaken.

From what I understand of it, the big idea is that instead of passing
parameters from the client to the server and fully implementing the query
logic, stitching, and reformatting etc. on the server side, you now have a way
to pass some of that flexibility out to the client. Instead of updating both
the server and the client as uses change, more can be done from the client
alone.

I spend most of my time on the infra side of things and rarely if ever make my
way out to the browser so I can't speak to WebSockets/SSE or web friendliness.
Being the "backend-for-backend" I just prefer being more tight-fisted about
what my clients can and can't do. I mostly deal with internal customers with
tighter SLAs so I like to capacity plan new uses.

Maybe I'm just old fashioned.

------
rubenbe
I recently chose gRPC as a communication protocol between two devices (sort of
IoT).

So far it has worked perfectly, as expected. The C++ code generator provides a
clean abstraction, plus it saved a lot of time (both in programming and
debugging). The gRPC proto file syntax also nudges you in the right direction
wrt protocol design.

When trying to "sell" gRPC it helps that there are generators for plenty of
languages and it's backed by a major company.

~~~
gravypod
I wish the tooling around generating protoc stubs and client libraries were
simpler. I wish there were a single command I could run to turn a large
collection of proto files into libraries for "all" languages (Python, Java,
C++, a Node package, etc.). Unfortunately there's no universal approach to
this.

~~~
q3k
This seems like an odd requirement. Are you trying to generate stubs for your
API users ahead of time? This will likely not work, as generated stubs evolve
in lockstep with protoc and the runtime support libraries, and thus are not
guaranteed to work across discrepant versions. Stub code should therefore be
generated alongside the consumer/client. It also likely shouldn't be committed
into a VCS.

~~~
gravypod
It would be done in CI. Generate stubs -> package/compile -> push to internal
package repo.

This way your protocol for your infrastructure is just another library.

~~~
q3k
Having an explicit 'create client library by generating/compiling proto stubs'
is generally also bad mojo from my experience, unless you're also abstracting
API stability and service discovery. If not, it will be unnecessarily painful
to make a change to either the service discovery method or a non-backwards-
compatible proto change, as you will have to lockstep both the service
rollout, the library build and the client bump.

------
justicezyx
The truth is that gRPC, like Kubernetes, was built with decades of lessons
from running an RPC framework inside a container-oriented distributed
environment; and more importantly, gRPC is the blessed framework inside Google
as well, meaning it's qualified to power the largest and most complex
distributed systems in the world (I think it'd be safe to omit 'one of' here),
which in comparison is not the case for Kubernetes.

Addition: Borg and Kubernetes are designed with similar goals but different
emphases. They are like complementary twins with different personalities. For
this I recommend Min Cai's KubeCon '18 presentation about Peloton [1]; the
slide is titled "comparison of cluster manager architecture".

[1] [https://kccna18.sched.com/event/GrTx/peloton-a-unified-
sched...](https://kccna18.sched.com/event/GrTx/peloton-a-unified-scheduler-
for-web-scale-workloads-on-mesos-kubernetes-min-cai-nitin-bahadur-uber)

~~~
shereadsthenews
Wait, I don’t get it. Kubernetes:Borg::gRPC:Stubby. Google uses gRPC
internally to the same extent that they use Kubernetes internally, i.e. hardly
at all.

~~~
mehrdada
This analogy is very misleading. Kubernetes is probably never going to run any
real workload internally at Google, but gRPC powers all external APIs of
Google Cloud, and increasing other Google properties (e.g. ads, assistant),
used by mobile apps like Duo and Allo, and have some big internal service use
cases. The reason Stubby still dominates internally is simply taking lots of
time to migrate to gRPC that might be hard to justify, but I do see gRPC being
used very widely internally at Google; it’s simply a matter of time. I don’t
see that happening to Kubernetes; it’s a joke when compared to Borg.

Google aside, many other companies like Dropbox rely on gRPC extensively to
successfully run infrastructure:
[https://static.sched.com/hosted_files/grpconf19/f7/Courier%2...](https://static.sched.com/hosted_files/grpconf19/f7/Courier%20—%20gRPC%20Conf%202019.pdf)

~~~
CydeWeys
I work at Google and my team has real workloads running on Kubernetes.

There's plenty of internal teams that use GCP. Increasingly this might be the
direction things are heading.

~~~
mehrdada
GCP itself is a job on Borg. ;)

~~~
justicezyx
That's not true. GCE uses Borg very differently than normal Google internal
systems, which you can imagine is quite natural, as they are serving different
customers. GCS and other systems, in turn, also differ wildly from GCE. When
you talk about GCP as a whole, it becomes impossible to summarize in a few
statements, and I doubt there is anyone on earth capable of describing it
coherently, even without time constraints.

~~~
mehrdada
What I said (GCP runs on Borg) is absolutely and technically correct, affirmed
by your own comment, which highlights the power and flexibility of Borg. The
point being: no one [1] at Google relies on Kubernetes for raw cluster-
management capabilities at scale. They might use it for other things that can
make deployment more friendly in some scenarios. (This doesn’t make Kubernetes
a bad system by any means, just quite different and not a substitute for Borg,
whereas gRPC is a direct substitute for Stubby.) This debate is better argued
on your own eng-misc@ and not on a public forum.

[1]: no one that we care about. At Google this is obviously always incorrect.
There’s always that someone who uses weird things like MongoDB and AWS.

------
sytelus
I've needed an RPC framework for a few of my projects, but every time I
considered gRPC, I ended up walking away from it. The big issue is that gRPC
has a huge number of dependencies, and it tries to do a lot of things, many of
which might be irrelevant for you but cause extra headaches anyway. When all
you need is to serialize your stuff and send it over the wire, there are much
better lightweight frameworks. For C++, I think RpcLib is one of the best. It
doesn't even require maintaining a .proto file, "compiling" the schema every
time you change something, etc. The moral of the story is to always look
around instead of just going for the most popular solution first.

------
shereadsthenews
A couple of subtly wrong points in the article. Firstly, a gRPC payload can be
anything; it need not be an encoded protocol buffer. Secondly, there’s not a
whole lot of “validation” going on in the protobuf codec. Basically any
fundamentally correct buffer encoded as message A will decode successfully as
message B, for any B. If there are unknown fields, they are silently consumed.
If there are missing fields, they are given the default values, and there is
no “required” in proto3. So there is significantly less safety, and
significantly more flexibility, in gRPC than people generally realize.

~~~
duality
"Basically any fundamentally correct buffer encoded as message A will decode
successfully as message B for any B."

This is incorrect. I suspect you're overextending proto3's treatment of
unknown fields to include discarding incorrectly typed fields too. If A has
field 1 typed as an int, and B has field 1 typed as a string, an A message
with field 1 set will not parse as a B message. However, if the A message has
no fields set, or sets a field number unknown to B, it could parse
successfully with "leftover" unknown fields.

~~~
shereadsthenews
Ok but these messages are isomorphic on the wire:

    
    
      message enc {
        int32 foo = 1;
        SomeMessage bar = 2;
      }
    
      message dec {
        bool should_explode = 1;
        string why = 2;
      }
    

You can successfully decode the latter from an encoding of the former.
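
A sketch of why that works, hand-assembling the wire bytes rather than using
the real protobuf library, and choosing a payload that happens to be valid
UTF-8 so the string interpretation parses (the reply below notes the UTF-8
caveat):

```javascript
// Hand-rolled wire bytes for `enc { foo = 1, bar = ... }`:
//   field 1, wire type 0 (varint)        -> key 0x08, value 0x01
//   field 2, wire type 2 (len-delimited) -> key 0x12, length, payload
// `dec { bool should_explode = 1; string why = 2; }` uses the same
// wire types for the same field numbers, so the bytes decode either way.
const payload = Buffer.from('hi');
const wire = Buffer.concat([
  Buffer.from([0x08, 0x01]),            // foo = 1   /  should_explode = true
  Buffer.from([0x12, payload.length]),  // bar (len 2)  /  why (len 2)
  payload,
]);

// Toy decoder for `dec` (simplified: single-byte varints only):
function decodeDec(buf) {
  const msg = { should_explode: false, why: '' };
  let i = 0;
  while (i < buf.length) {
    const key = buf[i++];
    if (key === 0x08) {
      msg.should_explode = buf[i++] !== 0;
    } else if (key === 0x12) {
      const len = buf[i++];
      msg.why = buf.slice(i, i + len).toString('utf8');
      i += len;
    }
  }
  return msg;
}

decodeDec(wire);
// → { should_explode: true, why: 'hi' }
```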

~~~
dweis
Minor nit, but not necessarily. For basically all values of SomeMessage, dec
should fail to parse due to improperly encoded UTF8 data for field 2 (modulo
some proto2 vs. proto3 and language binding implementation differences).

Change field 2 to a bytes field instead of a string field and then yes.

~~~
shereadsthenews
I should mention that I consider this a feature, not a bug. The isomorphism
permits an endpoint to use ‘bytes submessage_i_dont_need_to_decode’ to cheaply
handle nested message structures that need to be preserved but not inspected,
such as in a proxy application.

------
dunk010
Rich Hickey, the creator of Clojure, gave a talk with some very relevant
points in this space (The Language of the System):
[https://www.youtube.com/watch?v=ROor6_NGIWU](https://www.youtube.com/watch?v=ROor6_NGIWU)

~~~
dustingetz
"gRPC is great because most systems just glue one side effect to another side
effect, so what's the point of packaging that into REST" – I think a HN
comment from a googler

The great thing about Clojure is that you can make holistic systems that are
end-to-end immutable and value-centric, which means the side effects can go
away, which means gRPC stops making sense and we can start building
abstractions instead of procedures!

------
_ZeD_
hey, hey! have you heard about this new "web services" stuff? with SOAP you
can call remote code as if it were right here! and with WSDL you can generate
the client automatically!

~~~
adrianmonk
Short summary of web services:

Phase 1: Ad hoc, free-for-all chaos.

Phase 2: SOAP tries to bring order. It fails mainly because the "S" ("Simple")
is a lie.

Phase 3: Pendulum swings hard toward simplicity with HTTP plus JSON plus
nothing else, thanks.

Phase 4: Things shift possibly more toward the middle (a little structure),
but none of the competing systems have become obvious winners.

~~~
crehn
Most things in life seem to change like a pendulum learning from previous
swings, slowly converging to a healthy middle. Extremes are useful since they
give perspective and attractive since they're easy to grasp.

------
stephenr
Apart from “well, Google (created|uses) it”, I don’t really get the benefit of
gRPC compared to any other RPC, e.g. JSON-RPC or even XML-RPC, both of which
are fairly static, open specifications for a way to communicate, rather than
actual releases of a library that apparently has a new release every 10 days
or so.

~~~
wvenable
Binary. Streaming. Strongly typed (with caveats). There's a whole article
about the advantages/differences linked from the top of this page.

~~~
jwalton
> My API just returned a single JSON array, so the server couldn’t send
> anything until it had collected all results.

Why can't you stream a JSON array?

Edit: Here's a (hastily created and untested) node.js example, even:

    
    
        class JSONArrayStream extends Transform {
            constructor() {
                super({readableObjectMode: false, writableObjectMode: true});
                this.dataWritten = false;
            }

            _transform(data, encoding, callback) {
                if (!this.dataWritten) {
                    this.dataWritten = true;
                    this.push('[\n');
                    this.push(JSON.stringify(data) + '\n');
                } else {
                    this.push(',' + JSON.stringify(data) + '\n');
                }
                callback();
            }

            _flush(callback) {
                // Still emit a valid (empty) array if nothing was written.
                if (!this.dataWritten) {
                    this.push('[');
                }
                this.push('\n]');
                callback();
            }
        }

~~~
stephenr
you’d probably need to do some tricks to get it to parse in a browser.

Editing, because I can’t reply: I _was_ specifically going to mention
SSE/EventSource but expected an immediate “not everything is a browser”
response.

Editing the 2nd: yep, that’s kinda what I meant by “tricks” - essentially
splitting out chunks to pass to the json parser.

~~~
jwalton
See my edited example above; you can stream data to it, and it'll stream
nicely to the browser. It uses "\n"s at the end of every line, which means you
can write a very simple streaming parser client side, because you can just
split the input at "\n," to get nice JSON bits; but there are certainly JSON
streaming libraries on NPM that will parse this for you more "properly". And
it parses with a normal JSON parser too.
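
A sketch of that client-side streaming parser (hand-rolled, no library),
matching the server format above: each complete line, after stripping the
leading '[' or ',', is a standalone JSON value:

```javascript
// Incremental parser for the newline-delimited JSON-array format:
// buffer partial chunks, split on '\n', strip the leading '[' or ','
// from each complete line, and JSON.parse each resulting item.
function makeLineParser(onItem) {
  let buffered = '';
  return (chunk) => {
    buffered += chunk;
    const lines = buffered.split('\n');
    buffered = lines.pop();              // keep the incomplete tail
    for (let line of lines) {
      line = line.replace(/^[\[,]/, '').trim();
      if (line && line !== ']') onItem(JSON.parse(line));
    }
  };
}

const items = [];
const feed = makeLineParser((obj) => items.push(obj));
feed('[\n{"a":1}\n,{"a"');  // partial chunk: second object incomplete
feed(':2}\n\n]');
// items → [{ a: 1 }, { a: 2 }]
```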

------
nevi-me
I've yet to see enough services expose a gRPC endpoint, at least other than
Google. That'll keep the perception of adoption s/low.

I'm writing this as I take a break from working on a polyglot project made up
of Kotlin, Rust, and Node, where we use gRPC and gRPC-web. We're slowly
stealing endpoints from the other services/languages over to Rust.

Without focusing on the war of languages, the codegen benefits of protobufs
have made what used to be a lot of JSON serde much easier.

~~~
mleonhard
Can you point to a single public Google-run gRPC service? I was under the
impression that all connections into Google are proxied by GFE (Google Front-
End) servers to internal servers running the Stubby RPC server code. GFE is
definitely not running gRPC server code. I don't believe a gRPC endpoint could
pass Google's own Production Readiness Review process.

~~~
terinjokes
I've seen Google endpoints available over gRPC over the last few years. Many,
if not most, of the Cloud endpoints are directly documented as being available
over gRPC [0]. For others, like Google Ads, a peek at the client libraries
shows it's using gRPC [1] as well.

[0]:
[https://cloud.google.com/pubsub/docs/reference/service_apis_...](https://cloud.google.com/pubsub/docs/reference/service_apis_overview#grpc_api)

[1]: [https://github.com/googleads/google-ads-
java/blob/master/goo...](https://github.com/googleads/google-ads-
java/blob/master/google-
ads/src/main/java/com/google/ads/googleads/lib/GoogleAdsClient.java#L20)

~~~
nevi-me
Yes, this. A lot of Google's SDKs (based on my last interaction, 2 yrs ago)
are convenience methods that hide gRPC behind them. If you use a language that
doesn't have an SDK, you can mostly connect directly to their RPC endpoints.

------
pilif
The article says that one of the advantages of gRPC is streaming and that JSON
wouldn’t support streaming.

That’s however just an implementation detail. JSON can easily be written and
read as a stream.

Switching your whole architecture, dealing with a binary protocol and the
accompanying tooling issues just because of your choice of JSON parser feels
like total overkill.

JSON over HTTP is ubiquitous, has amazing tooling, and is highly debuggable.
Parsers have become so fast that I feel they might even have the opportunity
to be faster than a protobuf-based solution.

Finally I don’t buy the argument about validation. You have to validate input
and output on the boundaries no matter what.

Even when your interface says “this is a double”, it says nothing about ranges
(as seen in the article where valid ranges were specified in the comment) for
example.

~~~
maltalex
> Parsers have become so fast that I feel they might even have the opportunity
> to be faster than a protobuf based solution.

Not even close. Even new JSON serializers/deserializers aren't magic. Protobuf
is a LOT easier to parse, so it's naturally a LOT faster.

First two duck results for "json vs protobuf benchmark":

[https://auth0.com/blog/beating-json-performance-with-
protobu...](https://auth0.com/blog/beating-json-performance-with-protobuf/)

[https://codeburst.io/json-vs-protocol-buffers-vs-
flatbuffers...](https://codeburst.io/json-vs-protocol-buffers-vs-
flatbuffers-a4247f8bda6f)

~~~
ricardobeat
The first link shows a mere 4% margin when talking to a JavaScript VM.

Even at a 5x improvement, most projects will never reach a point where the
transport encoding is a bottleneck. Protobuf has a lot going for it (currently
using in a project) but can’t be sold on speed alone.

~~~
denormalfloat
Is the JSON parser implemented natively, or in JS? It may not be apples-to-
apples.

~~~
scottlamb
> Is the JSON parser implemented natively, or in JS? It may not be apples-to-
> apples.

True, but if you're wanting an implementation you can use in Javascript
running in the browser, it may accurately reflect reality. You have a high-
quality browser-supplied (presumably native) implementation of JSON available.
For a protobuf parser, you've just got Javascript. (You can call into
webassembly, but given that afaik it can't produce Javascript objects on its
own, it's not clear to me there's any advantage in doing so unless you're
moving the calling code into webassembly also.)

I don't think browser-based parsing speed is important though. It's probably
not a major contributor to display/interaction latency, energy use, or any
other metric you care about. If it is, maybe you're wasting bandwidth by
sending a bunch of data that's discarded immediately after parsing.

------
dewey
There's also [https://github.com/uw-
labs/bloomrpc/blob/master/README.md](https://github.com/uw-
labs/bloomrpc/blob/master/README.md) which is kinda like Postman but for gRPC.
I didn't see it mentioned in the Caveats section of the post so maybe useful
to someone else too.

------
superfreek
gRPC is great, but my issues with it are debugging and supporting the browser
as a first-class citizen.

We've been working hard on OpenRPC [0], an interface description for JSON-RPC
akin to Swagger. It's a good middle ground between the two.

[0] [https://open-rpc.org](https://open-rpc.org)

~~~
denormalfloat
Have you looked at gRPC-Web?

~~~
huehehue
I have mixed feelings about gRPC-Web, and welcome alternatives. Setting up a
proxy with any sort of non-standard config can be a pain, gRPC-Web doesn't
translate outbound request data for you, which can get ugly [0], and your
service bindings may or may not try to cast strings to JS number types, which
silently lose precision over MAX_SAFE_INTEGER.

[0] Instead of passing in a plain object, you build it as such:

    
    
      const userLookup = new UserLookupRequest();
      const idField = new UserID();
      idField.setValue(29);
      userLookup.setId(idField);
      UserService.findUser(userLookup);
    

The metadata field doesn't seem to mind though...
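
The MAX_SAFE_INTEGER issue is easy to demonstrate in plain JS (no gRPC-Web
involved):

```javascript
// int64 values above Number.MAX_SAFE_INTEGER (2^53 - 1) silently lose
// precision when cast from the wire's string form to a JS number:
const id = '9007199254740993';   // MAX_SAFE_INTEGER + 2
Number(id);                      // → 9007199254740992 (off by one, no error)
String(Number(id)) === id;       // → false
BigInt(id).toString() === id;    // → true: keep int64s as strings or BigInt
```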

------
SamReidHughes
I'd just like to say I appreciate the writing at the beginning of the article.

"While more speed is always welcome, there are two aspects that were more
important for us: clear interface specifications and support for streaming."

This offers a quick exit for anybody who already knows about these advantages.

------
signa11
one fundamental issue with grpc seems to be that every request for a given
service ends up either creating a thread or using an existing one from a pool
of threads. of course, you cannot limit the number of threads, because that
will lead to deadlocks.

i _suspect_ at google-scale it should all be fine, where available cpus are
essentially limitless, and consistency of data (e.g. due to multiple updates
etc.) gets handled at a different layer.

writing safe, performant, multi-threaded code in the presence of
signals/exceptions etc. is non-trivial regardless of what your 'frontend'
looks like. async grpc is quite unwieldy imho.

i have heard of folks trying grpc out on cpu-horsepower-starved devices, e.g.
wireless base stations, and running into the aforementioned issues.

------
ahuang
gRPC isn't a requirement for response streaming (which is cited as one of the
main reasons for doing the migration). That can all be achieved with HTTP/JSON
using chunked encoding. In fact, that's what the gRPC-gateway (an HTTP/JSON
gateway to a gRPC service) does: [https://github.com/grpc-ecosystem/grpc-
gateway](https://github.com/grpc-ecosystem/grpc-gateway).

gRPC adds bi-directional streaming, which is not possible in HTTP, but the use
cases for that are more specialized.

~~~
anderspitman
Sure, the actual transfer will be streamed, but most JSON clients wait for the
entire response before firing your callback. As far as I know there isn't even
a commonly used spec for partially reading a JSON document.

------
ishaanbahal
Great to see people using gRPC, but this article doesn't state anything that
the actual grpc.io website doesn't, except for the OpenAPI comparison.

------
j16sdiz
I don't see the article answering the "why" question. It was just "We didn't
like what we were using, so we tried gRPC".

------
ww520
Another alternative is Thrift. It's got lots of language bindings, and the
servers are superb.

------
qwerty456127
IMHO the only reason to use HTTP for services today is caching.

------
ec109685
Parsing speed shouldn’t be a factor in deciding. Parsing protobufs in the
browser is going to be way slower than using a native JSON parser, and even on
the server there are Java libraries that are much faster than protobufs, e.g.
[https://jsoniter.com/](https://jsoniter.com/)

That’s why formats like FlatBuffers were written; however, parsing is likely
not going to dominate your application, so other factors should influence your
decision instead.

~~~
kentonv
> on the server, there are java libraries that are much faster than protobufs

Be careful not to back broad arguments with outlier benchmarks.

In general, it is plainly true that JSON is much more computationally
difficult to encode and decode than Protobuf. Sure, if you compare a carefully
micro-optimized JSON implementation against a less-optimized Protobuf
implementation, it might win in some cases. That doesn't mean that Protobuf
and JSON perform equivalently in general.

~~~
ec109685
What is the reason not to use the micro optimized JSON implementation if
parsing becomes your bottleneck?

~~~
kentonv
I don't think I said that?

~~~
ec109685
My point is that JSON is always “fast enough”. Either you don’t care about
parsing speed and can use what is most ergonomic, or you do care and you’ll
use an optimized library.

You’ll never need to move to protobufs due to parsing speed.

------
kiliancs
> When you use a microservice-style architecture, one pretty fundamental
> decision you need to make is: how do your services talk to each other?

Problems I don't have when using Erlang/Elixir umbrella apps + OTP.

------
techslave
answer: for trivial reasons. too bad he didn’t dig deeper.

------
781
Does anybody remember reading an article along the lines of "we use X because
it's cool and trending and we are cool people"? Not saying it's the case here,
but has anybody honestly admitted in an article that they used a technology
because it makes them look cool?

I do remember reading quite a lot of articles about the inverse of this: "we
don't hire people using Windows/IDEs/... because it says a lot about them, a
craftsman should choose his tools wisely, ...", but never the positive.

~~~
stephenr
I got a very strong “kool aid” vibe from this.

I don’t remember any “we don’t hire people on Windows/using an IDE” (the last
part would be particularly weird IMO), but I wouldn’t be surprised if some
company somewhere said “if you want to use Windows you’re on your own (support-
wise), and if it becomes a time sink you switch or find work elsewhere”.

I’ve supported (in terms of dev environment/tooling) people on Macs, Windows,
and Linux. Windows by far had the weirdest issues to solve/avoid.

~~~
781
> _I don’t remember any “we don’t hire people on Windows /using an IDE”_

Two famous examples:

[http://charlespetzold.com/etc/DoesVisualStudioRotTheMind.htm...](http://charlespetzold.com/etc/DoesVisualStudioRotTheMind.html)

> _If you are a startup looking to hire really excellent people, take notice
> of .NET on a resume, and ask why it’s there._

[https://blog.expensify.com/2011/03/25/ceo-friday-why-we-
dont...](https://blog.expensify.com/2011/03/25/ceo-friday-why-we-dont-hire-
net-programmers/)

