
A curated list of awesome .NET Performance resources - polskibus
https://github.com/adamsitnik/awesome-dot-net-performance
======
shubb
Usually, when .NET is really slow, the cause has been that I'm using a
Microsoft library 'wrong'. The solution has been to learn how that library
works internally and work around it, rather than runtime-level optimizations.

Although this is all good stuff to know, if you are here trying to fix your
application's performance problem, this is probably the wrong tree to bark up.

~~~
recursive
Are you talking about things like using List<T>.Contains() instead of
HashSet<T>.Contains()? I've been doing .net for some time, and your comment is
not consistent with my experience.
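
For readers who haven't hit this one: `List<T>.Contains()` scans the list
front to back (O(n) per lookup), while `HashSet<T>.Contains()` does a single
hash-bucket probe (O(1)). A minimal sketch of the difference:

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

const int n = 100_000;
var list = new List<int>();
var set = new HashSet<int>();
for (int i = 0; i < n; i++) { list.Add(i); set.Add(i); }

// List<T>.Contains walks the list front to back: O(n) per lookup.
var sw = Stopwatch.StartNew();
for (int i = 0; i < 1_000; i++) list.Contains(n - 1);
Console.WriteLine($"List<T>.Contains:    {sw.Elapsed.TotalMilliseconds:F1} ms");

// HashSet<T>.Contains probes one hash bucket: O(1) per lookup.
sw.Restart();
for (int i = 0; i < 1_000; i++) set.Contains(n - 1);
Console.WriteLine($"HashSet<T>.Contains: {sw.Elapsed.TotalMilliseconds:F1} ms");
```

With the worst-case element (the last one), the gap grows linearly with the
list size; for a handful of elements the `List<T>` can actually be faster.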

~~~
jackmott
Programming is a very varied field. Sometimes people forget that their coding
experience can be totally different from another's.

Consider someone trying to do a 3D game in C# vs a web application back end.
The kind of performance pitfalls they run into will have no similarity.

~~~
recursive
That's totally true. For instance, I (almost) never think about garbage
collection. For some kinds of development, I suppose not accounting for it can
cause a complete failure.
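
To make that concrete with the game example from the parent comment: an
allocation inside the frame loop creates garbage every frame and eventually a
collection pause, while hoisting the buffer out removes it entirely. A sketch
with invented names:

```csharp
using System;

class ParticleSystem
{
    private readonly float[] _positions = new float[10_000]; // allocated once, reused

    // Allocates a fresh ~40 KB array every frame: at 60 fps that is
    // ~2.4 MB/s of garbage, guaranteeing periodic GC pauses.
    public void UpdateAllocating()
    {
        var scratch = new float[10_000];
        for (int i = 0; i < scratch.Length; i++) scratch[i] = _positions[i] + 0.016f;
        Array.Copy(scratch, _positions, scratch.Length);
    }

    // Same work, zero allocations per frame.
    public void UpdateInPlace()
    {
        for (int i = 0; i < _positions.Length; i++) _positions[i] += 0.016f;
    }

    public float First => _positions[0];
}
```

In a request/response web backend the short-lived garbage mostly dies in gen 0
and nobody notices, which is why the two camps have such different instincts.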

------
6DM
If you're looking to improve performance, the best thing you can do is learn
how to use Apache JMeter. Run it locally against your solution, with VS
running the performance profiler (which is under the Debug menu). This will
give you a nice click through graphical break down showing you which parts of
your code are taking the longest.
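
For reference, JMeter can also drive the load headless from the command line
once you've built the test plan in the GUI; the file names below are
placeholders:

```shell
# -n: non-GUI mode, -t: test plan to run, -l: file to log results to.
# loadtest.jmx and results.jtl are placeholder names.
jmeter -n -t loadtest.jmx -l results.jtl
```

Non-GUI mode keeps JMeter's own overhead from skewing the numbers the
profiler collects.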

~~~
6DM
The other half of this is looking at SQL performance. Not being a DBA, I'm
less proficient in this area and can't recommend a good tool :(

~~~
partycoder
MS SQL Server, for instance, comes with its own profiler. You can look for
slow queries, then try to understand why by getting the query execution plan.
Alternatively, you can look for queries with a high standard deviation, which
can hint at queries that get slower as more rows become associated with a
user.

Once you have the execution plan, you can either optimize the query, optimize
the schema (add indices), or simply optimize the application (e.g., narrow the
scope of the query, reduce the call count). That, however, won't allow you to
scale horizontally.
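
As a concrete sketch of that workflow (table, column, and index names are
invented): in SQL Server you can display the plan for a statement, and if it
shows a scan on the filtered table, add an index on the filter column:

```sql
-- Hypothetical table and column names, for illustration only.
-- Show the actual execution plan for the following statement.
SET STATISTICS PROFILE ON;

SELECT OrderId, Total
FROM dbo.Orders
WHERE CustomerId = 42;

SET STATISTICS PROFILE OFF;

-- If the plan shows a clustered index scan on dbo.Orders, an index on the
-- filter column (covering the selected columns) turns it into a seek:
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
    ON dbo.Orders (CustomerId)
    INCLUDE (OrderId, Total);
```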

The only realistic way to scale horizontally is sharding. Replication will
slow down your writes as more machines are added, and that simply does not
scale. Using the same database for everything and everyone is a lousy way to
go; eventually you will need to keep things on different machines.
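
A minimal sketch of the routing logic sharding implies (names invented): each
user's data lives on exactly one shard, chosen deterministically from the user
ID, so every machine owns a disjoint slice of the writes.

```csharp
using System;

class ShardRouter
{
    private readonly string[] _connectionStrings; // one per shard, placeholder values

    public ShardRouter(string[] connectionStrings)
        => _connectionStrings = connectionStrings;

    // Deterministic: the same user always maps to the same shard.
    public string ConnectionFor(long userId)
        => _connectionStrings[(int)(userId % _connectionStrings.Length)];
}
```

Modulo routing is the simplest possible scheme; real systems usually prefer
consistent hashing or a directory table so that adding a shard doesn't remap
every existing user.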

Then, don't put large blobs into a database; store just the blob IDs in your
database and keep the blobs themselves in another data store. Analytics,
logging, etc. don't belong in an OLTP database either. Use a separate database
for those, not your OLTP one.

~~~
6DM
I tried analyzing execution plans for a fairly large stored procedure. I only
needed to do it a few times. Every time, the DBA informed me that they had
already analyzed it before, so it was understandable that I couldn't make any
real headway.

Side note: I found a free PDF on analyzing execution plans in this Stack
Overflow answer (toward the bottom):
[http://stackoverflow.com/a/7359705/834579](http://stackoverflow.com/a/7359705/834579)

Would you consider logging user actions, for reporting on a website, as
something that should be stored elsewhere? In one case I can think of, the
table had about 7 million records, but nobody seemed concerned about it.

~~~
partycoder
If it's just a log then you can "merge on write" (NoSQL-style) instead of
"merge on read" (a join).

------
oneplane
I had a quick look at that page, but it mostly seems to refer to 'general'
resources. I somewhat expected a guide to something groundbreaking, like
HipHop was for PHP. Or maybe a CLR-machine-bridge or something.

~~~
thr0waway1239
On an only tangentially related note, I was surprised at just how little the
big data/machine learning/data science fields intersect with the .NET
ecosystem.

~~~
DougWebb
I don't know how many .NET developers do what I do, but I think a lot of us
are doing Enterprise development, typically working with huge databases that
have been around for decades. My company does a lot of work with DB2 on IBMi
servers, replacing old green-screen applications with new WPF and web
applications. It's not the big data you're talking about, but it's the
old-school version of big data.

~~~
pjmlp
From what I hear around, most of the big data consulting shops are actually
using less data than the "old-school version of big data".

