
RudyMQ – A message queue in .NET - aashishkoirala
http://aashishkoirala.wordpress.com/2014/02/11/rudymq/#rudy
======
lafar6502
nice, but this line

var receiveOperation = queue.StartReceiving<MyMessage>(100, /* poll every 100 ms */

shows that you are polling the queue for new messages. Why use such an
inefficient technique? And BTW it looks like there's no support for
transactional operations. This is indeed a rudimentary queue ;)
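To illustrate the polling objection: a blocking receive wakes the consumer the instant a message arrives, while a poll pays latency and CPU even on an empty queue. A minimal sketch using `BlockingCollection<T>` as an illustrative stand-in (this is not RudyMQ's internals):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class PollingVsBlocking
{
    static void Main()
    {
        // Illustrative stand-in for the message store; not RudyMQ's API.
        var messages = new BlockingCollection<string>();

        // Blocking receive: Take() parks the consumer thread until a message
        // arrives, burning no CPU and adding essentially no latency.
        var consumer = Task.Run(
            () => Console.WriteLine("Received: " + messages.Take()));

        // A poll-based consumer would instead sleep and re-check, e.g.:
        //   while (!messages.TryTake(out var m)) Thread.Sleep(100);
        // paying up to 100 ms of latency per message and waking periodically
        // even while the queue sits empty.

        messages.Add("hello");
        consumer.Wait();
    }
}
```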

~~~
aashishkoirala
Also, there is some support for transactional operations: you can declare a
queue as transactional, which will return the message to the queue if the
client fails.

However, there is no concept of a poison queue, and no integration with
TransactionScope.

~~~
Pxtl
I'm curious, have you ever got distributed transactions - particularly ones
over WCF and the like - working? I've found that while TransactionScope works
like a dream in the trivial case, it becomes disproportionately hairy and
frustrating as you start trying to use such elaborate features.
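For reference, the trivial case described above looks like this; connection details are omitted and this is a sketch, not production code:

```csharp
using System;
using System.Transactions;

class TransactionScopeDemo
{
    static void Main()
    {
        // The trivial, single-resource case: everything inside the scope
        // commits together, or rolls back together.
        using (var scope = new TransactionScope())
        {
            // e.g. run a couple of SqlCommands on one SqlConnection here...

            // If Complete() is not called before the scope is disposed,
            // the transaction rolls back.
            scope.Complete();
        }

        // Escalation to a distributed (MSDTC) transaction only happens when a
        // second durable resource (another SQL Server node, MSMQ) enlists --
        // which is where the "hairy" behaviour in this thread begins.
        Console.WriteLine("committed");
    }
}
```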

~~~
aashishkoirala
It generally works with anything that supports MSDTC; multiple SQL Server
nodes being the most usual scenario. The one case where it starts to get hairy
is when you bring MSMQ into the mix, not surprisingly. That has been my
experience, anyway.

------
csharpdude
Interesting. I have to be honest: I did not know about the whole reader/writer
lock support. I was accustomed to just using the "lock" keyword.

~~~
bananas
Watch out for ReaderWriterLockSlim. It's dangerous under certain circumstances,
particularly if:

1. You have only one or two cores.

2. Your CPU load is high.

3. Your thread pool is heavily utilised, so there is a queue building up
(think IIS).

It relies on spinlocks, so it enters a tight loop for a number of cycles to
avoid lock contention. Under load this drives CPU usage up further, so every
thread ends up getting blocked on spinlocks. You end up with N runnable
threads (one per core) utilising 100% CPU in a tight loop, and your site goes
down.

I wouldn't rely on it as a concurrency mechanism in production!

We've had this in production and it took us out for a few hours.

Note: the root cause was a for loop that never terminated, but all the other
threads went batshit at 100% inside ReaderWriterLockSlim, so it was impossible
to identify what was going on. You can't even log into a box under that load,
so it's a case of waiting for it to blow up, dropping a minidump out with
Process Explorer, getting that off-site, and firing it up in VS. Total
nightmare.
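One defensive pattern that would at least have surfaced the problem: take the lock with a bounded wait via `TryEnterReadLock`, so a wedged writer shows up as a loggable timeout instead of every thread-pool thread piling up behind it. A sketch (names illustrative):

```csharp
using System;
using System.Threading;

class GuardedReader
{
    private readonly ReaderWriterLockSlim _rwLock = new ReaderWriterLockSlim();
    private int _value = 7;

    // Bounding the wait means contention becomes a visible failure you can
    // log and alert on, rather than threads blocking/spinning indefinitely.
    public bool TryRead(TimeSpan timeout, out int value)
    {
        if (!_rwLock.TryEnterReadLock(timeout))
        {
            value = 0;
            return false; // surface the contention instead of piling up
        }
        try
        {
            value = _value;
            return true;
        }
        finally { _rwLock.ExitReadLock(); }
    }
}

class Program
{
    static void Main()
    {
        var reader = new GuardedReader();
        if (reader.TryRead(TimeSpan.FromMilliseconds(250), out var v))
            Console.WriteLine(v); // prints 7 when uncontended
        else
            Console.WriteLine("lock timed out - log and shed load");
    }
}
```

It wouldn't have fixed the runaway loop, but the timeouts in the logs would have pointed at the contended lock much sooner than a minidump.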

~~~
danesparza
Here is a gem on SO about ReaderWriterLockSlim:
[http://stackoverflow.com/a/17296055/19020](http://stackoverflow.com/a/17296055/19020)

HC SVNT DRACONES

~~~
bananas
That's marvellous - thanks for posting :)

