
Monte Carlo Simulations and Fibonacci: Why Developers Still Need The Basics - Baustin
http://blog.smartbear.com/programming/why-developers-still-need-the-basics/
======
susi22
Monte Carlo Simulations (MCS) weren't invented to do software testing and I
wouldn't really recommend using them as such. There are better ideas. MCS are
used daily for processes/algorithms that are so difficult that an analytical
solution is hard or impossible to find. You just throw some data into your
black box and put a "probe" on the variable you're interested in. Do it
many times and you see the pattern (e.g. plot a histogram when probing a
one-dimensional variable, etc.).
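
The probe-and-histogram idea can be sketched in a few lines of Python. The `black_box` function here is a made-up stand-in (the comment doesn't name a specific system); the point is only that you learn the output distribution by sampling, not by analysis:

```python
import random
import statistics

def black_box(x: float) -> float:
    """Hypothetical process too messy to analyze in closed form:
    some deterministic transform plus noise."""
    return x ** 2 + random.gauss(0.0, 0.1)

random.seed(42)

# Throw many random inputs at the black box and "probe" the output.
samples = [black_box(random.uniform(0.0, 1.0)) for _ in range(10_000)]

# With enough samples the pattern emerges; a histogram (or here,
# summary statistics) describes the output distribution.
print(f"mean  = {statistics.mean(samples):.3f}")
print(f"stdev = {statistics.stdev(samples):.3f}")
```

For uniform input on [0, 1] the sample mean should settle near E[x²] = 1/3, which is exactly the kind of answer MCS gives you without any closed-form derivation.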

If you have an algorithm that you understand and that is not too complicated,
then I'd argue that MCS is NOT the way to go (maybe in addition, but NOT
solely). The reason is that, due to the law of large numbers, it keeps
simulating the nice cases instead of the edge cases, unless you change your
input distribution.
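
A toy sketch of that failure mode (everything here is hypothetical, including the planted bug): a function that misbehaves at exactly one input in a 2³¹-value domain is essentially invisible to uniform sampling, but a distribution that mixes in known-suspicious values finds it immediately:

```python
import random

def buggy_clamp(x: int) -> int:
    """Hypothetical function with a bug at a single edge value."""
    if x == 2**31 - 1:   # planted edge-case bug
        return -1        # wrong result, only for this one input
    return max(0, min(x, 2**31 - 1))

random.seed(0)
DOMAIN = 2**31

# Naive uniform sampling: the bad input has probability ~1/2^31,
# so ten thousand trials almost never hit it.
uniform_hits = sum(buggy_clamp(random.randrange(DOMAIN)) < 0
                   for _ in range(10_000))

# Changed input distribution: mix uniform draws with suspicious
# values (zero, the maxima, off-by-one neighbours).
suspicious = [0, 1, 2**31 - 2, 2**31 - 1]
biased_hits = sum(
    buggy_clamp(random.choice(suspicious) if random.random() < 0.5
                else random.randrange(DOMAIN)) < 0
    for _ in range(10_000)
)

print(uniform_hits, biased_hits)
```

The uniform run typically reports zero failures while the biased run reports over a thousand, which is susi22's point: the sampling distribution, not the sample count, determines whether edge cases ever get exercised.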

I really expected the article to conclude with a link to or explanation of
QuickCheck [1], since this is _exactly_ what the author is trying to get at
(i.e. automatically check the edge cases and automatically minimize the input
that fails them). Many languages have ported these ideas. The Scala/Clojure
ones are pretty nice.

[1]
[http://en.wikipedia.org/wiki/QuickCheck](http://en.wikipedia.org/wiki/QuickCheck)

Original paper of 2000:

[http://www.eecs.northwestern.edu/~robby/courses/395-495-2009...](http://www.eecs.northwestern.edu/~robby/courses/395-495-2009-fall/quick.pdf)
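
The core QuickCheck idea fits in a few lines of plain Python (a toy illustration, not the real library's API): generate random inputs, check a property, and when one fails, greedily shrink it to a minimal counterexample. The property below is the classic deliberately-false claim that every list is its own reverse:

```python
import random

def check_property(prop, gen, shrink, trials: int = 200, seed: int = 0):
    """QuickCheck in miniature: random testing plus shrinking."""
    rng = random.Random(seed)
    for _ in range(trials):
        x = gen(rng)
        if not prop(x):
            # Shrink: repeatedly move to a smaller input that still fails.
            while True:
                smaller = [s for s in shrink(x) if not prop(s)]
                if not smaller:
                    return x           # locally minimal failing input
                x = smaller[0]
    return None                        # no counterexample found

# Deliberately false property: "every list equals its own reverse".
prop = lambda xs: list(reversed(xs)) == xs

def gen(rng):
    return [rng.randint(0, 100) for _ in range(rng.randint(0, 10))]

def shrink(xs):
    # Candidates: drop one element, or halve one positive element.
    out = [xs[:i] + xs[i + 1:] for i in range(len(xs))]
    out += [xs[:i] + [xs[i] // 2] + xs[i + 1:]
            for i in range(len(xs)) if xs[i] > 0]
    return out

counterexample = check_property(prop, gen, shrink)
print(counterexample)
```

Whatever messy random list first falsifies the property, shrinking walks it down to a two-element list of 0 and 1, which is the "automatically minimize the failing input" behaviour the comment refers to.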

~~~
GFK_of_xmaspast
I think "does my software fail on random input" is an extremely important test
to run. It doesn't matter if a naive MCS doesn't hit the edge cases if it's
regularly finding failures in the middle of the problem space. (I've had jobs
where somebody else's code worked fine on hand-selected test cases, but as
soon as you tried something a little bit unexpected it all went to hell.)

You can say that 'oh you should have caught that earlier' and yeah, ok,
whatever.

~~~
taeric
Depends entirely on the system. This is akin to thinking communication will
break down if we all switch to different languages. Certainly true, but far
from a likely thing to worry about.

That is, "random input" is not nearly as important as "likely input."

So, the question here would be whether you could use a simulation to cover
"likely bad" values, specifically values based on ones that have caused
similar methods to fail before. I would think so. This is, essentially,
boundary value testing. (Or am I wrong?)

And yes, I realize there are plenty of libraries that help with this.
ScalaCheck/QuickCheck and the like.
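
A simulation seeded with "likely bad" values is easy to sketch. The candidate list below (range edges, their off-by-one neighbours, zero) is one illustrative choice, not a definitive catalogue of failure-prone inputs:

```python
import random

def boundary_values(lo: int, hi: int) -> list[int]:
    """Classic boundary-value candidates for an integer range:
    the edges, their off-by-one neighbours, and values near zero."""
    cands = {lo, lo + 1, hi - 1, hi, -1, 0, 1}
    return sorted(v for v in cands if lo <= v <= hi)

def likely_bad_inputs(lo: int, hi: int, rng: random.Random, n: int = 100):
    """Simulation biased toward historically failure-prone values:
    mix boundary candidates with plain random draws."""
    bounds = boundary_values(lo, hi)
    return [rng.choice(bounds) if rng.random() < 0.3
            else rng.randint(lo, hi)
            for _ in range(n)]

print(boundary_values(-(2**31), 2**31 - 1))
```

For example, `boundary_values(0, 5)` yields `[0, 1, 4, 5]`. Feeding `likely_bad_inputs` into the method under test gives roughly the boundary-value testing taeric describes, just driven by a weighted random simulation instead of a fixed table.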

------
mrcactu5
As a mathematician, I grimace at the level of resistance I get from
programmers for being too "theoretical".

I try arguing, "but all your libraries do at a low level is math calculations;
don't you want any control over that?"

The typical response is "no". People are quite happy with the boundaries
they feel their APIs are setting for them.

~~~
wavesounds
The purpose of having stuff in libraries that has been tested and works is
that you don't have to worry about it. If we had to reinvent the wheel every
time, we would never get anything done.

~~~
cgrusden
This is a typical rebuttal from programmers when they don't know how something
works underneath.

You are correct about "why" libraries are created.

Since I'm assuming you're talking about open-source libraries: what happens
when there's an issue you're having and you're wondering why the library isn't
doing what you want?

~~~
DaCapoo
I'd end up reading the documentation and understanding why I was using the
library incorrectly. Failing that, I'd go look at how the library was
implemented.

There's a distinct difference between looking at something when it's broken
for the sake of fixing it and implementing every library on your own because
you don't want to use the abstractions created by the programmers who came
before you.

