require 'gsl'  # rb-gsl bindings

x = GSL::Vector.alloc(array_of_x_values)  # plain Ruby arrays of floats
y = GSL::Vector.alloc(array_of_y_values)
c0, c1, cov00, cov01, cov11, chisq, status = GSL::Fit::linear(x, y)
(It also does weighted regressions and exponential fitting, among a host of other things. That wheel's gone done been invented already.)
GSL can be used internally ("in-house") without restriction, but only redistributed in other software that is under the GNU GPL.
Thanks for pointing out GSL. I appreciate the feedback.
It's great to see other people doing statistical work in Ruby, though, so please don't let my criticism keep you from continuing to do it! :)
While we do some stats in Ruby, it definitely doesn't represent the entire "this is how we solved it" picture. In fact we use a large amount of R and Java to solve our statistics problems.
Larger data sets and performance-intensive operations are better handled in Python (or Java, C++, etc.). Lots of statistical analysis is way below the threshold of Hadoop and company.
The missing link here, and the reason Python gets more love from the data community, is that Python scales down to the smaller data sets as well as it handles big ones. (Not sure if you meant it couldn't, but the distinction you make implies that.)
For example, instead of computing an exact SVD, you will use something like a Hebbian algorithm to compute the SVD in a streaming manner (that's what Mahout implements, for example).
EDIT: or did you mean Disco offers distributed sparse CSC operations?
Both R and Ruby have had issues with large data sets, which have been addressed to some degree in different distributions and more recent releases. Python is ready out of the box for large data sets. So what I meant to communicate is that if you know you are going to be dealing with a large data set, you might as well go straight to Python.
As far as Python goes, I think it has become more popular simply because it has more community around data applications. Unfortunately a lot of people view Ruby as just Rails. I think academia's adoption of Python has also helped it grow into a data analysis language.
Really, any MapReduce job you do with Python you could do with Ruby, as it all uses Hadoop streaming.
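The streaming contract is just lines on STDIN and tab-separated key/value pairs on STDOUT, so a Ruby mapper/reducer is trivial. A minimal word-count sketch (the pipeline is simulated in-memory here; in a real job each function would be a standalone script passed via -mapper/-reducer):

```ruby
# Simulated Hadoop-streaming word count in Ruby.
# In a real job, the mapper and reducer would each read STDIN and
# print to STDOUT; here they're chained in-memory to show the data flow.

# Mapper: emit "word\t1" for every word on every input line.
def map_lines(lines)
  lines.flat_map { |line| line.split.map { |w| "#{w.downcase}\t1" } }
end

# Reducer: after Hadoop's shuffle/sort, sum the counts per word.
def reduce_pairs(pairs)
  counts = Hash.new(0)
  pairs.each do |pair|
    word, n = pair.split("\t")
    counts[word] += n.to_i
  end
  counts
end

input = ["the quick brown fox", "the lazy dog"]
puts reduce_pairs(map_lines(input).sort).inspect
# e.g. {"brown"=>1, "dog"=>1, "fox"=>1, "lazy"=>1, "quick"=>1, "the"=>2}
```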
It's very useful to code basic statistical algorithms yourself so you understand how they work, but for any serious analysis you'll get more reliable and performant results with a library.
In fact I perform the vast majority of my statistical analysis in R.
Ruby is just a fun language to implement basic statistical algorithms in, and the Ruby community as a whole hasn't put a lot of emphasis on stats.
At best, a big asterisk should come with any of these results if you didn't have someone with actual experience validate your design/proposed analyses first.
Wikipedia has an article on Polynomial Regression: http://en.wikipedia.org/wiki/Polynomial_regression
P.S. I'm doing this course https://www.coursera.org/course/ml so my knowledge may not be entirely correct; take everything I've said with a pinch of salt. :)
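For the curious: polynomial regression is just linear regression on powers of x, so you can sketch it with Ruby's stdlib Matrix and the normal equation theta = (X'X)^-1 X'y (the sample data below is made up, an exact quadratic with no noise):

```ruby
require 'matrix'

# Fit y = t0 + t1*x + t2*x^2 by least squares via the normal equation
# theta = (X'X)^-1 X'y, where row i of X is [1, x_i, x_i^2].
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = xs.map { |x| 1.0 + 2.0 * x + 3.0 * x * x }  # exact quadratic, no noise

x_mat = Matrix.rows(xs.map { |x| [1.0, x, x * x] })
y_vec = Vector.elements(ys)

theta = (x_mat.t * x_mat).inverse * x_mat.t * y_vec
puts theta.to_a.map { |t| t.round(6) }.inspect  # ≈ [1.0, 2.0, 3.0]
```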
|0 0 0|
|0 1 0|
|0 0 1|
This ensures that the matrix is now invertible. Regularization takes care of overfitting.
P.S. I'm an ML n00b doing the Machine Learning course on Coursera, so I might be unaware of more practical knowledge of the above. :D
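For concreteness, here's a sketch of that regularized normal equation, theta = (X'X + lambda*W)^-1 X'y, with W being the identity except W(0,0) = 0 so theta0 isn't shrunk (stdlib Matrix; the data is made up):

```ruby
require 'matrix'

lambda_ = 10.0
xs = [[1.0, 0.5], [1.0, 1.5], [1.0, 2.5], [1.0, 3.5]]  # x0 = 1 prepended
ys = Vector[1.1, 2.9, 5.2, 6.8]

x_mat = Matrix.rows(xs)
n = x_mat.column_count

# W = identity with the (0,0) entry zeroed, so the intercept theta0
# is not penalized.
w = Matrix.build(n, n) { |i, j| (i == j && i != 0) ? 1.0 : 0.0 }

theta = (x_mat.t * x_mat + w * lambda_).inverse * x_mat.t * ys
puts theta.to_a.inspect
```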
Note that your W does not guarantee invertibility - e.g., if your original (0,0) is already 0.
Given n features x1 to xn, we introduce an x0 feature which is always set to 1. During the regularization lectures the professor said that we don't need to control (or regularize) theta0 (the parameter for x0) because it doesn't make a difference. I believe this is the reason W(0,0) is set to 0.
The lectures are a little light on the maths, i.e. the professor explains only enough maths to explain the techniques, so I'm not aware of more details. I'm planning on watching some Linear Algebra lectures to fill in the gaps. :)
Re: Invertibility, according to the professor, if lambda is > 0 then the matrix will be invertible. Again I'm not 100% sure if this is true or not.
He doesn't need to set W(0,0) to 1 specifically because he sets x0 to 1 (which guarantees a non-zero value in the covariance matrix).
But the standard way to do L2 regularization (also known as "ridge regression") is to add a scaled identity matrix (the entire diagonal set to be nonzero).
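A quick numerical illustration of why lambda*I helps: a duplicated column makes X'X singular, and adding lambda*I makes it invertible (stdlib Matrix; the data is made up):

```ruby
require 'matrix'

# Two identical columns => X'X is rank-deficient (determinant 0).
x = Matrix.rows([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
xtx = x.t * x
puts xtx.determinant          # => 0.0, so not invertible

lambda_ = 0.1
ridge = xtx + Matrix.identity(2) * lambda_
puts ridge.determinant        # ≈ 2.81, positive for any lambda > 0
```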
People who do linear regression at work don't add an x0 feature? During the lecture the prof. only said that adding x0=1 for all m samples is a convention and helps simplify the computation. Unless I missed something during the lecture, that's the only explanation that was given.
> People who do linear regression at work don't add a x0 feature?
Sometimes they do that; sometimes the data already has a subset known to have sum 1 (e.g., if you have binary variables that reflect "one of n choices", one of which must be set), and in this case adding x0=1 makes things worse (from a numerical perspective) for many algorithms.
Regardless, I've always seen regularization stated with lambda*identity matrices.
This is normally faster, more numerically stable and more space efficient. Even better, once you've computed the LU factorization of A once (which takes O(n^3) operations) you can then solve Ab = c for many different values of b and c in O(n^2) operations, by caching the factorization of A.
My linear algebra isn't very good so I'll have to look into LU factorization, but is there a vast difference between the computational performance of the two operations? (Assuming you don't need to solve Ab=c for different values of b and c.)
Also, is LU factorization used often in machine learning instead of the inverse?
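Re: whether it's used in practice: Ruby's own stdlib Matrix exposes this pattern. Matrix#lup factors once, and the cached decomposition then solves against many right-hand sides without ever forming the inverse (a sketch with made-up numbers):

```ruby
require 'matrix'

a = Matrix[[4.0, 3.0], [6.0, 3.0]]

# Factor once (the O(n^3) step)...
lup = a.lup

# ...then solve A * x = b cheaply (O(n^2) each) for as many
# right-hand sides as you like, without computing a.inverse.
x1 = lup.solve(Vector[10.0, 12.0])
x2 = lup.solve(Vector[1.0, 0.0])
puts x1.to_a.inspect  # ≈ [1.0, 2.0]
puts x2.to_a.inspect  # ≈ [-0.5, 1.0]
```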
> Because of the possibility to blockwise invert a matrix, where an inversion of an n×n matrix requires inversion of two half-sized matrices and 6 multiplications between two half-sized matrices, and since matrix multiplication has a lower bound of Ω(n^2 log n) operations, it can be shown that a divide-and-conquer algorithm that uses blockwise inversion to invert a matrix runs with the same time complexity as the matrix multiplication algorithm that is used internally. source is 
You can generally "multiply by the inverse" without actually computing the inverse, in a way that needs less intermediate floating point precision.
If your matrices have a large spread of eigenvalues, it makes a lot of difference - double precision often doesn't have enough precision for direct inversion in the real world.
(And even if you do want to invert, it is more numerically stable to do that as a solution of Ax=e, for each 'e' a basis element, as long as you compute A^-1 * e indirectly using a numerically stable method)
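In stdlib-Ruby terms, that column-by-column construction of the inverse looks like this sketch (each basis vector is solved against a cached LU factorization; the matrix is made up):

```ruby
require 'matrix'

a = Matrix[[2.0, 1.0], [1.0, 3.0]]
lup = a.lup  # factor once

n = a.row_count
# Solve A * x = e_i for each standard basis vector e_i; the
# solutions are the columns of A^-1.
cols = (0...n).map do |i|
  e = Vector.elements(Array.new(n) { |j| j == i ? 1.0 : 0.0 })
  lup.solve(e).to_a
end
a_inv = Matrix.columns(cols)

# Agrees with the direct inverse up to floating-point rounding.
puts (a_inv - a.inverse).to_a.inspect
```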
Here is one of my favorite OLS-related abuses of linear algebra:
y = XB + e ---> e = y - XB ---> e = y - X(X'X)^(-1)X'y ---> e = [I - X(X'X)^(-1)X']y = My
Now use the fact that M is symmetric and idempotent to compute the SSR:
e'e = (My)'(My) ---> e'e = y'M'My ---> e'e = y'MMy ---> e'e = y'My
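You can verify the identity e'e = y'My numerically; a sketch with stdlib Matrix and made-up data:

```ruby
require 'matrix'

x = Matrix.rows([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0], [1.0, 4.0]])
y = Vector[2.1, 3.9, 6.2, 7.8]

# Residual maker M = I - X(X'X)^-1 X'
m = Matrix.identity(4) - x * (x.t * x).inverse * x.t

# SSR the direct way: residuals from the OLS fit...
beta = (x.t * x).inverse * x.t * y
e = y - x * beta
ssr_direct = e.inner_product(e)

# ...and via the idempotent-matrix identity e'e = y'My.
ssr_via_m = y.inner_product(m * y)

puts ssr_direct
puts ssr_via_m  # equal up to floating-point rounding
```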