Hacker News
Machine Learning Algorithms Examples in MatLab/Octave (github.com/trekhleb)
72 points by trekhleb on Oct 31, 2018 | 7 comments



How much of the code/comments is from Andrew Ng's course (also done in MatLab/Octave with fill-in-the-blanks style problem solving), and how much is original work?


I've done that course, and some of these files look very similar to the boilerplate code that you have to modify/complete for the weekly coursework. Comments seem to have been removed and some reformatting done.

Edit: As jszymborski points out, there is an attribution of the source in the README.md.


Some attribution seems to be there at the beginning of the README:

> "In most cases the explanations are based on [this great](https://www.coursera.org/learn/machine-learning) machine learning course."

EDIT: It was added 6hrs ago https://github.com/trekhleb/machine-learning-octave/commit/f...


Yes, links to the Andrew Ng course are in every README of the repo: at the top of the main README.md and in the "References" section of each internal README.md.


Yes, the code and comments were reformatted for better readability (in my opinion). The code has also been split differently than in the course: I was trying to highlight the similarities between the algorithms, e.g. a separate hypothesis() function, a separate gradient_descent_step() function, and so on. The most significant changes were made to the neural-network section, since it now supports multiple hidden layers of different sizes; the course's version of the code supported only one hidden layer. The README files also contain additional images (sometimes animations) and definitions for a richer learning experience.
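The split the comment describes can be sketched roughly like this. This is a minimal Python/NumPy translation for illustration (the repo itself is Octave); the function names hypothesis() and gradient_descent_step() come from the comment above, while the toy data, learning rate, and iteration count are made-up assumptions:

```python
import numpy as np

def hypothesis(X, theta):
    """Linear-regression hypothesis: h(x) = X * theta."""
    return X @ theta

def gradient_descent_step(X, y, theta, alpha):
    """One batch gradient-descent update: theta -= (alpha/m) * X'(h - y)."""
    m = X.shape[0]
    gradient = X.T @ (hypothesis(X, theta) - y) / m
    return theta - alpha * gradient

# Toy data: y = 2x, with a bias column of ones prepended to X.
X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([2.0, 4.0, 6.0])
theta = np.zeros(2)
for _ in range(5000):
    theta = gradient_descent_step(X, y, theta, alpha=0.1)
# theta converges toward [0, 2]
```

Factoring the hypothesis and the update step into separate functions means other algorithms (e.g. logistic regression) can reuse the same gradient-descent loop with a different hypothesis().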


I've put my comments deeper in this thread, but just in case I'll also summarize them here. Links to the Andrew Ng course are in every README of the repo: at the top of the main README.md and in the "References" section of each internal README.md. Regarding the changes to the original course's code, the following has been changed/added: the code and comments were reformatted for better readability (in my opinion), and the code has been split differently than in the course: I was trying to highlight the similarities between the algorithms, e.g. a separate hypothesis() function, a separate gradient_descent_step() function, and so on. The most significant changes were made to the neural-network section, since it now supports multiple hidden layers of different sizes; the course's version of the code supported only one hidden layer. The README files also contain additional images (sometimes animations) and definitions for a richer learning experience.
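The multi-hidden-layer generalization mentioned above could look something like the following. This is a hedged Python/NumPy sketch, not the repo's actual Octave code; the network shape and random weights are invented for illustration. The key idea is that forward propagation loops over a list of weight matrices rather than assuming exactly one hidden layer:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward_propagation(x, thetas):
    """Feed-forward through an arbitrary list of weight matrices.

    Each theta has shape (units_out, units_in + 1); a bias unit is
    prepended to the activations before every layer, so any number of
    hidden layers of any size can be chained.
    """
    a = x
    for theta in thetas:
        a = np.concatenate(([1.0], a))  # add bias unit
        a = sigmoid(theta @ a)
    return a

# Hypothetical network: 3 inputs -> 5 hidden -> 4 hidden -> 2 outputs.
rng = np.random.default_rng(0)
thetas = [
    rng.normal(size=(5, 4)),  # 3 inputs + bias -> 5 units
    rng.normal(size=(4, 6)),  # 5 units + bias  -> 4 units
    rng.normal(size=(2, 5)),  # 4 units + bias  -> 2 outputs
]
output = forward_propagation(np.array([0.5, -0.2, 0.8]), thetas)
print(output.shape)  # (2,)
```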


When I was learning R a while ago I hacked on similar things [1]. I even took a shot at linear regression in Elixir [2].

[1] https://github.com/milosgajdos83/ml-examples/tree/master/rla...

[2] https://github.com/milosgajdos83/ml-examples/tree/master/eli...




