What I like about this write-up is that it's end-to-end. Most ML write-ups leave you with a Keras model, leaving many questions around how you turn the model into a product, especially if you have to move the model to a non-Python platform. Really good read, enjoyed it.
The author comes across as very humble too, he doesn't pretend to be an expert, and takes his time to explain his rationale at every step. I'm not even that interested in iOS development or deploying models to phones - but I really enjoyed the read for the journey rather than the destination.
Really enjoyed the article. I don't do ML but I'm a React Developer, so I was interested in reading the article cause it said React Native.
To my surprise, I read it entirely, understood most of it, and best of all, React wasn't part of it until the very last part.
I encourage you to write more posts like this, I learned a lot.
Overkill is a point of view here. Training and deploying neural networks is becoming easier than ever.
In my group at Arm there's a solid expectation that we'll see neural networks integrated into every part of a running application, and whether they execute on special NN processors or the general-purpose CPU will largely depend on where the data is needed.
I said it was overkill because I thought I had a simple analytical solution, as follows. Note: I don't know anything about segmented regression; this is just your standard CS dynamic programming to calculate splits:
DP[i][j] = min over k of (DP[i][k] + (cost of splitting at k) + (linear regression error of points from kth to jth))
This should run in O(n^3), which will be fine for the author's requirement of ~100 points. But this isn't a complete solution, since it's not obvious how to choose the cost of splitting (which is needed, otherwise it will just split everything into one- or two-point segments).
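Here's roughly what I mean as a quick sketch, with the DP collapsed to one index over the end point; `split_cost` is the knob I don't know how to choose, and the helper names are mine:

```python
# A minimal sketch of the DP described above, assuming (x, y) measurement
# points and a hand-picked per-split penalty `split_cost`.
import numpy as np

def fit_error(x, y):
    """Sum of squared residuals of a least-squares line through (x, y)."""
    if len(x) < 3:
        return 0.0
    A = np.vstack([x, np.ones(len(x))]).T
    _, residuals, _, _ = np.linalg.lstsq(A, y, rcond=None)
    return float(residuals[0]) if residuals.size else 0.0

def best_segmentation(x, y, split_cost):
    """dp[j] = cheapest way to cover points 0..j with line segments; O(n^3) overall."""
    n = len(x)
    dp = [float("inf")] * n
    start = [0] * n                      # start index of the last segment ending at j
    for j in range(n):
        for k in range(j + 1):
            err = fit_error(x[k:j + 1], y[k:j + 1])
            cand = err if k == 0 else dp[k - 1] + split_cost + err
            if cand < dp[j]:
                dp[j], start[j] = cand, k
    # Walk back through `start` to recover the segment boundaries
    segments, j = [], n - 1
    while j >= 0:
        segments.append((start[j], j))
        j = start[j] - 1
    return list(reversed(segments))
```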
I still think that reasoning about this more and explicitly trying to design this cost function is better than labeling a bunch of data until the machine learning algorithm can reverse-engineer the cost function from your head. Then you can be confident of what your code is doing and why, and know that it won't randomly output potato.
I believe this is what the author tried first in the post. He even links to this test UI where you can compare the "plain math" approach to the neural network:
I think it is a great example of deployment, and maybe even a good example of tackling a problem that is not easy to understand. Either I missed it, or the nature of the problem is never thoroughly analyzed. I am not an expert when it comes to mechanical watches, but the main question hovering over this topic is: will a watch deviate from the perfect time in a linear fashion, or can there be other models? Even more so: is it possible that different watches will deviate in completely different ways? If so, the problem instantly gets three orders of magnitude harder...
> I think of convolution as code reuse for neural networks. A typical fully-connected layer has no concept of space and time. By using convolutions, you’re telling the neural network it can reuse what it learned across certain dimensions.
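For anyone who hasn't internalized that, here's a toy Keras comparison (the layer sizes are made up, not taken from the article): the Dense layer learns a separate weight for every input position, while the Conv1D layer learns one small kernel and reuses it at every position along the time axis.

```python
from tensorflow import keras

timesteps, features = 100, 2

dense = keras.Sequential([
    keras.layers.Flatten(input_shape=(timesteps, features)),
    keras.layers.Dense(32),              # ~6.4k weights, one per (input position, unit) pair
])

conv = keras.Sequential([
    keras.layers.Conv1D(32, kernel_size=5, padding="same",
                        input_shape=(timesteps, features)),  # 352 weights, reused across time
])

dense.summary()
conv.summary()
```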
Happy to hear it worked out for the author; it's a great showcase for the technologies. Thanks for sharing with us!
I've been tracking the performance of my mechanical watch myself for over a year now. After some experimentation I've settled on taking a burst photo of the watch hands at the exact minute with my iPhone camera and reading out the EXIF data for exact timing. This solves quite a few logistical problems with the measurements.
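For what it's worth, the EXIF part is only a few lines with Pillow; a rough sketch (tag availability and sub-second precision vary by phone):

```python
from PIL import Image

def capture_time(path):
    """Return the capture timestamp fields from a photo's EXIF data."""
    exif = Image.open(path).getexif()
    ifd = exif.get_ifd(0x8769)     # Exif sub-IFD, where the capture timestamps live
    return (ifd.get(0x9003),       # DateTimeOriginal, "YYYY:MM:DD HH:MM:SS"
            ifd.get(0x9291))       # SubsecTimeOriginal, fractional seconds if present
```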
From my point of view, spending time designing an automatic ML solution to something that is caused by the watch owner and can be easily identified is less optimal than, for instance, automating the measurements themselves as described above.
If the author is interested in moving in that direction, I'd be happy to share my experience directly.
Otherwise good luck further on and keep us posted.
Using the camera for taking measurements is a great idea. Deciding where to split the trendlines is a separate problem though. A different way to take the measurements wouldn’t change that. Would love to chat about ways of improving both. Shoot me an email!
Great walkthrough.
Purchased the app - I have been curious about the performance of my watch for a while but never got around to measuring it. The app nicely hit a niche.
What would be the challenges in using the camera to identify the time on the watch face?
It’s a great idea. I had it on the back burner. Wasn’t sure if it was worth the time for a relatively niche app. Now I’m thinking it might be worth doing just so I can write about it!
Dude. This is awesome stuff! Great design, topic, demeanor, problem statement, solution. I also like the way you visually offset deep(er) dive subtopics. Thanks!
Very nice write-up. Also, the timing couldn't be better, as I am dabbling with an idea for a React Native app which would help me target both iOS and Android. Thanks for sharing your experience.
Looks like this is a classification problem as opposed to a regression problem (the NN is trying to pick one output). You very likely want to use a cross-entropy loss function, not MSE.
I think this is correct - binary cross-entropy on the sigmoid outputs should at least make the network easier to train and may as a consequence improve test performance.
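If it helps, the change is a one-liner in Keras. A hedged sketch with a made-up stand-in architecture (the article's actual model may differ):

```python
from tensorflow import keras

# Toy stand-in for the split-detection network: per-point sigmoid outputs
# marking "a new trendline starts here".
model = keras.Sequential([
    keras.layers.Conv1D(16, kernel_size=5, padding="same",
                        activation="relu", input_shape=(None, 2)),
    keras.layers.Conv1D(1, kernel_size=1, activation="sigmoid"),
])

# model.compile(optimizer="adam", loss="mse")                  # regression-style loss
model.compile(optimizer="adam", loss="binary_crossentropy")    # per-point classification loss
```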
Great work shipping NNs on an app. However, if I understand the challenge correctly, wouldn't starting a new sequence for a new trendline every time the deviation drops (potentially with an error margin) do the trick?
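Something like this, in Python (the margin is a guess and real data is noisier, so take it as a sketch of the idea rather than a working detector):

```python
def split_on_drops(deviations, margin=2.0):
    """Start a new segment whenever the measured deviation falls by more than
    `margin` seconds relative to the previous measurement (e.g. after the
    watch was reset)."""
    segments, current = [], [0]
    for i in range(1, len(deviations)):
        if deviations[i] < deviations[i - 1] - margin:
            segments.append(current)
            current = []
        current.append(i)
    segments.append(current)
    return segments
```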
Content aside, I love the ToC to the left with clickable links. It helps prepare a reader for what information they're about to consume, and provides relevant context for each section.
I am guessing the React Native choice was for fun/learning, since the author didn't target Android. But is there a toolchain that can convert Torch or TensorFlow models to an Android-compatible ML framework?
Great walkthrough, but was a neural network really necessary for detecting how far off the time on a mechanical watch is? I didn't read the article that closely so maybe I'm missing something but just intuitively a machine learning model and neural network seem like a lot of work for something like this.
The neural network is not for measuring how far off the time is, it’s for guessing where different trendlines in the charts should start and end. You could say it’s overkill, but as I mention in the article I wasn’t happy with the results I got using simpler math. Also I’m pretty honest about the fact that I used a neural network because I felt like it. :)