Maybe you could try to recognize different kinds of food, but I doubt you could measure the volume of food from just one picture. Even humans would not be able to guess the calories correctly, because there is no visual difference between low-fat cheese and regular cheese, for example.
You could also devise a "pregnancy test" app - simply take a picture of the woman's face - yes, of her face ;-) - and you get the result: pregnant or not, and whether it's a boy or a girl, and the due date. Honestly, people falling for the mealsnap app would also fall for this.
The question was as much about how HNers would approach this problem as about how they think this app actually works (or doesn't work).
I mean, naively, you might try to crowdsource a set of labeled food photos and then identify a new picture algorithmically by photo similarity - but then how do you address the volume of the food being pictured...
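To make the naive approach concrete, here is a minimal toy sketch of that idea: match a query photo against a "crowdsourced" labeled set using an average-hash plus Hamming distance. Everything here is illustrative - images are plain 2-D lists of grayscale values, and a real system would decode and downscale actual photos first. Note it says nothing about volume, which is exactly the gap raised above.

```python
# Toy nearest-neighbor lookup over a labeled photo set.
# "Average hash": pixels above the image's mean brightness become 1-bits;
# two photos are compared by Hamming distance between their bit vectors.

def average_hash(pixels):
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    return sum(a != b for a, b in zip(h1, h2))

def nearest_label(query, labeled_photos):
    """Return the label of the stored photo most similar to `query`."""
    qh = average_hash(query)
    best = min(labeled_photos, key=lambda item: hamming(qh, average_hash(item[1])))
    return best[0]

# Tiny fake "crowdsourced" set: two 2x2 grayscale photos with known labels.
dataset = [
    ("apple", [[200, 180], [40, 30]]),
    ("curry", [[90, 95], [100, 105]]),
]
query = [[210, 170], [50, 20]]  # brightness pattern resembles the apple photo
print(nearest_label(query, dataset))  # → apple
```

Even granting this worked at scale, it would only give you a label, not a portion size - the volume problem stays open.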
Here is MHO: machine learning may work for tracking faces, balls, or household objects, but food looks so varied that I imagine an accurate program for this is many decades into the future. Think about how many ways you can cook lentils! How could you possibly tell the calorie content of a curry? Perhaps someone could develop an application for fruit, but its accuracy is probably going to be no better than looking up the average energy content of an apple in a table.
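That lookup-table baseline amounts to almost nothing, which is the point. A sketch, with rough illustrative per-100g figures (not authoritative nutrition data) and a hand-waved portion weight - the part no photo can supply:

```python
# The "look it up in a table" baseline: given a label and a guessed
# portion weight, return average calories. The numbers are ballpark
# per-100g values for illustration only.

CALORIES_PER_100G = {"apple": 52, "banana": 89, "lentils_cooked": 116}

def estimate_calories(label, grams):
    """Scale an average per-100g figure to a guessed portion weight."""
    return CALORIES_PER_100G[label] * grams / 100

print(estimate_calories("apple", 150))  # → 78.0 (a medium apple, weight guessed)
```

All the hard parts - recognizing preparation style, fat content, portion size - are hidden in the two arguments you have to supply yourself.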