But yes, you should learn to apply multilayer convnets to image problems.
And don't worry about architecture. There are two variables: number of parameters and number of layers. There's an optimum, and the optimal plain network will be bigger than the "advanced" architectures, but it will match their performance very closely (perhaps 1-2% worse).
The slight issue is that it takes 2-3 hours to test each (number of free parameters, number of layers) combination, so "hyperparameter tuning" (pick a random combination, test, repeat) takes a long time. But if you build it into a nice pipeline you can run it in the cloud in parallel on a regular basis (as new data gets added) and get extremely good results.
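The pick-random-and-test loop above can be sketched in a few lines. This is a minimal, hedged sketch, not a real training pipeline: `evaluate` is a hypothetical stand-in for the 2-3 hour training run (here a toy scoring function so the sketch runs), and the candidate parameter/layer grids are made-up assumptions.

```python
import random

def evaluate(num_params, num_layers):
    # Placeholder for the real step: train a convnet with this parameter
    # budget and depth, then return its validation accuracy. The formula
    # below is a toy stand-in with a made-up optimum at (5M params, 8 layers).
    return 1.0 - abs(num_layers - 8) * 0.01 - abs(num_params - 5_000_000) / 1e8

def random_search(trials=20, seed=0):
    # Pick a random (parameter count, layer count) pair, test it, repeat,
    # and keep the best-scoring configuration seen so far.
    rng = random.Random(seed)
    best_config, best_score = None, float("-inf")
    for _ in range(trials):
        config = (rng.choice([1_000_000, 5_000_000, 20_000_000]),  # params
                  rng.choice([4, 8, 16, 32]))                       # layers
        score = evaluate(*config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score

best_config, best_score = random_search()
```

Since the trials are independent, each call to `evaluate` is what you'd farm out to a separate cloud worker and re-run as new data arrives.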
Math heavy ML is simple: ∀ problems: solution is Convnets
The article's application, however, is a good example of where they are very applicable.
> We don't need to know mnemonics.
> We have also these crazy tools now.
Yes, but we also have crazy tools that rapidly create complexity. The popularity of tools we don't understand, didn't (directly) write, and cannot meaningfully inspect or audit (e.g. modern machine learning) is rapidly adding unknown, interdependent complexity. This complexity is already spinning out of control. The only reason it seems easier today is that most of the problem is being ignored.
It probably depends on your industry, though. Working on new experimental projects has a lot of benefits now because the experiments are discussed in the open on forums like Reddit and GitHub.