Binary arithmetic is not useful for the beginner. But binary logic is extremely useful...and more importantly, binary logic is something that non-programmers typically have a blind spot for.
Someone once asked me what digital concepts non-programmers should be familiar with...at the top of that list for me is the concept of everything being either 0 or 1.
For example, the answer to the question "Is 2 greater than 1?" can be represented as a 1, for "true" (and a 0 for "false").
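In Python, you can see this directly: a comparison evaluates to a boolean, and booleans are literally the integers 0 and 1 under the hood.

```python
# A comparison is a yes/no question; the answer is a boolean.
answer = 2 > 1
print(answer)       # True

# And that boolean really is the number 1.
print(int(answer))  # 1
print(True + True)  # 2 -- booleans behave as integers
```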
So how do computers solve complex problems? By breaking a complex problem into many, many, many yes or no questions.
How do you tell if a pixel is reddish? Its R value is greater than a certain threshold (and of course, this comparison is itself a composition of multiple binary operations). How do you tell whether, in a given photo, a man's eyes are closed? Well, given the set of pixels where the eyes are located, calculate whether most of those pixels are reddish rather than white (or brown, or whatever the person's skin tone is). And how do you locate the eyes in a jpeg in the first place? Well, that's more binary decisions...which leads you to talking about functions and methods and the encapsulation of complex code (another feature of programming that non-programmers should know).
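Here's a toy sketch of that "is this pixel reddish?" idea. The threshold value and the function names are illustrative assumptions, not any particular image library's API; the point is just how one big question decomposes into many small yes/no ones.

```python
RED_THRESHOLD = 150  # assumed cutoff on a 0-255 scale

def is_reddish(r, g, b):
    # One yes/no question: is the red channel strong AND dominant?
    return r > RED_THRESHOLD and r > g and r > b

def eyes_mostly_reddish(eye_pixels):
    # Many yes/no answers (0s and 1s) summed up, then compared:
    # yet another yes/no question built out of the first ones.
    reddish_count = sum(is_reddish(r, g, b) for (r, g, b) in eye_pixels)
    return reddish_count > len(eye_pixels) / 2

# Hypothetical eyelid-region pixels: two skin-toned, one bluish.
closed_eye = [(200, 120, 110), (190, 100, 90), (60, 60, 200)]
print(eyes_mostly_reddish(closed_eye))  # True: 2 of 3 pixels pass the test
```

Every call in that chain bottoms out in simple comparisons returning 0 or 1, which is exactly the reduction described above.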
Once a novice can see how everything, whether it's comparing two numbers, deciding a web-scraping strategy, or designing an AI, can be reduced to yes-no questions, then I feel that programming becomes much more interesting...after all, they need a way to process those countless yes-no questions in a reasonable amount of time.
Teaching programming is a tough field. If you want to teach programming to regular people, you have to go the extra mile to hide the lurking difficulty and the mathematics. They can learn that stuff later if they wish to.
You can also teach it "MIT style," with hour-long lectures and cascading blackboards. That works too, and the content is great, but people pay small fortunes to study at MIT, so whatever the teaching method is, it will work, because they are investing their lives into it.