FWIW, Sandstorm is already designed to limit the possible damage from malicious apps in general, by making sure each app has only the minimum privileges it needs to do its job (and making sure the UX for granting those privileges is painless for the user). For example, we've seen quite a few security vulnerabilities in apps that were largely mitigated when the app was run on Sandstorm.
Hypothetically, in the Sandstorm model, you could architect a messaging app that is incapable of leaking messages to a third party: for each contact, you would create a separate grain (fine-grained instance, in Sandstorm terminology) of the app, and you would permit that grain to communicate only with the corresponding grain that contact created for you. If the app cannot communicate with other grains of itself -- much less third parties -- then it cannot leak any communications.
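To make the idea concrete, here's a minimal sketch of that capability model in Python. This is purely illustrative -- `Grain`, `connect`, and `send` are hypothetical names, not real Sandstorm APIs -- but it shows the key property: the app code can only reach grains it has been explicitly handed a capability to, so there's no code path by which it could leak a message elsewhere.

```python
class Grain:
    """A sandboxed app instance that can only talk to peers it holds a
    capability for. (Hypothetical model, not the actual Sandstorm API.)"""

    def __init__(self, owner, contact):
        self.owner = owner      # whose account this grain lives in
        self.contact = contact  # which contact this grain is dedicated to
        self.peer = None        # the single grain this one may talk to
        self.inbox = []

    def grant_peer(self, peer):
        # The platform, not the app, hands out exactly one peer capability.
        self.peer = peer

    def send(self, recipient, message):
        # The app has no way to name arbitrary recipients; anything it
        # wasn't granted a capability to is simply unreachable.
        if recipient is not self.peer:
            raise PermissionError("no capability for that recipient")
        recipient.inbox.append((self.owner, message))


# One grain per contact pair, linked only to each other:
alice_for_bob = Grain(owner="alice", contact="bob")
bob_for_alice = Grain(owner="bob", contact="alice")
alice_for_bob.grant_peer(bob_for_alice)
bob_for_alice.grant_peer(alice_for_bob)

alice_for_bob.send(bob_for_alice, "hi")   # allowed: capability was granted

eve = Grain(owner="eve", contact="alice")
try:
    alice_for_bob.send(eve, "leaked!")    # blocked: no capability for eve
except PermissionError:
    pass
```

The point of the sketch is that confinement comes from the object graph itself, not from the app checking an access-control list: an evil update to the app code inside a grain still has no reference to anything beyond its one peer.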
Obviously, there are a lot of UX questions raised by this design. It is a goal of the Sandstorm project to solve those UX issues, and we believe they are solvable, but I can't claim to have all the answers today. More likely, what you'd run today is a single grain that can talk to all your contacts. In that case, an evil app update could almost certainly find a way to covertly leak any message through the network of contacts.
(And, of course, another issue, if communications are crossing the internet, is traffic analysis and covert channels embedded therein.)
So, yeah, there may or may not be a good technical solution here. But on the bright side, the Apple-FBI case seems to indicate that the government doesn't have the power to compel false signatures on code updates. I can only hope that interpretation stands and is reinforced over time.