If you simply don't implement video/audio/scripting, or any multi-threaded features, you can still render a large percentage of pages on Tor hidden services. That's because sites built to work over a privacy overlay are already extremely limited in complexity. And the ones that aren't frankly aren't safe to view anyway, at least not without NoScript turned on (in which case you're essentially back to HTML 4.01).
Such a browser actually has a chance of working with a lower risk of exploits than the currently available browsers. (With the downside that an "off-brand" browser stands out in the crowd wrt fingerprinting, but that's true no matter what you implement.)
But if the goal is to render not just hidden service content, but arbitrary content including normal web pages fetched through a Tor exit node, a project like this written in a language like C++ will only ever be riskier and easier to exploit than current browsers. All you could ever ethically claim to potential users is that it's not Firefox or Chrome (and even then, that's certainly no reason to prefer it in a privacy overlay).
It's not that big of a deal at the start: just expose a 'Layer' object that represents a surface on the backend (e.g. OpenGL), and use a backend thread for the rendering part (replaying the current layer tree's display lists, for instance). Then the chrome layer and the web engine can both use that same representation of a 2D surface to render.
But at least you get the architecture right from the start. Every project I know of eventually wants to aim at something bigger. And if you don't get the architecture right from the start, then when you need to advance to goal X, you end up throwing away everything you've done and restarting from scratch.