Of course you don't need to start with all of that from the beginning, but you do need to get the architecture right.
Then, on top of the compositor, you will need a web engine to build the nodes and render them back onto the compositor.
I guess if you just do an immediate-mode render engine with an OpenGL backend, you can render something, but it will be pretty limited, mostly suited to static content.
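To make the immediate-mode idea concrete, here is a minimal sketch (all names invented for illustration): every frame the caller re-emits the full set of draw commands and nothing is retained between frames. A real version would flush each command to OpenGL in `end_frame()`.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical immediate-mode API: the frame's command list is rebuilt from
// scratch on every frame. Simple for static content, but there is no scene
// to diff, animate, or composite incrementally.
struct DrawCmd { std::string kind; int x, y, w, h; };

struct ImmediateRenderer {
  std::vector<DrawCmd> frame;  // rebuilt from scratch each frame

  void begin_frame() { frame.clear(); }
  void rect(int x, int y, int w, int h) { frame.push_back({"rect", x, y, w, h}); }
  void text(int x, int y, int w, int h) { frame.push_back({"text", x, y, w, h}); }

  // A real backend would translate `frame` into OpenGL draw calls here;
  // this sketch just reports how many commands were emitted.
  size_t end_frame() const { return frame.size(); }
};
```

Usage is the same every frame, which is exactly why it works for static pages and little else:

```cpp
ImmediateRenderer r;
r.begin_frame();
r.rect(0, 0, 800, 600);   // page background
r.text(10, 10, 200, 16);  // a line of text
r.end_frame();            // 2 commands this frame
```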
You would need to focus on the compositor first, and then create the web engine on top. But why start from scratch when you can reuse some Chrome parts (given that this is in C++)? Just take the compositor from Chrome in 'src/cc', reuse 'src/base', 'src/crypto', 'src/net', 'src/ipc', 'src/gpu', and parts of 'src/ui' (if you want to), and you are good to go.
Then you can just focus on creating a new innovative web engine.
For instance, I'm using the Chrome compositor to do multi-platform UI rendering, with Swift in the UI API layer, and it's working pretty well.
Anyway, it's not wrong at all to start like this.. it's cool.. the hacker spirit, right? But you should know that unless you expect to give a lot of good years of your life to making it really good, you should have humbler expectations about what you will end up with.
(And you should consider yourself lucky if you end up with the equivalent of Netscape 2.0 after a year of full-time work.)
I actually have an offline repo with multithreading started, but it's on hold while I focus on other issues. I'd like to implement threading sooner rather than later, as it will be easier to do on a small project than a large one.
It's all about the architecture. We want to get that right first and are willing to throw away our current code to make sure we have it nailed. We are focusing on a renderer with a flow-programming style (only update what needs to change) as much as possible. So we yield to the OS unless there are pending events (using GLFW) — no calling render() 60 times per second (unless something is actually pumping events).
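The wait-for-events loop described above can be sketched with the standard library, using a condition variable as a stand-in for `glfwWaitEvents()` (the GLFW call that blocks in the OS until an event arrives). All names here are illustrative, not from the project:

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <queue>

// "Render only on change": the loop sleeps in the OS until an event is
// pending, coalesces everything queued, renders once, then sleeps again.
// With GLFW the sleep would be glfwWaitEvents() and the wakeup
// glfwPostEmptyEvent(); here a condition variable plays both roles.
struct EventLoop {
  std::mutex m;
  std::condition_variable cv;
  std::queue<int> events;
  int render_count = 0;

  void push_event(int e) {
    { std::lock_guard<std::mutex> lk(m); events.push(e); }
    cv.notify_one();  // wake the loop, like glfwPostEmptyEvent()
  }

  // One iteration: block until something is pending, then render once.
  void wait_and_render() {
    std::unique_lock<std::mutex> lk(m);
    cv.wait(lk, [&] { return !events.empty(); });  // yielded to the OS here
    while (!events.empty()) events.pop();          // coalesce pending events
    ++render_count;  // one render() per wakeup, not 60 per second
  }
};
```

The point of the coalescing step is that a burst of input produces one repaint, and an idle window produces zero.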
Thanks for the kind words.
If you simply don't implement video/audio/scripting, nor multi-threaded anything, you can still render a large percentage of pages on Tor hidden services. That's because the sites built to work with a privacy overlay are already themselves extremely limited in complexity. And the ones that aren't frankly aren't safe to view anyway, at least not without NoScript turned on (in which case you're essentially back to HTML 4.01).
Such a browser actually has a chance of working with a lower risk of exploits than the currently available browsers. (With the downside that an "off-brand" browser stands out in the crowd wrt fingerprinting, but that's true no matter what you implement.)
But if the goal is to render not just hidden service content, but arbitrary content including normal web pages fetched through a Tor exit node, a project like this written in a language like C++ will only ever be riskier and easier to exploit than current browsers. All you could ever ethically claim to potential users is that it's not Firefox nor Chrome (and even then that's certainly no reason to prefer it in a privacy overlay).
It's not that big of a deal at the start.. just expose a 'Layer' object that can represent a surface on the backend (e.g. OpenGL), and use a backend thread for the rendering part (rendering the current layer-tree state using display lists, for instance). Then the chrome layer and the web engine can both just use that representation of a 2D surface to render.
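A minimal sketch of that 'Layer' idea, with invented names: each layer owns a display list (recorded drawing commands, not pixels) plus children, and the backend replays the tree depth-first. In a real setup the main thread would mutate the tree and a backend thread would replay a committed snapshot against OpenGL; here the replay just counts items.

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <vector>

// A display item records *what* to draw ("fill_rect", "draw_text", ...),
// deferring *how* to the backend that replays it.
struct DisplayItem { std::string op; };

struct Layer {
  std::vector<DisplayItem> display_list;         // recorded commands
  std::vector<std::unique_ptr<Layer>> children;  // sub-layers composited on top

  Layer* add_child() {
    children.push_back(std::make_unique<Layer>());
    return children.back().get();
  }
};

// The backend walks the committed tree and replays each display list.
// A real backend thread would issue OpenGL calls per item; this sketch
// returns the number of items replayed.
size_t replay(const Layer& layer) {
  size_t n = layer.display_list.size();
  for (const auto& child : layer.children) n += replay(*child);
  return n;
}
```

Because both the chrome UI and the web engine only ever produce layers and display lists, neither needs to know which backend (OpenGL or otherwise) actually draws them.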
But at least get the architecture right from the start. Every project I know of ends up aiming at something bigger later. And if you don't get the architecture right from the start, then when you need to advance to goal X, you will end up throwing away everything you have done before and restarting from scratch.
It walks through the basics of writing a layout engine, which is extended over a series of posts into something quite flexible and interesting.