
They are doing it from scratch... well, good luck with that.

Unless, of course, you just want to render private, custom-tailored stuff and don't care about the rest of the Web.

If they want to be adventurous, why not at least help a project like Servo?

Now, this would make a lot of sense if they intend to use it on custom infrastructure. If you want to explore Tor or Bitcoin in a particular way... where you know sites will be made to be rendered in this particular browser.

Otherwise, it's a multi-multi-year project just to catch up to where the big guys were 8 years ago. It's great as a learning path, but if you are expecting this to be the next Firefox...




Our original lead developer has left to implement a browser using Servo.

My goals are a little different. I want this to be a browser by devs, for devs. And I'd rather learn how to make one by exploration instead of copying.

No expectations of this being a good browser; just hoping for the most dev-friendly one, and we'll see how far we can take it.


> And I'd rather learn how to make one by exploration instead of copying.

> No expectations of this being a good browser; just hoping for the most dev-friendly one, and we'll see how far we can take it.

Nice. So your primary goals and expectations are well balanced.

So, given you guys are doing this, you should maybe consider setting some exploratory goals, taking some different paths and decisions from the current browsers.

That may lead to some good innovations along the way.


If it's for use mainly with Tor, I suppose they could just target HTML 4.01.


Well, but even then... you will need to start focusing on a multi-threaded compositor... to render the chrome, visual DOM nodes, images, video frames, animations, etc.

Of course you don't need to start with all that from the beginning, but at least you need to get the architecture right.

Then, on top of the compositor, you will need a web engine to form the nodes and render them back through the compositor.

I guess if you just do an immediate-mode render engine with an OpenGL backend, you can render something, but it will be pretty limited, mostly for static content.
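
Something like this, roughly (a throwaway sketch with legacy OpenGL and GLFW; the quad stands in for a laid-out page element):

  // Immediate mode: clear and redraw everything, every frame.
  #include <GLFW/glfw3.h>

  int main() {
    if (!glfwInit()) return 1;
    GLFWwindow* window = glfwCreateWindow(640, 480, "static page", NULL, NULL);
    if (!window) { glfwTerminate(); return 1; }
    glfwMakeContextCurrent(window);

    while (!glfwWindowShouldClose(window)) {
      glClear(GL_COLOR_BUFFER_BIT);
      glBegin(GL_QUADS);            // one "element" of the page
      glColor3f(0.2f, 0.4f, 0.8f);
      glVertex2f(-0.5f, -0.25f);
      glVertex2f( 0.5f, -0.25f);
      glVertex2f( 0.5f,  0.25f);
      glVertex2f(-0.5f,  0.25f);
      glEnd();
      glfwSwapBuffers(window);
      glfwPollEvents();             // repaints whether or not anything changed
    }
    glfwTerminate();
    return 0;
  }

It works, but you repaint the whole page every frame, which falls over once you have animations, video, or a deep layer tree.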

So even if it's an experimental thing like... 'I want to use Lua instead of JavaScript for scripting'... or Lisp, as I've seen in the comments.

You would need to focus on the compositor first, and then create the web engine on top. But why start from scratch if you can just reuse some Chrome parts, for instance (given this is in C++)? Just take the compositor from Chrome in 'src/cc', reuse 'src/base', 'src/crypto', 'src/net', 'src/ipc', 'src/gpu', and some parts of 'src/ui' (if you want to), and you are good to go.

Then you can just focus on creating a new innovative web engine.

For instance, I'm using the Chrome compositor to do multi-platform UI rendering, with Swift in the UI API layer, and it's working pretty well.

Anyway, it's not wrong at all to start like this... it's cool... the hacker spirit, right? But you should just know that unless you are expecting to give a lot of good years of your life to make it really good, you must have more humble expectations about what you will end up with.

(And you should consider yourself lucky if you end up with the equivalent of Netscape 2.0 in a year, even working on it full time.)


Completely agree.

I actually have an offline repo with multithreading started, but put it on hold to focus on other issues. I'd like to implement threading sooner rather than later, as it will be easier to do on a smaller project than a large one.

It's all about the architecture. We want to get that right first and are willing to throw away our current code to make sure we have this nailed. We are focusing on the renderer using a flow-programming style (only update what needs to change) as much as possible. So we yield to the OS unless there are pending events (using GLFW). So no render() 60 times per second (unless something is pumping events, of course).
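
The loop is shaped roughly like this (simplified; render() here stands in for the real "only repaint what changed" pass):

  #include <GLFW/glfw3.h>

  static void render(GLFWwindow* window) {
    glClear(GL_COLOR_BUFFER_BIT);   // ...repaint only dirty regions here...
    glfwSwapBuffers(window);
  }

  int main() {
    if (!glfwInit()) return 1;
    GLFWwindow* window = glfwCreateWindow(800, 600, "browser", NULL, NULL);
    if (!window) { glfwTerminate(); return 1; }
    glfwMakeContextCurrent(window);

    while (!glfwWindowShouldClose(window)) {
      glfwWaitEvents();             // block in the OS until an event arrives
      render(window);               // no 60 fps timer driving this
    }
    glfwTerminate();
    return 0;
  }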

We're actually going to be converting HTML/CSS into JavaScript and the renderer will be handling a dynamic DOM. So we're not falling into the trap of expecting things to be static. A couple of us are ex-game devs and have worked on HTML5 games.
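
To sketch the HTML-to-JS idea (toy names, not our actual code; real CSS handling is a lot more involved):

  // Compile a parsed HTML node into JavaScript DOM calls, which the
  // engine then evaluates against its dynamic DOM.
  #include <iostream>
  #include <sstream>
  #include <string>
  #include <vector>

  struct Node {                     // toy parse-tree node
    std::string tag;
    std::string text;
    std::vector<Node> children;
  };

  // Emit JS that rebuilds the node under the given parent variable.
  void emitJs(const Node& n, const std::string& parent, int& id,
              std::ostringstream& out) {
    std::string var = "n" + std::to_string(id++);
    out << "var " << var << " = document.createElement('" << n.tag << "');\n";
    if (!n.text.empty())
      out << var << ".textContent = '" << n.text << "';\n";
    for (const Node& c : n.children)
      emitJs(c, var, id, out);
    out << parent << ".appendChild(" << var << ");\n";
  }

  int main() {
    Node page{"div", "", {{"p", "hello", {}}}};
    std::ostringstream js;
    int id = 0;
    emitJs(page, "document.body", id, js);
    std::cout << js.str();          // the JS the renderer would evaluate
  }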

Thanks for the kind words.


> We're actually going to be converting HTML/CSS into JavaScript and the renderer will be handling a dynamic DOM

That's a cool idea. Then it's just a matter of exposing the C++ composition/rendering engine to JavaScript.

But of course, this will require a pretty hardcore JavaScript JIT like the current V8...
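
With V8's embedding API, the binding side looks roughly like this (a sketch; drawRect and the compositor hook are made up):

  #include <libplatform/libplatform.h>
  #include <v8.h>

  // Hypothetical native hook that the JS engine drives.
  static void DrawRect(const v8::FunctionCallbackInfo<v8::Value>& args) {
    double x = args[0].As<v8::Number>()->Value();
    double y = args[1].As<v8::Number>()->Value();
    // compositor->DrawRect(x, y, ...);  // made-up compositor API
    (void)x; (void)y;
  }

  int main(int argc, char* argv[]) {
    v8::V8::InitializeICUDefaultLocation(argv[0]);
    v8::V8::InitializeExternalStartupData(argv[0]);
    auto platform = v8::platform::NewDefaultPlatform();
    v8::V8::InitializePlatform(platform.get());
    v8::V8::Initialize();

    v8::Isolate::CreateParams params;
    params.array_buffer_allocator =
        v8::ArrayBuffer::Allocator::NewDefaultAllocator();
    v8::Isolate* isolate = v8::Isolate::New(params);
    {
      v8::Isolate::Scope isolate_scope(isolate);
      v8::HandleScope handle_scope(isolate);

      // Install drawRect on the global template before making the context.
      v8::Local<v8::ObjectTemplate> global = v8::ObjectTemplate::New(isolate);
      global->Set(isolate, "drawRect",
                  v8::FunctionTemplate::New(isolate, DrawRect));

      v8::Local<v8::Context> context =
          v8::Context::New(isolate, nullptr, global);
      v8::Context::Scope context_scope(context);

      v8::Local<v8::String> src =
          v8::String::NewFromUtf8(isolate, "drawRect(10, 20);",
                                  v8::NewStringType::kNormal)
              .ToLocalChecked();
      v8::Script::Compile(context, src).ToLocalChecked()
          ->Run(context).ToLocalChecked();
    }
    isolate->Dispose();
    v8::V8::Dispose();
    v8::V8::ShutdownPlatform();
    delete params.array_buffer_allocator;
    return 0;
  }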


> Well, but even then... you will need to start focusing on a multi-threaded compositor... to render the chrome, visual DOM nodes, images, video frames, animations, etc.

Why?

If you simply don't implement video/audio/scripting, nor multi-threaded anything, you can still render a large percentage of pages on Tor hidden services. That's because the sites built to work with a privacy overlay are already themselves extremely limited in complexity. And the ones that aren't frankly aren't safe to view anyway, at least not without NoScript turned on (in which case you're essentially back to HTML 4.01).

Such a browser actually has a chance of working with a lower risk of exploits than the currently available browsers. (With the downside that an "off-brand" browser stands out in the crowd wrt fingerprinting, but that's true no matter what you implement.)

But if the goal is to render not just hidden service content, but arbitrary content including normal web pages fetched through a Tor exit node, a project like this written in a language like C++ will only ever be riskier and easier to exploit than current browsers. All you could ever ethically claim to potential users is that it's not Firefox nor Chrome (and even then that's certainly no reason to prefer it in a privacy overlay).


Of course you won't do all of those things from the start. But you write a bare minimum to render static surfaces on the screen.

It's not that big of a deal at the start... just expose a 'Layer' object that can represent a surface on the backend (e.g. OpenGL), and use a backend thread for the rendering part (rendering the current layer tree state using display lists, for instance). Then the chrome engine layer and the web engine can just use that representation of a 2D surface to render.
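
In skeleton form (all names illustrative, not taken from any real compositor):

  #include <condition_variable>
  #include <memory>
  #include <mutex>
  #include <vector>

  struct DisplayList {              // recorded paint commands for a surface
    std::vector<int> ops;           // placeholder for real draw ops
  };

  struct Layer {                    // a 2D surface in a tree
    float x = 0, y = 0, width = 0, height = 0;
    DisplayList content;
    std::vector<std::unique_ptr<Layer>> children;
  };

  class Compositor {
   public:
    // Main thread: commit a new snapshot of the layer tree.
    void Commit(std::unique_ptr<Layer> root) {
      {
        std::lock_guard<std::mutex> lock(mutex_);
        pending_ = std::move(root);
      }
      wake_.notify_one();
    }

    // Backend thread: wait for a commit, then replay it to the backend.
    void RenderLoop() {
      for (;;) {
        std::unique_ptr<Layer> tree;
        {
          std::unique_lock<std::mutex> lock(mutex_);
          wake_.wait(lock, [this] { return pending_ != nullptr; });
          tree = std::move(pending_);
        }
        Draw(*tree);
      }
    }

   private:
    void Draw(const Layer& layer) {
      // ...translate layer.content into backend (e.g. OpenGL) calls...
      for (const auto& child : layer.children) Draw(*child);
    }

    std::mutex mutex_;
    std::condition_variable wake_;
    std::unique_ptr<Layer> pending_;
  };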

But at least you get the architecture right from the start. Every project I know of always ends up aiming at something more later. And if you don't get the architecture right from the start, and you need to advance to goal X, you will end up throwing away everything you have done before and restarting from scratch.


If architectural correctness is a goal then why aren't you starting with the renderer in a separate process like Chrome/Chromium?


I found this article pretty interesting, a few months ago:

https://limpet.net/mbrubeck/2014/08/08/toy-layout-engine-1.h...

It walks through the basics of writing a layout engine which is extended over a series of posts to be quite flexible and interesting.





