
Firefox should make it clear that Firefox (browser) will not collect, transmit, or sell user data beyond what is technically required for interaction between the browser and other computers over networks.

Anything less and people stop using Firefox.

If other Mozilla services need broader terms, those should be separate.


Building a service on top of the OSS, where some people prefer to self-host but many don't want to.

Ex. Bitwarden, Infisical, Docker, etc.


To create private shareable links, store the private part in the hash of the URL. The hash is not transmitted in DNS queries or HTTP requests.

Ex. When links.com?token=<secret> is visited, that link will be transmitted and potentially saved (search parameters included) by intermediaries like Cloudflare.

Ex. When links.com#<secret> is visited, the hash portion will not leave the browser.

Note: It's often nice to work with data in the hash portion by encoding it as a URL-safe Base64 string. (aka. JS Object ↔ JSON String ↔ URL-safe Base64 String).
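That round trip can be sketched like so (helper names are mine, not a standard API; `Buffer` is the Node way, a browser would use `btoa`/`atob` instead):

```javascript
// Round-trip a JS object through URL-safe Base64 for use in a URL fragment.
function encodeFragment(obj) {
  const json = JSON.stringify(obj);
  // Standard Base64 uses '+', '/', '='; swap/strip them for URL safety.
  return Buffer.from(json, "utf8").toString("base64")
    .replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");
}

function decodeFragment(frag) {
  const b64 = frag.replace(/-/g, "+").replace(/_/g, "/");
  return JSON.parse(Buffer.from(b64, "base64").toString("utf8"));
}

// A shareable link might then look like: https://links.com/#<encoded>
const encoded = encodeFragment({ secret: "abc123" });
const decoded = decodeFragment(encoded);
```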


> Ex. When links.com?token=<secret> is visited, that link will be transmitted and potentially saved (search parameters included) by intermediaries like Cloudflare.

Note: When over HTTPS, the parameter string (and path) is encrypted so the intermediaries in question need to be able to decrypt your traffic to read that secret.

Everything else is right. Just wanted to provide some nuance.


Good to point out. This distinction is especially important to keep in mind when thinking about when, and by whom, TLS/SSL is terminated for your service, and any relevant threat models the service might have for the portion of the HTTP request after termination.


Cloudflare, Akamai, AWS Cloudfront are all legitimate intermediaries.


Yes, see "Cloudbleed"


Huge qualifier: Even otherwise benign Javascript running on that page can pass the fragment anywhere on the internet. Putting stuff in the fragment helps, but it's not perfect. And I don't just mean this in an ideal sense -- I've actually seen private tokens leak from the fragment this way multiple times.


Which is yet another reason to disable Javascript by default: it can see everything on the page, and do anything with it, to include sending everything to some random server somewhere.

I am not completely opposed to scripting web pages (it’s a useful capability), but the vast majority of web pages are just styled text and images: Javascript adds nothing but vulnerability.

It would be awesome if something like HTMX were baked into browsers, and if enabling Javascript were something a user would have to do manually when visiting a page — just like Flash and Java applets back in the day.


Is there a feature of DNS I'm unaware of, that queries more than just the domain part? https://example.com?token=<secret> should only lead to a DNS query with "example.com".


The problem isn't DNS in GP. DNS will happily supply the IP address for a CDN. The HTTP[S] request will thereafter be sent by the caller to the CDN (in the case of CloudFlare, Akamai, etc.) where it will be handled and potentially logged before the result is retrieved from the cache or the configured origin (i.e. backing server).


This sounds like a big security flaw in the system that uses access links. Secrets should not be logged (in most cases).

When opening a Dropbox/GoogleDocs/OneDrive link, I expect the application not to route them through potentially unsafe CDNs.


Correct, DNS only queries the hostname portion of the URL.

Maybe my attempt to be thorough – by making note of DNS alongside HTTP since it's part of the browser ↔ network ↔ server request diagram – was too thorough.
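For what it's worth, the standard URL API shows exactly which part of a URL each layer sees (example values are illustrative):

```javascript
// WHATWG URL API, available in both browsers and Node.
const u = new URL("https://example.com/path?token=secret#fragment");

u.hostname; // "example.com"   - the only part a DNS query contains
u.pathname; // "/path"         - sent in the HTTP request (encrypted under HTTPS)
u.search;   // "?token=secret" - also sent in the HTTP request
u.hash;     // "#fragment"     - never leaves the browser
```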


Thanks, finally some thoughts about how to solve the issue. In particular, email based login/account reset is the main important use case I can think of.

Do bots that follow links in emails (for whatever reason) execute JS? Is there a risk they activate the thing with a JS induced POST?


To somewhat mitigate the link-loading bot issue, the link can land on a "confirm sign in" page with a button the user must click to trigger the POST request that completes authentication.

Another way to mitigate this issue is to store a secret in the browser that initiated the link request (Ex. local storage). However, this can easily break in situations like private mode, where a new tab/window is opened without access to the same local storage.

An alternative to the in-browser secret is a browser fingerprint match. If the browser that opens the link doesn't match the fingerprint of the browser that requested the link, then fail authentication. This also has pitfalls.

Unfortunately, if your threat model requires blocking bots that click too, you're likely stuck adding some semblance of a second factor (PIN/password, biometric, hardware key, etc.).

In any case, when using link-only authentication, best to at least put sensitive user operations (payments, PII, etc.) behind a second factor at the time of operation.
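The "confirm sign in" page might look something like this sketch (endpoint and element names are assumptions for illustration): the token rides in the fragment, and nothing is transmitted until the user clicks.

```javascript
// location.hash includes the leading '#'; strip it before use.
function extractToken(hash) {
  return hash.startsWith("#") ? hash.slice(1) : hash;
}

// In the browser, wired to the confirmation button (names illustrative):
//
// document.querySelector("#confirm").addEventListener("click", () => {
//   fetch("/auth/complete", {
//     method: "POST",
//     headers: { "Content-Type": "application/json" },
//     body: JSON.stringify({ token: extractToken(location.hash) }),
//   });
// });
```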


> a button the user must click

Makes sense. No action until the user clicks something on the page. One extra step but better than having “helpful bots” wreak havoc.

> to store a secret in the browser […] is doing a browser fingerprint match

I get the idea but I really dislike this. Assuming the user will use the same device or browser is an anti-pattern that causes problems, especially when crossing the mobile-desktop boundary. Generally, web functionality shouldn't be browser dependent, especially hidden state like that.


I agree, better to use an additional factor than fingerprinting.


> Another way to mitigate this issue is to store a secret in the browser that initiated the link-request (Ex. local storage).

Or just a cookie …

But this approach breaks anyway in cases such as a user on a desktop who checks his email on his phone for the confirmation.


Yes, I've seen this bot JS problem, it does happen.


It's called a fragment FYI!


However, window.location calls it "hash". (Also, the query string is "search". I wonder why Netscape named them this way...)


Interesting, thanks for the additional info.


Yeah I was confused by it being referred to as the hash.

https://en.wikipedia.org/wiki/URI_fragment?useskin=vector


The secret is still stored in the browser's history DB in this case, which may be unencrypted (I believe it is for Chrome on Windows, last I checked). The cookie DB, on the other hand, I think is always encrypted using the OS's TPM, so it's harder for malicious programs to crack.


Yes, adding max-use counts and expiration dates to links can mitigate against some browser-history snooping. However, if your browser history is compromised you probably have an even bigger problem...
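A minimal sketch of that server-side check (the record shape and field names are assumptions, not from any particular product):

```javascript
// A link token is valid only while it is unexpired and under its use limit.
function isLinkValid(record, now = Date.now()) {
  return record.uses < record.maxUses && now < record.expiresAt;
}

// Example: a single-use link that expires in 15 minutes.
const record = { uses: 0, maxUses: 1, expiresAt: Date.now() + 15 * 60 * 1000 };
```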


If it doesn't leave the browser, how would the server know to serve the private content?


Client web app makes POST request. It leaves browser, but not in URL


That only works if you can run JavaScript. If you want to download a file with curl for example it fails.


The syntax of your language is quite nice. I’d maybe change (for …) to (each …), (fun …) to (fn …), and (let …) to (def …) or (set …) depending on implementation details of variable assignment, but those are just aesthetic preferences :)

I love '{thing}' for string interpolation.

If you haven’t already, check out clojure, janet-lang, io-lang, and a library like lodash/fp for more syntax and naming inspiration.

Keep building!


Thanks, I think this language will probably keep me hacking for a while :). You're right about the keyword naming; I tried to be unique, maybe too much for functions, simply because I love them being named "fun". Let in the current implementation definitely is confusing due to it being used for both defining and updating a variable and its contents - maybe I'll change it to let and set or def and set, who knows. Thanks for the feedback :)


> The list-style properties should be called marker-style, and list-item renamed to marked-block or something.

I think list and list-item are clear.

How about just `mark` instead of `marker-style` to be consistent with other properties that decorate part of an element like `border` or `background`?


Autonomous electric cars make tunnels easier to build, and tunnels have a smaller negative impact on the environment.


HN needs a @dang apprentice program so it can thrive for centuries.


Jitsi App Privacy:

https://apps.apple.com/us/app/jitsi-meet/id1165103905

And Zoom:

https://apps.apple.com/us/app/id546505307

Looks like one company likes to gobble data more than the other even if both privacy policies are gobble-open.


Yes, using a watch + compile + restart development environment where everything can be worked on locally reduces cycle time to 5ms, which is ~120,000x quicker than the 10m compile time in the parent post.

I've deployed 35,000+ lines of code to prod in 2023 with this flow. I've only had 2 small bugs.

This is by far the most efficient setup I've ever used.


The biggest latency in any large software project is... other people.


I think Fred Brooks also talked about that in one of his essays didn't he?

(I guess I'm being facetious because I think everyone should read his book of essays. The hardware platforms have changed dramatically since the 1960s, but the wetware hasn't changed a bit)


The issue most have is that they never tend to think they are the "other" people, when in reality it's arbitrarily anyone at any given point.

In theory, if your process is perfectly optimized, everyone is doing the correct things all the time, and the goals are well defined, then this is true. The biggest bottleneck I see in most projects is that there is little to no definition of what needs to be done. It's not a handholding task; it's a lack of clear or continuously fluctuating goals. That ultimately leads to people slowing down.

Business leadership wants to hand the goal of "make money" to the engineering team, but IMHO that's an unrealistic expectation and shows poor, incompetent, and/or lazy leadership. Someone needs to find the demand signal and be decent at predicting future demand to direct the team towards what needs to be created. At some point, if you think you can hand off lofty, lazy goals to an engineering team like this, then your role becomes questionable, because you have a hybrid engineering/entrepreneurial team who could basically work without your involvement.


So many cache misses and the context switching? It takes hours!


On my current project, any deploy that we want to do to an AWS environment, including dev, takes at least 10 minutes. So, as soon as you have to work on AWS functionality, you are transported to the past, similarly to the programmers of old. I'm not saying this is a bad thing actually, just to note that we still haven't managed to eliminate the "turn-around time".


Yes, this is the best workflow (though I'm curious how you get 5ms latency with a full dev env restart -- fork/exec alone takes 1-5ms on my 2020 macbook).

Nits aside, I use the same style and have set a time budget of 50ms for myself. I have even set up nvim to save on every keystroke (one also learns how to not write infinite loops when programming in this style).

This style of programming is transformative because it's like having a persistent repl. It becomes feasible to test every edge case with print debugging as you go. This workflow is also why I have little interest in any language that primarily targets llvm, which will easily blow my time budget in code gen.


Leetcode doesn't capture this, so it must not be a viable way to work.

/s


