
I think this is a subset of the fifth (of the seven) ideals of local-first software https://www.inkandswitch.com/local-first/#5-the-long-now

> Local-first software enables greater longevity because your data, and the software that is needed to read and modify your data, are all stored locally on your computer.

There are many ways to achieve this goal - open document standards, open source servers, an escrowed release of the server, or this idea of a bail-out system for taking the current version and self-hosting it. All are commendable.

Actually achieving all seven ideals is hard, and there isn't much modern software that does it. But in my view anyone trying to achieve even a few of the ideals is making strides towards a worthy goal.

There is a lot of exciting stuff happening in the local-first software movement at the moment, and a lot of that is related to the sync engines that are being built (disclaimer: I work for ElectricSQL, we are building one). These sync engines don't inherently make your software local-first, fitting all the ideals, but they do make it a lot easier to do so. They are an important building block. But there is more needed: we need more open document standards - Automerge, Yjs, Loro and the other CRDT data structures are perfect lower level data structures to build these higher level abstractions on. Martin Kleppmann has talked quite a bit about sync engines that are disconnected from the underlying application, essentially a pluggable sync engine where you choose who to use to sync your copy of a document or application - I'm excited to see where this goes.
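To make that concrete, here's a rough sketch (using Yjs; Automerge and Loro are similar in spirit) of two local docs converging by exchanging opaque updates over whatever transport or sync engine you plug in:

    // Rough sketch with Yjs (assumes `npm install yjs`); Automerge/Loro are similar in spirit.
    import * as Y from 'yjs'

    const docA = new Y.Doc()
    const docB = new Y.Doc()

    docA.getText('note').insert(0, 'hello from A')

    // The update is just bytes - any transport or pluggable sync engine can carry it.
    const update = Y.encodeStateAsUpdate(docA)
    Y.applyUpdate(docB, update)

    console.log(docB.getText('note').toString()) // "hello from A"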

But, we also need to free up the application distribution platform. The app stores are walled gardens that prevent some business models, native apps (while performant) are tied to a specific platform (or even version!), and the web is inherently tied to servers and browsers. The web platform though, i.e. JS, HTML, CSS, is perfect for building high-longevity software that runs anywhere, on any device; the issue is the distribution. We need a middle ground: an application package that's built on web standards but isn't tied to a server. I want to download a bundled app to my machine and have a copy, email it to a family member, or even open it and hack on it. That's the final missing piece.

A downloadable app, with an open and pluggable sync engine, would achieve the same goals as this ejectable idea.


The application package you're looking for is a single HTML file with no external dependencies, especially if it avoids minification, obfuscation, or the use of technologies like WASM blobs which require complex external toolchains to disassemble or modify. This is very achievable right now! See tools like TiddlyWiki or Decker, for example.

The primary barrier to an entirely server-free or server-agnostic webapp ecosystem is browser vendors choosing to seal the newest JavaScript APIs behind "secure contexts" which are only available to documents served over HTTPS.


I agree. I develop small utilities as single HTML files for all the reasons you list (and fun), but having to work around browser protections for various APIs can be a bummer.

The average internet user could be exploited fairly easily if every HTML file had immediate access to all the lower level APIs being introduced[0], and we end up looping back around to some sort of signing or alternative install method (pwa).

Curious to find the balance between distributable and "safe" enough to achieve wide adoption.

0: https://developer.mozilla.org/en-US/docs/Web/Security/Secure...


In practice most of those APIs are also gated behind a user's informed consent to e.g. enable access to a webcam or some other sensitive kind of I/O. I'd argue that the HTTPS delivery side of the requirements is superfluous theater pushed by "HTTPS Everywhere" ideologues and doesn't actually enhance the real security and privacy benefits already afforded by requiring manual user interaction.

Cursor Compose (essentially a chat window) in YOLO mode.

Describe what you want, get it to confirm a plan, ask it to start, and go make coffee.

Come back 5min later to 1k lines of code plus tests that are passing and ready for review (in a nice code review / diff inline interface).

(I've not used copilot since ~October, no idea if it now does this, but suspect not)


The fact that tests are passing is not a useful metric to me - it’s easy to write low quality passing tests.

I may be biased: I am working with a codebase written with Copilot, and I have seen tests that check whether dictionary literals contain the values that were entered into them, or that functions annotated with a certain type indeed return objects of that type.
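To illustrate (a made-up Jest-style TypeScript example, not literally from that codebase), the pattern looks roughly like:

    // Hypothetical example of the pattern described above: the assertion
    // just restates the literal it was built from, so it can never fail.
    // (Assumes Jest/Vitest-style test() and expect() globals.)
    const config = { retries: 3, timeoutMs: 5000 }

    test('config has the values we put in it', () => {
      expect(config.retries).toBe(3)
      expect(config.timeoutMs).toBe(5000)
    })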


We should have two distinct, independent LLMs: one generates the code, the other one the tests.

Do you also hire two different kinds of programmers - one that has never written a test in their life and is not allowed to write anything other than production code, and a second that has never written anything other than tests and is only ever allowed to write tests?

It makes no sense to have "two distinct, independent LLMs" - two general-purpose tools - to do what is the same task. It might make sense to have two separate prompts.


perfect use case for GANs (generative adversarial network, consisting of (at least) a generator and a discriminator / judge) isn't it? (iiuc)

Was "ElectricSQL" made with with you using Cursor and "YOLO" mode while making coffee?

Passing tests lol. Not my experience at all with Java


I'm not American, but I'm very aware of the impact 18f has had from seeing the many posts on here of their work over the years: https://hn.algolia.com/?q=18f

I'm also aware of at least a handful of Hacker News members, and people I've followed, who took time out of their careers to do a "tour of duty" at 18f.

I feel for you all today; you were doing good work!


If you hadn't posted this comment, I would have.

Fireworks could have been Figma, it could have been the default platform every designer used. But Adobe didn't understand it, they saw it as a weird Photoshop competitor and shelved it.

The last few versions bundled with CS were clearly neglected maintenance releases. I finally stopped using my slowly rotting copy in about 2014 when I got a Mac with a retina screen and Fireworks was stuck with a terrible pixel-doubled UI. :-(


Fireworks came with the Macromedia Studio MX 2004 suite (I had the education version -- ~$299 was the happy medium for me between full price and pirating). While I made great use of Flash and Dreamweaver in that bundle, Fireworks was always an enigma. I think it exported some animated gifs for me. What did y'all make with it?


I held on until 2020. At some point I had to give up MacOS updates to keep it going.

I still haven't found an acceptable replacement, choosing instead to design in-browser with CSS. Of course this means I can't make graphic heavy designs that I can slice and export with transparent PNGs, but we haven't cycled back to that sort of design yet so I'll be OK with minimalist crap for a while.


One area where we have found Caddy invaluable is local testing of APIs with HTTP2 during development. Most dev servers are HTTP1 only, and so you are limited to max of 6 concurrent connections to localhost. HTTP2 requires SSL, which would normally make it a PITA to test/set up locally for development.

Throw a Caddy reverse proxy in front of your normal dev server and you immediately get HTTP2 via the root certificate it installs in your OS trust store. (https://caddyserver.com/docs/automatic-https)
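A minimal Caddyfile sketch for that setup (assuming your dev server is on port 5173; adjust as needed):

    # Minimal Caddyfile sketch: Caddy serves https://localhost with its local
    # CA and HTTP/2, proxying to a dev server assumed to be on port 5173.
    localhost {
        reverse_proxy localhost:5173
    }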

We (ElectricSQL) recommend it for our users as our APIs do long polling, which with HTTP2 doesn't lock up those 6 concurrent connections.

I've also found that placing it in front of Vite for normal development makes reloads much faster. Vite uses the JS module system for loading individual files in the browser with support for HMR (hot module replacement); this can result in a lot of concurrent requests for larger apps, creating a queue for those files on the six connections. Other bundlers/build tools bundle the code during development, reducing the number of files loaded into the browser; this created a bit of a debate last year on which is the better approach. With HTTP2 via Caddy in front of Vite you solve all those problems!


> HTTP2 requires SSL

Strictly speaking it doesn't: unencrypted HTTP2 is allowed per the spec (and Caddy supports that mode), but the browsers chose not to support it, so it's only really useful when testing non-browser clients or routing requests between servers. HTTP3 does require encryption for real though; there's no opting out anymore.


Yep, it's really disappointing they didn't decide to support it for localhost.


I think the reason they have decided not to support it over plain text is that they would have to detect whether the server answers with HTTP/2 or HTTP/1.1, which could be complicated. This is not needed when using TLS, since ALPN is used instead.


No, that's rather easy for up-to-date browsers. The real reason is that dumb middle proxies might cache/mangle HTTP/2 requests to the point that it simply fails hard, but encryption makes those concerns go away. Yes, there are proxies that can decrypt TLS, but due to how negotiation of HTTP/2 works in TLS (ALPN), dumb proxies will simply not send the signals that it's HTTP/2 capable.


This is a meh excuse. If you want your browser to connect to a gopher server, you type gopher://example.com. If you want to use http2, http2://example.com should work. (I know, I know, everyone removed Gopher support a few years ago. Same idea though.)

Having said all that, I just copied the certs out of here https://cs.opensource.google/go/go/+/refs/tags/go1.24.0:src/... and use them to do browser/http2 stuff locally. Why steal Go's certificates? Because it took 1 second less than making my own!


Using http2 as the protocol in the URL makes the initial transition complex: one can't share links between different users and clients, or from other websites, with a guarantee of the "best" experience for the user. Not to mention all the old links outside the control of the server operator.


Hey, thanks for this. It saves me even more than 1 second!


Which browsers/libraries trust these? Or does the go tool chain install them?


Nothing trusts them, they're just regular self-signed certificates. There is no benefit to using these over your own self-signed certificates except that you don't have to ask your LLM for the commands to generate them ;)


And of course once you trust them on localhost, you expose yourself to some risk, since the whole world can get a copy of the key.


Another way is to create a regular DNS name and have it resolve to localhost. If you are unable or unwilling to do so, there are free DNS services like https://traefik.me/ that provide you with a real domain name and related certificates.

I personally use traefik.me for my hobbyist fiddling, and I have a working HTTP/2 local development experience. It's also very nice to be able to build for production and test the performance locally, without having to deploy to a dev environment.


It is even simpler with `/etc/hosts`:

   127.0.0.1 localhost local.foobar.com
And just use a wildcard `*.foobar.com` or SAN cert with anything local like Caddy, Nginx, HAProxy, or whatever.


For people worried about somehow exposing commercially or otherwise sensitive information by the registration of DNS names, a SAN certificate is out because of certificate transparency logs.

A wildcard certificate is safe from that though. Or just choosing names that don't give secrets away.

A certificate signed by a locally trusted CA would work too of course, but unless you already have that set up for other reasons it is a bunch of admin most won't want to touch.


> a SAN certificate is out because of certificate transparency logs

First, all these certificates in the web PKI have SANs in them. X.509 was designed for the X.500 directory system, so when Netscape repurposed it for their SSL technology they initially used the arbitrary text fields and just wrote DNS names in as text. There are a number of reasons that's a terrible idea, but PKIX (RFC 2459 and its successors) defines a specific OID for "Subject Alternative Names". The word "alternative" here refers to the Internet's names (DNS names and IP addresses) being an alternative to the X.500 directory system. PKIX says the legacy names should be phased out and everybody should use SANs.

That rule (you must use SANs in new certificates) was baked into the CA/Browser Forum Baseline Requirements (CA/B BRs, or just "BRs" typically), which set the rules for certificates that will actually work in your web browser and thus, in practice, all the certificates people actually use. Enforcement of this rule was spotty for some time, but the advent of CT logging made it possible to immediately spot any cert which violates the rule, and so some years ago Google's Chrome began to just reject the legacy "write it in a text field and hope" approach and other browsers followed.

So what you're actually talking about are certificates with two or more specific DNS names rather than a single wildcard.

Secondly though, that's usually all a waste of your time if you're trying to mask the existence of named end points because of Passive DNS. A bunch of providers sell both live and historical feeds of DNS questions and answers. Who asked is not available, so this isn't PII, but if I'm wondering about your "sensitive" names at example.com I can easily ask a Passive DNS service, "Hey, other than www.example.com what else gets asked in similar DNS queries?" and get answers like "mysql.example.com" and "new-product-test.example.com".

Passive DNS isn't free, but then squirrelling away the entire CT log feed isn't free either, it's served free of charge on a small scale, but if you bang on crt.sh hard you'll get turned off.


> First, all these certificates in the web PKI have SANs in them.

Yes, and technically true is the best variety of true, but… Usually people don't refer to certificates where the "Subject" is equal to the one and only "Subject Alternative Name" as SAN certificates.

> So what you're actually talking about are certificates with two or more specific DNS names rather than a single wildcard.

If we are going to nitpick over the SAN designation, a basic wildcard certificate is usually a SAN cert too, by the same definition. They have (at least mine always have had):

    Subject =
            “CN = *.domain.tld”
    Subject Alternate Name = 
            “DNS Name: *.domain.tld
             DNS Name: domain.tld”
(or similar for a wildcard hung off a sub-domain)

> "Hey, other than www.example.com what else gets asked in similar DNS queries?"

True, but only if those queries are hitting public DNS somehow. You can hide this by having your local DNS be authoritative for internal domains — your internal requests are never going to outside DNS servers. There could be leaks if someone who normally has access via VPN tries to connect without, but if you have something so truly sensitive that just knowing the name is a problem¹ then I hope your people are more careful than that (or your devices seriously locked down).

And I still say the easy workaround for this is names that only mean something internally. projectwhatthefuck.dev.company.tld is not going to mean much or give an attacker anything, compared to projectousurpcompetitor.company.tld. Yes, they'll know the server name, and if it is publicly addressable they can connect to it, but if you have it properly configured they'll have to give it auth information they hopefully won't have before it hands over any useful information beyond the meaningless (to them) name that they already know.

--------

[1] Some of our contracts actually say that we can't reveal that we work with the other party, so technically² we could be in trouble if we leak the company name via DNS (bigwellknowmultinationalbank.ourservice.tld). Though when we've offered a different name, in case the link between us could leak out that way, they've always declined.

[2] Really they don't care that much. They just don't want us to use their name/logo/other in promotional material.


I use a combination of mkcert and localdev.me … mkcert to generate a CA and install certs, then localdev.me redirects any subdomain to localhost.


Aren't you exposing your dev instance to the world then? Not worried about that?


DNS to a local address doesn’t expose anything.

For example, postulate a DNS entry of myTopSecrets mapped to localhost. If you use it, it will be routed to your own computer. If someone else uses it, they would be routed to their own computer. The same follows for IP addresses within your local area network.

Unless you did extra work outside the scope of DNS, nothing in your lan is addressable from outside your lan.


You're still revealing the existence of myTopSecrets to the world, though.

Between this and certificate transparency logs, it seems insane to me that the commonly advised Correct Setup, to be able to experiment and hack random little personal stuff, and have it reliably work on modern browsers, requires you to 1) buy a subscription (domain), 2) enter into another subscription-ish contractual relationship (Let's Encrypt), and 3) announce to the whole world what you're doing (possibly in two places!).

Imagine your computer stops booting up because you repositioned your desk, and everyone tells you the Correct Way to do it is to file a form with the post office, and apply for a free building permit from the local council. That's how this feels.


I totally agree. I have long since accepted that this is how things are, but that doesn't mean it's right. It feels like browsers are overtly obstructing the use of the local system as a development platform and local-hosting option. This also includes baseline features like the JS localStorage API, which only works when the page is served from a proper origin (opening an HTML file directly via file:// is a no-go). That last one in particular just feels perverse to me: in NO WAY should that require a domain name; it feels anti-democratic and clunky as can be. It also 100% stops webapps from being local-first (i.e. I save an HTML/JS bundle to a folder and run the "app", and it's automatically isolated to said folder) with network connectivity as a secondary option. If browsers could do the latter it would be a death-blow to a lot of remaining platform-specific apps.


> to be able to experiment and hack random little personal stuff

Yes, a more sane approach is to just use Replit or the like, but this thread is about keeping it complicated.

> 2) enter into another subscription-ish contractual relationship (Let's Encrypt),

afaik, LE only does certs for machines they can see.

Taking a moment to look it up, I'm incorrect: it looks like you can establish LE with a DNS challenge instead of HTTP. [0]

0. https://letsencrypt.org/docs/challenge-types/#dns-01-challen...


> You're still revealing the existence of myTopSecrets to the world, though.

Not if you only present that name in local DNS, and use a wildcard certificate to avoid needing to reveal the name via a SAN cert or other externally referable information.

Also, perhaps refrain from calling it myTopSecrets. Perhaps ProjectLooBreak instead.


Couldn't you just add the domain to /etc/hosts and have it resolve that way? No need to buy a domain if you are just testing locally. Also, you wouldn't be exposing anything to the outside world.


Perhaps I could, but I'm afraid to do it[0]. And I'd still need a matching certificate, and generating one that browsers won't refuse to look at, and getting them to trust it across multiple devices (including mobile), is its own kind of hell.

--

[0] - I'm honestly afraid of DNS. I keep losing too much of my life to random name resolution failures, whose fixes work non-deterministically. Or at least I was until ~yesterday, when I randomly found out about https://messwithdns.net, and there I learned that nameservers are required to have a negative cache, which they use to cache failed lookups, often with absurdly high timeout values. That little bit of knowledge finally lets me make sense of those problems.


I was only commenting on the DNS part; self-signed certificates come with their own lot of trouble. At least I haven't ever run into any cache issues with local resolvers.

I have previously used https://github.com/jsha/minica which makes it at least easy to create a root certificate and matching server cert. How to get that root cert trusted across an array of different devices is another story.


You can add what you want to /etc/hosts, but you need to actually control a domain to get a real cert for it that your browser will trust. Otherwise, you need to mess about with self-signed certs, browser exceptions, etc.

If you already own a domain, it's pretty convenient.


myTopSecrets can instead be mapped through a local redirection without needing to put that information out onto the Internet.


Just a note, because this comment made me curious and prompted me to look into it:

Vite does use HTTP2 automatically if you configure the cert, which is easy to do locally without Caddy. In that case specifically there's no real reason to use Caddy locally that I can see, other than wanting to use Caddy's local cert instead of mkcert or the Vite plugin that automatically provides a local cert.
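For reference, a sketch of that config (assuming you've generated a locally trusted cert/key with mkcert or similar; the file paths are illustrative):

    // vite.config.ts - sketch assuming a locally trusted cert/key generated
    // with mkcert or similar (paths are illustrative, adjust to your setup).
    import fs from 'node:fs'
    import { defineConfig } from 'vite'

    export default defineConfig({
      server: {
        https: {
          key: fs.readFileSync('./certs/localhost-key.pem'),
          cert: fs.readFileSync('./certs/localhost.pem'),
        },
      },
    })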


Completely agree. If you want a nice way to do this with a shared config that you can commit to a git repo, check out my project, Localias. It also lets you visit dev servers from other devices on the same wifi network — great for mobile testing!

Localias is built on Caddy; my whole goal is to make local web dev with https as simple as possible.

https://github.com/peterldowns/localias


That only works on localhost, right? I am looking for a solution for an intranet that doesn't require complex sysadmin skills such as setting up DNS servers and installing root certificates. This is for my customers who need to run my web server on the intranet while encrypting traffic (no need to verify that the server is who it claims to be).


Localias is not designed for your usecase and cannot solve your problem, sorry.


Without verifying the server identity, the encryption is useless.


The six connections thing is just a default that you can change in about:config. Really it should probably have a higher default in $currentYear, but I don't expect major browser vendors to care.


I assumed almost everyone (product, enterprise) uses ngrok to expose a development/localhost server to get HTTP2 nowadays, but it's good to realize Caddy can do the job well.


> so you are limited to max of 6 concurrent connections to localhost.

I think a web server listening on 0.0.0.0 will accept “localhost” connections on 127.0.0.2, 127.0.0.3, 127.0.0.4 … etc., and that you could have six connections to each.

https://superuser.com/questions/393700/what-is-the-127-0-0-2...

(a comment there says "not on macOS" though)


> via the root certificate it installs in your OS trust store

This does not sound like the kind of feature I would want in a web server


It is optional for this purpose and you have to explicitly install it.


The link is to their poweroutage.us, but they also cover Canada (https://poweroutage.com/ca), the UK (https://poweroutage.com/uk) and the EU (https://poweroutage.com/eu). For some reason they don't link to these from the US site.

Really interesting data, particularly when you compare the very low level of outages in Canada/UK compared with the US.


The UK regulator introduced the Interruptions Incentive Scheme in 2002 to encourage distribution networks to reduce customer interruptions and minutes lost. It triggered a large wave of investment in network automation (remote switching, auto reclosers etc.)


So basically the difference between 9s is investment/cost.

At least here (Canada), utils push all of their costs to end-users & IIUC have an incentive to have high capex/low opex networks because of regulated return.

As a Canadian residential electrical customer, we pay a lot in base fees from what I hear relative to US customers. Sure it's more reliable, but tbh, it's not worth spending much to get 5 9s (5 mins of downtime a year) vs 4 9s (50 minutes/yr). Heck, even 500 minutes/yr would be fine for me.

But commercial/industrial users won't feel the same way, and managed to successfully spread the cost of adding 9s among users that largely don't care.


Btw, at least for Ontario and Quebec, current average downtime is below 4 9s, and quite close to your quoted 500 min/yr figure.

Ontario's Energy Board has a dashboard (https://app.powerbi.com/view?r=eyJrIjoiNmY1YjU0NmUtMTJhYi00N...) that says in 2023, total average outage time was ~5hrs (and that's somewhat typical of the last 10ish years).

Hydro Quebec says that in 2023 (https://www.hydroquebec.com/data/documents-donnees/pdf/hqd-0... FR sorry) the average downtime was ~4.5 hours.


It's the same in the US, except for Texas, which does some slightly more innovative things. It's really hard to do anything other than overbuild; Texas was strongly criticized for underbuilding its network when there was an ice storm a few years ago.


It should be easy for a human at TfL to make an assessment on something like this, see the autistic and technical value, and offer a free but heavily restricted license to the developer.

But I suppose many organisations just don't give people the autonomy and authority to do such things.


For that specific map, based on what the email he got from TfL said, I don't think they directly have permission to issue that license - their site says people have to go through the partner who produced the schematic art to get a license.


Except the schematic art is covered by copyright, not trademark.


On their website, TfL says both things:

1. The map is covered by copyright.

2. The only way to get a license is to buy one from their map partner.

> We protect the map under copyright and officially license it for brands and businesses to reproduce it.

> To use the map in your design, you must have the permission of our map licensing partner, Pindar Creative. This is the only way to officially license the map, no matter how you'd like to use it.

Yet, they don't even mention the case where you might be a third-party developer providing a non-profit service.

https://tfl.gov.uk/info-for/business-and-advertisers/map-lic...

> For registered charities and schools, the licence is royalty free, but we still charge an artwork fee of £352 + VAT

https://tfl.gov.uk/info-for/business-and-advertisers/using-t...


> For registered charities and schools, the licence is royalty free, but we still charge an artwork fee of £352 + VAT

Institution funded by taxpayers charging institutions funded by donors and taxpayers. Be nice if there were any value being added, rather than just exchanged.


If TfL hasn't bought the full rights to their map layouts, the shame is on them.


autistic value? Trains? I see what you did there?


s/autistic/artistic

I violated my own rule of always re-reading a post 5 min after posting it...


Unit tests work well with PGlite, at least in the TS/JS world where it natively sits at the moment. You can have a unique Postgres instance for each unit test that's up and running in just a few ms.
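For example, a rough sketch of a per-test instance (assuming Vitest and the @electric-sql/pglite package):

    // Sketch: a fresh in-memory Postgres for each test (assumes Vitest
    // and `npm install @electric-sql/pglite`).
    import { test, expect } from 'vitest'
    import { PGlite } from '@electric-sql/pglite'

    test('inserts and reads back a row', async () => {
      const db = new PGlite() // new in-memory instance, up in a few ms
      await db.exec(`CREATE TABLE todos (id serial PRIMARY KEY, title text)`)
      await db.query(`INSERT INTO todos (title) VALUES ('write tests')`)

      const { rows } = await db.query<{ title: string }>(`SELECT title FROM todos`)
      expect(rows[0].title).toBe('write tests')
    })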

It's possible to use PGlite from any language using native Postgres clients and pg-gateway, but you lose some of the nice test DX you get when it's embedded directly in the test code.

I'm hopeful that we can bring PGlite to other platforms, it's being actively worked on.

The other thing I hope we can look at at some point is instant forks of in-memory databases; it would make it possible to set up a test db once and then reset it to a known state for each test.

(I work on PGlite)


Sadly HN has stripped the fragment from the URL, see page 268: https://babel.hathitrust.org/cgi/pt?id=mdp.39015055216876&se...


Very much this! I was also at a thing at their office a few weeks ago (some thing? "Local Thirst"), and Steve gave a demo of this. It is incredible.

I've joked before that the last generation of human-machine interfaces were invented at Xerox PARC, and the next generation is being invented at TLDraw of Finsbury Park. But it's not really a joke, I genuinely believe it.


I agree. Looking at this, it seems to be exactly how I want to use LLMs. Describe a small transformation of data I don't want to work out now, connect it to other components. As the needs become more-defined, replace each part with a faster, more-reliable, well-defined data transformation. I could actually see developing a system this way...


It was a cool thing... I expected a hacky demo that'd fall apart mid-way but it held up. The Macintosh SE in the office was cool too.


Ha yeah, that was the same thing! The night it rained sideways.

So this is the demo people were talking about at the end of the night! I was quite annoyed I missed it, makes sense now. I think I was nerding out over current-gen HIDs while eyeing up their very tastefully equipped coffee station (ozone roasters ftw)



