Memoirs from the old web: The KEYGEN element (devever.net)
191 points by pabs3 on May 1, 2023 | 66 comments



As a bit of an old-school sysadmin, I find client-side SSL authentication remains both my favourite and least favourite authentication method.

Favourite because I like the decentralised model, secrets in the perimeter instead of the core, and it was also fantastic for M2M comms before OAuth etc came on the scene.

Least favourite because the client-side implementation was always ugly and unpleasant. In particular, the shared key store between the Windows world (inc. IE) and Chrome, and the absence of that with Firefox. Explaining to a user how to get set up with client certificates was such a pain.

I still use this today in professional life for at least 2 B2B relationships that spring to mind. Both of them are with Telcos. I also use it for a couple of older OpenVPN setups.

I think client side SSL still holds a strong place in the toolbelt of options for doing small scale micro service models. It handles many authentication and identity challenges "for free".

EasyRSA is a great go-to tool to help you get started with it.


Author here. "Secrets in the perimeter instead of the core" is an interesting point. To my mind it can just as equally be a disadvantage - I'd usually consider the core more trusted and less exposed.

When deploying client certificate auth with HTTP load balancers, etc., you basically have to have the LB add an X-Client-Certificate: ... header and then trust that it's telling the truth. This means an LB compromise basically compromises everything, since it can just lie, whereas something like AWS's HMAC-based signature system for its APIs (where the substance of a request is signed using a shared secret unknown to LBs) wouldn't be.
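
(For illustration, a rough sketch of that kind of shared-secret request signing. This isn't AWS's actual SigV4, just the general shape, and the secret, paths and values here are made up. The point is that a proxy which never sees the secret can relay a request but can't mint a valid signature for a different one.)

  import hashlib, hmac

  SECRET = b"key-known-to-client-and-backend-only"  # hypothetical shared secret

  def sign(method, path, body, secret=SECRET):
      # MAC over the "substance" of the request, not just a bearer token.
      msg = b"\n".join([method.encode(), path.encode(), hashlib.sha256(body).digest()])
      return hmac.new(secret, msg, hashlib.sha256).hexdigest()

  def verify(method, path, body, signature):
      # Backend-side check; constant-time compare to avoid timing leaks.
      return hmac.compare_digest(sign(method, path, body), signature)

  sig = sign("POST", "/v1/widgets", b'{"name": "foo"}')
  assert verify("POST", "/v1/widgets", b'{"name": "foo"}', sig)       # genuine request
  assert not verify("POST", "/v1/admin", b'{"name": "foo"}', sig)     # a lying LB can't retarget it

In practice you'd also fold a timestamp or nonce into the signed material to stop replays, which is roughly what SigV4 does by signing the request date.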

So I find this a compelling argument against client certificates in practice. It's a curious example of how the security of client certificates can end up worse in practice, even though it's theoretically superior on the client-to-server leg, because of the backend implications it creates.


However, that LB code is written once and generic for all users, so it can be thoroughly checked and audited. How often are LBs hacked? Not so often. Whereas when servers handle their own auth they have to roll their own every single time and it's inevitable that a lot of them will get it wrong (see the article on auto maker web apps getting hacked and letting people take control of cars, usually due to bad auth impls).


With a non-client-cert based authentication system, one can devise a common authentication scheme (AWS's v4 signatures being a good example), but then delegate implementation of that scheme to a centralised service (or a standardised library), rather than having it reimplemented in every application service. But that service needn't be a public, user-visible service.

I think there's a distinction to be made here between whether an application should roll its own authentication (answer: no), whether an organisation should roll its own authentication (answer: probably not), and, when comparing unified authentication systems which do or could exist, whether client certificates in particular are good (my view: probably not). The questions of who designed an authentication scheme, and when and where that scheme is actually enforced (on what machine, in what codebase), are, except in the case of client certificates, largely orthogonal.

(I just wrote up my thoughts on this in more detail and will publish that blog post sometime.)


There are a couple of things that I think are worth digging into with this post and the grandparent:

- If an LB is a place all traffic goes through, it does become a higher-value target. Saying that they are not hacked feels very anecdotal.
- It is possible to have an LB which forwards TLS connections and does not need to MitM them. There are tradeoffs to it, but this is entirely possible. I had a writeup of my own on it over here: https://er4hn.info/blog/2023.02.18-tls-load-balancer/

hlandau, would love to read your article on comparing different auth schemes when you have that written up.


I'll email you when I publish it. Until then you may enjoy: https://www.devever.net/~hl/auth


> To my mind it can just as easily be a disadvantage - I'd usually consider the core more trusted and less exposed.

If a bad actor takes a little sneak peek at your database, they can brute-force hashed passwords or cookies and impersonate users. They can’t brute-force public keys.

It has nothing to do with the location of the data, because that is the same either way: the user has a secret credential, the server has a verification function.
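
(A toy illustration of that asymmetry, with made-up values: a leaked hash of a low-entropy secret falls to offline enumeration, while a leaked public key gives an attacker nothing to enumerate.)

  import hashlib

  # Hypothetical leaked database column: unsalted SHA-256 of a 4-digit PIN.
  leaked = hashlib.sha256(b"4271").hexdigest()

  def brute_force(target):
      # Offline guessing; no further interaction with the server is needed.
      for pin in range(10000):
          guess = f"{pin:04d}".encode()
          if hashlib.sha256(guess).hexdigest() == target:
              return guess.decode()

  print(brute_force(leaked))  # -> "4271" almost instantly
  # A leaked *public* key only lets you verify signatures; recovering the
  # private key means breaking the cryptography, not guessing a password.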


> This means an LB compromise basically compromises everything

Think about what a compromise of the LB means in a world where the encryption (TLS session) is not end-to-end. The LB can now steal all the content and lie. So, same thing.

If connection security is a priority, you need the TLS session to be from client to destination server with no MITMs. This is where mutual TLS authentication is ideal, since any would-be MITMs can't fake being the client.

This is why allowing TLS MITMs such as cloudflare is such a terrible idea if confidentiality matters at all.


This assumes that a compromise of confidentiality is just as undesirable as the ability to forge requests which change state. As I see it, it's going to be fairly common to have applications where one of these things is much less desirable than the other. If confidentiality is less important than ensuring requests can't be forged, client certificates may be a bad option. If confidentiality (including against enterprise MitM) is far more important than ensuring requests can't be forged, they may well be a better option. So it's a question of tradeoffs.

Of course there's also always the option to combine both and require both a client certificate and some sort of application-layer signature, for the best of both worlds at the cost of greater complexity.

I agree that the Cloudflare trend is a disaster though (as I wrote in my article about Cloudflare), people literally opting into having their traffic MitM'd.


Why is the LB unwrapping things it shouldn't be the authority for? It would work on L2 then.


For better or worse, a lot of services are built around layer 7 http load balancing, which means the balancer (or something in front of it) needs to unwrap the TLS before it gets to origins. Once you start sending /foo to one group of servers and /bar to another, you're stuck with a load balancer that sees all the content.


Stickiness also plays a role here. If the LB does not terminate the TLS connection, it needs to route all requests in that TLS session to the same "sticky" host.

HTTP being stateless, the LB can in theory distribute those requests to distinct hosts.


It's a tradeoff in how you route requests. If they all come from the same host, using the same source port, they are all probably related and it's not a bad idea to have them all go to the same server node for processing.

In general I believe that having LBs decrypt HTTPS to HTTP for better routing is an anti-pattern. It makes the LB a high value target in a network. I wrote up a blog post in more detail about how to LB w/ TLS over here: https://er4hn.info/blog/2023.02.18-tls-load-balancer/


> In particular, the shared key store between the Windows world (inc. IE) and Chrome, and the absence of that with Firefox.

I don’t know about client certificates, but you can definitely convince Firefox ≥ 49 to pull roots from Windows if you set security.enterprise_roots.enabled=true in about:config[1]. (Intermediates you definitely can’t.) The caveat is that the roots will be pulled into NSS as a plain list of certs, not queried via the native Crypto API, so any accompanying info you might be using (like the undocumented externally-imposed name constraints[2]) will end up ignored.

Ah, apparently Firefox ≥ 75 knows how to pull client certs from the system, while ≥ 90 will even do it by default[3].

[1] https://support.mozilla.org/en-US/kb/setting-certificate-aut...

[2] https://www.namecoin.org/2021/01/14/undocumented-windows-fea...

[3] https://blog.mozilla.org/security/2021/07/28/making-client-c...


> Least favourite because the client-side implementation was always ugly and unpleasant.

At least it's there and supported though. Even on mobile browsers.


This looks to be an interesting series; too often it’s really hard to find out what something once was. Some of the articles are about things that have been removed (like <keygen>), others are of things that are still actually there, though no one uses them any more (such as server-side image maps).

I hope one is written about <ISINDEX>. That element really confused me when I was young and it was already thoroughly on the way out (though I can’t remember whether it had quite been removed yet).

For something like this, remarks about the site itself can be fair game. Here’s an interesting one: the site actually uses XML syntax, something few realise is even possible these days, but which is rather interesting in its differences and incompatibilities. (Some affect authoring, e.g. whether <, & and > must or must not be entity-encoded inside <script> and <style>. Others affect runtime, e.g. many JavaScript libraries assume Element.prototype.tagName is uppercase, but it’s not in XML syntax.) Probably my favourite thing about XML syntax is that it lets you nest links (though it’s nominally invalid), which can’t be done in HTML syntax (though you can do it in a live DOM using JavaScript).


Author here - yes, ISINDEX is on my radar. ;) Taking other suggestions also - I'm sure my memory isn't exhaustive. At some point I may venture into the weird proprietary extensions IE had also.


Of regular HTML features, one more immediately springs to mind: <frameset>. It’s similar to image maps as a feature that still works, but which no one uses any more.

(I actually used <frameset> for fun a couple of years ago in a live-coding-completely-from-scratch demo/talk that I prepared roughly but have never actually delivered, entitled “building a fair dinkum email client in half an hour with JMAP”. I was mildly surprised to get the frameset right without looking up any docs, despite not having used it for well over a decade, and not much before that.)

Hmm… maybe <plaintext> and <xmp> too, which still have fun parser implications. No idea how much actual use they ever got. (For that matter, I’m not sure how much use ISINDEX got.)


Yeah. Of course, nowadays I believe you could replace <frameset> almost entirely with iframes and CSS layout. But it does occur to me that there was exactly one feature of frameset which you maybe can't replicate: the ability of the user to resize frames with the mouse without relying on JavaScript.

...Although, come to think of it, the CSS `resize:` property exists now, and it seems it can be used to make arbitrary elements resizeable. Might actually work for it, though the UI might differ a bit.


There are differences between browsers in how they handle frames and history management (e.g. if you reload, Firefox keeps the iframe source documents as they were at the time of reload, since Firefox likes to retain stuff across page loads, and generally does an excellent job of it, but occasionally what it does messes with what the page tries to do if they aren’t aware of Firefox’s superior handling; Chromium, by contrast, does what it does so well, and throws it all away and starts afresh from what the source document specifies, which is actually a bit baffling given that frame locations are actually a part of the history entries so they could easily do better in at least most cases).

But I don’t think there are differences any more within a single browser between how frameset frames and iframes are handled, for history purposes. I vaguely recall seeing some of this stuff being specified in the HTML Standard maybe last year or so. I suspect there used to be differences, but I’m not confident.

As for CSS resize: for consistent support, you’d need to have the iframe fill a div, and make the div resizable. (As the spec says <https://www.w3.org/TR/css-ui-3/#resize>: “Applies to: elements with ‘overflow’ other than visible, and optionally replaced elements such as images, videos, and iframes”. Firefox doesn’t currently support resize on replaced elements. Fortunately the workaround is straightforward, and reliable so long as you’re not having the elements’ intrinsic sizes influencing layout.)

As for its UI, current consensus gets you a grippy in… well, roughly the bottom-end corner. (It depends on direction and writing-mode, as it should, but some of the choices seem moderately arbitrary since they’ve obviously decided to anchor it to one of the bottom corners, and while Firefox’s placements all seem sensible enough, Chromium gets the position obviously wrong when you combine direction rtl with writing-mode tb, tb-rl or vertical-rl; I should probably file a bug for this, but I’m too lazy.) Anyway, just a regular corner grippy for `resize: horizontal` or `resize: vertical` is rather disappointing. Need some way of controlling it to say “make it a full interactive border”.

All up, it’s nowhere near as simple as <frameset> if that’s exactly what you want, and its UI is a bit wonky, but yeah, it pretty much works, and it does give you a lot more flexibility.


From what I can tell isindex was some kind of search element designed to search the contents of the current page, via the server? A bit like the <input type="search"> of today? Would you say this search I have here is similar to isindex? https://www.lloydatkinson.net/articles/


ISINDEX and NEXTID would be interesting to see.


Touching on both link nesting in HTML and obscure elements, this blog article (https://www.kizu.ru/nested-links/) gives a quite interesting approach using the <object> element.


I believe you can also use SVG’s <foreignObject> to jam all sorts of things into hierarchies HTML and browser DOM doesn’t usually permit. And of course with custom elements + shadow DOM, the rules are only as restrictive as your imagination.

Edit: corrected element name


20 years ago I had my blog (this was before we really called them "blogs") set up to authenticate me via a certificate to show the admin panel, and anybody without that certificate (or with a different one) would just see the read-only front page. I even had a couple of friends enroll so they could leave comments. I feel like client certificates are one of the biggest and most depressing what-might-have-beens of the Web; a great technology that was ultimately killed by pedantic people putting up unnecessary barriers because somebody might scrape your process tree for a password or whatever. We really let the perfect be the enemy of the good here.


It's annoying that this technology never made it, but there's also a downside to it. Last time I set up a client cert, I got constantly prompted to authenticate on random websites; it turns out that ad trackers/fingerprinters/other cyberstalkingware abuse this process to track information about you. Another major problem is that middleboxes freak the fuck out when you don't follow TLS 1.2 exactly as far as the crappy middlebox vendors have bothered to implement the spec. Personally, I'd just accept that and let people stuck behind these crappy middleboxes get their ISPs/workplaces to get their shit together, but that's not a very business-friendly approach.

To fix the tracking issue, you'd need to employ strict limitations on what services can and cannot prompt for authentication while you're on a web page, and I'm sure restricting that stuff would break the few remaining big sites that rely on client certificates today.

Luckily, we have an alternative today. WebAuthn uses similar cryptographic principles to set up secure tokens and the UX is a lot better.


Meh. Is it really though? Middleboxes can't MITM the signed requests, so that's still a "problem". I agree signed requests are better than bearer tokens, but WebAuthn only signs the auth request and then you're typically back at square one, receiving a cookie from the server with a bearer token anyway... so no improvement for 99.999% of the requests you actually make. The tracking issues are solvable with browser UX; they're not inherent to client certs.

The only way I see out of this mess is for everyone to start replacing bearer tokens with signed requests. And possibly some extension to WebAuthn so the browser will use your WebAuthn creds to sign every request using a standard signing protocol.


There are two different problem domains here that people keep stubbornly insisting on solving with the same technology.

1. I care about the identity of the other station. This does require a full PKI (or something like it)

2. I don't care about the identity of the other station I just don't want some third-party rando listening in. This is the majority of my web traffic, personally: I don't trust ycombinator.com any more than I would trust someone pretending to be ycombinator.com, so the verification that they are ycombinator.com doesn't actually do anything. Just encrypt opportunistically, everywhere, and leave the PKI for situations where it actually matters (like, if I were applying to ycombinator or something).


It hasn't gone anywhere though, there are entire countries using it.

It was just way ahead of its time and by now has regressed a bit from lack of interested parties. TLS 1.3 implementations made renegotiation to mTLS impossible, for example, which rules out things like a separate `/login` page. It would be possible to improve the UX if there were interest. We already have passkeys/WebAuthn; client certificates aren't much more complex.

I also doubt it will go anywhere any time soon for those same reasons.


I wonder if the client certificates mentioned in the article are the same thing as the certificates used in Spain for interaction with government agencies, the so-called "certificado digital".

In that case they are not "rarely used", but I would agree that their usability is abysmal.


Yes, they are. There was an EU-wide push to use these as a form of online public authentication.

I think a lot more people used keygen than this article seems to claim, even without knowing it. The alternative on IE was either an ActiveX control or a Java applet (neither feasible on a Mozilla browser, since the applet wouldn't have access to the browser certificate storage).

When Mozilla deprecated keygen, these services never recovered. The proposed alternatives had even worse UI and were even less portable.


Ah yes, the good old Bay Area bubble.

If your two friends don't use a feature, it can be dropped. That's the origin of the "only works with IE6" buttons (to keep with the age of the article), or nowadays of things crashing unless your browser implements the latest buggy and non-standard features that Chrome is pushing.

Also, remember, even the standard stuff (W3C) was always written and worked on by folks on the payroll of companies whose main business was selling advertising.


Yes, highly likely it's mTLS like in other EU countries. Though some countries have a longer track-record and better client software (and legislative enforcement of support) than others, providing a smoother experience.

Most of the stumbles seem to be from countries and the EU not cooperating with browser vendors to fix the rough edges. Unfortunate but probably a tragedy of the commons.


> Why are client certificates rarely used? There are likely several reasons, but probably the most obvious one is that the UI for handling them has been truly abysmal.

While the UI is bad, I think the main reason client certs aren’t adopted is cost, specifically recurring cost. For B2C stuff, users don’t want to purchase certificates just to authenticate. And they certainly don’t want to repurchase them every year or so. Imagine if a web site had an $80 annual authentication fee that didn’t even go to the site.

Way back, I wished there would be some sort of government issuance that would cover the costs and negotiate with standards people to allow perpetual client certs that never expire (good enough for bitcoin wallets).

Interestingly, the US government does do this for employees through the Homeland Security Presidential Directive [0], under which all federal staff have a client certificate issued and stored on a physical card for use.

The UI is still bad, but you can use it to authenticate to federal websites.

[0] https://www.dhs.gov/homeland-security-presidential-directive...


Author here. There needn't be any cost for client certificates, since client certificates are frequently issued by custom in-house CAs for free. A more pertinent example of what you're talking about would be the use of S/MIME email encryption - I believe you could pay to get a proper certificate from a trusted CA for your email address to use S/MIME - I have no idea if any CA still does this or how many people do so. It's not something I've ever encountered personally, at least.


> I believe you could pay to get a proper certificate from a trusted CA for your email address to use S/MIME - I have no idea if any CA still does this or how many people do so.

There are a handful of CAs that offer S/MIME certificates, not that obscure of a service really. Actalis is the only one doing it for free though.

It is a bit cumbersome but it's rather well-supported (Gmail, Apple Mail, Outlook, Thunderbird, etc.). Hopefully things get simpler now that the CA/B Forum S/MIME working group is working on clear baseline requirements (so the ecosystem is still moving forward). Something like a mailbox-validated "Let's Encrypt" would be doable and I hope something like that appears at some point.


There needn’t be, but there is.

I worked with a software system, Globus, that allowed self-signed certs and trusted exchange, and it was so difficult to find staff to get it set up.

One of the really frustrating parts of PKI is the unnecessary cost. A few friends and I negotiate our own certs and it has been going well for many years with the same certs (knock wood), but all the widely accepted protocols require a third party, and those third parties charge quite a bit.

And I don’t know any commercial CAs that issue perpetual client certs, which is what I want.


Several countries have CAs available that provide certificates for citizens and companies. Also, those CAs are usually in your typical trusted CA list.

For example, in Spain, every national ID has a certificate available in hardware, accessible via its smartcard chip or NFC. If you don't want to (or can't) use this specific hardware, you can order a software certificate for free and download it later. These certificates can be used to log in to official sites (or any other site that enables certificate login) and/or to sign documents/files.


Client certs (from the time the article mentions) were supposed to be used by consumers, hence no fancy chain like you mention, which was intended for B2B.

For clients using a browser, the site would provision the user once, with much less overhead, cost and red tape, using its own cert/private keys, which do have the full chain. It was basically a better way to stay logged in without the everlasting plaintext cookies we actually got. Or, even worse, auth baked into the publisher-owned OS/browser, as Google managed to fool everyone into.


Not really. The paid stuff only matters for government certificates, since that gives you access to client certificates from a "trusted" CA, which can alleviate some user friction.

The real problem is that support for client certificates is downright abysmal when it comes to browser and OS vendors, and even worse when it comes to modern apps, combined with OpenSSL's UI being godawful to use.

Both Chrome and Firefox make certificate management a complete mess to get set up "right". It works really well once everything is configured, but good luck getting it configured in the first place.

The Windows certificate store does poor certificate validation, so poorly made certificates (which IIRC the Serbian government gives out) can end up corrupting the whole thing. That will in turn cause many other issues, and the actual interface for dealing with this stuff is a complete mess since it blatantly hasn't been touched since XP.

On Android, you just need to import both certificates into the system certificate store, but applications need to specifically check for the certificate store if they want to add it to requests (at least Chrome does). Since CCA is very rarely used, most apps just straight up don't support it, which adds yet more friction to actually using it.

In the end I just went with tailscale because it just ended up being much easier for achieving the same effective goal (protecting access to a certain part of my VPS to only devices I personally trust) without needing to either massively abuse apktool or open a bunch of niche bugs at FOSS repositories.


> the main reason client certs aren’t adopted

I agree with the UI objection, but not the cost objection.

But my principal objection is the same as my objection to WoT: I want my certificate to identify me as the holder of the certificate, and nothing else. Nearly all issuers of client certs insist that you provide driving license, bank statements or whatever, to tie the certificate to me as an identifiable individual person. Most relying entities have the same expectation.

That binding to a real person is quite separable from the idea of a client certificate, but in practice usage of client certs always seems to rely on that binding.


The UI for client certificates these days is generally handled by web3 wallets.

It’s not mutual TLS and not perfect (unless you’re using Brave, you will have to install an extension), but otherwise it's leaps and bounds ahead of the client certificate experience from 20 years ago.


Even today, client cert selection in the browser is really ugly.

I once needed to add client cert support to an API but because the same API was used by our SPA, enabling client certs even as optional caused the browser to display a cert prompt when the SPA made its first API call.

To get around this, I ended up writing a custom IIS module that turned on client cert support only when a certain request header was present.


My only time ever seeing and using client-side certificates was when I used StartSSL's service to get a free server certificate. It was an interesting experience and radically different from the usual experience of registering on a website with a username/email and password. Looks like the author says quite a few things about StartSSL's practices too.

Just like the article laments, the UI for managing client-side certificates was very basic and not user-friendly; I was using Firefox. I do wish this technology was explored deeper by both webmasters and browsers; I prefer to have one or a few private keys instead of hundreds of different passwords for each website. I want to be immune to password keylogging and server-side database leaks.


It would have been great if the KEYGEN element's semantics had meant:

  - generating a fresh keypair of a certain strength, and persisting the newly generated key, if no keypair is yet associated with the specific domain
  - selecting the keypair from the persistence store using a browser UI dialog if a keypair is already associated with the specific domain, with an option to regenerate it
That would have been a good declarative alternative to password-based authentication.

It would have enabled client-side ID generation and simplified user registration and other authentication flows, perhaps including key rotation.

I guess we should use webauthn now.


Gemini uses ssl client certificates as its primary method of authenticating users and keeping track of user sessions. It works really well; a major benefit is that the user is in charge of how long their session exists for.


Been in this game for 25+ years and this is the first I've heard of this tag :)


Makes me wonder how WebAuthN will fare. As long as the browser is in control of the UX for managing keypairs, I don't see strong browser crypto ever succeeding, because it will fail in the same way <keygen> did: horrible browser UX. Also nobody wants all their keys stuck in one browser. You need a software layer to help you manage your keys and share them across devices. That is unless every website adopts a "device registration" pattern where a user is just a collection of trusted "devices".


That's the entire point of passkeys. They're natively multi-device.


I wish it was easier to just do authentication based on self-signed client certificates instead of passwords. Instead of cookies saving your login, do a key exchange for a temporary client certificate. If done properly, it has the bonus that it's completely immune to XSS stealing your authentication on the website: even if the website was subject to some XSS, attackers couldn't steal your authentication cookies, and the client certificate is inaccessible to JavaScript.
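
(For what it's worth, the server side of this is already fairly terse if you control both ends. A rough sketch using only Python's standard library, assuming you've generated server.pem/server.key plus a ca.pem that signed, or simply is, the client certificate; the file names and port are placeholders.)

  import ssl
  from http.server import BaseHTTPRequestHandler, HTTPServer

  class Handler(BaseHTTPRequestHandler):
      def do_GET(self):
          # By the time we get here the TLS handshake has already authenticated
          # the client; the cert subject doubles as the session identity.
          subject = dict(x[0] for x in self.connection.getpeercert()["subject"])
          self.send_response(200)
          self.end_headers()
          self.wfile.write(f"hello {subject.get('commonName')}\n".encode())

  ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
  ctx.load_cert_chain("server.pem", "server.key")   # placeholder server credentials
  ctx.load_verify_locations("ca.pem")               # trust anchor for client certs
  ctx.verify_mode = ssl.CERT_REQUIRED               # reject handshakes with no client cert

  httpd = HTTPServer(("0.0.0.0", 8443), Handler)
  httpd.socket = ctx.wrap_socket(httpd.socket, server_side=True)
  httpd.serve_forever()

The client side is equally short (e.g. curl --cert client.pem --key client.key --cacert ca.pem https://localhost:8443/); it's the browser enrolment and key-management UX, not the protocol, that's the hard part.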


You know, this is exactly what I want webauthn and all that newer authentication stuff to do.

I'll say though it isn't just the client-side UI; webapps just don't support the idea of a user registering and logging in with a client cert that well.

But it is truly a shame, because it is an unphishable method of authentication already on every device, and it doesn't cost hardware money like a YubiKey. It just isn't sexy enough for developers to care about it.


Looks like you got downvoted because you offended someone ;)


Just as an aside I love this guy's website. No shenanigans, it just works, and probably works on any browser going back at least 10 years. Kudos 100%.


If client certificate UX had not been so completely terrible, whole cybercrime operations and classes of attacks wouldn't exist. Phishing, cred stuffing, session hijacking, etc. - none of these would ever have become an issue. Probably the biggest missed opportunity ever; the cost of not developing a usable UX for client certs is likely in the tens or hundreds of billions by now.


> There is no opportunity for websites to provide a message like “you have not registered yet, go here”.

As I remember it, you had to buy a certificate from a CA and provide some real-world credentials along with this, and this wouldn't have been an instant process. Would this even have accepted self-signed certificates (as it seems to be suggested here)?


You don't need to, it just depends on who your source of identity is. The mTLS handshake includes exchanging which CAs are accepted, AFAIK. In some cases it's some commercial entity, in others it's a country itself, but nobody stops you from using your own CA.


I was just mentioning this, because I had a personal email certificate back then (when this became an integrated option with the built-in Netscape Navigator email client and the like), and this was a rather convoluted process and it took about a week. And, as I remember it, there were just a few options and the process was different for each. Which may explain the "poor UI": this was something you really had to want and you had to jump through a few hoops to achieve this, the entire process wasn't exactly user friendly, and there wasn't even a standard process or standard requirements.

So how could you point someone at a link, “you have not registered yet, go here”? You'd probably include an entire paragraph explaining what was required and link to several CA sites in order to obtain a valid certificate.


> So how could you point someone at a link, “you have not registered yet, go here”? You'd probably include an entire paragraph explaining what was required and link to several CA sites in order to obtain a valid certificate.

Well as I said, it will be complex if it's some enterprise CA you want your users to use. It's very easy if just everyone in your country has a certificate already. It's fairly easy if you use your own CA and just give the users the .p12 to install.

E-mail certificates (S/MIME) are a bit more complex and not exactly the same - there you'd actually want some larger publicly trusted entity to be used. (Though some already-deployed multi-purpose certificates do exist in some EU countries.)


Mind that, as of Netscape Communicator 4, we're speaking of August 1997… (So it was a somewhat different situation when this was conceived. As I recall it, a personal certificate bound to a single email address was the cheapest and easiest to obtain back then.)


Awwh, I thought this was going to be about keygens and the demoscene music that came with them. That was definitely an era of the old web.


To mitigate your disappointment you could visit the “Keygen Church” by Master Boot Record (a mix of metal, chiptune and ASCII + pixel art):

https://www.youtube.com/watch?v=ynVpatxQERs


Good track, great band, I saw them live the other month. Such a fantastic show.

CONFIG.SYS is one of my favs: https://www.youtube.com/watch?v=6CQq0jnie5U -- they also have a BBS and IRC network which was also cool.


Also a fan here, although I never had the chance to see them live. I bought a tape (even though I don’t own a tape player) just because of the gorgeous design…!

https://masterbootrecord.bandcamp.com


Only related by name, IOSYS is one of my favorites due to their catchy Touhou Project derivative music and funny Flash videos.


Ha, I had the same thought. I recently watched this video which you might find interesting (mostly re: demoscene): https://www.youtube.com/watch?v=roBkg-iPrbw


Whoa. I've never heard of this, that's truly obscure.

I have worked with client certificates though, they were required for all communication (not just Browsers, also server-to-server) in a government agency I worked for.



