Worked like a champ!
> Will Let’s Encrypt issue wildcard certificates?
> We currently have no plans to do so, but it is a possibility in the future. Hopefully wildcards aren’t necessary for the vast majority of our potential subscribers because it should be easy to get and manage certificates for all subdomains.
Weird, why allow a generous 500 registrations per 3 hours, while limiting certs per domain like this? Anyone have a link to anywhere that letsencrypt explains what they are trying to do here?
Certificates have to be signed by a Hardware Security Module with limited capacity. OCSP messages have to be signed every couple of days for the lifetime of a cert by the same HSM. This is significantly harder (and more expensive) to scale.
Because if so, that would seem to make the certs-per-domain limits not so much of a problem. If you own example.com, and have customers using sub-domains at a.example.com, b.example.com, etc -- that would seem to make example.com suitable for inclusion on the "PRIVATE" section of the list.
"owners of privately-registered domains who themselves issue subdomains to mutually-untrusting parties may wish to be added to the PRIVATE section of the list... Requests for changes to the PRIVATE section must come from the domain owner."
And indeed there are a few dozen random .com, .net, etc domains in the PRIVATE section. For instance `github.io` is listed there.
If that's the way for SaaS providers to get free certs from letsencrypt for their customers at customername.provider.com, I'd expect to see the listings in the PRIVATE section skyrocket.
You're right about this being rather easy to bypass, but the main goal is probably not to mitigate abuse so much as to prevent buggy automation scripts stuck in some kind of infinite loop from DDoSing them.
Here's where I got mine, works great.
Important, though: for compatibility with Firefox and some other browsers, you'll need to copy the intermediate cert to the end of the cert file. It works fine with two certs in the file; just put the intermediate at the end.
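Concretely, the concatenation is a one-liner (a sketch with hypothetical filenames; the CA usually ships the intermediate as a separate PEM file):

```shell
# Append the intermediate after your own (leaf) cert.
# Order matters: leaf first, intermediate after it.
cat intermediate.pem >> domain.crt
```

If the intermediate ends up first, some clients will reject the chain.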
I almost pasted the page into something else to make it easier to read - before realising how short it was! It's still a touch off-putting as it is.
The process of getting SSL certs is not very simple, and the way you laid it out for us/me really helped.
The small site I used as an example, zeljko.rocks, is now HTTPS-secured.
Printed the public key and that worked fine. Went to step 2 and pasted in my CSR, and that also worked fine.
Start Keychain -> "Request a Certificate from a Certificate Authority" -> Save to disk.
Though weirdly enough, my public key generated from keygen didn't quite work; I actually had to use `openssl rsa -in myPrivateKey -pubout` for it to accept it. Why is that?
I know everyone here is all about naked websites but I couldn't help but add these three lines of CSS to the body:
margin: 0 auto;
padding: 0 15px;
Here's a screenshot: http://imgur.com/UFHJp8a
If you've changed the background from white to light-grey, that's enough. Stop changing text from black to dark-grey (especially the non-bold text)!
When this is additionally compounded with a non-standard, super-thin font, the result is text that is almost invisible, even when zoomed, at least on non-retina non-Macs.
I also noticed some (very few, but not that rare) websites use a font that looks completely rubbish on my Windows machines (certain serifs not being displayed, making it impossible to decipher letters). I thought it was impossible that every Windows user had this; they would have learnt. It turned out my ClearType settings hugely affected the rendering of the mentioned font. (ClearType is the antialiasing method on Windows; when you enable it, it goes through a manual calibration process, hence every Windows machine may have a radically different config.)
Don't go wild with colors and fonts for the main content of the page. The more standard ones you use, the better the chance it will render well for every user. Not everyone has the same device with the same config as yours.
This is why the ADA happened.
No, but given the way that the web and browsers are designed, along with the way that my site is designed the option is there for people that want to take it. Again, should I have to provide a link to a spoken version of my website in case somebody who is blind doesn't have a screen reader installed?
ADA is about making things accessible. So long as somebody can, with reasonable accommodation, access my website (for example, my design being simple and trivially overridable with user styles) then I'm accessible. If someone with dyslexia cannot handle the contrast levels of my site because a small minority of dyslexic users have their dyslexia triggered by that amount of contrast, then that is precisely what user style sheets are for, and I don't think it's unreasonable to ask somebody who is a minority of a minority to use the tools that are provided implicitly for them to configure things in a way that is easier for them.
It's true that modern devices, particularly mobiles, emit too much light. I always set device/monitor brightness to very low. Additionally I use f.lux/twilight to shift colors slightly towards red. When the page is black on white, I can adjust brightness enough to not bother me too much.
But when the page is gray-on-gray with a thin font, the only way to read the text is to enable a custom stylesheet that changes the font to black Georgia. Easy on desktop, not so easy on mobile.
Please make an effort in your own work to respect those in the world with dyslexia, colorblindness (never use red/green for status without another visual indicator, an overwhelmingly common sin that I'd wager 80% of those reading have done at some point), full blindness, and many other disabilities that can affect those who use your software. If your organization doesn't have accessibility guidelines, it should write some; there are a lot to start from online. You can (and should) also run your software through accessibility tools to help.
Hard black, #000000, is a modern thing and itself the trend. Typography has never in history been hard black until Web pages started just throwing 'black' on 'white' down without understanding color and color physiology. Hard black, all zeroes, is even outside the generally usable color range in television; almost all broadcast equipment stays in the 16-235 range and relegates 0-15 for superblack purposes. Notice you can tell when a TV is on? That's why, and there are numerous technical reasons for even that.
Black is the goal in the case of printed text, and should be the goal on a screen too.
"Black on white? How often do you see that kind of contrast in real life? Tone it down a bit, asshole. I would've even made this site's background a nice #EEEEEE if I wasn't so focused on keeping declarations to a lean 7 fucking lines."
But my monitor is not perfect, nobody's is, so why make stuff less readable? I don't understand it. It annoys me to no end when I have to copy-paste stuff into a text editor just to get a white background, because someone decided a background should never be white and people seem to agree. If I want less contrast I'll turn down the backlight of my screen.
echo "200 fucking characters long. Keep it to a nice 60-80 and users might actually read more " | wc -c
I was right.
You have to zero in on that edge with your mouse. There's only a window of ~10 pixels, so you have to be precise, and it seems to be a rather expensive graphical effect to draw, because I always see ugly, jolting artifacts, and I have a GTX 970.
Edit: I also highly recommend using mice instead of touchpads for all computer interactions. I cringe when I watch a coworker using an Apple magic touchpad, because they have to move so slowly.
Also, Alt+Left-drag to move the window is super nice (not having to aim for the title bar).
I don't like narrow columns being forced on me, and if line length bothers you, can't you just set your browser to the width you like?
Thank you for saying it. This "fixed-width-in-pixels" thing that has taken hold completely baffles me.
It's pretty special seeing text butted up hard against the left margin. I'm glad the proliferation of web standards has advanced to the point that this is acceptable!
However, I realize that I'm in the minority - I don't find narrow columns easier to read, I don't consider analog watches "more attractive" than digital, while I love physical books I don't feel I lose any experience in going to a nice eInk reader, and I find most icons to be worse than a short text label. For these sins I am prepared to simply suffer a tiny bit until society catches up to me :)
Sure. The padding adds 15px of space on both sides of the body so that the text doesn't "stick" to the edges when the size of the screen is less than the defined max-width (i.e. on mobile). Doesn't make a difference on desktop though.
I recommend giving "The Elements of Graphic Design" by Alex W. White a read, where he talks about issues of space and typography and why certain conventions exist. CSS is after all a way to express graphical design relations in code, which isn't very useful if one is unaware of the principles driving clear and useful design.
 see oliv__'s response, it's more detailed than mine :)
Actually, no, you still have to perform the "find the window" function and the "bring the window to the top" function, for both maximized as well as non-maximized windows. And once any given window is sized, the 'management hassle' for 'find and raise' is identical between both modes (maximized and non-maximized).
The screenshot you posted is of a Mac, so because of that I am going to assume you are using a Mac (because otherwise you would have posted a screen shot of some other OS). This explains why you believe that window management is a hassle.
Mac's window management support is on par with MSWindows window management, which is to say, they both suck for handling more than a handful of windows at once, and they both suck for doing much more than "find something - bring it to the top" (and even that function on both is awful as well). Which is really sad given that Apple is the company that brought the whole multiple overlapping windows GUI model into the popular view (Apple Lisa).
Additionally, because you are using a Mac, I am going to assume you are likely using a Mac laptop (simply due to the sheer number of Mac laptops sold vs. Mac desktops, there is a significantly higher probability you are on a Mac laptop). This leads directly to a second reason why you believe window management is a hassle.

On almost any touchpad/trackpad, with the current UI hooks used by Mac (and Windows as well), window management is a hassle and sucks terribly. This is because all the UI hooks are designed around a mouse with a separate button (or buttons) that can be held down independently from creating mouse motion. On any touchpad/trackpad that lacks separate mouse buttons that can be held down with your second hand, performing any kind of "hold mouse button down while moving" operation is made significantly more difficult and a major hassle. This is due to the need to reposition your fingers on the pad to perform large movements, and the act of repositioning loses the "hold button down" mode, so you have to repeat the "hold button down" action just to reposition your fingers.

But this is not a flaw with multiple-window UIs; this is a flaw with Apple deploying a touchpad without separate buttons and failing to update the UI interface to better mesh with the physical constraints of a single touchpad without external buttons.
I am using Windows and I'm writing this from a desktop, from a Linux VM. I don't find window management to be much better when using a mouse. I maximize all my non-instant messenger windows.
Maybe one day I'll use a different window management paradigm, but I doubt it. I already have a special keyboard layout and every time I use a standard keyboard layout I feel/look like a newbie since I need to adapt to the new circumstances. Switching to a different window management paradigm would mean that I would have an even harder time adapting to using another PC.
This strikes me as particularly neat. I wish more SPAs were able to work like this.
But the concept is interesting.
I don't see why not.
I recently hacked together a completely web-based, client-side CSR generator for PKCS#10; you can take a look at it at https://johannes.truschnigg.info/csr/ With something like that fused into your project, users wouldn't even have to execute `openssl` to generate their key material and CSR, they'd just need a modern browser with support for the W3 Web Cryptography API.
If people prefer, we also dynamically generate a full OpenSSL or PowerShell command, so they can make keypairs on their own server with a single paste - no terminal Q&A.
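For illustration, a minimal one-paste OpenSSL invocation of this sort might look like the following (hypothetical filenames and subject; a real generated command would be tailored per user):

```shell
# Generate a 2048-bit RSA key and a matching CSR in one shot,
# with no interactive prompts (-nodes leaves the key unencrypted).
openssl req -new -newkey rsa:2048 -nodes \
  -subj "/CN=www.example.com" \
  -keyout www.example.com.key \
  -out www.example.com.csr
```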
We're awesome, but there's nothing stopping you from using the tools wherever you want. :^)
Having to teach users that you always see the padlock when accessing your valuable information over the Internet, but do not see it when accessing your even more valuable information on the LAN doesn't seem good.
Tooling will hopefully get better now that browsers are pretty much set on going HTTPS-only.
Some consumer devices could probably implement something similar to what Plex did to deploy TLS.
I do agree that the industry isn't where it should be yet, but hopefully everyone's feeling the pressure now. :)
The Plex solution looks good. I wonder if Let's Encrypt could provide a similar solution that works for everyone, rather than products having to reimplement what Plex already did.
It does not require CSRs, but uses your DNS provider to complete the challenge. You do not need to run anything on your production servers.
The Let's Encrypt flow with DNS is way less complicated than obtaining a cert through many commercial services, so it probably works out about even.
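For illustration, the dns-01 challenge boils down to publishing a CA-supplied token in a TXT record under your domain, roughly like this zone-file sketch (the token value here is a made-up placeholder):

```
_acme-challenge.example.com. 300 IN TXT "gfj9Xq...Rg85nM"
```

Once the CA sees the record, it issues the cert; the record can be removed afterwards.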
Thanks to OP, diafygi, and Let's Encrypt!
(It was their personal certificate they issued to you that had to be in the browser.)
I assume that this was for TLS client authentication. Username+password sucks as a method of authentication. The only thing it has going for it is that you can -theoretically- remember your password and key it in on a machine you've never used before. 
Client certs are effectively unguessable and typically stored in the most secure place that the OS can provide. What's more, there's -IIRC- absolutely no reason for the remote side to remember anything about the certificate that they issued you after they generate it, so there's no risk of a server-side DB breach revealing any significant information about a client's credentials.
Frankly, I wish more sites would eschew username/password authentication for username/cert (or at least offer the option of username/cert). Then the UI for certificate operations would certainly get easier to use. :)
 Though, in today's environment, it seems... highly unlikely that any non-mutant will be able to remember all of the passwords for all of the sites that they use.
Client certs might in theory be stored in "secure places", but that is not the case in practise. Have anyone get a client cert and then make a backup or copy it to another device. I'll guarantee it is trivially accessible - e.g. sent in clear email to yourself to get it from one machine to another.
It doesn't matter how secure something is if a standard user would find it highly inconvenient, and it doesn't recognise the modern world of the same user having multiple devices in multiple locations.
You obviously missed the point of my comment. I'll highlight the part that most clearly captures the essence of my point:
"Frankly, I wish more sites would eschew username/password authentication for username/cert (or at least offer the option of username/cert). Then the UI for certificate operations would certainly get easier to use."
> ...and it doesn't recognise the modern world of the same user having multiple devices in multiple locations.
Easy fix: issue one cert per device. Does the UI for this suck right now? Yes. Does it have to suck? Fuck no.
> Client certs might in theory be stored in "secure places", but that is not the case in practise.
It absolutely is the case in practice.
Imagine a hypothetical magical computer that stores all username/password pairs in a magical HSM that never lets the username/password outside of the HSM, ever. The credentials in this system are stored in the most secure place that the computer can provide. The fact that the user of that system may have also written those credentials down on a sticky note under his keyboard or saved in a plaintext message in his webmail mailbox does not change that fact.
Does that make sense?
I don't understand how you issue one cert per device. There would be a bootstrap problem, presumably solved by entering a username and password. Or some mythical software/system that is able to securely distribute and update your certs across your systems.
Chrome stored my client cert from Startcom in a file in my home directory. Any app running as my user id has access to that file. Since this is a personal workstation, pretty much all the software (some system daemons excluded) runs as me. I don't consider this a secure place in practise, hence the comment. I do agree they could be done differently, but they aren't. Heck I've yet to encounter any browser or similar that uses the TPM in my laptop for security purposes.
Client certs seem very similar to ssh keys in usage and needs (secret blobs that need to be backed up and distributed). I know several above average developers who manage theirs well. But then I come across many who don't because it requires too much mental effort. It gets reflected in using passwords for git operations at github. A trivial number of the people who get them right, also don't bother with PGP for email because that is even more painful.
For client certs to have any hope of wide usage, the non authentication issues need to be solved, and there is nothing even close.
I gather that you don't have a smart card attached to your system? This indicates that Chrome uses Mozilla's NSS to store certificates. The documentation for NSS indicates that it's happy to work with certs stored on (or to store certs on) a smart card. :) Granted, the UI "sucks" for this, but that's part of my point.
> I got the point about client certs being more convenient at the point of authentication.
That's not what I was saying. Client certs are unguessable, the authenticating partner has no need to store any significant information about the cert, and certs typically rest in the most secure storage the OS can provide.  Certs are leaps and bounds safer than passwords.
Right now, certs are typically harder to use than passwords.
> For client certs to have any hope of wide usage, the non authentication issues need to be solved...
I'm not trying to be confrontational, but didn't you see that this was exactly what I was saying in my previous two comments?
No need to write four 'graphs that largely restate what I already said. :)
Thing is, if no one uses a thing, the UI for that thing will never get better. You frequently have to offer the thing as an option so that people will start using it before UI design teams will deign to make the UI for that thing better.
> A trivial number of the people who get them right, also don't bother with PGP for email because that is even more painful.
This is kind of a tangent, but: I guess those people don't use email clients that have adequate support for PGP. Enigmail makes key management, selection, and use trivial.
 User mishandling notwithstanding.
I am having trouble understanding what you are saying. The wording and tense seems to switch between what is really happening today, versus what could happen today given sufficient effort and implementation. I believe that other than small niches (eg startcom, some smartcard use in corporate and DOD), client certs are essentially unused by the vast majority of users. And I believe that even if everywhere accepted them, the logistical issues around ensuring a user has their certs in the relevant places are far too hard to overcome no matter how big the magic wand :-)
> I gather that you don't have a smart card attached to your system?
Nope. I could get the smart card reader when I buy Thinkpads, but don't. And I use Linux, Windows, Mac, Android and iOS. Having a smartcard reader on one machine wouldn't really help.
> Certs are leaps and bounds safer than passwords.
Only if the user is perfect. All it takes is them emailing it themselves, putting in Dropbox, being careless with backups or any number of other scenarios and it has the same "safety" as a password. (That is assuming your magic wand doesn't appear and somehow put smart card readers in every device overnight. :)
> Enigmail makes key management, selection, and use trivial.
Enigmail is indeed exactly what I use. Your statement above is true, but doesn't address the big picture. Every time I install a new system, I have to do copious amounts of googling to figure out exactly where my certs are installed, and then copy them over to the new machine, and get them imported. I'm never sure if I have done it right, haven't left myself open to compromise etc. A somewhat distant friend works in security, and once asked several people to email him their public keys. Virtually every response was a private key.
We've had decades of certs for PGP, everyone using it wants to be secure, and everyone wants it to be better and "user friendly". The effort has been a dismal failure.
Are you still intending to add a renew page at some point?
So the cert will expire in 90 days; how do we deal with that? Come back to the same site every 3 months and regenerate a new SSL cert? Why is it not valid for at least a year?
So the EFF decided that since automated certificate regeneration makes how often a cert expires irrelevant, they should use short-lived certificates so compromised certs can only be used maliciously for a short window, regardless of how well any given browser honors revocation registries.
I can't speak to whether this manual version of Let's Encrypt has flexibility in choosing certificate lifespans.
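For what it's worth, with the official client the 90-day lifetime is normally handled by a scheduled renewal rather than by revisiting a site; a crontab sketch (assuming certbot is installed; `certbot renew` only acts on certs close to expiry and skips the rest):

```
# Attempt renewal twice a day; certs not yet near expiry are skipped.
0 3,15 * * * certbot renew --quiet
```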
This would be great, apart from apparent insurance regular certificates bring, which I still don't know how to claim.
They're very well supported. Won't work in Windows XP and Android versions lower than 2.3.6. https://community.letsencrypt.org/t/which-browsers-and-opera...
> This would be great, apart from apparent insurance regular certificates bring, which I still don't know how to claim.
To my knowledge, that insurance has never been paid out, from any SSL vendor. It's a marketing gimmick.
I just switched to AWS Cert Manager last month from StartSSL, which is free if you're an AWS customer.
When it finally works I see that the certificate expires in 2 months.
Not everyone is having five hours' worth of problems -- many people are getting it to work right immediately -- but there are clearly also people who are running into difficulties which it would be great to figure out how to address.
But it is weird that there isn't a more "formal" announcement, and I haven't tried it...
SSL certs are used to secure TLS connections, not just websites connections. So you can host a mail server, a mumble server, an XMPP server, or really anything that speaks TLS.
Let's Encrypt has been explicitly created for websites, but that doesn't mean its certs must be used for websites only.
If I access the web interface over HTTP, the admin password is sent in clear text, which means someone on the network I connect to could get my password and delete my music, or god forbid, secretly mistag some of it.
* : http://groovebasin.com/