Chrome will mark all HTTP sites as ‘not secure’ starting in July (theverge.com)
179 points by Sami_Lehtinen 7 months ago | hide | past | web | favorite | 133 comments

If you wonder how to get HTTPS for your local virtual hosts:

1. create a local CA

2. create a certificate using that local CA

3. Then you can add the CA to your trusted authorities (Firefox needs an extra step: either enable the "security.enterprise_roots.enabled" flag, or import the CA certificate into it manually).

Details at: https://gist.github.com/cecilemuller/9492b848eb8fe46d462abeb...
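The three steps above can be sketched with plain openssl; the file names and the "dev.example.test" vhost name below are placeholders:

```shell
# 1. Create a local CA (private key + self-signed root certificate)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout myCA.key -out myCA.pem \
  -subj "/CN=My Local Dev CA"

# 2. Create a key + CSR for the virtual host and sign it with the CA
openssl req -newkey rsa:2048 -nodes \
  -keyout dev.key -out dev.csr \
  -subj "/CN=dev.example.test"
printf "subjectAltName=DNS:dev.example.test" > san.ext
openssl x509 -req -in dev.csr -CA myCA.pem -CAkey myCA.key \
  -CAcreateserial -days 365 -out dev.crt -extfile san.ext

# 3. Check the chain before importing myCA.pem into the browser trust store
openssl verify -CAfile myCA.pem dev.crt   # prints "dev.crt: OK"
```

The SAN extension matters: modern browsers ignore the CN and only match certificates against subjectAltName.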

Let’s push for self-signed certificates everywhere! Let’s do Trust On First Use like SSH, and then we’re done with all these bloated certificate-authority bureaucracies.

But how am I supposed to know if I can trust it if it is the first time I am using it?

DNS? It works for SSH fingerprints. Random example: https://matoski.com/article/sshfp-dns-records/
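A sketch of what that looks like; the hostname is a placeholder, and real records are generated with `ssh-keygen -r`:

```
; DNS zone entry. Fields: algorithm (4 = Ed25519), fingerprint
; type (2 = SHA-256), then the fingerprint hex.
host.example.com. IN SSHFP 4 2 <sha256-fingerprint-hex>

# ~/.ssh/config: have the client check SSHFP records
# (only meaningful behind a DNSSEC-validating resolver)
Host host.example.com
    VerifyHostKeyDNS yes
```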

Doesn't that just push the "authority" to DNS? You still need someone to vouch that you are who you say you are.

I'd prefer a wild-west type of world where certs are decentralized and "blind trust" towards big authorities becomes optional and still there as a reinforcement, but not necessary and definitely not required to communicate securely.

I mean- if there are adversaries out there trying to hack our communications- then we need to let these adversaries try so that we can engage them head on so that their methods become published, public, and thoroughly analyzed by the people in charge of strengthening our protocols.

This approach would be like fracturing the bone to make it stronger: we allow nation states to hack us in order to figure out ways to stop and prevent such hacks using open and transparent software alone. Trusting any group of people anywhere "just because they're trustworthy" feels like a variable defining <the contents of its data> as equal to <the contents of its data>. It just doesn't make sense for a variable to trust itself "just because", because then I wonder if something fishy is going on under the hood.

That's exactly what you don't want if you can't trust your ISP.

Well, yeah. You need to trust someone. And if you aren't willing to centralize, then you're left with key signing parties.

DNS is a natural source for trust. We need secure DNS for a sane internet anyway. DNS has hierarchical structure which naturally maps to certificate hierarchy.

If a hostile controls a DNS server, you are hosed anyway.

If the sites you care about are using HSTS (and they're in the HSTS preload list, or you've visited them before from this device), then the worst the attacker can do is deny you from accessing them.
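For context, HSTS is just a response header; a typical nginx directive (the max-age and flags below reflect hstspreload.org's published submission requirements, so treat them as an assumption to verify) might look like:

```nginx
# Sent only over HTTPS; browsers then refuse plain HTTP for the
# domain for max-age seconds. includeSubDomains and preload are
# required before the domain can be submitted to the preload list.
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
```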

HSTS depends on DNS and NTP.

Chrome used to have this. It had DANE support but it was removed.

Some kind of cert observatory tool that checks if the cert you're getting matches the cert everyone else is getting.

If only there was some kind of distributed ledger thing we could use to make sure everyone had the same information as everyone else using cryptography

My understanding is that Certificate Transparency (https://www.certificate-transparency.org/) is pretty much that.

"Certificate Transparency helps eliminate these flaws by providing an open framework for monitoring and auditing SSL certificates in nearly real time. Specifically, Certificate Transparency makes it possible to detect SSL certificates that have been mistakenly issued by a certificate authority or maliciously acquired from an otherwise unimpeachable certificate authority. It also makes it possible to identify certificate authorities that have gone rogue and are maliciously issuing certificates."

"Certificate Transparency logs use a special cryptographic mechanism to facilitate public auditing of certificates and logs. This special cryptographic mechanism, known as a Merkle hash tree, is a simple binary tree consisting of hashed leaves and nodes (see figure 1)."
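A minimal sketch of that Merkle-root construction. It is simplified (an odd trailing node is duplicated, whereas RFC 6962 splits at the largest power of two), but the leaf/node hashing idea is the same:

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Root of a Merkle hash tree over the given leaves.

    Inputs are prefixed with 0x00 (leaf) / 0x01 (node) as in
    RFC 6962, so a leaf can never masquerade as an interior node.
    """
    level = [_h(b"\x00" + leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:            # odd count: duplicate the last node
            level.append(level[-1])
        level = [_h(b"\x01" + level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```

Because the root commits to every leaf, an auditor holding only the root can verify a log-supplied inclusion proof for any single certificate.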

I trust my first-time connection to a website more than some random authority in some random country signing it, it’s that simple. But the money grab is so big that they keep existing!

EDIT: And you’re saying the SSH model is broken then. Also, you can verify the certificate signature via another channel, like a git repo of all the signatures of the most important websites (I know, it looks like a CA).

A few problems:

1. SSH's whining about first connection fingerprint trusting is needlessly petty and nobody actually checks the fingerprints, and in many cases they have no need to do so anyways.

2. Almost all cert errors a user will encounter in the real world are the fault of misconfiguration (wrong domain) or pathological/greed-driven behavior (expiration) rather than something that actually impacts the confidentiality of the connection (which is what we care about).

3. The fact that all cert errors are treated as the same severity (red screen! exclamation points!!1 YOU ARE IN DANGER!!!1one) conditions people to click past them without thought.

> A few problems:

> 1. SSH's whining about first connection fingerprint trusting is needlessly petty and nobody actually checks the fingerprints, and in many cases they have no need to do so anyways.

I disagree, but this is really a question of configured defaults and security UX. The first connection you make to a server is not secure, and impacts the security of all subsequent requests to that server.

> 2. Almost all cert errors a user will encounter in the real world are the fault of misconfiguration (wrong domain) or pathological/greed-driven behavior (expiration) rather than something that actually impacts the confidentiality of the connection (which is what we care about).

This is the great success of TLS - attacks are so rare that most users won't encounter them. Misconfiguration is indistinguishable from an attack, so the only reasonable thing to do is to warn the user as if it is an attack. Expiration is not a money grab, especially since the CA with the shortest expiration is also completely free. Expiration is a great thing. It limits the window of vulnerability for compromised certificates, and means that revocation lists like those shipped by chrome do not have to grow endlessly large, since expired certificates can be pruned.

> 3. The fact that all cert errors are treated as the same severity (red screen! exclamation points!!1 YOU ARE IN DANGER!!!1one) conditions people to click past them without thought.

With HSTS, that's not an option - and chrome can be configured by sites and enterprises to disallow bypassing certificate warnings. For example, try bypassing this one:


Interesting. I thought Safari on iOS would block it, but I guess that is just Chrome still.

Re: Your edit.

SSH actually follows the same model as SSL in this respect. It's just that basically everyone goes self-signed and there isn't a big institutional system to distribute SSH CA's.

It's more common in enterprisey environments where you have config management to distribute the CA but you can do it right now https://www.lorier.net/docs/ssh-ca.html
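A sketch of what that setup looks like with plain ssh-keygen; every name below is a placeholder:

```shell
# Generate a CA keypair (this is the only key clients need to trust)
ssh-keygen -t ed25519 -f ssh_ca -N "" -C "example SSH CA"

# A host keypair to certify (on a real server this already
# exists under /etc/ssh/)
ssh-keygen -t ed25519 -f host_key -N "" -C "host key"

# Sign it: -h makes a *host* certificate, -n pins the principal,
# -V limits validity to one year
ssh-keygen -s ssh_ca -I "host.example.com" -h -n host.example.com \
  -V +52w host_key.pub

# Inspect the resulting certificate (written to host_key-cert.pub)
ssh-keygen -L -f host_key-cert.pub
```

Clients then trust every host the CA signed with a single known_hosts line: `@cert-authority *.example.com <contents of ssh_ca.pub>`.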

Maybe it's my problem, because I don't understand all my options.

But when I spin up a new cloud server built on some image from a cloud provider, I am not sure how I can verify this certificate. How can I verify I am not being Mitm'ed on this new server?

Or maybe the cloud provider is dropping the ball in not giving me the fingerprint when I request its creation?

But yes, it does seem broken to me.

I don’t quite grasp your question, but when you set up an instance, the hosting provider can give you the SSH fingerprint of the server so you can check it on the first connection.

Check the pubkey fingerprint out of band, like you would do with ssh.
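Concretely, something like the following; the key here is generated locally only so the example is self-contained (on a real server you would read `/etc/ssh/ssh_host_ed25519_key.pub`):

```shell
# Server side (e.g. over the provider's out-of-band serial console):
# print the host key's SHA-256 fingerprint.
ssh-keygen -t ed25519 -f demo_host_key -N ""
ssh-keygen -lf demo_host_key.pub

# Client side: compare that fingerprint to the one shown on first
# connect, or pre-seed known_hosts to skip the TOFU prompt entirely:
#   ssh-keyscan -t ed25519 server.example.com >> ~/.ssh/known_hosts
```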

Pretty sure parent forgot the /s.

I found this script to manage your local CA:


You can also use the good old EASY-RSA: https://github.com/OpenVPN/easy-rsa

Don't get me wrong, using easy-rsa is definitely easier than not, but it doesn't abstract nearly as much as a tool for local development probably should, which mkcert definitely provides.

Thank you! This, along with my sibling replies, is some great information. Nothing that most of us couldn't accomplish without it, but it's going to save a lot of time. Multiplied by all the others that find it useful (I can't be the only one, right?), you have saved a LOT of time :)

The other option is to register a domain and use that to get either LetsEncrypt certs or purchase a wildcard.

That’s what I do. And LE actually supports wildcards, so that’s no problem. If your local network is firewalled off LE can still issue you a cert so long as you use DNS record validation.
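A sketch of how that looks: the `_acme-challenge` record name is standard ACME, while the certbot invocation is illustrative (per-DNS-provider plugins can automate publishing the TXT record):

```
; DNS-01: the ACME client asks you to publish a TXT record, then
; Let's Encrypt issues the (wildcard) cert without ever reaching
; the firewalled host. "<token>" is supplied by the client.
_acme-challenge.example.com. 300 IN TXT "<token>"
```

e.g. `certbot certonly --manual --preferred-challenges dns -d "*.example.com"`.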

It depends how often you want to update things, I guess. A wildcard costs less than $50 a year these days, so that might be better for some people than renewing their LE cert every few months.

I almost sympathize but

1. Maximum cert lifetimes are falling, once upon a time you'd just pony up the cash and get five years. A year ago it was 36 months, for a few months now it's been 825 days, and there is downward pressure. So you are still going to need to renew this cert, and that means...

2. You can and should automate. Imagine buying a device in 2018 that expects you to manually input an IP address because "Eh, we could do DHCP but this was less effort (for us)". That'd be crazy right? Time to feel the same way about certificate automation.

Between Certs and Java, it's the older IPMI devices that really kill me. I have some that don't even allow for custom certs and none support that type of automation that I'm aware of. In theory these problems go away over time but most of my home servers are in the cost vs. performance-per-watt sweet spot and won't be replaced for many years.

I'd expect Chrome to provide some settings at least to not mark local IPs as not secure

So, what is the Chrome team's solution for local network devices like routers? Proxy it over the manufacturer's server for a complete loss of any privacy and security, but hey, there's a green check mark then?

This change will only add the grey 'not secure' text to the address bar and not hinder functionality. So it is just less green and not more red as with invalid or self-signed certificates where you have to perform additional steps to continue.

I can only agree with that: having security contexts as a formal concept and requiring secure transport for certain features makes sense. Stuff like webcam or other hardware access (Bluetooth) should really not be allowed from an insecure (MITM'ed?) source.

That's in this version. How many versions until we get the ridiculous harassment already required for self-signed certs?

The older blogpost that laid out the roadmap stops here. https://security.googleblog.com/2016/09/moving-towards-more-... It's not impossible that they would do that in the future, but their plans so far don't seem to include that.

For now..

Look at the trend. First it was just a small SEO bump.

The problem is really _naming_ these devices first of all. If they have (global) names there's no problem to issue a certificate for those names.

But so often the device doesn't have any name at all; it has only an address, and so does everything else. The problem only appears to be in the security layer because that's the first place which absolutely insists that you can't have a situation where everybody is just named "Bruce" with no other identifier.

Where it does have a name, the name is often not part of the global namespace. At least here we can fix that with a namespace suffix. Sold five million routers with serial numbers? Name them $serialnumber.routers.your-company.example and problem solved. Now that they have a name, issuing them certificates isn't difficult.

(Yes, a commercial vendor who'll hook you up with five million certificates won't do it for free. The little rubber feet and the half-arsed English translation of the instruction manual weren't free either. Too bad)

That assumes your DNS is running through the device itself. For a router, maybe, beyond that, not really.

I don't know why there is still no standardization for advertising/providing CA services for local networks. How difficult would it be to just put a local ACME endpoint into DHCP options?

This sounds great, actually. Browsers should only trust them when connecting to local IPs (based on subnet or something) and display a gray padlock that says "Local"; the ACME endpoint should be restricted to only giving out certs on specific domains (.local or something), and to reissue any cert with the same name as a previous cert, the device has to prove that it's the same (unique embedded public key or something). Maybe an exception that devices can always get a local cert for their IP, and the owner of the ACME device can assign "pretty" names to "unique" names on a case-by-case basis.

The thing issuing DHCP leases has full control over your ability to connect to the internet anyways, so around here seems like the right place to put it.

My only qualm is that I trust router manufacturers to implement this correctly about as far as I can throw a sheet of paper.

But then what’s the point? How is it more secure to have anyone get a server cert automatically without credentialing? Not to mention training users to trust all the BS local CAs popping up now that can then MITM traffic.

As a user, I don’t want local networks setting me up to make me recognize their CA services.

At first I liked SSL everywhere, but now I’m seeing a lot of hacks that are going to make SSL less useful.

Not to mention training users to trust all the BS local CAs popping up now that can then MITM traffic.

You say that as if users don't already mindlessly dismiss most warnings already. I'm not convinced this would be that big of a difference from the current system.

Adding a local CA means you can middleman anything you want to; that seems like something that should be difficult to do, to me.

Name constraints are a thing. Local CA should be constrained to signing only local names (roughly matching dhcp domain-name/domain-search options).
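As a sketch (assumes OpenSSL ≥ 1.1.1 for `-addext`; the ".lan" suffix is an arbitrary example), a CA certificate can carry a critical name constraint, so conforming verifiers must reject anything it signs for non-local names:

```shell
# A CA only permitted to sign names under .lan
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout lan_ca.key -out lan_ca.pem \
  -subj "/CN=Constrained Local CA" \
  -addext "nameConstraints=critical,permitted;DNS:.lan"

# The constraint is visible in the certificate text:
openssl x509 -in lan_ca.pem -noout -text | grep -A 3 "Name Constraints"
```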

Name constraints are not universally supported in browsers sadly.

And this should be fixed. Name constraints would be incredibly useful for a number of things, if only they were supported.

I believe Apple is the holdout in this case, meaning Safari and Chrome on MacOS don't support it.

ACME has nothing to do with provisioning root certificate trust stores and I hope DHCP never will either. If you need additional local root CAs then deploy them via enterprise workstation management solutions.

Do many routers not support https for their config pages? If so, that's terrible on their part; get a new router. If they do, then accept the cert in your browser and move on.

That's exactly the problem. Devices on the local network cannot participate in the public CA infrastructure. So they run self-signed certificates, which is probably a small security win over plain HTTP, but then customers are faced with a scary "someone is stealing your data" page before ever seeing the configuration page. That's a no go, so most stay on HTTP, but as we can tell from this announcement, time is running out for that. Leaving them with zero good options.

If this trend continues it means you will no longer be able to configure these devices with a webbrowser but forced to use the manufacturers "cloud solution" or install an app where both ends of the TLS connection can be controlled and you're not bound to public CAs.

I bought a Ubiquiti AmpliFi home router. It’s great for signal, but can only be administered through their app (all traffic through their cloud, etc). This really sucks, and they don’t really care, as it’s probably easier to dev and they get all that sweet usage data.

It’s not a good trend, but this is a plus for most consumers who don’t care about local network security.

AmpliFi should be manageable through UNMS by the end of the year or so. I can't seem to find the chart, but here is an image from the forums: https://usercontent-community.amplifi.com/461f0130-9835-459b...

That will be good. I almost bought their enterprise stuff because their signal is so good.

I think you are jumping to conclusions for no good reason here, I think it's absolutely unlikely to happen that browsers remove http support. Worst case they'll add a clickthrough screen similar to the invalid ssl interstitials of today. Surely you can deal with an extra click through when logging in to your wifi router if you can't be bothered to install an ssl cert for it?

I can deal, but we're looking for the Dropbox solution here. This is only going to become worse with all the IoT hype, and if it's not solved, it will just mean all these devices will go through the manufacturer, all the time.

The Ubiquiti routers use a self-signed certificate, which Chrome already marks as insecure. There are ways to change it to use Let's Encrypt, though.

Adding this cert to your trusted certs file marks it as secure, even if self-signed. This is better than usual.

Most consumer routers use self signed certificates which will be marked as insecure by the browser.

All modern browsers allow you to import certs into their trust store.

But the UIs for that actively discourage people from doing so.

Besides, I believe, the majority of people using router config page wouldn't even know for sure what 'importing certs into their browsers trust store' means or why it should be done and where those certs are etc. There's a ton of consumer-unfriendly terms and concepts, plus, as the parent noted, the UIs are usually discouraging.

This is a small signal change, most users won't notice. See the example image: https://3.bp.blogspot.com/-pcT-gkZb6OA/WnyBrJKufcI/AAAAAAAAA...

I wish Chrome would make an exception for pages on localhost and devices on the local network. There are currently no good solutions for acquiring certificates in these cases.

Localhost already is an exception. For best browser compatibility use either 127.0.0.1 or ::1 as appropriate rather than the name localhost.

The browser has no realistic way to conclude that your "local" network is secure. It probably isn't. So there's no sane policy that says that's OK.

They aren't using the “faulty TLS” block page that requires work to get around, just an address bar “not secure” message.

If you are using localhost (or another HTTP-instead-of-HTTPS site) intentionally, it's not going to cause any problems.

HTTPS is definitely good, but this is a bit heavy-handed.

Many small devs don't want to deal with the complexity of HTTPS and the extra fees. It's a lot better with Let's Encrypt, but I've talked to non-technical people who have shelled out $300/year to their hosting providers just to have HTTPS, and inevitably lots of things break due to hard-coded links in their outdated software.

If authentication happens through a 3rd party provider and there isn't any need for a site to be secure, why force the matter?

Broken sites lead to a massive drop in sales. All because Google thinks it knows best.

If they truly wanted to solve the problem, why don't they offer a proxy that converts HTTP traffic to HTTPS traffic for use in Chrome?

Instead they force people who don't have the technical knowledge that they can get HTTPS for free to pay huge fees and inevitably have their sites broken in the process.

I thought this had been the case for a while now? I swear I've seen this headline 10 times in the last two years.

Chrome in particular has been gradually tightening things up. So there have been similar stories, as well as, of course, both the announcement and then it actually happening each getting an HN story.

For example, a while back Chrome changed their porn viewing mode ("Incognito") to label HTTP Not Secure, and changed normal mode to mark pages Not Secure if the user seems to be filling out a form.

If you want to test this in Firefox you can set these prefs:
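A sketch of the relevant about:config prefs (pref names as of Firefox ~59/60 are an assumption; check about:config on your version):

```
// show "Not Secure" text for plain-HTTP pages
security.insecure_connection_text.enabled = true
security.insecure_connection_text.pbmode.enabled = true
// optional: struck-through lock icon variants
security.insecure_connection_icon.enabled = true
security.insecure_connection_icon.pbmode.enabled = true
```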



And yet they recently changed it so that if you name a text field username or password it will get filled in with the current user's info, even when it is an admin page where you create new users. There is no way to instruct Chrome to NOT autofill your credentials onto another user. They removed support for the HTML attributes ages ago and even removed the workaround (hidden fields with display: none that don't get used) developers used to prevent this behavior.

Yup. I had to disable autofill entirely to stop obliterating users' data in an app I work with.

We should mark all sites serving Google ads as "spying on you"

Is there an alternative ad network that doesn't spy on users?

There is no need for an ad network at all.

True - I'm more worried about the privacy intrusions from advertising networks than I am about MITM attacks.

The last GDPR preferences link I clicked on showed nearly 100 companies potentially spying on me. Google is barely the tip of the iceberg.


Please don't post unsubstantive comments here.

Technical people should not be pushing centralization and vested interests, that's not a technical solution.

In a world of state surveillance and invasive data practices by SV-based companies, it's difficult to understand this obsession with http scaremongering by some to perpetuate more centralization.

>it's difficult to understand this obsession with http scaremongering

You surely meant it's not difficult, right? The first part of your sentence is exactly the answer.

Why would that be? HN is full of people who do not like even essential bureaucracy, let alone an unneeded one. Yet when it comes to superfluous certificate authorities suddenly it's ok? That does not make sense.

Everyone is concerned about centralization in other contexts but do not see the downsides of certificate centralization and control? How is it that there is no technical solution that does not involve 'authorities'?

This is how control works: first it's innocuous and harmless - just get a cert, it's even free from letsencrypt. Then after that is accepted it's x,y,z. Then it's x,y,z and your first newborn. And now you have a way to effectively prevent people from publishing and can silence dissent and anything you don't like under the cover of 'process'.

Wait, you don't see the obvious benefits of a local coffee shop page displaying its address and hours being served over an encrypted and secured connection?

Sarcasm aside, I think that the big organizations pushing for HTTPS everywhere also tend to employ a lot of people who visit HN; company culture does have an effect.

previous discussion of source: https://news.ycombinator.com/item?id=16334241 (806 points, 814 comments)

Is this also true for localhost, or does localhost get special treatment? What is a good way to get https certificates for localhost other than self signed certificates?

This isn't true for localhost. But some browsers don't know whether "localhost" is really localhost, so for best compatibility write 127.0.0.1 or ::1 as appropriate.

If you need to simulate HTTPS for your local host, but you actually control all the moving parts (e.g. a dev environment), you can use any private key + associated certificate for a DNS FQDN you control, then use /etc/hosts or its moral equivalent to tell your local machine that this name resolves to the loopback interface, and the key + certificate will validate.

You must not ship this as a "product" because when you do that all the end users end up with the private key, which both destroys the whole _point_ of public key cryptography AND violates the terms of whichever CA issued you with the certificate.
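For a dev box, the /etc/hosts half of that trick is one line per address family (dev.example.com is a placeholder for a name you actually hold a certificate for):

```
# /etc/hosts: resolve the certified name to loopback locally
127.0.0.1   dev.example.com
::1         dev.example.com
```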

Will it cause a problem for you if your browser displays gray text saying your connection to localhost is not secure?


Still self-signed, but generates a CA that gets added to your browser. It is all pretty seamless.

Is there any downside to using this to secure local servers with non TLD domains such as server.local?

Adding a CA to the system's local trust store, or simply registering one in the browser.

Not sure if asked but, how can I enable HTTPS for my statically hosted Github.io site CNAMEd under my own domain?

I think it's funny that Google wants to start marking HTTP as insecure and yet Google Search Console's "Fetch as Google" won't even follow a 301 response that redirects HTTP queries to HTTPS.

Chrome is a jerk

All traffic encrypted means no one can see your outbound transmission data, including you!

No way to verify what you're sending on the wire if the application is proprietary (and statically compiled) without dumping memory, which would be quite onerous.


You own the client. You can watch the traffic in the browser before it is encrypted.

No, developer tools.

Bye bye chrome. You were useful until you started pushing your agenda on everyone

You're kidding, right? It's 2018; there is no reason not to use https these days. With Let's Encrypt it's not like it's costing you anything.

There can be some reasons. Assume you have a site/blog hosted by a service provider like Wordpress, Medium, Github.io, Blogger, Tumblr or some other local provider. It can be expected that at least some of them don't support SSL certificates for custom domains.

time to get a new host then?

Time is money. It takes time to set it up.

So does showering, and you do that before interacting with people outside right?

damn, I should do what you do for a living if five whole minutes costs that much.

It's easy for people like us to set it up. I've set up Letsencrypt many, many times.

Now, imagine you are Joe Blow hosting his blog on some small web host that barely supports Wordpress. Logging into CPanel is confusing to you. How do you deploy SSL?

that is a fair point.

It looks like CPanel does support a letsencrypt plugin, which is really cool: https://blog.cpanel.com/announcing-cpanel-whms-official-lets...

However, I have to wonder how many hosts actually enable it...

Mozilla's been doing this for HTTP sites with password fields for a while, and has publicly said they will roll it out to all HTTP domains. Not sure why you think it's Google pushing an agenda here when it's the recommended policy of the W3C TAG.


Because my statically generated blog really needs https.

This is really going to create an additional layer of inconvenience for people who just want to drop some html documents in an ftp folder and be done with it.

>Because my statically generated blog really needs https.

So I was reading your blog and am particularly concerned about the crypto miner present on the page. Care to explain this to me? Hint: MITM due to insecure context and the miner isn't coming from you but as a user, I'm going to blame you because it happens on your insecure blog page.

Both my personal static site and my "literally only I can use it I've disabled user registration" file host use https. I can think of no good reason not to - to which people always link me that stupid anti-https n-gate article. The same site where the owner links to a Patreon account that I cannot verify is them and not a malicious actor looking to get donations from readers of the site. They also link to a Twitter account that may or may not be them.

Nobody's saying it does - including Chrome. The browser will simply (and correctly) show "Not Secure" in the address bar next to the url when viewing your blog. If a user doesn't have an expectation of security, they won't be bothered or perhaps even notice.

Which is great for users that understand what these security concerns are all about, like typical HN folk. But these people are probably aware of cyber security already, so not much gain here.

And people that _don't_ understand cyber security will have no context for what "not secure" means, and may needlessly avoid a variety of HTTP static-HTML sites, where these security issues aren't that great a concern.

Um, good? If people avoid your site because it's not secure, maybe you should fix it?

What does it mean to fix a site that does not accept/process POST requests?

Start serving over https? Since when has encryption only mattered for POST?

Can you explain the benefit in other situations?

Preventing MitM attacks is the only thing I can think of.

But your static blog isn’t secure served over HTTP. And that’s fine if you choose it.

Oh no, someone's going to MITM my shitty blog posts. "Secure" is a word that only makes sense with context, and without that context it only serves to cause irrational panic. I wonder why Google's really doing this.

> I wonder why Google's really doing this.

You don't have to wonder all that hard given how publicly Google has discussed their stance on this. They have been using their leverage to try to force SSL usage for some time, including adversely affecting search rankings for sites that don't use it. They have clearly articulated many times they think SSL everywhere is important for the web, and they have the leverage in search/browser marketshare to try to make this a reality.

> https://security.googleblog.com/2014/08/https-as-ranking-sig...

The Google I/O talk on Google's desire for "HTTPS everywhere":

> https://www.youtube.com/watch?v=cBhZ6S0PFCY&utm_source=wmx_b...

For what it's worth, most metrics show a significant jump in SSL usage in 2016/17 following the announcement that it could adversely affect search rankings, although who knows if the two are related.

Yes. They literally will. ISPs have injected ads into pages delivered to their customers. Data integrity also comes from HTTPS.

Boxing out ISPs is probably a large part of why they're doing it. Google believes traffic should be able to go directly from your computer to the server in question and then load up Adsense without any interference from your ISP injecting ads. They can do things for more than one reason; it's a good thing, but it also nullifies one of the few channels they don't have access to.

You may not care if bad actors inject crap into your web page, but I care, as may many of your readers.

This is not a theoretical vulnerability. Comcast routinely adds stuff to unencrypted web pages.

You'd be amazed at the amount of web sites which are not secured. E-commerce sites, corporate sites, classified sites, you name it. How unlikely would it be to visit a news site through some "free" vpn service only to have most of the ads replaced with shady ones? I mean let's get realistic here, https, while inconvenient, will make the web a better place. And even if your site isn't secure most users won't even notice, because they rarely ever watch the address bar. If they did there wouldn't be a bazillion of phishing attacks out there.

It's not irrational. Remember the Great Cannon of China attack in 2015? That attack tool works by modifying the responses to insert code to attack the victim. If your shitty blog posts aren't over https, then they can be used to attack other people.

In a way, it's a little like a public health argument. You might not be worried about measles but you should still be vaccinated for the sake of the herd.

How is having a "not secure" label showing up on your not secure http html folder an inconvenience? I think it makes more sense to move to secure by default, with a large "not secure" label for the nondefault (non-https) sites. Better to be proactive with the labeling than relying on endusers having to remember to look for that little padlock now that letsencrypt hands out certificates for free in a super easy, automated fashion.

And what happens if my grandmother's connection to your site gets MITM'd to redirect to a fake login page? Internet security is the responsibility of everyone hosting on the internet. You're doing a disservice to your users.

If the only advantage you can think of for sending traffic unencrypted is that it saves you a small amount of time, you're unlikely to find anyone who cares.

"I'm lazy, so I don't want to set up encryption on my website, but please don't tell my site visitors. They don't need to know"

Yes, plus it gives the feeling that https://chase-very-secure-sign-in-1024-bits-certificate.scam... is perfectly secure when it’s not.

Good education >> Browser gimmicks.

Good education doesn't scale. Browser "gimmicks" help people make better decisions when they don't have good education.

Also I get a huge red alert when I follow that link. Seems like chrome is doing a good job telling people it isn't secure.

On the contrary, the move is also to remove the padlock and "Secure" signs from such sites.
