Small things add up: 4chan's migration to a cookieless domain (chrishateswriting.com)
444 points by moot on Dec 2, 2013 | 131 comments



> 50 bytes may not seem like a lot, but when you’re serving 500 million pageviews per month, it adds up.

That's a hollow argument in the article. If they were serving pages of 500 bytes each, this would indeed be a huge improvement, but no page is anywhere near that small. I just opened 4chan.org, and the markup before <body> is already 1,836 bytes. The entire frontpage of /b/ is 114,428 bytes, and saving 50 is absolutely negligible. On the other hand, if saving single bytes were significant, there would be a lot more potential in the source by shortening CSS class names etc. than in picking a short domain name.

EDIT: According to http://www.4chan.org/advertise, there are a total of 575,000,000 monthly page impressions.


You're absolutely right -- it is negligible. When you're serving upwards of a petabyte per month, 23 GB isn't exactly a lot!

> On the other hand, if saving single bytes was significant, there would be a lot more potential in the source by shortening CSS class names etc. rather than picking a short domain name.

Also spot on, but the point I was trying to make is that I was given the choice between a longer domain and a shorter one, and the shorter one resulted in a smaller page size, which does yield some (though, as you put it, negligible) savings in terms of transfer. CSS/JS refactoring/pruning would definitely be a better bang for your buck if your goal were solely to reduce page weight, but my primary goal was to decrease request overhead, and this was just a side benefit at no additional cost to me.

As an aside, I would say the non-technical benefit of the longer domain (4chan-cdn.org) would have been avoiding user confusion, but I feel this is mitigated since visiting 4cdn.org directly bounces you to www.4chan.org, and our custom error pages make 4cdn.org clearly 4chan-related.


Right, using a short domain name "because why not" for something new is fine. It was just the "it adds up" part that I was commenting on.

Do you have (well, you're moot, so -- want to share) some more data on 4chan's current size? I'm sure lots of people would be interested in hearing about that. (This would probably make a great individual post.)


I'd be very interested in this... I used to visit 4chan all the time back in the day, not so much now, but I do enjoy reading statistics about the site.

Thanks for this article, moot.


I've been meaning to write a post about all of the weird stuff we do in the interest of maximizing our limited resources. We've always had to stretch things as far as possible given server, financial, and time constraints, which has led to some interesting/unorthodox "solutions."


That's what interests me about 4chan's story... How you do that is amazing.


What is your caching like, and do you use precompression? In fact, how do you serve that many pages full-stop? We (HN) need to know!


We write pages to disk as compressed HTML, and make use of nginx's gzip_static and gunzip modules to serve them. So every time a person posts, we regenerate the applicable reply and index HTML. The high post-rate boards are rebuilt by a daemon on a timer, since past a certain point (~1 post per second) it's wasteful to regenerate on demand given how long the script takes to run.

We essentially think of flat files on disk as a cache for the database, and don't employ a proxy cache or any other common HTTP proxies. It's a little unorthodox, but it works well for us.
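For anyone curious what "serve pre-compressed pages straight from disk" looks like, here's a minimal sketch of the idea in Node (an assumption on my part -- the real setup is nginx with gzip_static/gunzip, and the /cache path is made up): send the stored .gz bytes untouched when the client accepts gzip, and only decompress for the rare client that doesn't.

    const http = require('http');
    const fs = require('fs');
    const zlib = require('zlib');

    // Pages are assumed to be written to disk pre-compressed, e.g. /cache/b/index.html.gz
    http.createServer((req, res) => {
      const path = '/cache' + (req.url === '/' ? '/index.html' : req.url) + '.gz';
      fs.readFile(path, (err, gz) => {
        if (err) { res.writeHead(404); return res.end('Not Found'); }
        res.setHeader('Content-Type', 'text/html; charset=utf-8');
        if (/\bgzip\b/.test(req.headers['accept-encoding'] || '')) {
          res.setHeader('Content-Encoding', 'gzip');
          res.end(gz);                   // send the stored bytes as-is (the gzip_static idea)
        } else {
          res.end(zlib.gunzipSync(gz));  // decompress on the fly (the gunzip idea)
        }
      });
    }).listen(8080);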


Flat files on a disk == awesome


We've had a surprising amount of difficulty maintaining this over the years, since FreeBSD's NFSv4 implementation kind of sucked for a while, and it doesn't support exporting a memory partition (tmpfs). But you can trick it by mounting the memory partition via nullfs, and then NFS-mounting that.

We used that memory-partition-over-network thing for a while, but actually switched to SSD-over-network because it was faster than the memory partition. I spoke with a FreeBSD maintainer about it and he said what we were doing was so unsupported/unoptimized that he wasn't surprised.

We run into weird FreeBSD edge cases pretty often where there are few people, if any, who can answer our questions. Sometimes I wish we'd gone with Linux, but after ten years the hassle of switching doesn't seem worth it. Thankfully 9.2-RELEASE has been pretty good to us.


Have you considered memcached with nginx's HttpMemcachedModule? If so, why didn't you decide to use that instead? Seems like all of 4chan's active posts could easily fit in the memory of a server with a moderate amount of RAM (especially if compressed).


Why even write the HTML to disk? 4chan flushes history so aggressively I'd think you could fit all the high-traffic stuff in memory.

If there's a power blink and the posts get lost? It's 4chan.


Without fsync, it's effectively that; but you get atomicity (with unlink etc.) and can use multiple processes etc.


You are looking at the traffic sent by the server to the client, but cookies are sent by the client to the server. Upstream vs. downstream. Cookies are a much larger proportion of the request. Also, the response can't even be sent until the entire request has been received.


Excellent point... we can also safely assume the average user's upstream connection is quite a bit slower--likely on any metric other than latency--than his downstream.

In addition, let's not forget to multiply the cookie overhead by the number of images on the page... likely to be quite a large number given 4chan's love of images and enormous, eager-loaded discussion threads.
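To put rough numbers on that (all of these figures are assumptions: ~1 KB of cookie headers per request, ~100 image requests per page, and the 575M monthly impressions quoted above):

    const cookieBytes = 1024;      // assumed per-request cookie overhead (mostly analytics)
    const requestsPerPage = 100;   // assumed image requests per board index/thread
    const pagesPerMonth = 575e6;   // monthly impressions, per 4chan.org/advertise
    const upstream = cookieBytes * requestsPerPage * pagesPerMonth;
    console.log((upstream / 1e12).toFixed(0) + ' TB of upstream cookie headers per month');
    // ~59 TB with these assumptions -- the same order of magnitude as the
    // savings figures quoted elsewhere in this thread.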


> the markup before <body> is already 1,836 bytes

All those bytes are justified because they are rendered on the screen and provide direct utility to the user. The extra 50 bytes does nothing but slow the page down.

> there would be a lot more potential in the source by shortening CSS class names etc

There is no either/or here: this can and should be done as well.


Is there a good CSS minifier which looks at the CSS as well as the HTML, and shortens class names automatically? In most cases I don't really care what the class names are in the final HTML, as long as they match up with the CSS.

I realize there are a lot of things that could go wrong, such as:
* correlating the HTML and CSS for an entire site instead of just one page
* dealing with third-party dependencies that require certain class names to be used

Just wondering if there's any work already done with this approach.


I don't think there's anything that will change the class names, but there are minifiers:

http://yui.github.io/yuicompressor/css.html

Changing the class names would be difficult as it wouldn't pick up any dynamic class names, like

    var className = 'user_' + user.getState(); //'user_deleted' or 'user_active'


But that's the same problem as minifying JS libraries already, where you have a set of public symbols that should not be altered. An exclusion list works well in that case: you'd just list the class names that are generated dynamically (you probably don't do that with all the classes in the stylesheet).
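Roughly, such a pass might look like this (a toy sketch; the class names and regex are illustrative, not any existing tool):

    // Hypothetical class-shortening pass with an exclusion list.
    const excluded = new Set(['user_deleted', 'user_active']); // dynamically-built names
    const renames = new Map();
    let counter = 0;
    function shorten(name) {
      if (excluded.has(name)) return name;        // never touch excluded/public names
      if (!renames.has(name)) renames.set(name, 'c' + (counter++).toString(36));
      return renames.get(name);
    }
    // The same mapping must be applied to both the CSS and the HTML so they stay in sync.
    const css = '.user_active { color: green } .thread_reply { margin: 4px }';
    console.log(css.replace(/\.([A-Za-z_][\w-]*)/g, (_, n) => '.' + shorten(n)));
    // -> ".user_active { color: green } .c0 { margin: 4px }"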


No need, because you can gzip-compress, which will yield even better savings than that (because it also compresses and saves the space taken by tags etc., not just class names).


You could do that and gzip though; it would probably yield some savings.

An obfuscator generating code with IOCCC-style efficiency would be quite neat (e.g. http://www.ioccc.org/2012/endoh1/endoh1.c)


>You could do that and gzip though; it would probably yield some savings.

Nope. That's classic non-engineer thinking. The same kind of thinking that does "optimization" without profiling.

Let me introduce you to my friend: http://en.wikipedia.org/wiki/Diminishing_returns


"some savings" says the parent. A diminishing return is still a return.


And 0.0000000000001 of a dollar is still money. Yet it's meaningless to do anything about getting it...


I can say the savings would probably be greater than the 50 bytes he saved on the domain name. Compression isn't magic: if you compress long class names you're still storing at least one instance of each name, plus overhead.


Sure, but that's just by pointing image URLs, etc., to a shorter domain vs. a longer one.

There's also savings in not sending cookies with every image request - which saves them a honkin' 46 terabytes.

Plus, he himself admits that this isn't the first place to start optimizing - it's just where he chose to.


That's still 25 GB/month of bandwidth they save. Who knows, maybe that nudges them out of a high-cost data tier.


My point was that "saving 25 GiB" is way below anything they could possibly care about. How much traffic does 4chan handle in a single hour? I think it's pretty safe to assume that 25 GiB is significantly smaller than one standard deviation of their monthly traffic, so cutting costs with it isn't feasible either.


OK, so let's assume you are right. Did he make the wrong choice? The shorter name seems strictly better to me -- even if it's only trivially better, it saves traffic and the domain name loses no information.

If he reworked old code just to save 50 bytes, that would probably be a mistake, but it sounds like the work was being done anyway and he had the choice to save 50 bytes OR use a longer domain name.


> If he reworked old code just to save 50 bytes, that would probably be a mistake, but it sounds like the work was being done anyway and he had the choice to save 50 bytes OR use a longer domain name.

Basically this. I thought it was an interesting side benefit that came at no additional cost to the main benefit of greatly reducing request overhead for static resources.


You often pay less per GB the higher your volume gets.


> 50 bytes may not seem like a lot, but when you’re serving 500 million pageviews per month, it adds up.

This is rubbish. What matters is how many packets of data go through the network. It is packets which are the unit of transfer and which are handled by intermediate nodes. The difference between, say, 2340 and 2290 bytes is of no material consequence whatsoever; it may cause one fewer packet to be sent, but probably won't. To think it does have a consequence is to demonstrate a complete lack of understanding of what happens in the network and in endpoints. None of these '50 byte savings' accumulate anywhere in any meaningful or measurable sense whatsoever. So no, it doesn't "add up" to anything.

And if you're going to downvote me explain why I am wrong and how the benefit of these mythical 'savings' can be demonstrated.


Packets are not padded to the MTU, so a link literally needs more time to send a longer packet. If you make a response 50 bytes shorter, the last packet will always clear a 1 Gbps link roughly 0.4 µs sooner, even before you shave a whole packet off the ~3% of responses that happen to cross a packet boundary. How much you care depends on what you're doing and how bursty it is, but it's certainly measurable at scale.
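The arithmetic, for anyone who wants to check it (1 Gbps and a flat 50-byte saving are the assumptions):

    const bitsSaved = 50 * 8;           // 50 bytes
    const linkBitsPerSecond = 1e9;      // 1 Gbps
    console.log((bitsSaved / linkBitsPerSecond * 1e6).toFixed(2) + ' µs'); // ≈ 0.40 µs per response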


If the few bytes saved by this new domain seem worth commenting on, what about the very large number of unnecessary whitespace characters they are serving?


They have around 100 images on a single page, and Google Analytics adds roughly 1 KB of cookies on top of those 50 bytes. That does indeed result in about 100 KB of overhead.


I think you're confusing the two benefits.

What the thread OP is talking about is the decrease in page size from the URLs included in the page source. Choosing a shorter URL for the static domain versus a longer one resulted in a rough savings of 50 bytes per page (size), compressed.

You're referring to the request cookie size, which was also decreased significantly (CloudFlare still sets a single cookie, unfortunately) and which accounts for the big savings of ~100 KB upstream per page load.


Kind of off-topic, but how do you feel about doing a conscious MITM on your users (by using Cloudflare), 4chan being the home of Anonymous and all that?


4chan logs IPs and provides them to the cops if they ask, so it's not like users who want to remain completely anonymous can do so without being behind seven proxies anyway.


Yeah... people shouldn't ever assume that "anonymous" means "anything goes"


Providing IPs to law enforcement is not the same as allowing a third party to do a MITM. Cloudflare can inject whatever content they want into the pages they serve.


> > 50 bytes may not seem like a lot, but when you’re serving 500 million pageviews per month, it adds up.

I thought you were quoting me here -- I wrote the same text yesterday, only with a figure of 100 million pageviews. That was weird to see.


> If you’ve been linked directly to a Facebook photo, you may have noticed the domain wasn’t facebook.com, but instead something like fbcdn-x-x.akamaihd.net. Large sites load static content from special domains for a few reasons, but primarily to reduce request overhead, and sometimes security.

This works for 4chan, but in the case of Facebook it actually reduces security -- since cookies cannot be checked for photos on static domains, everybody can access every photo (as long as they are given a URL), regardless of the photo's privacy settings. In Facebook's case, they are probably using sufficiently random URLs that mostly mitigate the issue, but a naive implementation could be very problematic.
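A naive sketch of the "sufficiently random URL" approach (this illustrates the idea only -- it is not Facebook's actual scheme, and the host and path are made up):

    const crypto = require('crypto');
    function photoUrl(photoId) {
      // 128 bits of randomness stored alongside the photo record; guessing a valid
      // URL requires on the order of 2^127 attempts, so enumeration is impractical.
      const token = crypto.randomBytes(16).toString('hex');
      return 'https://static.example-cdn.net/photos/' + photoId + '_' + token + '.jpg';
    }
    console.log(photoUrl(12345));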


Totally. In our case the main security benefit is that we allow the uploading of SWFs, and it mitigates the threat of cookie stealing via Flash (which we've actually seen attempted). Previously we'd used 4channel.org for this, but I switched that over along with the migration to 4cdn.org.


If you can give someone else the URL of a photo you are allowed to see, you can just as well give them the actual photo. I don't get the security risk here.


If the URLs are guessable, then someone could harvest images without having been given the URLs by authorized users.


I kind of like it for FB. It's nice to be able to easily show a friend a photo, whether or not they are friends with the person who posted that photo and sometimes even whether or not the friend has a FB account.


Nice for you, but potentially not so nice for the poster of the photo who thinks that only his or her friends can access it.


Yeah, I acknowledge it has some less ideal traits as well. Of course, if I can see it, I can show it to anyone, whether or not I can direct-link.


Yep, this just speeds up the process. Images are something that really can NOT be protected. If they are visible, they are stealable.


Nothing that right-click, save, upload to imgur couldn't accomplish though.


Or printing out and handing it over.


I'm interested in learning more about _how_ secure random URL parts are, and what kinds of attacks are seen in practice. Presumably an IP would get blocked after enough 404s?


I'm pretty certain 4channers are good at tracking down profile URLs from just a Facebook image URL/filename. I don't think you can access the full album per se, but you can definitely locate the profile.


That's because the image URLs actually include the profile ID.


I thought they changed this behaviour months ago?


I just tried it and it definitely still has the profile ID in the URL.


Why even include the original name of the file to begin with, though?


Sounds like a situation where X-Sendfile could help?


A quick look shows that you could minify the Javascript quite a bit more (which is relatively easy, and would save a lot more than 50 bytes). You might also look at inlining all of the Javascript into the HTML directly (this is what Google does), which saves the extra HTTP requests. You could also do this with some of the persistent images on the page (logo) and inline them with base64 (but I don't think it actually helps with the page size).

Finally, instead of using the browser extensions (which are helpful), get actual PageSpeed installed on the servers!

Apache: https://developers.google.com/speed/pagespeed/module
nginx: https://github.com/pagespeed/ngx_pagespeed

Regardless, awesome stuff. Big fan. Much love.


Thanks for pointing that out. We minify production JS using Closure Compiler but sometimes that leaves room for improvement.

There's definitely a tradeoff between inlining JS and small images, but I think in our case it makes more sense to leave them external to leverage browser (and since we use a CDN for static content -- edge) caching.

I believe I tried to get ngx_pagespeed up and running when it was announced, but couldn't get it to compile from source. Sometimes (read: often) it sucks to be a FreeBSD user.


This is the second time in this discussion that you've mentioned the downsides of using FreeBSD, and you present it as a historical choice. Are there upsides as well? What were the criteria that made you pick FreeBSD over another OS ten years ago?


I've found that uglifyjs does about as good a job as Closure Compiler, but waaay faster. Could be a nice speed up in deploy times for you.


Closure beats UglifyJS on size when compiling with advanced optimisations.


I found that the advanced optimisations broke our Javascript when I compared the two a couple of years back. We weren't interested in rewriting our Javascript to make it compatible with Closure.


The thing that will break your code is notation like this['function'], since the compiler can have no idea what renaming should apply. There are features of the library that let you expose public APIs.

I can really recommend the compiler in AO mode; the size saving is insane (75% reduction in file size) and the type checking is sweet.


    which saves the extra HTTP requests
The reason Google does it is that they want a fast single page load.

4chan reloads the page on each click, meaning you save a shitload of bandwidth by not having to send the scripts and CSS each time the page loads.


You're absolutely right, I was just sharing some other options. Given the way 4chan is currently built inlining all of the Javascript would not be beneficial over keeping it external and allowing the browser to cache it.


Wouldn't inlining the js mean that it's downloaded for every page request instead of being downloaded once and cached?


Yes. Along with base64ing the logo. Better to serve those with some nice HTTP cache headers.


Each page of the site isn't all that different from the rest. They could just load the page once and update the page content with a JS AJAX call and DOM manipulation, honestly. You wouldn't even need to send full URLs for the images that way, just the unique part of the URL such that JS could reconstruct it from a constant prefix.


We actually do this with reply loading already, but not by default. We have a read-only JSON API and use that to append new replies when you're browsing in a thread (click [Update] or [x Auto] at the top/bottom of a thread). We also have a de-pagination feature that grabs all of the OPs from a board and lets you scroll through the indexes as one giant page.
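For the curious, the client-side update boils down to something like this (a sketch only -- the endpoint shape and field names are assumptions based on the public read-only API, not necessarily what the site ships):

    // Fetch a thread's JSON and return only the replies newer than the last one rendered.
    async function loadNewReplies(board, threadNo, lastSeenNo) {
      const res = await fetch('https://a.4cdn.org/' + board + '/thread/' + threadNo + '.json');
      const { posts } = await res.json();
      return posts.filter((p) => p.no > lastSeenNo);
    }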


Hey moot, why don't you let pass users bypass individual IP blocks?

I refuse to disable my VPN, so it means I can't post any more. A shame 4chan has no way for privacy conscious users to post, especially given your support of StopWatching.us and such.


It heavily depends on how they implement IP bans. Often they are done at a much lower level, making it impossible to verify that the user is logged in. Doing them at a higher level usually makes the ban less effective, since banned requests still incur a bunch of overhead.

The alternative is to have two views of the site that hit different servers -- one that requires login and another that doesn't -- but that introduces a whole slew of other problems. It would probably be the way to go if you wanted to do this, however.


You can... you just have to shell out money for a 4chan pass :\


I have one.

> "4chan Pass users may bypass ISP, IP range, and country blocks" > "Pass users cannot bypass individual (regular) IP bans."

So, if some random spammer uses the same VPN server, it gets blocked by an individual ban. This rapidly happens to all popular shared VPNs.


I never even contemplated the size of cookies before seeing this. It never occurred to me that it could create such an overhead. It'd be incredibly handy if we could set a header like `x-send-cookies: NO` to stop the browser sending cookies along for static content. Great post, a real eye opener.


Set a header where (on which request from whom to whom?)


The initial request from browser to server, I presume.


Here as well. I knew of several reasons to serve static content from a different domain/cdn, but never thought of this cookie-benefit.


Maybe it's just me, but I hate how current web technologies force people to register separate domains for static content. This is not how domains are supposed to work.


They don't. You could serve your site off of www.domain.com and your CDN off of cdn.domain.com easily.


But then you still have to remain vigilant against a clueless dev or random JS lib on www.domain.com setting a cookie for .domain.com, which your browser will helpfully include with requests to cdn.domain.com. With the completely separate root you're protected from that.

http://en.wikipedia.org/wiki/HTTP_cookie#Domain_and_Path
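The difference is just the Domain attribute on the cookie (hypothetical domain names):

    // Scoped to the exact host -- NOT sent with requests to cdn.example.com:
    document.cookie = 'session=abc123; Domain=www.example.com; Path=/; Secure';
    // Scoped to the registrable domain -- sent to *every* subdomain, including
    // cdn.example.com, on every image/CSS/JS request:
    document.cookie = 'session=abc123; Domain=.example.com; Path=/; Secure';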


So basically it doesn't have to be that way, but to protect yourself from cluelessness/stupidity, you do it.

Kind of like Unix permissions vs. jails/virtual machines. Both are secure, but one is more secure against incompetence than the other.


I wouldn't agree. There are completely legitimate reasons to set cookies on *.domain.com -- it isn't "clueless/stupid" to do so, just less ideal.


I wasn't making a statement, just trying to clarify and then making an analogy to verify my understanding. I think I omitted a question mark where I should have had one.

Certainly, it is not automatically a bad idea to set such cookies. I see that.


We use wildcard cookies to let users log in to www.domain and peek at beta.domain without logging in again. This isn't incompetence so much as reducing user drag. Perhaps I should have set it up to use www.domain/beta/ instead.

Unfortunately, this means our cookies are sent to static.domain. Worse, once we get rid of beta.domain there's no going back on wildcard cookies - there's no way to force clients to expunge cookies.


You could use beta.www.domain.com.


This makes a ton of sense, but it might be unorthodox enough to scare off some users thinking it was a scam of some sort.


Correct, but only if you don't set any *.domain.com cookies, which as it turns out most do.


Right -- if you're serving your website off of the root, then you're going to have this problem. If you serve it off of www (which, as it turns out, is how domains were originally intended to be used!) then you don't.

It's not an inherent flaw of the tech. It's a flaw in how we use it.


+1 Use of a subdomain prefix is absolutely the right thing to do, trendy root-domain-only sites notwithstanding.

CNAMEs are inherently more flexible and more resilient in the face of various load challenges or DoS attacks:

"Root domains are aesthetically pleasing, but the nature of DNS prevents them from being a robust solution for web apps. Root domains don't allow CNAMEs, which requires hardcoding IP addresses, which in turn prevents flexibility on updates to IPs which may need to change over time to handle new load or divert denial-of-service attacks. We strongly recommend against using root domains. Use a subdomain that can be CNAME aliased... " - Heroku [https://status.heroku.com/incident/156]


Could always use cookie paths and require all secured or stateful actions to hit some path (domain.tld/a). As long as no cookies are set at root, same benefit.

However, having static assets spread across multiple host names also helps browsers, which can open multiple concurrent connections to pull assets for a page. I think most browsers allow 4 concurrent connections per host. In this case, it's just one additional host.


And I hate how Google (and others) "force" websites to serve small files from *.googleapis.com (and other domains)... some websites end up loading files from 10-15 different domain names... I wish this would stop, though I say "force" sarcastically, because I know they could easily host them locally.


I'm very curious why an anonymous site that doesn't even allow registered users gets 100k worth of cookies on a typical connection.

Not saying you're doing anything wrong, just curious. I assume some of it is for ad tracking, but that's still a hell of a lot of data!


Well, it's a single kilobyte per request, but ~100 KB in aggregate per page load.

It's almost entirely Google Analytics, unfortunately. Our ads are served from a different domain (4chan-ads.org) for specifically this reason (user privacy and cookie bloat).


The newest version of Google Analytics eliminates most of the cookie bloat. There's now just a single id cookie that's around 30 bytes or so.

https://developers.google.com/analytics/devguides/collection...


Wasn't aware of this (but was hoping it was in the works) -- thanks a bunch!


Have you ever thought of running server-side analytics?


Yes, but every time I've investigated it, the tl;dr was "not worth it" given our requirements/constraints.

Google Analytics has its shortcomings, but it's a great product and free.


So you never encountered any problems with the free usage tier and its limitations? Asking out of curiosity, as we use it as well and break the hit limits every month. My direct supervisor asked me if I envisioned problems, and I told him that Google doesn't guarantee anything, but delivers nonetheless (as far as I knew at the time).

How do you feel about the believability of the data?


We wanted to do this at a news site I worked for (since we had way too many cookies), but the problem was Google News.

On Google News, thumbnails would not be shown unless they came from the same domain as the article page.

So our content was on "www.example.com" and pictures were on "media.example.com" but the cookies were for "example.com" so got sent with every image request.


Set cookie for www.example.com?


Unfortunately they used various subdomains for things.


moot, given that your site seems like an ideal candidate for WebP, why not use it with pagespeed (and/or let users opt to use the format when posting)?


In order to stay compatible with non-WebP-supporting browsers, moot would have to keep two versions of image files on the disk, causing a lot of pain and HDD waste.

It could work for the smaller boards, though...


A board for WebP animations would be amazing.


Recently a way to save ~33 raw bytes was suggested in the HTML5 Boilerplate issues by using localStorage.

https://github.com/h5bp/html5-boilerplate/issues/1444


Now, what about the extra DNS lookups? That adds 1x roundtrip time for the user, plus 20-40 bytes of IP header (v4/v6), plus 8 bytes UDP header, plus ~25 bytes for the query and ~100 bytes for the reply.


The DNS lookup only has to be done once and is then cached on the user's local machine, at least for the duration of the overall request (if not longer).

Thus the cost is only 160 bytes for the page, which isn't all that much.

Additionally, assuming their cache timeouts are non-trivial, there are several layers of caching that reduce the delay this incurs.


That's another thing YSlow and PageSpeed have complained about for ages, but hopefully the local resolver cache mitigates it somewhat.


Wouldn't that be cached after the first load?


Doesn't that depend on the browser? Chromium definitely does cache DNS records (chrome://net-internals/#dns). I remember Firefox relying on a system configured DNS cache (dnsmasq or pdnsd on Linux).


Yes it would be.


4chan uses cloudflare, right?

Have you investigated the gains to using Cloudflare's Railgun? Seems like it'd be able to save quite a bit of bandwidth on your end.


I'm a bit paranoid, don't trust Google, and never really liked the idea of reCAPTCHA on 4chan because of the illusion of anonymity found there. The cherry on top is finding out it uses Google Analytics (mind you, I haven't been there in a while; back then I wasn't nearly this concerned with privacy).


I also wish 4chan would switch to open source, self-hosted CAPTCHA and analytics solutions. CAPTCHA especially, because while analytics scripts, tracking images, etc. can easily be blocked, you cannot participate on the site without allowing ReCAPTCHA to constantly phone home to Google.


Switching from Google Analytics to Piwik would be nice.


They should probably also force connections through their SPDY-supporting HTTPS, rather than making it an option.


A huge portion of their user-base doesn't visit with SPDY-capable browsers.


Actually 78.75% of our users are on Chrome/Firefox!

SSL is forced on the domain you post to (sys.4chan.org) with redirects and HSTS, and we set cookies with proper Secure and HTTP-Only flags. Maybe some day we'll force SSL site-wide, but I don't think that's the right decision for now.

I definitely encourage people to use the EFF's wonderful HTTPS Everywhere extension though: https://www.eff.org/https-everywhere
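A rough Node stand-in for that setup (the real thing lives at the web-server/CDN layer; the cert paths and cookie name here are placeholders):

    const fs = require('fs');
    const http = require('http');
    const https = require('https');

    // Plain HTTP: bounce everything to HTTPS.
    http.createServer((req, res) => {
      res.writeHead(301, { Location: 'https://sys.4chan.org' + req.url });
      res.end();
    }).listen(80);

    // HTTPS: send HSTS (browsers only honor it over TLS) and flag session cookies.
    https.createServer(
      { key: fs.readFileSync('key.pem'), cert: fs.readFileSync('cert.pem') },
      (req, res) => {
        res.setHeader('Strict-Transport-Security', 'max-age=31536000');
        res.setHeader('Set-Cookie', 'pass=abc123; Secure; HttpOnly; Path=/');
        res.end('ok');
      }
    ).listen(443);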


78.75%? That's great news!

I imagined a lot of people would be using mobile and I know that Safari iOS doesn't support SPDY. Does this mean that > 80% of users are browsing on desktops, or is it possible there's a mobile app that's reporting a false user agent?

Or maybe all the iOS users fell victim to waterproof tests...


We get surprisingly little mobile web traffic -- only 16% in November.


Looks like you just assumed that iOS drives all the traffic on every website?

Having some idea of 4chan's user base, I wouldn't be surprised if Android is more popular than iPhone.


I'd love to hear what's stopping you from forcing SSL site-wide. Is it cost? If so, what are the specifics if you don't mind sharing?

Also, I don't know if you're using different VIPs for load balancing or lack of SNI support reasons or what not, but if your certificate provides proof of authentication for all your hostnames (probably need to use SubjectAltNames and maybe wildcards too) and the VIPs match, then Chrome & Firefox will send requests for those different hostnames over the same SPDY connection.


I doubt the domain length will really make any difference; gzip compression should take care of a longer domain name.


The 50 bytes figure represents a compressed response. (We actually write all of our pages compressed to disk before serving them -- nothing is served dynamically. But that's for another post...)

The example below isn't the most scientific, but should give you a rough idea.

  Test index page with different static URLs:
  URLs as 4cdn.org -- 23261 bytes compressed
  URLs as 4chan-cdn.org -- 23311 bytes compressed
  URLs as 4chan.org (control) -- 23278 bytes compressed
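Anyone can reproduce that kind of comparison with Node's zlib; the template below is made up, but the relative difference behaves similarly:

    const zlib = require('zlib');
    const page = (host) =>
      Array.from({ length: 100 }, (_, i) => '<img src="//' + host + '/b/thumb/' + i + '.jpg">').join('\n');
    for (const host of ['4cdn.org', '4chan-cdn.org']) {
      console.log(host, zlib.gzipSync(page(host), { level: 9 }).length, 'bytes compressed');
    }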


So 4ch.io would have knocked-off another 30 bytes or so?


BRB, switching everything now!


Can someone with more ZIP-algorithm knowledge than me explain why a difference in word length of 5 characters can result in a difference of compressed result length of 50 characters? I mean, a reference to a 1000-character word takes as much space as a reference to a 1-character word?


He did say "~50 bytes compressed".


I don't have any of these problems with 4chan because I block most of their JS and I don't accept 3rd party cookies, and I definitely don't let any Google APIs run on my local machine. Google's APIs are for Google's hardware, which my hardware is not a subset of.



