Building a r/place in a weekend (josephg.com)
401 points by mxstbr on Apr 16, 2017 | 80 comments



Thanks for the support everyone! It was a really fun little project. I'm happy to answer any questions people have.

I'd also like to say that I highly recommend everyone does projects like this from time to time. I don't know any way to gain programming skills faster than making small throwaway projects using new tools and techniques.


Currently, https://josephg.com/sp/current returns a png with 3x8 (RGB) bits/pixel. But since the image has only 16 colors, you can reduce its filesize to ~20% (from ~714 to ~147 KB, with http://optipng.sourceforge.net/). Can't you gain performance and reduce bandwidth by keeping the image serverside as a 4-bit png?


Yes. I wanted to do that, but I couldn't find a png library on npm that supports encoding 4 bit paletted pngs. Fixing that was on my todo list from day 1.

Although now that I think about it, I could probably get close by using an 8bit greyscale image and then apply the palette in the client. That would probably halve the image size.
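A rough sketch of what that client-side palette step might look like (illustrative only: it assumes the greyscale pixel value directly encodes a 0-15 palette index, and the PALETTE values here are placeholders, not the site's actual colours):

```javascript
// Sketch: fetch the greyscale image, draw it to the canvas, then rewrite
// each pixel with its palette colour. PALETTE is a hypothetical 16-colour table.
const PALETTE = [
  [255, 255, 255], [228, 228, 228], [136, 136, 136], [34, 34, 34],
  [255, 167, 209], [229, 0, 0], [229, 149, 0], [160, 106, 66],
  [229, 217, 0], [148, 224, 68], [2, 190, 1], [0, 211, 221],
  [0, 131, 199], [0, 0, 234], [207, 110, 228], [130, 0, 128],
];

async function drawPaletted(url, canvas) {
  const img = new Image();
  img.src = url;
  await img.decode();                        // wait for the PNG to load

  const ctx = canvas.getContext('2d');
  ctx.drawImage(img, 0, 0);                  // assumes canvas is sized to the image

  const frame = ctx.getImageData(0, 0, canvas.width, canvas.height);
  const px = frame.data;                     // RGBA bytes
  for (let i = 0; i < px.length; i += 4) {
    const [r, g, b] = PALETTE[px[i] & 0x0f]; // grey value doubles as palette index
    px[i] = r; px[i + 1] = g; px[i + 2] = b; px[i + 3] = 255;
  }
  ctx.putImageData(frame, 0, 0);
}
```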


Cool project! Optipng does many other optimisations but could be taxing on CPU. It should be very easy to put it in the stack for a test, though.


Any tips for coming up with good projects to do? Something like this is obviously beyond the scope of most novices.


I'd honestly disagree. Start small, don't expect to build something that works for 100k users and you'll be fine!

Take the bare bones of this project:

- Canvas.

- Websockets.

That's literally it. You'll need to know how to draw on a canvas, and how to send and receive WebSocket messages. You can quite happily keep the current state of the canvas in an in-memory array, perhaps saving it to a file every few minutes in case the server crashes. Then, when that's done, you can swap out your in-memory array for a Redis bitfield and switch the websocket messages from JSON to binary. Both should be only a few tens of lines of changes, but after that you'll be able to support tens of thousands of simultaneous users with hundreds, if not thousands, of changes per second.
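To make that concrete, here's a minimal sketch of what that bare-bones version might look like in Node with the ws package (illustrative only, not the code from this project): the whole canvas is a flat in-memory array of palette indexes, flushed to disk once a minute.

```javascript
const fs = require('fs');
const WebSocket = require('ws');

const WIDTH = 500, HEIGHT = 500;
let board;
try {
  board = new Uint8Array(fs.readFileSync('board.bin')); // resume saved state
} catch (e) {
  board = new Uint8Array(WIDTH * HEIGHT);               // or start blank
}

const wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', ws => {
  // A new client gets the full board, then incremental edits as they happen.
  ws.send(JSON.stringify({ type: 'snapshot', board: Array.from(board) }));

  ws.on('message', msg => {
    const { x, y, color } = JSON.parse(msg);
    if (x < 0 || x >= WIDTH || y < 0 || y >= HEIGHT) return; // ignore junk
    board[y * WIDTH + x] = color;
    const edit = JSON.stringify({ type: 'edit', x, y, color });
    for (const client of wss.clients) {
      if (client.readyState === WebSocket.OPEN) client.send(edit);
    }
  });
});

// Crude persistence in case the server crashes: snapshot to disk every minute.
setInterval(() => fs.writeFile('board.bin', board, () => {}), 60 * 1000);
```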

The complex thing about this project is the number of users who need to use it at once; lessen that requirement a little and you'll end up with a simple project.


I upvoted.

Looks like you genuinely felt it was out of the scope of a lot of novices (which it well might be, at least for a weekend project), and you genuinely were looking for tips.

Just look for projects that would make yourself or others happy. It doesn't matter how small or large, or whether it would take an hour or a decade; the hardest part is getting past line zero. Once you have that, the next hardest part is getting to line one; wash, rinse, repeat.

The good news is everything is relative and everyone has different skill sets - what's hard for everyone else might be a cinch for you due to your childhood fascination with blending fishing line and champagne corks - who knows? ;)

Try not to be discouraged, just stick with it and be infinitely inquisitive, throw caution to the wind and dive right in.

Good luck and happy sailing!


Just start a project, whatever you would like to see built. You will either build it or you will learn a lot. Win/win. Just don't invest resources you can't spare.


Excellent work actually completing the challenge set! If you were to do it again, would you still use the same technology used here? Kafka especially seemed to cause you a few issues.


Kafka wasn't a problem at all. Actually, I was shocked how well kafka just worked. It did everything I expected it to do, exactly as advertised in the documentation. Well, except for how annoying it was to get it installed and working on ubuntu. I will definitely be using kafka in future projects.

I think all my technology choices worked out fine. I dumped server-sent events halfway through in favour of websockets because WS support binary packets. But that was a pretty easy change affecting at most 50 lines of code.

I still wish we had an efficient (native) solution for broadcasting an event sourcing log out to 100k+ browser clients, with catchup ('subscribe from kafka offset X'). Nodejs is handling the load better than I expected it to, but a native code solution would be screaming fast. It should be relatively simple to implement, too. Just, unless my google-fu is failing me I don't think anyone has done it yet.


"an efficient (native) solution for broadcasting an event sourcing log out to 100k+ browser clients, with catchup"

Seems to me like you just described long-polling, which you dismissed in the article as "so 2005".


So for context I wrote a websocket-like TCP implementation on top of long polling a few years ago[1], before websockets were well supported in browsers. I'm quite aware of what long polling can and cannot do.

Yes, I did dismiss it out of hand in the article. The longer response is this:

In this instance long polling would require every request to be terminated at the origin server. I need to terminate at the origin server because every connection will start at a different version. The origin server in this case is running JS, and I don't want to send 100k messages from javascript every second. Performance is good enough, but barely. And with that many objects floating around the garbage collector starts causing mischief.

The logic for that endpoint is really simple - it just subscribes to a kafka topic from a client-requested offset and sends all messages to the client. It would be easy to write in native code, and it would perform great. After the per-client prefix, each message is just broadcast to all clients, so you could probably implement it using some really efficient zero-copy broadcast code.
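For anyone curious, the rough shape of that endpoint looks something like the sketch below (in JS, since the native version doesn't exist yet; `subscribeFromOffset` is a placeholder for whatever kafka consumer you'd wire in, not a real API):

```javascript
const WebSocket = require('ws');

const recentOps = [];   // in-memory buffer of recent messages for catchup
let headOffset = 0;     // kafka offset of recentOps[0]

const wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', (ws, req) => {
  // The client asks to catch up from a specific offset, e.g. ws://host/?from=1234
  const from = Number(new URL(req.url, 'http://x').searchParams.get('from')) || 0;
  for (let i = Math.max(from - headOffset, 0); i < recentOps.length; i++) {
    ws.send(recentOps[i]);
  }
  // After catchup the client simply receives the live broadcast below.
});

function onKafkaMessage(offset, payload) {
  recentOps.push(payload);
  if (recentOps.length > 100000) { recentOps.shift(); headOffset++; }
  for (const client of wss.clients) {
    if (client.readyState === WebSocket.OPEN) client.send(payload);
  }
}

// Placeholder: wire this up to a real kafka consumer (kafka-node, node-rdkafka, ...).
function subscribeFromOffset(topic, offset, onMessage) { /* omitted in this sketch */ }
subscribeFromOffset('edits', headOffset, onKafkaMessage);
```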

The other approach is to bunch edits behind constantly-created URLs, and use long-hanging GETs to fetch them. I mentioned that in the blog post, but it's not long-polling - there's no poll. It's just old-school long-hanging GETs. I think that would work, but it requires an HTTP request/response for each client, for each update. A native solution using websockets would be better. (From memory WS have a per-frame overhead of only about 4 bytes.)

[1] https://github.com/josephg/node-browserchannel


Btw, there's a Docker-based Kafka install that's great for spinning up Kafka and testing quickly.

Nice work on all this!


I always had crazy problems setting up Kafka until I discovered https://github.com/Landoop/fast-data-dev


Hi, right now you seem to be having some downtime (500 for DPAD_all.png and timeouts for /sp/current). The app, however, does not display that an error has occurred.


Yes I know - so much for my big talk about handling a large load. There's no substitute for proper load testing.

The server that both the app and my blog are on only has 1 core and 1.5gb of ram. I didn't pay close enough attention to kafka, and it turns out kafka was using half of the available ram, starving the rest of the processes. And it didn't help that people's bots were thrashing POST requests at nginx to edit the page.

I've just done some high fructose server maintenance, spinning up a new machine with 4 cores and 8 gigs of ram. The old site is proxying all traffic across, and once DNS propagates it'll stop being hit at all. Hopefully that'll ease the congestion.

Edit: at the time of writing everything seems back up and happy. Nginx was running out of open file descriptors, kafka was eating all the ram and ghost (my blogging platform) wasn't sending the right cache-control headers.

A few tweaks and a bit more CPU to play with and everything seems happier now.


Looks like people are continuing to spam defaced Angela Merkel pics.

How did you go about preventing spam? If you feel it might give the spammers the information to circumvent your measures, consider writing a blog post later.

How many users did the site receive over the course of the past 2 days?

Really cool weekend project btw, this is literally the first time I have seen someone follow through on the "I can build X in a weekend" claims.


> How did you go about preventing spam?

I can't really hide anything - all the code is on github, though it's ... um, ... not pretty. It might be better if nobody looks. http://github.com/josephg/sephsplace

But the system at the moment is really simple: the site only accepts 10 edits per 10 seconds from any IP address. After that your edits get rejected until the next 10 second window begins. You can write bots to draw things for you (and lots of the images you see are drawn this way). But drawing big objects is slow, so that's ok. I think that's a reasonable compromise between bots being powerful and humans being powerful.
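As a sketch (not the actual site code), that per-IP fixed window can be as little as this:

```javascript
// At most 10 edits per IP in each fixed 10-second window.
const WINDOW_MS = 10 * 1000;
const MAX_EDITS = 10;
const counts = new Map();    // ip -> number of edits in the current window
let windowStart = Date.now();

function allowEdit(ip) {
  const now = Date.now();
  if (now - windowStart >= WINDOW_MS) {
    counts.clear();          // a new 10-second window begins for everyone
    windowStart = now;
  }
  const used = counts.get(ip) || 0;
  if (used >= MAX_EDITS) return false; // rejected until the next window
  counts.set(ip, used + 1);
  return true;
}

// e.g. in an Express POST /edit handler:
//   if (!allowEdit(req.ip)) return res.status(429).end();
```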

The giant Angela Merkel image (and some other smut that I deleted) was drawn by someone proxying edits through about 200 IP addresses. I don't know if they're using TOR, or have access to a botnet or are using an anonymizing proxy or something. I could tell they were all the same botnet because all the requests had the useragent of 'python-requests/2.10.0'. (I have an IP address list if anyone wants to take a look.)

Anyway, I figured those addresses are probably hard to replace - so I let them do it, harvested the addresses and banned them all. Worse, I made it impossible to tell which servers are banned - the server replies to banned servers like normal - their edits just never appear.

I caught about 2/3rds of their addresses before they started making their headers match real browser traffic, but I think I ruined their fun and they stopped.

I have a few more tricks up my sleeve I could pull out if that little war escalated further. For example, I could always add a captcha you have to fill out when you open the page. The captcha would generate a token that you would have to provide with each edit. Rate limiting would then be by-token. Bots would still work, but you would have to give the token to your script. But getting around the rate limiting would be harder.

> How many users did the site receive over the course of the past 2 days?

Um, I'm embarrassed to say I don't know! I don't have a log of which IP addresses generated each edit. Monitoring and logging seemed much less important at the time. It's obvious in retrospect, but I wish I'd sent each edit along with a timestamp and some metadata into a separate kafka queue when I received them. That way I would have a complete audit log to play with now. I have all the edits - but I have no way of knowing who did what, or how many unique visitors I've had.


Oh, the spam. Back in about 2003 I built a site that let people doodle on a small square canvas and add it to thousands of others. It didn't take long for it to be riddled with dick-pics, stars and swastikas. I had a mighty banhammer. But it was so depressing.

The site was called 'blograffiti.com' - Remnants of it still exist on the wayback machine.

Fantastic work by the way. And thanks for the post about it all. I love weekend projects like this. That's exactly what kicked blograffiti off. (And most of my most valuable projects, come to think of it!)


One thing that reddit gets "for free" is a user authentication system that is tied to their existing site. A big part of how they slowed down spam was preventing anyone from editing unless they had a reddit account that predated the existence of /r/place. You could mock up something similar by OAuthing to Twitter/Facebook and only allowing accounts created prior to today to edit.


IPs are a terrible way of limiting input into a system, given that relying on the network stack to provide unique identifiers is an irrational expectation.

I'm thinking a grid of Litecoin addresses would work well to limit abuse of infrastructure to gain advantage, while still allowing bot activity. A payment to a given address would last for the amount paid divided by the cost of ownership per time period.


Dunno, these days reCAPTCHA is pretty easy and cheap to solve. (With services like 2captcha etc)


Bravo!


> "I'd never actually used kafka before, and I have a rule with projects like this that any new technology has to be put in first just in case there are unknown unknowns that affect the design."

I have been doing the same intuitively for as long as I can remember, but never stopped to realize this or why. I wonder what else I've learned by doing things like this that I now use unconsciously.


I think Jeff Atwood (known from CodingHorror and StackOverflow) once talked about "de-risking" a project. I'm not sure if this is a general, well-known business technique, but of all the business things I had in school, this was never one of them. It made a lot of sense to me too, and having a term to throw around helps convince others that it's a good idea.

Teachers typically want to first make up requirements and use-cases, then functional design, then technical design, then either code and tests or first tests then code (depending on the teacher)... Basically, you wait till 60-70% of the work is done to discover design flaws. Later on we had some Agile stuff as well, but more as a "this also exists" rather than "this is how it's done". Doing some prototyping and benchmarking to see whether something works at all was never part of anything.

One, exactly one, subject ever had a performance requirement: 1000 simultaneous users in a multiplayer game. And it had to work over Java RMI (which makes no sense at all). I was the only person out of two classes who pushed for (and was finally granted) the use of raw sockets. I was the only person who took this as a challenge and ran thousands of prototype clients on the school's computing cluster on a Saturday night so I wouldn't be taking anyone's compute time. They never even looked at it. But next Wednesday is the last thing I will ever have to do for that school (unless I have to resit), and I'm so happy I'm done with their shit and can do my own thing next. Properly.


I can see, though, how arguably it's more applicable to personal or startup projects (flexible, able to pivot), not so much to medium-big companies. There it can matter more to just implement the thing, regardless of whether it takes an hour or a week, than to ask whether it's a good idea to implement it at all.

And arguably university teaching focuses on these companies. That is why you have all of these fancy ways of encapsulating dependencies and wrapping them into oblivion.

Funnily enough, a really similar thing happens in other degrees. I studied Industrial Engineering and can calculate whatever you want about the kinematics of a robot arm, but it wasn't until I got together with some friends to learn how to make one from scratch that we really knew what it was all about.


This is exactly how I develop. Prototype the hard parts first, if they work, good, if not, serious redesign is needed. It would be a waste of resources to focus on the easy parts first.


You could make it so that blank pixels are free to draw on, but it takes longer to redraw them subsequent times. This would encourage the board to get filled up even with a small number of users, but would eventually allow things to be "locked down". It would also work well if you expect users to scale with time!

For example, first draw = free, 2nd redraw 10 seconds, 3rd redraw 20 seconds... capped at 5 minutes. Not that the actual implementation is that important.
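A sketch of that escalating cooldown (the growth schedule and the per-pixel bookkeeping here are just one possible choice, not a prescribed implementation):

```javascript
// Blank pixels are free; each subsequent redraw of the same pixel costs more,
// capped at 5 minutes. Doubling is only one possible schedule.
const CAP_MS = 5 * 60 * 1000;

function redrawDelay(redrawCount) {
  if (redrawCount === 0) return 0;                          // first draw is free
  return Math.min(10000 * 2 ** (redrawCount - 1), CAP_MS);  // 10s, 20s, 40s, ...
}

function canDraw(pixel, now = Date.now()) {
  // pixel = { redrawCount, lastDrawnAt } - hypothetical per-pixel state
  return now - pixel.lastDrawnAt >= redrawDelay(pixel.redrawCount);
}
```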


Oh, that's a great idea! That would also encourage people to draw new things instead of defacing old ones.

One of the first things that happened when the site went up was that someone started drawing something, and then someone else immediately started spamming junk pixels over the top of it. Despite being able to draw literally anywhere else in the world, they thought the best use of their time was to ruin someone else's creation. It was kind of disappointing to witness.

I might make that change now actually - the simplest form of it is very easy. I can just make white pixels cheaper to draw on, with stiffer rate limiting penalties for everything else. (Which isn't quite what you said, but I think it's the MVP version of it.)

(Edit: This is implemented now. You can draw over 25 white pixels in each 10 second window, but only 10 colored pixels)



Thanks. My blog is running on the same server as the app itself. It's a small Linode machine with only 1 CPU core, and nginx isn't keeping up with the traffic of both sites while fighting for CPU with ghost and kafka.

So much for my big talk about performance numbers. I'm fixing it as fast as I can.


Serving a static page shouldn't take much though... Are you using Wordpress?


Ghost, and I agree. I think the server might be getting thrashed by bots hitting sephsplace.

I've spun up a new, much bigger server to handle the load. I'm just waiting for DNS to propagate and it should start running much smoother.


This is super awesome work. Well done - you should be proud!

I have to admit, I remember stumbling across your comment when you accepted the challenge and in my mind I scoffed, thinking you were never going to do it. Boy was I wrong! Once again, this is super cool.


Heh thanks!

I feel like there's two kinds of people who make bold statements like that: there's young people suffering from the Dunning-Kruger effect - inexperienced but thinking they're hot shit. Then there's people who've actually done a lot of hackathon-type events and as a result know what it takes to pull them off successfully. (Time, caffeine, and a deep familiarity with your tools.)


As the one who challenged you in that original thread, what drew me to your initial comment was the great point that you made: that much of the time and difficulty in doing something novel is making many of the tough decisions, and that once those design and technical decisions are made (and revealed), it seems "obvious" to others, and is judged simple in comparison.

Congratulations on following through, and demonstrating your core premise!

What were the top things that you felt weren't captured by that premise--for instance, undocumented decisions that you had to discover on your own, or cases where you made tradeoffs that led to unexpected complexity? Were they mainly around bot-mitigation?


Thanks for saying so!

> that much of the time and difficulty in doing something novel is making many of the tough decisions, and that once those design and technical decisions are made (and revealed), it seems "obvious" to others, and is judged simple in comparison.

Yes - one of the things that drew me to the project was how building this in an event-sourcing style fits so well here. Doing it that way solves some of the architecture problems reddit talked about in their blog. It seems obvious to me that this is a good approach, but obviously not everyone shares that view!

> What were the top things that you felt weren't captured by that premise--for instance, undocumented decisions that you had to discover on your own, or cases where you made tradeoffs that led to unexpected complexity? Were they mainly around bot-mitigation?

That's a great question, but I didn't spend much time surprised.

The thing I was most concerned about was kafka, but integrating kafka turned out to be delightfully easy. I had to write some code to buffer recent operations in my server for catchup - I wish kafka had an API for that, but it wasn't hard to work around.

I think getting notifications working would have been a time sink but I explicitly removed them from the spec so I wouldn't have to deal with them.

It took me way too long to get kafka actually running through systemd on my linode. But I've spent enough time with apt-get that I wasn't surprised, just disappointed.

I was surprised how quickly people started drawing smut, and how much time I needed to spend early on cleaning things up or writing tools to remove large bot-drawn genitals.

There are still a lot of decisions around rate limiting that I feel uneasy about. I worry that reddit's 5 minute rule wouldn't work for a little website like mine. I allow ~1 edit per second. Is that a good idea? I don't know. It's an expensive experiment to try different values and see what happens because there's a community involved. And I don't have reddit's huge user base. But maybe I'm being unnecessarily risk averse by allowing so many edits. Forcing slow editing is bolder - it requires a longer commitment to draw, but is probably also much more satisfying to people who create content.


Did you consider using Docker for the provisioning of Kafka?

A couple of days ago I remember reading about how difficult it was to deploy Oracle on Linux and how Docker made it a breeze. I wonder if the same would hold for Kafka.


Probably. I was bullish on docker in the past, but I'm no longer convinced it's worth the trouble for small projects. It adds an awful lot of operational complexity for what is essentially a more complex abstraction around processes.

I think it's a nice tool for deployment and making reproducible builds, but a lot of other things become harder through docker - like managing a database's data, and communication between local processes.

Maybe the tooling has improved in the last few years, but I've gone back to the raw unix coalface.


>... but a lot of other things become harder through docker - like managing a database's data, and communication between local processes.

It doesn't have to be this way. If you use shared folders to persist data on the host, you are in no worse position, persistence-wise, than you would be with a natively installed app.

I think Docker's focus on orchestration (which makes business sense for them) is the reason why running DBs in containers got a bad reputation. But really, if you use shared dirs with the host and view containers as processes, you can use them for DBs too.

IPC with containers OTOH forces you to architect the system as a bunch of microservices, which is usually not a bad idea either.


Event Sourcing is a great pattern that could be used for many applications. It gives you an audit trail for free and lets you rewind to any point in time.

You don't even need to use a dependency like Kafka. We built a tool for tracking the lifecycle of software we ship; it uses Event Sourcing with snapshots and is open source: https://tech.fundingcircle.com/blog/2016/09/06/shipping-in-f...
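A toy illustration of that "rewind to any point in time" property (nothing to do with the linked tool's actual code): state is never stored directly, it's just a fold over the ordered edit events.

```javascript
// Rebuild the board at any point in time by replaying the ordered edit log.
function replay(events, upToTimestamp = Infinity) {
  const board = new Uint8Array(500 * 500);    // blank initial state
  for (const e of events) {
    if (e.timestamp > upToTimestamp) break;   // "rewind": stop the replay here
    board[e.y * 500 + e.x] = e.color;         // apply one edit event
  }
  return board;
}
```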


In this context, what do you mean by pattern?


Would you consider streaming/broadcasting yourself doing a project like this? I think it would be interesting to pop in and watch you work, especially if once every couple hours you provided some commentary about your thought process and what the latest hurdle to overcome is.


That was pretty fun. I missed out on the original until many days after it happened, but just now I started to recreate the Windows 95 start button in the bottom left corner... seemed almost daunting to even do that many pixels, but it took no time at all until other people started collaborating and finished the button in a time I couldn't imagine at the start of that miniature journey.

I have no clue who those people are. It's just anarchy and the only thing we have in common is a canvas :-)


I was repairing the Germany flag above; when I saw you wanted to draw a Windows 95 start button I moved it up to make space. I was amazed when I saw that the people building the button helped me move it up a notch :-) ps: also did some grey pixels on the button. Great fun indeed.


Seems like the opposite of anarchy :)


I could well have missed it but - have you got any scripts or config associated with setting up Kafka as you're using it?

I haven't used it before, and the comments on your writeup make it sound more approachable than I'd expected.


First of all, great work. It's been a long time since I've done a weekend hack like this myself.

However, it seems to me that Kafka is unnecessary in this system. It's clear that, at least in the final version, the system isn't designed to scale beyond one application server. For one thing, you're storing the ban list on the local filesystem. So it's definitely not 12-Factor compliant. And you're storing a local snapshot of the image. So why send the edits out to Kafka, only to have them come back in to the same process?


Even if I'm only using a single machine, using kafka allows me to load balance across a local cluster of server processes. This is important for nodejs apps because node is single threaded. And if I skipped kafka I'd still need to store the edit log somewhere for catchup. I've written that code a few times now, and I think it's arguably more complicated to implement than just using kafka directly.

But right now to help deal with load the process is running across 2 machines. I just had to manually copy the ban list and snapshot database files across. When the server came up it pulled the snapshot version out of the database file, caught up from the kafka log and went to work.
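That restart flow, sketched out (loadSnapshot and consumeFrom are placeholders for the real snapshot file and kafka consumer, and the topic name is made up):

```javascript
const WIDTH = 500;

async function start() {
  // 1. Load the last local snapshot: { version, board }. Placeholder call.
  const snap = await loadSnapshot('snapshot.db');
  const board = snap.board;
  let version = snap.version;

  // 2. Catch up from the kafka log starting just after the snapshot version,
  //    then keep consuming live edits. Placeholder call; every process in the
  //    local cluster does the same thing, so they all converge on one state.
  consumeFrom('edits', version + 1, (offset, edit) => {
    board[edit.y * WIDTH + edit.x] = edit.color;
    version = offset;
  });
}
```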

Having a nice solution to distribute those files would be lovely - but I made the whole project start to finish in 2 days. I'm not going for 12 factor compliance here.


Ah, makes sense.


Honest question: how did you fill out the canvas? You put it live, and then? Thousands of people (including those who care enough to set up a bot) suddenly came from your tweets?


I imagine quite a few came from the HN thread [1] where he originally made the bet that he could do it. That's how I found the site. It was 95% blank and I just decided to draw a bit. Later I came back several times to see how my drawing had lasted and fixed some minor vandalism. Also had to completely redraw at one point due to the Merkel bot. [2] Finally, after the blog post discussing how he built it started to get traction the vandalism rate got so high that I decided to write a bot to maintain the art for me.

--

[1] https://news.ycombinator.com/item?id=14109158

[2] https://twitter.com/josephgentle/status/853312152223965184


Someone also posted it to the original r/place subreddit. I think a lot of the traffic came from there: https://www.reddit.com/r/place/comments/65hs9j/inspired_by_r...


I might be going a bit tangential. I did read the article but I always wonder: how would I go about learning all the technology involved to build something like this? Is there a guide for people who know the basics of PHP and MySQL? I'm a civil engineer by trade and only do this as a hobby sometimes.


I make my living programming, but I've never had any formal training -- programming used to just be a hobby as well. I learn best by digging into what other people have done and seeing how they did it. So I'd suggest taking a look at the GitHub repo for this project and setting up your own instance of it. Learn how to install and configure the dependencies. Then make a few small changes that force you to learn how the code works.


Seph's law: Programming is 95% decisions and 5% typing.

You are an inspiration! If people grasp what you've done then you've lowered the fear people have of copying things. That should lead to more attempts, failures, improvements, competition, a true catalyst!


Aw thanks! And yeah I agree. The way to get good at programming is to make lots of stuff.

> Seph's law: Programming is 95% decisions and 5% typing.

:) In real projects there's usually the other 95% of the time spent reading the existing code and figuring out how it works. But that's much harder to fit into a glib saying.


"the other 95% of the time" haha


Tom Cargill managed it. (-:


Would the server-side bit of this be a good, or at least fun, benchmark? More towards the TechEmpower end of the pool than TPC. There's a pretty clearly defined API, and some fun concurrency and data-handling to do.


Just the write-up of what he did would take me more than a weekend.


Great work, thanks for sharing!

One decision that I don't agree on is the choice to send messages in order.

I usually prefer to flood the client with messages, attaching to each message a timestamp (a monotonically increasing integer), and have the client re-order everything.

It is cheaper from the server's point of view and the work is done by the clients.

Is there any reason why you picked your specific solution?

Just a technical question that I am very curious about. I guess that there are concerns that I am overlooking at the moment...
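(For concreteness, the client-side reordering being proposed here would look roughly like the sketch below; applyEdit is a placeholder for the actual drawing code.)

```javascript
// Each message carries a monotonically increasing sequence number; the client
// applies them in order and parks anything that arrives early.
let nextSeq = 0;
const parked = new Map();          // seq -> message that arrived out of order

function applyEdit(msg) { /* placeholder: draw msg.x, msg.y in msg.color */ }

function onMessage(msg) {          // msg = { seq, x, y, color }
  parked.set(msg.seq, msg);
  while (parked.has(nextSeq)) {    // drain everything that is now contiguous
    applyEdit(parked.get(nextSeq));
    parked.delete(nextSeq);
    nextSeq++;
  }
  // A real version would also re-request anything missing for too long,
  // otherwise a single lost message stalls rendering forever.
}
```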


That would work, but I don't think it would make anything simpler. You would essentially be reimplementing parts of TCP on top of whatever unordered protocol you'd use to make sure your client view is eventually consistent. All the APIs I used strictly order messages. Websockets simply don't have a UDP-equivalent, unless you want to send the events over plain long-polled HTTP requests or something, but that sounds like an inefficient recipe for disaster.

And if you did that the client would need to be able to track and fetch lost messages. That adds client complexity and server complexity for the extra endpoints.

My imagination is haunted by premonitions of bugs. In one ghostly image I see edits getting silently lost sometimes and not knowing why. Just, sometimes if I draw a line, on your screen you see the occasional block missing until you refresh your browser. In another premonition I imagine a packet reordering system thinking it's missing a single operation and waiting on it forever. To the user it looks like everything has frozen completely.

We have well-implemented protocols that deliver messages in order. I see no reason not to use them.


Definitely, mine was just a simple question about the design; it is interesting to see how different people think ;-)

The work to keep everything in order definitely has to be done somewhere. You are doing it twice: once by actually sending the messages in order, and again in the TCP stack. Clearly the one in the TCP stack "comes for free" for anybody reasonable.

Like yours, my mind is also haunted by possible bugs; however, in cases like this I prefer to borrow from the Erlang philosophy and embrace possible failure. The way I model this problem is that packets usually arrive in the same order, or one close enough, to the order in which they are sent. It is rare that a single packet gets lost, but if that happens I want to be able to re-ask for it and not block the whole rendering.

I would have accepted receiving messages slightly out of order, with a guarantee of something like the last 10/20 messages.

Would it require some reimplementation of the TCP stack? For sure! Would it be the common case? Definitely not! Would it make the architecture more resilient? I believe so, but I may be wrong.

Now, to be clear, your work is amazing! I am really glad that you shared it and it is a pleasure to have this kind of technical conversation. Given your technical and time constraints I would have done just the same.

Just wondering if you have any thoughts on my counter points.

Happy Easter!


Great work.

I'm curious about the algorithm for bans, once you're done with the project I hope that you will disclose it.


The system isn't fancy. It's just the bare minimum code to stop the specific forms of abuse I was seeing.

I replied to another comment with details: https://news.ycombinator.com/item?id=14125518


This is absolutely great, and also much closer to how I would have done it as well. Reddit's implementation seemed like they chose so many wrong tools.


Fantastic job, congrats and Happy Easter, it was very nice following along. A lot of people were rooting for you.

Someone get this man a chocolate egg, he deserves it!


It's down.


What if we had reddit but with arbitrary group activities instead of just chat?


It looks like it's down for me. Is anyone else still able to see it?


Would it be possible to make this entirely serverless with WebRTC P2P?


Hahaha, awesome work. And sorry from Germany for the Nazi morons spamming you.


Germany did nothing wrong.


I highly suspect at least the spamming of distorted/vandalized pictures of Merkel is the work of German alt-rights (or, to be accurate, neo-Nazis).

Spamming swastikas, though, this one is a known modus operandi of trolls worldwide.


It says "rules: no swastikas", which is just an invitation for people to draw swastikas.

This reference is better than a swastika though. http://i.imgur.com/sFwteao.png


I added that after someone got snippy at me after I deleted their 'art'. This way expectations are clear about when I'll intercede with my admin powers.


Good job!


super awesome!!


Jesus, how many more weeks are people going to write 10 articles a day about this silliness? You'd think it was the first online webpage that did something besides display hypertext?



