This is super awesome work. Well done - you should be proud!

I have to admit, I remember stumbling across your comment when you accepted the challenge and in my mind I scoffed, thinking you were never going to do it. Boy was I wrong! Once again, this is super cool.




Heh thanks!

I feel like there are two kinds of people who make bold statements like that: there are young people suffering from the Dunning-Kruger effect - inexperienced, but convinced they're hot shit. Then there are people who've actually done a lot of hackathon-type events and as a result know what it takes to pull one off successfully. (Time, caffeine, and a deep familiarity with your tools.)


As the one who challenged you in that original thread, what drew me to your initial comment was the great point that you made: that much of the time and difficulty in doing something novel is making many of the tough decisions, and that once those design and technical decisions are made (and revealed), it seems "obvious" to others, and is judged simple in comparison.

Congratulations on following through, and demonstrating your core premise!

What were the top things that you felt weren't captured by that premise--for instance, undocumented decisions that you had to discover on your own, or cases where you made tradeoffs that led to unexpected complexity? Were they mainly around bot-mitigation?


Thanks for saying so!

> that much of the time and difficulty in doing something novel is making many of the tough decisions, and that once those design and technical decisions are made (and revealed), it seems "obvious" to others, and is judged simple in comparison.

Yes - one of the things that drew me to the project was how well an event-sourcing style fits here. Doing it that way solves some of the architectural problems reddit talked about in their blog post. It seems obvious to me that this is a good approach, but obviously not everyone shares that view!
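
To make that concrete, the core of it looks something like this (a minimal sketch with made-up names, not the actual code): every edit is an immutable event appended to a log, and the canvas itself is just derived state - a fold over that log.

    // Event-sourcing sketch for a shared pixel canvas. Illustrative only.
    type PixelEvent = { x: number; y: number; color: number; userId: string };

    const log: PixelEvent[] = []; // in production, the Kafka topic plays this role

    function applyEdit(e: PixelEvent) {
      log.push(e); // append-only; history is never mutated
    }

    // The current canvas is a fold over the event log.
    function replay(events: PixelEvent[], width: number, height: number): Uint8Array {
      const canvas = new Uint8Array(width * height);
      for (const e of events) canvas[e.y * width + e.x] = e.color;
      return canvas;
    }

Snapshots then become an optimization - a cached fold at some log offset - rather than the source of truth.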

> What were the top things that you felt weren't captured by that premise--for instance, undocumented decisions that you had to discover on your own, or cases where you made tradeoffs that led to unexpected complexity? Were they mainly around bot-mitigation?

That's a great question, but I didn't spend much time being surprised.

The thing I was most concerned about was Kafka, but integrating it turned out to be delightfully easy. I had to write some code to buffer recent operations in my server for catchup - I wish Kafka had an API for that, but it wasn't hard to work around.
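
The buffering I mean is roughly this (a simplified sketch with illustrative names): hold the last N ops in memory, so a client reconnecting at version v can be caught up without rereading the whole topic.

    // Catchup buffer sketch. Each op consumed from Kafka carries a
    // monotonically increasing version number.
    type Op = { version: number; data: unknown };

    const BUFFER_SIZE = 10000;
    const recentOps: Op[] = [];

    function onMessage(op: Op) {
      recentOps.push(op);
      if (recentOps.length > BUFFER_SIZE) recentOps.shift();
    }

    // Returns the ops a client at `fromVersion` has missed, or null if it
    // has fallen off the buffer and needs a fresh snapshot instead.
    function catchup(fromVersion: number): Op[] | null {
      if (recentOps.length === 0) return []; // simplification: nothing buffered yet
      if (fromVersion + 1 < recentOps[0].version) return null;
      return recentOps.filter(op => op.version > fromVersion);
    }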

I think getting notifications working would have been a time sink but I explicitly removed them from the spec so I wouldn't have to deal with them.

It took me way too long to get Kafka actually running through systemd on my Linode. But I've spent enough time with apt-get that I wasn't surprised, just disappointed.
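
For anyone else heading down that path, the unit file ends up looking something like this (paths assume a tarball unpacked under /opt/kafka and a separate zookeeper.service - adjust for your install):

    # /etc/systemd/system/kafka.service - illustrative
    [Unit]
    Description=Apache Kafka
    Requires=zookeeper.service
    After=zookeeper.service

    [Service]
    Type=simple
    User=kafka
    ExecStart=/opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/server.properties
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

Then systemctl daemon-reload and systemctl enable --now kafka.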

I was surprised how quickly people started drawing smut, and how much time I needed to spend early on cleaning things up or writing tools to remove large bot-drawn genitals.

There are still a lot of decisions around rate limiting that I feel uneasy about. I worry that reddit's 5-minute rule wouldn't work for a little website like mine, so I allow ~1 edit per second. Is that a good idea? I don't know. It's an expensive experiment to try different values and see what happens, because there's a community involved - and I don't have reddit's huge user base. But maybe I'm being unnecessarily risk averse by allowing so many edits. Forcing slow editing is bolder: it requires a longer commitment to draw, but it's probably also much more satisfying to the people who create content.
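
The mechanism itself is trivial - roughly this (in-memory sketch with made-up names; the real thing also needs to survive restarts). The hard part is purely choosing the interval:

    // Per-user cooldown sketch. Illustrative only.
    const EDIT_INTERVAL_MS = 1000; // ~1 edit/sec. reddit's rule was 5+ minutes.
    const lastEdit = new Map<string, number>();

    function tryEdit(userId: string, now = Date.now()): boolean {
      const prev = lastEdit.get(userId) ?? 0;
      if (now - prev < EDIT_INTERVAL_MS) return false; // too soon, reject
      lastEdit.set(userId, now);
      return true;
    }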


Did you consider using Docker to provision Kafka?

A couple of days ago I remember reading about how difficult it was to deploy Oracle on Linux and how Docker made it a breeze. I wonder whether the same holds for Kafka.
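
From what I've seen, the common recipe is a small docker-compose file using the community wurstmeister images - something like this (untested by me, just illustrative):

    # docker-compose.yml
    version: '3'
    services:
      zookeeper:
        image: wurstmeister/zookeeper
        ports:
          - "2181:2181"
      kafka:
        image: wurstmeister/kafka
        ports:
          - "9092:9092"
        environment:
          KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
          KAFKA_ADVERTISED_HOST_NAME: localhost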


Probably. I was bullish on docker in the past, but I'm no longer convinced it's worth the trouble for small projects. It adds an awful lot of operational complexity for what is essentially a more complex abstraction around processes.

I think it's a nice tool for deployment and for making reproducible builds, but a lot of other things become harder through docker - like managing a database's data, and communication between local processes.

Maybe the tooling has improved in the last few years, but I've gone back to the raw unix coalface.


>... but a lot of other things become harder through docker - like managing a database's data, and communication between local processes.

It doesn't have to be that way. If you use shared folders to persist data on the host, you're in no worse a position, persistence-wise, than you would be with a natively installed app.

I think Docker's focus on orchestration (which makes business sense for them) is the reason running DBs in containers got a bad reputation. But really, if you use dirs shared with the host and view containers as processes, you can use them for DBs too.
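
E.g. with the official postgres image, something like this (illustrative) keeps the data on the host while the container stays disposable:

    docker run -d --name db \
      -v /srv/pgdata:/var/lib/postgresql/data \
      -e POSTGRES_PASSWORD=secret \
      postgres:11

Throw the container away, run it again with the same -v flag, and the data is still there.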

IPC with containers OTOH forces you to architect the system as a bunch of microservices, which is usually not a bad idea either.



