
Postgres can be tuned to 10k inserts per second; Kafka is definitely overkill for 200/s.
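
For context, here's a rough sketch of the kind of batching that gets Postgres into that range, using pgx's COPY support. The events table, its columns, and the connection string are placeholders, not anything from the thread:

// Illustrative only: batch rows into a single COPY round trip with pgx
// (github.com/jackc/pgx/v5). Table name, columns, and DSN are placeholders.
package main

import (
    "context"
    "log"

    "github.com/jackc/pgx/v5"
)

func main() {
    ctx := context.Background()
    conn, err := pgx.Connect(ctx, "postgres://user:pass@localhost:5432/app")
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close(ctx)

    // Accumulate rows in memory, then flush them in one COPY; batching like
    // this (instead of one round trip per row) is what pushes insert
    // throughput into the thousands of rows per second.
    rows := make([][]any, 0, 5000)
    for i := 0; i < 5000; i++ {
        rows = append(rows, []any{i, "payload"})
    }
    if _, err := conn.CopyFrom(ctx,
        pgx.Identifier{"events"},
        []string{"id", "payload"},
        pgx.CopyFromRows(rows),
    ); err != nil {
        log.Fatal(err)
    }
}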


Thanks! I will definitely look into this some more.


Reminds me of this video about the origin of the name Tiffany: https://m.youtube.com/watch?v=9LMr5XTgeyI

"I didn't fly across the Atlantic to... Of course I did"


Ha, then you might enjoy this anecdote: in that Tiffany video, there is a chart that appears on screen for a fraction of a second at 7:39 (the correlation "people drowning in swimming pools <> films with Nicolas Cage"). That chart is from Spurious Correlations, and it was made by the author of the bridge article.

Source: Am that author. :)


I notice that part of the video is now highlighted in "most replayed", possibly as a result of this discussion.

I know Grey has mentioned in the past being confused by traffic surges to his videos, so I wonder if he noticed this one. But then, he's also mentioned reading Hacker News at times, so maybe he read this comment too.


Reminds me of the ghost Unicode character saga: https://www.dampfkraft.com/ghost-characters.html


Just saw this. I've got no idea how kanji OCR works, but I know enough Japanese to tell what most of those characters are attempting to refer to; my penmanship has certainly been that bad. I still don't understand how they made their way into the standard, unless that part wasn't handled by someone competent in Japanese.

I wonder how often that happens; surely there are tons of people dealing with Japanese text who can't read it and just use diligence to make sure the "letters are the same".


Nested virt doesn't work with some things like OS snapshot/restore, so they might not support it to allow those features.


We got around this at my company by just pooling all of the LISTEN/NOTIFY streams onto a single database connection in software. Here's a sample implementation (pseudocode):

from collections import defaultdict

# sql() is a placeholder that runs a statement on the single shared connection
listeners = defaultdict(list)  # channel -> registered callbacks

def software_listen(channel, callback):
    if not listeners[channel]:  # first subscriber: issue the real LISTEN
        sql("LISTEN " + channel)
    listeners[channel].append(callback)

def on_message(channel, data):  # called for every NOTIFY on the connection
    for listener in listeners[channel]:
        listener(channel, data)

def unlisten(channel, listener):
    listeners[channel].remove(listener)
    if not listeners[channel]:  # last subscriber gone: stop listening
        sql("UNLISTEN " + channel)

Here's the actual Go implementation we use:

https://gist.github.com/ColinChartier/59633c1006407478168b52...
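
For comparison, here's a minimal sketch of the same pattern in Go using github.com/lib/pq. This is an illustration rather than the gist's actual API; the Pool and Callback names are made up:

// Minimal sketch of the single-connection fan-out pattern, assuming lib/pq.
package notifypool

import (
    "sync"
    "time"

    "github.com/lib/pq"
)

type Callback func(channel, payload string)

type Pool struct {
    mu        sync.Mutex
    listeners map[string][]Callback
    pgl       *pq.Listener
}

func NewPool(connStr string) *Pool {
    p := &Pool{listeners: map[string][]Callback{}}
    // One pq.Listener holds the single shared database connection.
    p.pgl = pq.NewListener(connStr, time.Second, time.Minute, nil)
    go p.dispatch()
    return p
}

// Listen registers a callback; LISTEN is only issued for the channel's
// first subscriber, mirroring software_listen above.
func (p *Pool) Listen(channel string, cb Callback) error {
    p.mu.Lock()
    defer p.mu.Unlock()
    if len(p.listeners[channel]) == 0 {
        if err := p.pgl.Listen(channel); err != nil {
            return err
        }
    }
    p.listeners[channel] = append(p.listeners[channel], cb)
    return nil
}

// dispatch fans each NOTIFY out to every callback registered for its channel.
func (p *Pool) dispatch() {
    for n := range p.pgl.Notify {
        if n == nil { // lib/pq sends nil after a reconnect
            continue
        }
        p.mu.Lock()
        cbs := append([]Callback(nil), p.listeners[n.Channel]...)
        p.mu.Unlock()
        for _, cb := range cbs {
            cb(n.Channel, n.Extra)
        }
    }
}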


As far as I understand, that's 100 shared queries across all users.


Why can't Apollo use per-user OAuth and not Apollo's own API keys?


Wouldn't that require each user to go into the developer settings for their account and set up their own OAuth app integration? That's probably not feasible for end users, and I would imagine it breaks the spirit of the API terms, if not the letter of them.



Paul Graham would say that for exponential growth, 10x better is the bar.


Will add those to the landing page - they got lost in the Figma design


It looks like all of the auth data is empty - maybe you logged into an account which doesn't have permission to install on that repository?

If you email me at colin@webapp.io with your GitHub account name, webapp.io account (if any), and the name of the repository, I can take a look for you.


Ah, I stripped it from the post; I didn't want to leave any important tokens on the internets. :D I removed the authorization and tried again a few minutes later, and it ended up working. Probably a temporary problem.


Practically: we don't actually require you to use Docker, and we don't use it under the hood. The configuration (which looks like a Dockerfile) can run multiple containers in the same VM:

FROM vm/ubuntu:22.04
RUN (install docker)
RUN REPEATABLE docker-compose build
RUN BACKGROUND docker-compose up
EXPOSE WEBSITE localhost:8080

You can also just run entirely non-Docker services:

RUN BACKGROUND redis-server

- The DSL will watch which files are used for each build step to skip everything you haven't changed, so builds are much faster
- You can use us to clone VMs and run Cypress tests in parallel, then promote one of the clones to a preview environment and another to a production environment
- We're more declarative (in my opinion): we don't have a CLI; everything is done by editing a Layerfile and git pushing it

TL;DR: we focused on a declarative configuration format for quickly building, forking, and hibernating VMs, whereas Fly is more focused on building and shipping Docker containers (though the products do fill a similar niche).


Hey! Best of luck with this. Real quick: we don't use Docker "under the hood" at fly.io either. We take OCI containers as inputs and transform them into VMs. The only "Docker" running here is the code that allows us to accept our users' existing containers.


Sure, I meant "docker" here as a simplification for the OCI format.

