Hacker News | kalmar's comments

This used to be me until I started using screenshot shortcuts that put images in the clipboard instead of files. You can paste directly into jira, slack, hangouts, whatsapp web, and many other places.


Author here. Those are great tips, thanks! I actually just installed evolve from pip last night. I'll probably borrow from your hgrc, or at least let it inspire me :-)

> I'm in the opposite boat where I've used mercurial for nearly 10 years but any time I try to use git I get lost.

Very interesting! Do you end up using it as a client to git repos?


I've been able to avoid git repos for the most part; the few times I've had to interact with them, I've kept my contributions to a bare minimum to avoid any complex workflow. That's not ideal, especially since the industry still largely uses git, but I haven't had the right occasion to learn git while also learning a project and committing the git process to memory. I've heard you can use hg on git repos, but I haven't investigated that yet. The things that confuse me most are, I believe, two points you brought up in your post: local vs. public changesets, and working in a detached-head state (working on a feature branch and then rebasing onto the primary branch for landing). On top of that, I get overly confused when GitHub is thrown into the mix, since again I've not sat down to learn the process and it only comes up sporadically.
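For the record, the "hg on git repos" option is the hg-git extension. A minimal sketch of the setup (per the hg-git docs; the repo URL below is a placeholder, not a real project):

```
# ~/.hgrc — enable the hg-git extension (install with `pip install hg-git`)
[extensions]
hggit =
```

After that, you can clone a git repository directly with a git+ URL scheme, e.g. `hg clone git+ssh://git@github.com/some-user/some-repo.git`, and work in it with normal hg commands.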


And you get 4 extra cores and 24 GB of RAM "for free" vs. the i3.16xlarge (which is what we use). I think we looked into switching, but it wasn't clear if the reservations could be switched over.


Yup, that's the blog post I mentioned in the intro. Sometimes obsessively reading HN pays off. Even if it's a year later... :-)


Back when we ran the Citus cluster on EBS, we lost some EBS volumes as well. This manifested as the disk not responding, followed several days later by an email from AWS with the subject "Your Amazon EBS Volume vol-123456789abcdef", telling you the disk was lost irrecoverably.

But yeah, you need to be ready for your disks to go away no matter where they are: ephemeral, EBS, physical, whatever.


Post author here. It's ephemeral, yes. It survives reboots, so that's not a problem. It doesn't survive instance-stop, so if a machine is being decommissioned by AWS we do indeed lose its data. As for how we protect against it, the main thing is replication: the data is stored on more than one machine. If we lose a machine for whatever reason, the shards from that machine are copied from a replica to another DB instance.
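For anyone curious what that repair looks like in Citus's own terms, a rough sketch: the coordinator exposes a `master_copy_shard_placement` UDF that copies a shard placement from a node holding a healthy replica to another worker. Function name and arguments are from Citus 7.x-era documentation, and the shard id and node names below are placeholders, not our actual cluster:

```sql
-- Copy shard 102008's placement from a healthy replica to a recovered node.
SELECT master_copy_shard_placement(
    102008,               -- shard id
    'db-worker-3', 5432,  -- source node holding a healthy replica
    'db-worker-7', 5432   -- target node to receive the copy
);
```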


As local NVMe storage does not have any interaction with the "classic" block device mapping APIs (the storage shows up as a PCI device, the same way that a GPU or FPGA does, and it doesn't matter in any way how the block device mapping is set up), there is no reason to use "ephemeral" to describe it.

Said more directly: no, it is not ephemeral. It is local storage that is tied to the life cycle of the instance.


Hi, post author here! First off, we actually do use RDS for other databases. As you point out, having a lot of the operational work taken care of for you is great.

The post is specifically about our Citus cluster, which stores the analytics data for all our customers. Most of the reasons we do this have been given by other folks in the replies:

  * RDS doesn't support the Citus extension
  * data is stored on ZFS for filesystem compression
  * we get significantly higher disk performance from these instances' NVMe attached storage, which isn't available on RDS
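For context, a ZFS-on-NVMe setup of that shape looks roughly like this. This is a sketch, not our exact configuration: the pool name, device paths, and tunables are assumptions.

```
# Build a pool on the instance's local NVMe devices and enable compression.
zpool create -o ashift=12 tank /dev/nvme0n1 /dev/nvme1n1
zfs set compression=lz4 tank
zfs create -o recordsize=8k tank/pgdata   # 8k recordsize to match Postgres pages
zfs get compressratio tank                # see how much compression is saving
```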


Post author here, happy to answer any questions. Looking into this one was fun, especially actually looking at the vDSO source instead of just thinking of it as "the thing that makes the gettimeofday syscall fast".


I'm curious why chroot is used instead of mount namespace and pivot_root(2). This would let them get away without CAP_SYS_CHROOT, while also providing stronger filesystem isolation.


Honest question: how do people use influxdb for monitoring and alerting? Our metrics feed into influx, and I cannot get answers to simple questions like “what is the failure rate” because arithmetic across measurements isn't possible [0]. I could shoehorn things into a schema to make it work, but in the limit I end up with one mega measurement.
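For what it's worth, the shoehorned single-measurement schema looks like this in InfluxQL, where math between fields of one measurement does work (in 1.x, at least); the measurement and field names are made up for illustration:

```sql
-- Only possible because `errors` and `total` live in the same measurement;
-- the same math across two measurements is what issue #3552 asks for.
SELECT sum("errors") / sum("total") AS failure_rate
FROM "requests"
WHERE time > now() - 1h
GROUP BY time(5m)
```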

[0]: https://github.com/influxdata/influxdb/issues/3552


Maybe try using Graphite as an additional query API for Influx. We switched to Influx from Graphite, but some queries were impossible or cumbersome to translate into the Influx query language (especially inside Grafana). In our case we use Graphite-API with an Influx plugin instead of Graphite's own frontend. https://github.com/InfluxGraph/influxgraph


It's possible to write raw queries in Grafana by changing the query edit mode.


True, but even then there are some queries you can't do that Graphite can. And you lose the nice editor Grafana offers.


> Honest question: how do people use influxdb for monitoring and alerting?

In our case, influx+grafana+alert notifications work well.

Yes, the query language needs a lot of work. It doesn't support anything beyond simple queries.


I've had similar problems.

I feed analytics from my webpage into InfluxDB, and it's impossible to compute a histogram of pageload times for the last 10k hits.


Right. You would have to approximate "the last 10k hits" with a time range instead.

Also, check out the Grafana histogram plugin. It works great for these scenarios.


I've tried the Grafana histogram plugin, but it doesn't work; I simply get a 404 error on load, and it makes the entire row unusable.

