Hacker News | TechIsCool's comments

I think the rotating photos create a poor UX. The purpose of this layout, it seems, is to let users view the images carefully and study the details, but the slideshow effect makes that difficult.

From a casual browsing perspective, I liked it. However, it'd be nice if the rotation paused when you hover over an image, or something like that, to get the best of both worlds.

I mean, if your intent is to view the images carefully and study the details, why not click through to the details page and see larger, more detailed photos?

Because I'd like to look at the image I'm looking at at the moment I'm looking at it.

I found the page frustrating and overwhelming and had to close it.


I agree; 2-3x pricing just because you have more people always felt like a cash grab, similar to an SSO tax. We also have a lot of complex PagerDuty configurations, and their APIs are painful: timestamps drift, templates get updated but don't show drift, and identifiers are not the same between the UI and the API. I regret implementing it all in Terraform and, sadly, would rather just let teams manage their own on-call.


Feels like a sales pitch, mostly due to the abstraction of Provider A, B, C instead of actually naming the products. Guess that's what you get from a vendor blog.


Hey, author of the post here.

We're actually not allowed to post head-to-head comparisons with competitors or share their names; that's why :) The post contains the dataset, the tool, and the methodology for how the data was collected, which hopefully gives confidence in the fairness of the benchmark.


At the GitHub Enterprise level, you can see that reflected if you look at any of the users' profiles: https://github.com/mghaught


This collector is one of my favorites to ask Copilot Agent to use for validation when the stack is missing tests. You give the agent a couple of well-written prompts describing what you expect to happen, and since the app has distributed tracing enabled, all logs flow to text and are consumable by the agent.


I am not shocked by this, as I once asked it to write a support case about AWS SCPs and it wrote it in the style of the scp-wiki. I got a good chuckle out of it and wondered if it made sense to add that as a joke to my default prompt.


Didn't we hear this tune from Sony in the PlayStation 3 days with developer mode, and then it slowly faded away after a couple of years of application/product releases...


I am surprised that using a messaging queue over MQTT is considered a misuse of their technology when in reality it appears the other application was just using an internal API that could change without notice. I can also see how certificate-based authentication could be viewed by some as a time-based expiration on the firmware.


Yeah, that's a huge bummer if so. I've got a Home Assistant automation that shows the printer status without needing an app installed, and a fully automated secondary filtration system that would be a PITA to manage manually.

I totally understand if it's something that could change or break in future updates, but the language about it being "exploited" is a bummer; you would think extending and documenting it would actually drive further adoption of the printers by building a more robust ecosystem around them.


I love dive; it's something I use in my toolkit multiple times a month.

I am curious if anyone knows how to get the contents of the file you have highlighted. A lot of the time I use dive to validate that a file exists in a layer, and then I want to peek at it. Currently I usually resort to running the container and using cat, or extracting the contents and then wandering into the folders.
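For what it's worth, the extract-and-wander route can be scripted without running the container at all: `docker save` writes the image as a tar whose layer blobs are themselves tars, so you can walk them with Python's tarfile module. A rough sketch (the function name is mine, and the layer-naming comments are assumptions about the archive format, not anything dive provides):

```python
import tarfile


def extract_from_image(image_tar_path, member_path):
    """Search each layer tar inside a `docker save` archive for
    member_path and return its bytes, or None if absent.

    Keeps the last match encountered, since later layers override
    earlier ones; strictly correct ordering would require reading
    the archive's manifest first.
    """
    found = None
    with tarfile.open(image_tar_path) as image:
        for entry in image.getmembers():
            if not entry.isfile():
                continue
            # Layer blobs are tars themselves; their names vary by
            # format (e.g. "<digest>/layer.tar" or "blobs/sha256/...").
            fileobj = image.extractfile(entry)
            try:
                with tarfile.open(fileobj=fileobj) as layer:
                    try:
                        data = layer.extractfile(member_path)
                        if data is not None:
                            found = data.read()
                    except KeyError:
                        pass  # file not in this layer
            except tarfile.ReadError:
                continue  # not a layer tar (manifest, config, etc.)
    return found
```

Note that paths inside layer tars usually have no leading slash (`etc/passwd`, not `/etc/passwd`), and this sketch ignores whiteout (`.wh.`) deletion markers.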


You can use rsync with some magic to get at the files, but it's not much removed from using cat.


With the mention of AWS RDS and Aurora, I am curious whether you had thought about creating a replication slot, adding a replica to the cluster, and then promoting the replica to its own cluster, then connecting the new cluster to the original via the replication slot, starting from the snapshot's position. This would save the long initial replication time and also keep the sequences consistent without manual intervention.


That's a very interesting approach; I'm not sure whether the sequences would remain consistent under that model. AWS RDS Aurora also requires you to drop replication slots when performing version upgrades, so we would unfortunately have lost the LSNs for the replication slots we use to synchronize with other services (e.g. the data warehouse).

I'd look into it more next time if it weren't for the fact that AWS now supports Blue/Green upgrades on Aurora for our version of Postgres. But, it's an interesting approach for sure.


Yeah, it's been nice to leverage this while working on some of our larger multi-TB non-partitioned clusters. We have seen snapshots restore in under 10 minutes across AWS accounts (same region), as long as you already have one snapshot shipped with the same KMS keys. We have been upgrading DBs to lift them out of RDS into Aurora Serverless.

If anyone here knows how to get LSNs after an upgrade/cluster replacement, I would love to hear about it, since it's always painful to get Debezium reconnected when a cluster dies.


I looked at getting LSNs after an upgrade/cluster replacement, and IIRC restoring from a snapshot emits LSN information into the logs, but it's a bit of a mixed bag whether or not you get the __right__ LSN out the other side. Because the LSN is essentially a measure of how many bytes have been written within a cluster, it's not something that meaningfully translates to other clusters, unfortunately.
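To illustrate that point: a pg_lsn value like `16/B374D848` is just a 64-bit byte position in that cluster's WAL stream, written as two hex halves, which is why the same number means nothing on a different cluster. A tiny sketch (helper names are mine) of the arithmetic Postgres's `pg_wal_lsn_diff` performs:

```python
def lsn_to_int(lsn: str) -> int:
    """Convert a pg_lsn text value like '16/B374D848' to its 64-bit
    WAL byte position: (high half << 32) | low half, both hex."""
    high, low = lsn.split("/")
    return (int(high, 16) << 32) | int(low, 16)


def lsn_diff(a: str, b: str) -> int:
    """Bytes of WAL between two LSNs from the SAME cluster,
    analogous to pg_wal_lsn_diff(a, b)."""
    return lsn_to_int(a) - lsn_to_int(b)
```

Comparing or diffing LSNs from two different clusters is meaningless, since each cluster's byte counter advances independently.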


Agreed, the snapshot does output a message in the logs, but based on our conversations with AWS it was suggested that we use an SQL command to determine the LSN. Sometimes, depending on the revision, you won't get the logs, and other times the log line is emitted twice because of internal RDS consistency checks. Makes me long for GTIDs from MySQL/MariaDB Galera [1]. They worked super well, and we never looked back at my last company.

[1] https://mariadb.com/kb/en/using-mariadb-gtids-with-mariadb-g...


Can you expand on the problems you had with reconnecting? I'd love to better understand this and see whether there's anything that can be improved in Debezium.

