Hacker News | patshead's comments

On the other side of that coin, I am excited to be up and running while everyone else is down!


This can be such a complex problem. Maxing out 8 cores is rarely twice as fast as maxing out 4 cores, but it will likely make the CPU draw around twice as many watts. That makes it seem like it might be a better use of power to keep cores idle during a big compile job, but it also takes watts to drive the display and keep the backlight on.

I can't imagine this would be easy to get right, and it is probably much harder to set a good default. Even so, the background compile job taking twice as long might not be a waste, since it means I get to play Oxygen Not Included twice as long!


Race-to-idle is sometimes a goal, yeah; if you have some performance budget (say, a 16ms time slice), and a fixed workload, does it make sense to run resources at lower power and barely meet the budget, or to run everything at max power so they can spend the remainder at idle? The answer is different for many scenarios.


Just another way too much heat can threaten your ONI run :)


The Crucial MX500 is a rather old piece of hardware now.

Tom's Hardware NVMe benchmarks include a "Sustained Write Performance and Cache Recovery" component. Whenever there's a good sale price on an NVMe, that is just about the only metric I hunt down now. Even the worst drives they test will handily beat a mechanical disk, but then again, the worst drives Tom's Hardware ever tests are still decent drives.

I grabbed one of the cheapest SATA SSDs last week to replace a failing lvmcache drive. It is a NETAC 1 TB that might still be on sale on eBay for $34. I expected the worst, and I did want to test its sustained write performance, but I wasn't nearly as scientific as you!

I just ran dd for a while and watched it stay between 420 and 470 megabytes per second for about 120 gigabytes straight before I stopped the test. The meanest I am to this cache is dropping 50 GB of video on two different days each month, so that was all the data I needed.
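
For anyone who wants to do the same lazy sanity check, it was nothing fancier than something along these lines (the mount point, size, and exact flags here are a sketch of the idea rather than my exact invocation):

    dd if=/dev/zero of=/mnt/lvmcache-test/bigfile bs=1M count=120000 oflag=direct status=progress

The oflag=direct keeps the page cache from flattering the numbers, and status=progress prints the running throughput so you can see the moment a drive's fast cache runs out and the write speed falls off a cliff. Swap in /dev/urandom if you are worried about the controller compressing zeroes.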

Had I known that I would be reading your blog four days later I would have let the dd finish so I could take better notes! Thank you for taking the time to do the science for us!


> The Crucial MX500 is a rather old piece of hardware now.

Depends on when you bought it. Crucial/Micron decided to stop introducing new branding when they updated their SATA SSDs, but the hardware inside has changed several times to incorporate new generations of NAND flash memory, and probably at least one update to the SSD controller by now. None of that matters to the top-line specifications they advertise, but such changes can be relevant for more stressful, more thorough or less realistic benchmarks.


The views are nice, but the time on page stats from social media tend to be horrible. One of Simon's screenshots shows over a million pageviews with 43 minutes of time on page. This isn't a long blog post, but it is fairly information dense. If that 43 minutes is a total across a million views, it is only enough time for a couple dozen people to have actually read the entire post.

Most of my blog's traffic comes in from Google, and most of my posts that see a reasonable amount of traffic average at least 1 or 2 minutes of time on page.

If I get a spike in traffic to a page from Twitter, the average time on page will drop to 1 or 2 seconds.

When someone comes in from search, they are seeking information about something specific. The folks dropping in from Twitter are infinitely more likely to immediately click on something else.

I don't know that I write much that would be of broad interest to the Hacker News audience, but I would be much happier to see traffic from here than from Twitter. I would bet Simon's 49.5k clicks from here got him way more engagement than 712k clicks from Twitter.


Plausible, like many other client-side analytics packages, by default only measures time on page for people who view more than one page. The time they report is the time between clicks.

So for big viral social media moments, the stat is basically worthless, since the vast majority of those visits bounce off the article page.


I'm very skeptical of that 43 minutes number. It doesn't make sense as a sum-of-all-time number, but it doesn't make sense as an average-time number either.

I'd like to hear from Plausible about what that's meant to mean, because as it is I've been ignoring it as possibly a data collection bug of some sort.


I never have much trust in the time on page or time on site numbers. In your screenshot, the number sure looks like it is meant to be a total.

I use Matomo for my analytics. I think their little Javascript doodad attempts to report back time information after 15 seconds. Anyone who clicks away before that timer goes off is recorded as a zero.

I get a sort of warm fuzzy feeling when I see high time on page numbers, but zeroes never bother me. There are a huge number of reasons why a time can't be captured, but when a time is captured, it is almost definitely someone who had eyeballs on your page.

You're doing a good job, Simon! I would be thrilled to see a million clicks from Twitter on one of my posts. I am just more excited about your 50k clicks from Hacker News!


> Time on Page

> The average time people spend on a particular page on your site. This is calculated as the difference between the point when a person lands on a particular page and when they move on to the next page.

https://plausible.io/docs/metrics-definitions#time-on-page


"I use Matomo for my analytics. I think their little Javascript doodad attempts to report back time information after 15 seconds. Anyone who clicks away before that timer goes off is recorded as a zero."

Why in the world do they measure time on page like that? All they need to do is compare timestamps between hits...


There isn't always a second hit.

