Beating Jeff's 3.14 GHz Raspberry Pi 5 (jonatron.github.io)
229 points by jonatron 28 days ago | 60 comments



Awesome work, and I'm glad you could post some results! I'm hoping to get time to delid one, put on a peltier cooler, and try to control the temperature a little better for a run to see how high it'll go before either burning up or going unstable.

From my testing on clocks on the Pi 5, it looks like the default clock of 2.4 GHz is pretty close to the sweet spot for this chip (BCM2712), and you burn a lot of power for small incremental gains after that[1]. (Which you seem to also show with the 3.3 GHz overclock!).

I also spoke to one of the Pi engineers about the chip behavior at higher clocks, and he suggested unlike some chips, this chip might run more stably at higher temperatures (like 50-60°C) rather than 'as cold as you can get it'. So that poses some challenges since most cooling solutions aren't tuned for 'keep a temperature' but instead 'get it as cold as possible', without a lot of manual tweaking.

[1] https://www.jeffgeerling.com/blog/2023/overclocking-and-unde...
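
For anyone who wants to try the same clocks, all of this is driven from /boot/firmware/config.txt. A minimal sketch with illustrative values only, not a tuning recommendation (over_voltage_delta here is the Pi 5 syntax as I understand it):

    # /boot/firmware/config.txt -- illustrative values, not a recommendation
    arm_freq=3000              # target ARM clock in MHz (stock is 2400 on the Pi 5)
    over_voltage_delta=50000   # extra core voltage in microvolts (Pi 5 syntax)
    force_turbo=1              # hold the max clock instead of scaling on demand

If it doesn't boot afterwards, config.txt lives on the FAT boot partition, so the settings can be edited back from another machine.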


Did the engineer explain why the higher temps contribute to stability? I've not heard of such a phenomenon before.


No physics-level explanation, just that they found the chips to be more stable in testing when they were a little warmer vs a little colder. The key was to keep them around that temperature, though, which still requires a good amount of cooling the more voltage that runs through it!

Just... he mentioned I might not have as much success using LN2 or something more exotic, compared to standard water or Peltier cooling.


That's pretty crazy, thanks for sharing, Jeff.

Overclocking was my bread and butter as a [relatively] broke teenager in the late nineties, around the era of the first Athlon Thunderbirds, when you could take a 1 GHz chip and [maybe] OC it to 1.5 GHz. It was a great time to be alive, and yet this is the first case I've heard of where LN2 would not give you a dramatically better result 99.99%+ of the time! I still miss HardOCP and Kyle Bennett and his team's reviews.

That one t-bird with char spots... It still worked reliably somehow, I might even have it in a box somewhere. Those swirly finned CPU coolers were shit! I came home every day after school and volted/burned the hell out of that poor chip, not realizing what I was doing... lol.


> I still miss HardOCP and Kyle Bennett and his team's reviews.

Kyle still posts on Hardforum, and much of the team went on to https://www.thefpsreview.com/, but it's not really the same, because there's no 50% overclocking by just moving a jumper. CPUs and GPUs get factory overclocking that's probably within 10% of what you can get with reasonable effort.


If it comes like that from the factory, is it overclocking? Isn't it just clocking?


We live in the age of dynamic clocks, where modern performance-oriented CPUs try to optimize both performance and power efficiency at all times. For the vast majority of users, it's both a lot faster and a lot more efficient to do things this way, while enthusiast parts still allow a lot of manual control to squeeze out the last few percent. I've had really good luck in my past few builds just getting very high-spec RAM, running XMP/DOCP profiles with a small FSB OC, and letting the multipliers do the rest.


potato, overpotato


I still remember my dual Celeron 450 clocked to 900 MHz. Those chips ran rock solid at double their rated clock, no diddling with the voltage or anything. They just needed decent cooling. Never mind that at the time having two processors was nearly useless.


I overclocked a 1433 MHz Athlon 1700+ (TBred/B IIRC) to 2200 MHz (3200+ levels) with an AN7-Ultra.

The secret sauce was running it at 200x11 with 1T-capable RAM, and that thing was considerably snappier than "bog standard" 3200+ systems.

Without much of a voltage bump, and with a good cooling solution, it ran within its thermal design quietly and with rock-solid stability. That system lived more than 15 years IIRC.


Heh, back in those days decent cooling was a lot easier than now (for overclocking, at least).

That's one nice thing about working on these little mobile chips—I don't need a $300 cooler; I can use a cheap little water block, or a small Peltier element that doesn't cost much at all... and it's not being a space heater for the room. It's only pulling maybe 10-20W max.


Where are these "cheap" water blocks you speak of? Haha, I've never encountered such a thing.

Except maybe the $100 Corsair liquid coolers, but c'mon, they aren't real water cooling


I still remember that time, when you actually waited for the low-end chip to be released so you could get it running as fast as the earlier flagships. It was a different time. I also roughly remember having a dual-socket board at the time with two overclocked Celerons, if my mind does not fool me. Funnily, my wife is still using that 25-year-old PC case, in which I glued bitumen for sound damping, for her current Ryzen-based PC.


Haha, yes, back in the 90s I had a dual-socket Gigabyte motherboard and two 300 MHz Celerons overclocked to 450. It helped a lot with 3D animation and video rendering, but not much else!


My favorite OCs were a $250 Black Friday special Celeron laptop that I pin-modded, using a few-mm-long piece of a single strand of stranded cat5 to go from (IIRC) 1.5 GHz to 2.0 GHz by just shorting two pins on the socket under the CPU, and a 4.6 GHz (3.3-3.7 GHz stock) Sandy Bridge i5-2500k that was rock-solid on air cooling with a comically large (especially at the time) tower cooler. That latter desktop ran the heck out of Team Fortress 2 (which I still play at least once a week with one of the last remaining community servers, End of the Line Gaming) for many, many years.


I had a 1.4 GHz Duron I ran stably at 2.45 GHz, amazing value.


The temperature inversion effect is more pronounced at 16nm (Pi 5) than on older nodes. This results in high-VT cells performing better when warmer, the opposite of what we are used to. At "normal" operating conditions (temp and frequency) this shouldn't be noticeable, but when you're at the absolute frequency limit it's not ideal for your critical path (where you normally use HVT cells) to get any slower. Perhaps it's related to this.


I have extensive experience in watching LTT's jank cooling videos, and I think water cooling with a relatively big reservoir would be able to keep the chip at a chosen temperature. I found someone who has done it: https://www.youtube.com/watch?v=iBNSbzTzfSE&t=20s using this kit: https://www.seeedstudio.com/High-Performance-Liquid-Cooler-f...


Wonder if it was more to do with deltas. Everything at 50°C might be more stable than hotspots?


This might be naive, but you may want to take a look at warm-water cooling from HPC/hyperscalers. Combined with a custom block, this should stay stable at the 50-60°C sweet spot.


> but instead 'get it as cold as possible'

More like 'get it as ambient as possible' so if you're in a 50 degree room you're all set.


You've mixed up Celsius and Fahrenheit.


I don't see where anybody in this thread or the linked article said Fahrenheit. Where do you see mixed units?


I find myself disappointed in the lack of helium cooled silicon chips in the modern year of 2024.


Is there a consensus on the best available cooler for the Pi 5? I looked at this exact unit but wanted more of a "case" design.

I first tried the Flirc passive case. It seems to transport and dissipate heat notably better than active coolers with copper heatsinks and 4000 RPM fans. That's especially impressive given that the entire top and bottom are plastic, leaving the horizontal edge as the only surface for heat dissipation.

My remaining concern there is that it only cools the Broadcom SoC, while creating a nice little insulated oven for the other chips. The inner surface area is much greater than the outer surface area, and there's no ventilation by design, so heat from the SoC gets distributed throughout the whole inner volume.

I also tried an active cooler to avoid that, which I'm sure is better for every other chip, but which I was surprised to find was substantially worse for the SoC itself. I guess the tiny copper block gets saturated very quickly, and its surface area isn't very large for air cooling.

Maybe that's why the monoblock passive coolers do so well; in theory they combine the best of these approaches. I just wish they'd apply the same idea to a refined "case" design like the Flirc.


If you don't need the Pi to stack, I've found the Argon One to be a good case for both overclocked 4s and 5s. The fan is somewhat weedy and the airflow is questionable, but the entire case acts as a heatsink. As far as thermal mass goes, I don't know of any that beat it.

And additionally, you get a Pi case that puts all the ports on one edge where they belong instead of forcing you to make cable squids on your workbench/desk.


The bigger the better, right? If so, this: https://thepihut.com/products/ice-tower-plus-for-raspberry-p...


I also use the flirc cases and have been similarly concerned by the other components getting hot, but I must admit that it doesn’t seem to cause any actual problems, at least not yet, and it certainly does a good job of keeping the CPU cool.


I have always found coolers with heat pipes provide the best cooling for mine.


Pis feel like the perfect candidate for submerged liquid cooling. Imagining a tiny little french fry basket filled with Pis being lowered into some mineral oil and bubblin'. Salt to taste.


raspberry fri


> I don't want people blaming me if their Pi decides to halt and catch fire.

For those who didn't catch the reference:

https://en.wikipedia.org/wiki/Halt_and_Catch_Fire_(computing...


> A single mov.cc instruction can be patched to remove the voltage limit. However, it's in bootmain, which is signed, so we can't just patch bootmain and flash the eeprom.
>
> However, as a root linux user on Raspberry Pi [you have] full access to system memory, including memory used by the videocore. I mmap'd /dev/vc-mem, searched for the instruction and replaced it, but I'll leave that as an exercise to the reader. I don't want people blaming me if their Pi decides to halt and catch fire.
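
For anyone curious what that looks like mechanically, here's a rough sketch of the mmap-and-patch approach, assuming /dev/vc-mem can be mapped from offset 0 as root. The pattern/patch bytes and the mapping size are placeholders, not the real mov.cc encoding or window size; that part really is left as an exercise.

    /* Hypothetical sketch only: PATTERN/PATCH bytes and MAP_LEN are
     * placeholders, not the real mov.cc instruction or window size. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define MAP_LEN (4 * 1024 * 1024)  /* assumed size of the mapped window */

    int main(void) {
        const uint8_t pattern[] = {0x00, 0x11, 0x22, 0x33};  /* placeholder bytes */
        const uint8_t patch[]   = {0x00, 0x11, 0x22, 0x00};  /* placeholder bytes */

        int fd = open("/dev/vc-mem", O_RDWR | O_SYNC);       /* needs root */
        if (fd < 0) { perror("open /dev/vc-mem"); return 1; }

        uint8_t *mem = mmap(NULL, MAP_LEN, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (mem == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

        /* Scan the mapped VideoCore memory for the instruction bytes and
         * overwrite them in place in the live firmware memory. */
        for (size_t i = 0; i + sizeof(pattern) <= MAP_LEN; i++) {
            if (memcmp(mem + i, pattern, sizeof(pattern)) == 0) {
                memcpy(mem + i, patch, sizeof(patch));
                printf("patched at offset 0x%zx\n", i);
                break;
            }
        }

        munmap(mem, MAP_LEN);
        close(fd);
        return 0;
    }

As written it won't match anything until real instruction bytes are substituted, which keeps it harmless to run.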

Does this need to be reapplied every time at boot? Guessing yes...


Yes. Maybe it's theoretically possible to add something to a modifiable part of the eeprom, but that's beyond what I can do in my spare time.


> There's a silicon lottery

Isn't there also an environmental factor that hasn't been fully explored? Are we sure there isn't an alternative cooling mechanism for the CPU beyond the two options the parent and Jeff used?


How much voltage noise can get from a PSU to the CPU these days? I wonder if a better one lets you get closer to the theoretical limit.


Don't know anything about ARM or Pi cooling, but on my Ryzen 9, a big-ass air cooler beats liquid cooling by miles.


Liquid cooling itself doesn't really cool anything; it's all about being able to move heat around, so you can have an absolutely insane amount of radiator because it doesn't have to fit in the space around the CPU. If your radiator isn't enormous (i.e. at least 360mm, if not larger), is too thin, or isn't itself getting proper airflow to actually dissipate the energy, then it's probably not doing much better than a large air cooler could. The other thing liquid loops add is a larger thermal mass (i.e. they take a lot longer to heat up and can absorb bursts better), but that doesn't really matter two hours in.

Also a 7950X owner :). It's been a good CPU, although the memory controller is a bit weak compared to the Intel counterparts. It will be interesting to see what the two release this year.


Semi-related, but TIL the Raspberry Pi Foundation enables large corporate customers to secure-boot lock the Pis they're embedding in their juice dispensers and whatnot.

Nothing like being a supposed open source darling and helping corporations deny people the right to use hardware they purchase, the way they want to - and helping contribute to e-waste, because there will be millions of Pis that nobody can use for anything other than the IoT banana dispenser they were integrated into...


There are legitimate reasons for such a thing though, I think. Like I don't want my juice dispenser to do general purpose computation. I don't want it to be capable of web surfing.

Ideally you would do that by building it out of simple mechanical components, but if taking smart components and dumbing them down is cheaper then that sounds fine too.


I'm not sure Raspberry Pi are to blame. Broadcom insist on closed source binary blobs, and their chip has e-fuses built in.


I think it is reasonable to criticise them for being quite so tightly tied to Broadcom, though. That's an artefact of the local ecosystem they're in, and it's arguable that the pi wouldn't exist at the price point it does without such a close tie, but live by the sword, die by the sword...


> Nothing like being a supposed open source darling

They've never claimed to be an open-source company. It's unfair to judge a company (or charity) against your own ideas of what they stand for. Some of their software is open source and some is not. Some of their hardware is open (2040), most is not.


Can you say more about where you found this information? I didn’t see it in the OP.

I think rpi has made some compromises to advance business viability, which feeds their more altruistic endeavours.



Assuming a fast, reliable internet connection, how well does an overclocked Raspberry Pi 5 perform when video conferencing using popular conferencing applications such as Zoom, Google Meet, MS Teams, and the like?


Probably slightly worse than a 10-year-old Core i7 computer.


True. To some extent this is also an artefact of (comparatively) poor software optimisation for the Pi. In this case Zoom and the other apps will almost certainly be using CPU encoding/decoding rather than offloading.


Exacerbated by the Raspberry Pi 5 having lost all hardware video encoding and H.264 video decoding. (That logic I don't follow at all.)


The cartel charges pretty high fees for those.


Zoom in particular will be a browser-only experience as well, since there is no ARM Linux version of the app.


For that kind of thing, you're 100x better off getting an old enterprise uSFF workstation on the cheap. Whenever I can, I reach for a used Dell 5070 or similar if I don't need GPIO. They eBay for about £40-50, have real NVMe and x86, and even the Pentium ones are pretty powerful.


Teams works fine, so I suspect the others do as well.


I'd love to see a 16GB variant of the RPi5 some day.


If you don't need the Pi software ecosystem, the Orange Pi 5 Plus competitor comes in a 32 GB variant.


Just quickly looking at Orange Pi 5 images without properly researching: the official site links to images on Google Drive, and Armbian has an image with a 5.10 kernel that requires PPAs for 3D acceleration.

There's got to be an SBC other than Raspberry Pi that has reasonable software support. Does anyone know? I'm not buying another SBC that advertises hardware video decoding but doesn't actually have the software for it, or requires one specific modified kernel version.


RPi OS is also using a modified kernel and packages that include the v4l2 codec and FFmpeg (rpi-ffmpeg, which I never tested). https://github.com/raspberrypi/linux/commit/46f21cab3e888823... https://github.com/jc-kynesim/rpi-ffmpeg

These staging drivers do not exist in the Linux mainline. It means that you will not get hardware acceleration support when compiling and installing the kernel from `torvalds/linux` instead of `raspberrypi/linux`.

As for the RK3588 SBCs, you are free to choose the Linux 5.10 LTS (legacy) or 6.1 LTS kernel, both of which are officially supported by Rockchip, or alternatively use the bleeding-edge kernel 6.9. Official 3D acceleration will be available in Mesa 24.1 and Linux 6.10, and the developers have also backported it to 6.1 LTS for ease of use.

In addition to Armbian, you can also use `ubuntu-rockchip`, which has full hardware-accelerated desktop/server Ubuntu 22.04/24.04 LTS support. https://github.com/Joshua-Riek/ubuntu-rockchip

The VPU used for video decoding has nothing to do with the 3D/GPU side. With `ffmpeg-rockchip` and `libv4l-rkmpp` you get 4k@60 hw decoding support in Chromium and the MPV player, and 8k@60 hw decoding support in Kodi. https://github.com/nyanmisaka/ffmpeg-rockchip/wiki/Rendering

Jellyfin also provides complete transcoding pipeline support on the RK3588 based SBCs. https://jellyfin.org/docs/general/administration/hardware-ac...


Thanks, this is really useful information.


Late response sorry.

The software situation is as you describe. Utter shitshow. Google Drive is hilariously rate-capped, so half the time you can't even grab the image.

The hardware is objectively better though (a notable exception being NVMe throughput... the Pi 5 NVMe HAT will do faster than the Orange Pi's native slot).

Unsure what the lay of the land is on hw decode, sorry


Another thing that won't be matched by Tau Day.



