Oh my! This is such a crazy upgrade. I've been using the RPI2 as my HTPC/NAS at my folks' place, and I'm so happy with it. I was itching to get the latest one for myself.
USB 3.0! Gigabit Ethernet! WiFi 802.11ac, BT 5.0, 4GB RAM! 4K! $55 at most?!
What the!? How the??! I know I'm not maintaining decorum at Hacker News, but I am SO mighty, MIGHTY excited!
I'm setting up a VPN to hook this (when I get it) to my VPS and then do a LOT of fun stuff back and forth, remotely, and with the other RPI at my folks.
Using the Pi as a file server can be a bit flaky. The ethernet controller was a USB one, and was neither particularly stable nor good under load. The new PHY on a dedicated link is probably the single biggest improvement in this new revision.
The HEVC is a bit unexpected considering the high license fees and general uncertainties. Let's hope the documentation can be released as well.
So unsure at this stage, but that may well be the case for any license fees regarding HEVC. Heck, if they have absorbed those costs into the base price - I'd be utterly amazed.
I just set up my RockPro64 with 4GB, which also has PCIe x4. I have an SSD and am going to set up a RAID at some point when I get it all figured out. The board was a bit more expensive than the Pi 4, but I am interested in playing around with it.
> The ethernet controller was an USB one, and was neither really stable or took load very well
Hmm... that explains a problem I had with a pi of mine. Every time the 10/100 switch it was connected to rebooted, the pi would lose its ethernet link until rebooted. Never had this with other machines on the same switch.
There is an official PoE board that fits the Pi 3B+ which uses a 4-pin header by the ethernet jack to grab power. The Pi 4 appears to have the same header.
I have one of these but am new to RPi. It covers all of the GPIO pins in addition to the four header pins for power. Is there any way to still use the GPIO pins with the PoE hat?
I'd guess primarily the lower volume of the PoE HAT would play a role here. But, it's also a fully isolated power supply with a fan. To me, it looks like this couldn't be made on a fully automated line, so there would need to be manual assembly, which is also more expensive.
I agree that it would be nice if PoE was built in, but there are better ways to say that than what you said.
Also, the HAT has a transformer (and I assume if they included it, they know what they are doing), which seems like too much to include on every single Pi.
Did they finally release the 100W spec? IEEE really let that spec languish for too long -- vendors got impatient and implemented their own. I had to write 60W PoE code supporting three different specs around 2015. Bosch's design was the worst.
My point was that PoE, especially with the recent revision, allows for very useful levels of power to be drawn, so if you're going to have an 'embedded' system with an Ethernet port, I would think it'd be useful to use it.
Why plug in two cables (USB-C (power) and Ethernet (comms)) when you can just plug in one?
The big shortcoming of the other Raspberry Pis I have is power. Plug anything into it and you risk undervoltage problems (even with the official supply).
I have one system where I relented and did that, but really, for many generations of pi... why is this even a thing?
For example, recently I plugged in a USB stick and a webcam and couldn't boot. So I would boot, then plug in the webcam ... carefully. And there were still 2 USB ports free.
I agree with your excitement! I've been waiting for this upgrade forever. I keep looking at the other options out there, but while their hardware is great, their software is terrible.
IMO, at this point you're not paying so much for the pi, but rather for the community and accessories. It's the entire environment that makes the pi useful; not just the hardware.
(I speak as someone who sucks at programming and doesn't spend a 1/100th of the time learning the latest as I used to as a teen. So having the community around to help with my latest project that I need something more than a microcontroller for is immensely worthwhile. Obviously those who don't need these types of spillovers would probably be better served with other hardware.)
I mentioned it above, but just to reiterate: there are lots of great hardware boards out there and many beat the Pi (though the 4 finally makes up the difference). However, they ALL have terrible software support: old OSes, bad drivers, out of date or incomplete documentation.
To add to this, the bad software support has very real performance implications as well. I was surprised to see benchmarks where the Pi 3 dominated boards that very clearly beat it on paper.
That said, does the pi have accelerated graphics yet? I was playing with pygame (based on SDL) and I think all the blits and other operations were in software.
Not really, what matters to me is the price I can get them at, including shipping, including taxes, including shenanigan fees, including sales, including discount codes, including 5% cashback on Amazon, including 1% cashback on other sites.
If you have a Microcenter near you, they often have 'in store only' sales. Right now a Zero W (limit 1) is $5 and a 3B+ is $25. Right now they list the 4GB 4B due in on the 28th for $55. (All $ in US.)
Sure, wasn't meant to be. Just saying that if you want to use the new capabilities, don't already have the accessories and buy the official ones, it adds up. The older ones were cheaper, had fewer requirements and used more common accessories.
It is in that the Raspberry Pi's price doesn't have much slack. So it will never really go on sale, whereas most other hardware, with a few exceptions, has a lot of slack. So it seems fair to compare the projected prices of things.
So, one party by default overcharging for their product and the other supplying it at cost makes you feel that when the one party puts their product on sale (a temporary condition at best), the product suddenly becomes equivalent?
Then you should also compare Chromebooks with second hand large format laptops, old servers on sale with new ones, and so on.
Chromebooks 'on sale' are an entirely different class of product than Raspberry Pis: they are larger, need wall power, have built in batteries and screens, are in general still more expensive, and do not have GPIO.
I'm not singling out Chromebooks. I'm just saying the $100 price point is crowded with a lot of old and new hardware. So a RasPi may not be the best thing for you at that price point.
Sure, if you're doing CPU-only tasks, Jetson Nano is A57 and Pi4 is A72 which is maybe twice as fast, but if you can offload anything to the GPU at all (media encoding, neural networks, matrix operations) you'll probably get some enormous gains in efficiency with the Nano.
OTOH, you're going to be pretty much stuck with whatever handouts Nvidia gives you unless you're willing to hack the firmware/etc and reverse engineer stuff.
Pretty much everything works on the existing RPi. Heck, you can run Windows IoT on it. It has a UEFI firmware (edk2) port which is mostly complete, and pretty much every distro and 3rd party OS supports it. And the hardware itself is mostly open at this point.
This. Jetson is part of NVIDIA's pipeline to drive customers to their chips. Raspberry Pi is vaguely that for Broadcom but is much more an end in and of itself. RPi is massively supported on the web.
If you're doing stuff that needs the Jetson, it's great, but the Pi is definitely better supported.
> Jetson Nano is A57 and Pi4 is A72 which is maybe twice as fast
A57 and A72 are very close in performance, as the A72 is an evolution of the A57. ARM's numbers have the A72 around 20-30% faster than the A57. The main advantage of the A72 was that it fixed the horrific power problems the A57 had, so in phone usage you could actually use the CPU for longer than 10 seconds. But in raw performance it wasn't that much of a jump. A53 to A72 is around 2-3x as fast, though. A53 was the power efficient one; A57 was the performance one.
Both the Jetson Nano & Pi4 are clocked at around 1.5GHz, so the Pi4 should still eke out a bit of a CPU performance gap over the Nano. But if you're purely comparing factory vs. factory then the Nano would probably win over longer stretches, as the Pi4's lack of heatsink results in it throttling down to around 1GHz after a few minutes. The Nano looks like it has a beefy enough heatsink to keep the A57s churning for a lot longer at their 1.43GHz spec.
The Nano's GPU is half of what is available in the Nintendo Switch. Based on some benchmarks I've seen, the one in the Pi 4 should be between 2 and 3 times slower.
It might work "well" as long as you don't need a lot of capacity and don't mind that everything runs at < 30MB/sec.
The existing rpi lines are seriously IO bandwidth constrained in every manner. The USB3 on the 4 hopefully fixes some of this. Bottom line, unless you don't mind your photos taking minutes to copy, you're going to be much better off with pretty much anything else besides a rpi3 as a file server.
Even so, the A72 is a highly-inefficient chip (which is probably why they set such a low clock speed for it). Cortex-A73 would've been much better. But I guess there always has to be at least one obvious shortcoming in each Raspberry Pi generation.
I'm intrigued as to why they didn't use a big.LITTLE design (or whatever they're calling it these days). Perhaps mainstream-linux still isn't so great at handling it.
Really the only benefit of that is energy efficiency if it is used correctly, in exchange for additional cost or lower max performance. Makes sense for a smartphone, less so for a mains powered device (and the Raspberry Pi SoCs originally were designed for set-top boxes, although I don't know if that's still true for the new ones or if they're custom)
The addition of gigabit ethernet and USB 3.0 means that a Pi no longer feels like a bottleneck in one’s home network. I know that the Pi was invented as an educational product, but thanks to the Linux distribution OSMC it is commonly used as a media center for playing films, music, TV, etc.
I have had gigabit internet for a few years now, and every day on average, I torrent a Blu-Ray image onto my main computer. However, subsequently moving the Blu-Ray to my Raspberry Pi 3 media center is always slow on two counts: 1) ethernet from the router to the Pi was limited to 10/100 speeds, and 2) the Pi could push large files to an attached hard drive only over a USB 2.0 port. Consequently, on a Raspberry Pi 1–3 it takes an hour just to move a high-definition file around one’s home network! On a Pi 4, it looks like one can just put the torrent client directly on the media center.
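For a rough sense of the numbers, here's the back-of-the-envelope arithmetic (the 30 GB file size and the link-efficiency factors are just assumptions for illustration, not measurements):

    # Rough transfer-time arithmetic for moving a Blu-ray rip around a home network.
    file_gb = 30
    file_mbit = file_gb * 8 * 1000  # gigabytes -> megabits (decimal units)

    links = [
        ("Pi 1-3, 10/100 ethernet", 100, 0.7),   # USB-attached NIC, ~70% usable (assumed)
        ("Pi 4, gigabit ethernet", 1000, 0.9),   # dedicated PHY, ~90% usable (assumed)
    ]
    for name, mbps, efficiency in links:
        minutes = file_mbit / (mbps * efficiency) / 60
        print(f"{name}: ~{minutes:.0f} minutes")

That works out to roughly an hour over 10/100 versus a few minutes over gigabit, which lines up with the hour I mentioned above.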
To properly balance out this review: what exactly were the issues you had that needed to be answered by a community for the RockPro64? The RPi seems to have a slew of things that just work on it out of the box, and if there are issues, 99.9% of the time you can find the solution with a quick Google. Very curious to understand what type of issues something like the RockPro64 (which has appeared numerous times in this thread as a 'great alternative') faces, for anyone interested in the alternatives.
I'm looking at buying the raspberry pi 4 right now, as I've been thinking of getting something to replace my odroid-c1 for a while now.
My understanding is that the RockPro64 has a PCIe slot on it, which allows SATA boards so you can connect drives directly without USB. Personally, I'm more interested in the ODROID-H2, with its native SATA. Unfortunately, it costs far more ($111 without RAM) and has no wireless connectivity built in.
Are you running it as both NAS storage and a Plex server? How's transcode performance?
I put together a NAS/Plex box with a Kaby Lake Celeron and some hard drives, but I was thinking of splitting it out into separate NFS and Plex servers.
Keep in mind that you are in a very small world there... most people alive today will never have internet that fast. Personally I've never had a connection above 6 Mbit/s in the middle of a city, and I know that's likely above the median globally (keep in mind the average is a poor metric due to connections like yours; DSL is still the primary type of endpoint for homes).
My point being, the previous generation's USB-based ethernet still has massive headroom for the vast majority of people's internet connections.
The ethernet port on the 3B+ is still attached to the USB hub on the board. In my tests it never gets more than 300 Mbps. The Pi 4 should have true GigE support because it doesn't have that limitation.
Just read The Verge review on how the Pi struggles to play a video full screen, even if the resolution is 480p. How are people then using it as a media center?
What review did you read? Playing video is likely the only thing the PI 3 does really well; even most H265 FHD videos play without stuttering on the 3+, although we're close to the board limits.
Are you sure the writer didn't use the board the wrong way? Some people still believe that videos should be streamed after being transcoded because that is the only way to watch them on their ridiculously limited smart TV which lacks the necessary codecs to watch them the right way. Of course doing this over WiFi would make the problem even worse.
To optimize network usage, videos should be kept encoded until they reach the player so that the network won't be clogged. If you use the PI to read the movie as a file over a shared SMB or NFS directory, the network usage is so low that you could watch like 20 different movies on 20 different players on the same home network at the same time. Probably even more.
The PI 3 (and to some extent probably the PI4 too) is still behind many other boards in other contexts (openness, performance, price) but playing video is surely not one of them.
Apparently Verge said it "reportedly" struggles with 480p "Youtube videos". Which is doubly wrong, as who knows whether whatever mechanism was being used to play Youtube even supports the hardware decoding capabilities of the Pi.
I'm not sure that transcoding has anything to do with it, unless the transcoding was happening on the Pi itself (which would indeed be dumb). Most of the time people are transcoding things it's x264 -> x264 with Plex, just with a much lower bitrate (and probably 720p) because (as you say) their player's platform can't handle it and no one cares about video quality these days.
It wasn't The Verge that originally said the Pi4 struggled to play YouTube videos [1], it was Tom's Hardware [2].
Tom’s Hardware’s review notes that the hardware is able to handle many everyday tasks such as web browsing with up to 15 Chromium tabs, light image editing using GIMP, and document and spreadsheet work using LibreOffice. Unsurprisingly, the sub-$100 miniature PC has its limits. It reportedly struggles with full screen video playback from YouTube for example, even if you turn down the resolution to 480p.
Tom's Hardware were using a pre-release OS so it's possible the issues with video playback were caused by this?
It’s important to note that, at launch time, some important Raspberry Pi software doesn’t yet work on the Pi 4. To run Pi 4, you’ll need to download a brand new build of the Raspbian OS, Raspbian Buster. And not everything runs in Buster yet. During testing, we found numerous Python libraries or other required packages that weren’t compatible with the new OS.
My biggest problems involved video playback. If I wanted to watch a YouTube video, I had to keep it in a window, because even in 480p resolution, it was jerky at full screen. The other task I’d like to perform is playing retro games, but as of this writing, the Retropie package of emulators doesn’t work with Pi 4.
During extensive hands-on testing, I found that, while the 4K at 30 Hz is tolerable, little things like the movement of the mouse pointer are a bit sluggish. If you have a 4K screen, you’re definitely better off going for the 60 Hz mode, but note that the added voltage may also cause your CPU to get hot and throttle more easily.
While surfing the web, looking at still images and just enjoying all the extra screen real estate of 4K is great, video playback is the Raspberry Pi 4’s Achilles’ heel, at least as of this writing. Whether we were attempting to stream a 4K video or use a downloaded file, we never got a smooth, workable 4K experience, either in Raspbian Buster or LibreElec, an OS that runs the Kodi media player. Several H.264 encoded videos, including Tears of Steel, did not play at all or showed as a jumble of colours. Even the sample jellyfish videos that the folks at Kodi recommended for my testing appeared as still pictures with no movement. Clearly, there’s a lot of optimization that still needs to be done both on the OS and software side to make the Raspberry Pi 4 capable of playing 4K video.
Unfortunately, even streaming 1080p YouTube videos is a challenge at this point. Running at 1080p resolution, full screen video trailer for Stranger Things showed obvious jerkiness. However, the playback was smooth when I watched the same clip in a smaller window. The same problem occurred, even when I dropped the stream’s resolution down to 480p.
Playing offline 1080p videos works well, provided your screen is at 1920 x 1080 or lower resolution. A downloaded trailer of Avengers: Endgame was perfectly smooth when I watched it using the VLC player.
My slightly older Intel NUC struggled hard with YouTube too, until I installed the extension that forces YouTube to serve me h264 content rather than vp9. After that it has been butter smooth.
tl;dr they "built a gaming pc" with a "wireless anti static bracelet", RAM installed in single channel, backwards PSU, terrible parts choices... don't trust the verge.
I've never had these problems with the various media center apps. My Pi 3B+ has been running Kodi for a few months now, and it happily plays 1080p videos at what appears to be smooth 60 FPS.
Mind, this is specifically running under Kodi, which is optimized as a media center and is NOT also running a full desktop environment. Attempting to do the same in, say, the stripped down Chromium browser was an exercise in frustration. That's always been a major limitation of the 3D acceleration in the Pi, as the libraries took a long while to mature and were always a bit hacky.
The announcement for proper OpenGL support and compositing in the desktop environment is huge for me. It seems like it'll finally push the Pi up into "workshop computer" territory that, while underpowered, should be just capable enough to run some of my always on operations and act as a lightweight CAD station for small parts. I ordered one as soon as I woke up and saw the announcement, and we'll see how well that works in practice.
Likewise. While a lightweight desktop on even the 3B+ can be excruciatingly slow, I was using the 2 as a streaming media center for 1080p video without issues. The main thing to look out for is compatible codecs.
Can you link it? Was the video streaming or playing from disk? Perhaps it was an unsupported video codec? There are certain circumstances where the Pi may struggle to play video, but I think for common formats the graphics driver has hardware-level support for decoding. If the video format was unusual and it had to decode with the CPU, it could see problems. Another issue is the power supply. If you do not use a quality 2A power supply, a little red light by the power jack will indicate reduced power and it will throttle the CPU. This applies to the older Pis; I've not seen the Pi 4 yet.
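If you want to check whether a Pi is hitting that state, the firmware exposes a throttling bitmask you can poll; here's a minimal sketch (the bit meanings are the commonly documented ones, so double check them against your firmware docs):

    # Minimal sketch: query the Pi firmware's throttling flags via vcgencmd.
    import subprocess

    out = subprocess.check_output(["vcgencmd", "get_throttled"], text=True)
    flags = int(out.strip().split("=")[1], 16)  # e.g. "throttled=0x50005"

    # Bit meanings as commonly documented; verify for your firmware version.
    checks = {
        0: "under-voltage detected right now",
        2: "currently throttled",
        16: "under-voltage has occurred since boot",
        18: "throttling has occurred since boot",
    }
    for bit, meaning in checks.items():
        if flags & (1 << bit):
            print(meaning)

A non-zero value is usually the first clue that the power supply (or cable) is the real culprit rather than the video player.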
I imagine the wait must be agonizing! Why not just have a NFS server on your more powerful computer for the Pi to stream the file from? 100 Mbps is easily enough to stream any Bluray.
> Why not just have a NFS server on your more powerful computer for the Pi to stream the file from?
The “more powerful computer” is a laptop, and it is only being used for torrenting the films because it has gigabit ethernet. I don’t want to have to leave it on all the time, and sometimes it is still packed in its case when I want to sit down and watch a film. Storing the films on a hard drive attached to the Pi is a lot more convenient.
I haven't actually seen a single 1080p raw bluray file with a higher video bitrate than 35 Mbps. Maybe with TrueHD/Atmos audio the total bitrate could push past 40, but 100 Mbps is way more than you need for 1080p.
Even 4k doesn't seem to push past 50 Mbps, but I think that's because h.265/HEVC is more efficient for the same observed quality/CRF.
You are right. I don't know what your parent comment is talking about.
Blu-rays use H.264 High Profile Level 4.1. That has a maximum bitrate (for a single buffer) of 50 Mbps. The average bitrate for an entire disc is usually closer to 25-30. Even the most extreme cases, like the mastered-in-4K Lawrence of Arabia, have peaks at about 48 Mbps and average bitrates of about 42 Mbps.
If you're having issues streaming Blurays on a 100 Mbps network, the speed of the network is not your problem!
> If you're having issues streaming Blurays on a 100 Mbps network, the speed of the network is not your problem!
Well, it COULD be the problem if you're not actually getting the 100 Mbps. I've diagnosed WiFi that should have a lot more headroom than 100 Mbps but still stutters on mid-bitrate 1080p. Signal degradation, for example.
NFS cannot use the entire raw bandwidth for application data, and it is very likely that mplayer bunches up its filesystem calls instead of making a perfectly spaced stream of them.
Also, Ethernet simply stops working way before 100% of the bandwidth is used.
I don't know why you expect things to run smoothly at 50% utilization.
NFS is not the most efficient protocol, however NFSv3 over UDP should be able to hit 75% line rate on that 100M link easily. Something else is the bottleneck here. I have done a great deal of testing on this, on 100M all the way to 100G links, in my career; properly tuned, you should be able to hit 80% or so of the link speed in real data throughput.
Can't wait to buy this, boot it up, play with it for four hours, then stick it in the same desk drawer with the other Pis I have bought over the years.
(The upgrades look great, just my attention span is not so great)
* A Zero W hooked up to a PM2.5 sensor to do air quality monitoring in the house. Just bought a couple more sensors for it (VOC, eCO2, etc), but haven't hooked them up yet.
* A 3B+ running the UniFi controller for my home network.
* One is running a custom Hue automation I built to shift the color temperature of the lights throughout the day.
* One is built into an internet connected dog treat dispenser I built as a gift.
* A rather dusty Pi is running CNCjs so I can have a decent interface to my cheap grbl CNC.
* And finally I have a Pi running OctoPrint for my 3D printer.
And that's just the ones currently running. I've got two more in progress. One to automate an exhaust fan based on inside and outside temperatures. Another is destined for the garage where it will replace the not-so-great MyQ "smart" functionality of the garage door opener.
To each their own I suppose, but I've been consuming RasPis like candy. $60 all-in gets you a fairly beefy platform with almost all the I/O you could require and a vast ecosystem of software and HATs. Honestly their only downside is that at some point I'll have to reconfigure my home network when I start exhausting my current internal /24 with 200 RasPis.
> A Zero W hooked up to a PM2.5 sensor to do air quality monitoring in the house. Just bought a couple more sensors for it (VOC, eCO2, etc), but haven't hooked them up yet.
Do you have any resources on how to set up something like that?
Adafruit sells a lot of air quality sensors that can work with RasPi, Arduino, etc. They also have lots of guides. So I'd just go on there and take a look. My setup isn't really unique. I used this link: https://www.balena.io/blog/build-an-environment-and-air-qual... as my guide for setting up something with logging and graphing. I didn't do a full Docker-fied Balena cloud deal; I just replicated the stack manually and used their guide as inspiration. It's overkill, but was quick and works fine.
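If you go the UART route rather than a breakout library, reading the raw frames is only a handful of lines too. A rough sketch, assuming a PMS5003-style sensor on the Pi's serial port (the frame layout is taken from the PMS5003 datasheet, so double check the offsets and the serial device for your particular sensor and wiring):

    # Rough sketch: read PM1.0/PM2.5/PM10 from a PMS5003-style sensor over UART.
    import struct
    import serial  # pyserial

    port = serial.Serial("/dev/serial0", baudrate=9600, timeout=2)

    while True:
        # Each 32-byte frame starts with 0x42 0x4D, then a 2-byte length.
        if port.read(1) != b"\x42" or port.read(1) != b"\x4D":
            continue
        frame = port.read(30)
        if len(frame) != 30:
            continue
        fields = struct.unpack(">15H", frame)  # big-endian 16-bit fields
        pm1, pm25, pm10 = fields[1], fields[2], fields[3]  # standard-particle values
        print(f"PM1.0={pm1}  PM2.5={pm25}  PM10={pm10} (ug/m3)")

From there it's just a matter of pushing the readings into whatever logging/graphing stack you set up.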
I don't do the fancy minute by minute adjustments to the color temperature; just a couple fixed settings for time of day and based on when the sun sets. And I just have it adjust a scene, which I have my Hue switches configured to use when I turn the lights on.
There's no good way to have this system work with, for example, turning on the lights through Alexa/Siri/etc, since they won't use the Circadian scene that's been set up. But what I've got works well enough for now.
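For reference, the whole thing boils down to a few lines. Here's a simplified sketch using the phue library that adjusts lights directly rather than a scene (the bridge IP, the hour cut-offs and the mired values are placeholders, not my actual settings):

    # Simplified sketch of a time-of-day colour temperature adjuster for Hue bulbs.
    from datetime import datetime
    from phue import Bridge  # pip install phue

    bridge = Bridge("192.168.1.50")  # placeholder IP; press the link button on first run
    bridge.connect()

    hour = datetime.now().hour
    if 6 <= hour < 17:
        ct = 250  # cooler, daylight-ish white (mireds)
    elif 17 <= hour < 21:
        ct = 370  # warmer evening white
    else:
        ct = 450  # very warm for late night

    for light in bridge.lights:
        if light.on:  # only touch lights that are already on
            light.colortemp = ct

Run something like that from cron every so often and it quietly keeps the bulbs in line with the time of day.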
Not the OP but I use Kelvin for this, works great! It basically treats my Hue White Ambience bulbs like Flux, where they automatically dim and warm in the evenings. Those changes happen gradually over the course of minutes/hours so it's not jarring.
> * A Zero W hooked up to a PM2.5 sensor to do air quality monitoring in the house. Just bought a couple more sensors for it (VOC, eCO2, etc), but haven't hooked them up yet.
We have neighbors that smoke, and sometimes based upon wind patterns it blows into our yard. Any idea if they have sensors that can pick up this sort of thing so I can close our windows?
You'll probably need to research what the particle size(s) is(are) for cigarette smoke, but this detects a bunch of different sizes:
https://www.sparkfun.com/products/15103
I have some code to read from this, it's not too hard. PM me if you're interested in my code. (I can open source it.)
I haven't used it yet, only have the PM2.5 hooked up right now. But for what it's worth I grabbed the Adafruit SGP30 breakout. I'm not going to be "happy" with it, since it's not a real CO2 sensor, but I don't have $1000 for a real sensor. Hopefully it'll be interesting regardless. The PM2.5 sensor for all its faults has at least given me a better idea of what activities around the house cause increases in particulate matter. It's not useful information, but it's fun and interesting.
I'm using a co2meter.com one [1] connected with USB. Reading out the values is a bit slow, but I just do that in a cron job every 5 minutes and log the results. This does exactly what I needed it to do with a minimum of fuss, so I'm happy with it.
[1] "CO2Meter RAD-0301 Mini CO2 Monitor" on Amazon, $70
I actually set up a pihole a week ago. Still trying to figure out how to optimize my use of it, as it doesn't block everything (mainly Chinese stuff).
I also considered a magic mirror, but my problem with a magic mirror is that I don't have the tools necessary to build my own frame.
That is awesome; it took me a while to start using mine. I just put together a Pi-hole and a Plex server on a RockPro64, and I just bought a Tinker Board from Fry's for $50. Runs a bit hot though.
What software are you running? I have been using DietPi for almost all my projects. Plex comes native (as an option to install), along with a bunch of other software.
I actually came here to ask which would be a better platform for building a CNC, Raspberry Pi or Arduino. Is your cheap grbl CNC using an arduino to control the steppers?
Yeah it's the cheap 3018 Pro from Sainsmart. An Arduino runs GRBL to control the steppers. I tacked on a RasPi 3B+ running CNCjs so the CNC could be operated headlessly.
Normally you need a laptop or something hooked into it over USB to feed it your G code, or do manual control. But I didn't want my laptop in the shop getting dusty while the CNC runs, and I also didn't want to risk the cheap CNC failing and throwing a voltage spike into my laptop.
So I opted to use the RasPi. Also makes it nice to have a web interface, and you can add lots of stuff to it (e.g. camera).
(Sainsmart sells an offline controller attachment, which allows manual control with buttons and feeding gcode off an sd card. But the RasPi is only a little more money and you get the web interface, etc)
P.S. I should tack on the usual warning about hooking up a life threatening device like a CNC to the network. I use an SSH tunnel to keep mine secure, instead of exposing CNCjs directly.
Oh for sure. Normally I'd throw the Hue automation software on my home server, but it's busy with other stuff.
Other than that, for the other stuff I listed, I would have said the same a few years ago and opted for ESPs, Arduinos, STM32s, or some other "lightweight" solutions. But the Pi has gotten to this nice sweet spot now that they've got WiFi built-in and most of the rough edges are gone. They can do almost everything the other solutions can do, and for the things they can't you can buy HATs that fill in the gaps. So it's just nice to have one big, universal hammer that I can pull out for my crazy projects.
Most importantly, I don't like wasting time any more, and I'm willing to spend a couple tens of bucks on RasPis that might be overkill, rather than save X dollars making the best, slimmest, most engineered solution around the perfect STM32 chip.
Just the software that runs Ubiquiti equipment. You can run it on anything, the "cloud key" device Ubiquiti has is probably the most popular, I just run it on a Windows server in my network. Some people run it on a rpi, though I've never understood why.
It is, but in my experience the reliability factor is pretty ugly for the rpis. I'd rather either pay directly for the device designed for it, or just run the software on a machine already on the network. Obviously that doesn't work for everyone.
RasPi reliability is tricky, for sure. The recipe that's working for me now is brand name power supplies (Raspberry Pi Foundation, CanaKit, or Adafruit) and brand name SD cards. Don't trust SD cards that come in RasPi bundles or kits, even from Adafruit; I've had those fail horribly. So far all my Samsung cards are working great. I've also been using A1 grade SanDisks with no issues so far (it's only been a few months), as well as using cheap SSDs for one RasPi. I highly doubt the SSD is going to run into any issues, and it works seamlessly on the new Pis with a nice boost in performance.
The RasPi itself is so cheap I think a lot of people cheap out on the power supply and such, which ends up giving them lots of problems.
Anyway, given the poor reviews I've seen on the Cloud Key regarding hardware failures, especially on the newer versions, I just went with a Pi. But yeah, if someone has a machine on their network already, that's a great option too. I just wish Ubiquiti would release an official Docker image.
EDIT: I'm not necessarily arguing that people use RasPis for UniFi controllers, or that you personally should. I just thought your comment was a good opportunity to discuss RasPi reliability, because I know a lot of people have trouble with that.
In my experience, anything other than the Cloud Key is quite unreliable. I have been running it on my Win10 desktop without many issues besides it annoyingly restarting after updating.
Don't forget to keep it powered while it's in the drawer so you can ssh in and build something really awesome.
Maybe it's a personality flaw, but I get plenty of satisfaction from just reading blog write-ups of things I could have done with my tech junk, without all the associated time invested.
It's much easier to vicariously enjoy projects like these, which is why I think drawer-dwelling is an inevitable destiny for most of these widgets.
My flaw regarding SBCs is that I want to master the whole stack (what variant of Linux..., how <multimedia server> is written, ...), yet I don't have the context to, so I get fed up with almost-good use cases and lose interest.
That's also why I'm trying microcontrollers; it's back to low level, less shiny projects, but mentally saner.
STM32s have great Rust support[0] if that matters to you. Personally I strongly prefer the embedded_hal[1] and RTFM[2] APIs over the Arduino ones, although the peripheral coverage is a fair bit weaker. Their DISCOVERY kits[3] (preassembled boards that are ready to program, just like Arduinos) are ~$20, and also double as programmers if you eventually end up designing your own boards.
I would recommend taking the path of least resistance and getting something Arduino compatible. After ignoring it for years, I tried it out last week. It is very easy to use. And the M0- and M4- based boards are very capable.
The Adafruit Trinket M0 was what I needed for a recent project. It's $8 and tiny. The feather boards also look very neat, if you need BLE, the ability to stack peripherals, etc. They all integrate with Arduino so you can be up and running in a minute. (They also support Python running on the device, which is maybe even easier than Arduino.)
ESP8266 or the ESP32 (if you want something beefier that also has bluetooth) give fantastic bang for your buck, especially if you get them from a trustworthy-ish store on Aliexpress
Well, same reason why so many people watch playthroughs of difficult games on YT without playing the games themselves. There is a pay off for seeing what you could do if you had the time or motivation to do so.
I also like getting grand plans for a (hardware) project, ordering the parts from AliExpress, then losing interest before the parts have arrived, so they still end up in 'the drawer'.
I can relate. However these days I have found a new life for those abandoned Pis: I install Pi-Hole [1] on them and set them up for family and close friends.
I run a pi-hole as well and I have to say it's been an absolutely great addition to my home network.
It essentially blocks tracking and advertisements on all devices, not just my computers with ad block.
Just need to keep the block lists up to date every couple weeks, but it's honestly great.
I wish pi-hole was just a tiny bit more polished. I found it takes a lot of work (and knowledge of linux networking) to get up and running properly and stable.
A great option I've found is to use the DietPi distribution that has Pi-hole configuration/install wizard built in (https://dietpi.com)
After having various weird network issues and instability on a different OS with Pi-hole (updates often broke), I switched to DietPi, and now my PiHole and OS upgrades are extremely stable.
The DietPi installer/configuration for Pi-hole sets up all the networking for you in the install "wizard", took all the weird networking headaches out for me
I've run into the same. It often would come back from a reboot without DNS restarted correctly. I never went through a Pi Hole update without having to manually fix things either.
A few weeks ago I switched to AdGuard Home. Has been rock solid since. The big downside is it doesn't support the exact same filter files.
I know this isn't terribly helpful to you, but I have never had any problems with using or upgrading Pi Hole. I have been running mine for a few years now, and everything has been smooth and error-free.
MotionEye OS is good as a camera surveillance system. Needs a USB camera or a Raspberry Pi camera. Takes a bit of tweaking to get the optimal camera settings (using a GUI), but then it works well.
LibreELEC is useful as a home theatre setup, based on Kodi.
Second the idea of MotionEye OS - I have one running as a camera for the front door of my house, emailing my phone's Gmail account when it triggers. So I have a kind of running "backup" of events if anything happens.
I used to run ZoneMinder on an old PC, but it was a bit flakey and way too crazy to set up for a home system; I had learned how to config and admin it over the last several years, but I was just plain tired of it. It's another great system for security cameras, but not really for a home, unless you have a ton of cameras that need monitoring and don't mind dedicating a beefy machine to the task.
MotionEye OS is more a distributed solution. It is possible to set it up so one install can monitor multiple cameras (in some manner - I haven't played with it), but I like it as a simple single IP camera turn-key solution. It basically can turn a Raspberry Pi into a cheap wireless IP camera that isn't locked down or tied to a proprietary ($) cloud system.
Using a RasPi Zero W and the cheapest camera you can find, you can build such a camera for under $50.00 USD off Amazon; probably cheaper if you shop around a bit more. The only cheaper option I've found (but it takes more to set it up properly) is the ESP32 camera modules that you can get.
I haven't looked at LibreElec in a while - have they improved the setup for infrared receivers and remotes? That's the main thing that caused me to switch to OSMC. I'm using an IR receiver hooked to the GPIO pins, and OSMC made it quite simple to set up with my old RC6 remote.
I have one with rutorrent that acts as our house seedbox / file server. I also added droppy as a simple, user friendly file browser for my flatmates. Obviously, this all runs on docker-compose, because I'm a very weak man (but also, it makes managing the whole thing trivial once set up).
I have a Raspberry Pi Zero W that I use with PiVPN and Pi-hole, and I also compiled Tor on it. I can VPN into my home network, use Tor as my SOCKS5 proxy, and I also use Pi-hole to deal with tracking and advertising. Best 5 dollars I've ever spent.
I'm just laughing so hard at this thread. I also have a drawer, but with a bunch of Sonoff Home Automation devices. I promise I'll activate them one day!
My wife doesn't have a clever name for it, but instead of usually rolling her eyes at yet another unfinished project, she loves this particular drawer because it's perfectly (and neatly) organized, like the best of unused item drawers.
Sonoff with Tasmota firmware lets you control all your devices easily (incredibly easy). You just need a RasPi or a simple (and cheaper) Orange Pi Zero with a Mosquitto MQTT server.
I use one of my Pis as a PulseAudio network sink that lets me stream music to it from every computer in the house. Not the most creative use but maybe some inspiration.
I have to confess that I have never even booted up my Pi-3. Now I wonder if it is worth booting up the Pi-3 or should I wait till I get my hands on the Pi-4. That way I don't have to think of upgrading.
The Pi 3 is no less usable than it was before the announcement. Why bother getting a Pi 4 when you haven't found a use for your 3 yet? I still use a Pi 2 with no complaints.
One downside to the 4 is that it is moving from the low power to laptop realm in terms of power consumption which was my main interest in them. If top performance is what you care about, the Raspberry Pi is the wrong place to go looking for it.
My growing collection of rpis includes an rpi2 with a usb wifi dongle that still gets a fair amount of use. Mostly for experiments, but truth be told, it's not an old slouch.
These Pi models are all tailor-made options. I think it's a mistake to think in terms of "older models" when you consider the lower power requirement and higher physical stability of the rpi2 (it doesn't overheat even without a fan, for example). The Zeros are less powerful but still belong in the range for exactly that reason.
Thanks for the informative answer. My comment was slightly tongue in cheek :-) but I appreciate you taking the time to answer. I will indeed make an effort to at least boot up my Pi-3.
Heh, I know the feeling. I have three retired Pi systems (two OG units and a 1B+), but I have three running 24/7 as well: first is a 2B+ running OSMC as my media center. The second one is a 3B with a sense hat, running rtl433 and MRTG to graph outdoor and basement temperature and humidity received from transmitters on-premises, barometric pressure from the hat (and also displaying data on the LED display). The last is a 3B+ running Home Assistant.
I've been - cross my fingers - lucky with MicroSD cards (usually Samsung, sometimes SanDisk), but having USB3 on the new model is quite the game-changer.
ETA: I do have rsync backing up my Pi setups, so losing a MicroSD would be merely annoying rather than catastrophic.
I sold a company and the hardware wasn't included. I have a bunch sitting idle, plus about 1500 RFID tags and 20 readers. Takes up space but I can't get rid of them. Should sell them.
In which part? We only have a handful of pi's and half belong to my co-founder, he may want them. As for the RFID readers and RFID tags, we could definitely part with those.
I have a few in drawers, but I do have three running at most times in my house. One is literally just a print server. One is for hobbies. One is for watching movies.
- Half the RAM but double the cores. I'm waiting for some benchmarks to see if the RPi4 is faster and by how much.
- Also Gigabit Ethernet and it works great. My downloads are always at 108-111MB/s for the whole transfer.
- Not USB 3.0, but it has "oldschool" SATA through an internal USB-to-SATA adapter. It's at least more compact; otherwise the RPi4 with an external USB 3.0 drive will probably work even better.
- works with a normal 12V power supply, which could be lying around already, from older external drives.
Not to disrespect the RPi4, as I'll be getting one of them too very soon.
The distinction in actual usage between using cloud, server, and homelab to describe their setup seems to be rooted in their purpose.
* Home Lab :: Running a partial/full enterprise IT stack for fun and education.
* Home Server :: Running primarily internal services like file storage, backups, media streaming, home automation, maybe some light networking.
* Home Cloud :: Running primarily external services on the public internet to replace 3rd party SaaS services. More often than not this is done with a VPS provider rather than physical hardware in your home.
So maybe you find the terminology annoying since everything is cloud these days but it's genuinely useful to us folks in the forums. You can also call "home cloud" selfhosting if you find it less jarring.
I am not sure that is a great distinction. A server is traditionally serving some resource, usually to multiple people. The idea behind 'home cloud' is to host 'slice' of a server yourself. Something that used to be hard because of the cost of hardware, making hosting only a few 'sessions' expensive.
Of course hardware hasn't been expensive for probably a decade or more. Today it is almost entirely a software problem. Or more precisely how to create independent quality software when many developers are employed by large corporations and no one wants to pay for development. I am not sure much is happening on that front.
If you run a "home cloud" on a Raspberry Pi 2/3/4 or an Orange Pi or something else entirely and it fails, will your "cloud" go down with the hardware?
If it does go down then it's not a cloud, no matter what hardware you're using.
In general, my rule of thumb is: what's the upper limit, the capacity or your wallet?
I see value in what you call home cloud. It has a lot of potential for distributing things and giving control of data back to people. Is there a place where I can follow developments like this?
Also, it seems like Western Digital would be an ideal player in the space since they want to bring processing to data storage via RISC-V.
Synology is probably the biggest player in this actually. They sell NAS hardware you can just add software to, with a selection of "home cloud" software of their own.
On the hardware front I'll plug the project a friend of mine is working on[0]. As far as self-hosted software goes, the file sharing/sync section[1] of awesome self-hosted is great (the whole list is great).
Unless your server offers some form of slicing of resources (containerization / virtualization) then it's difficult to describe it as a cloud in anything other than a buzzword.
I think most ISPs don't allow servers on residential, so one would be better off using a home cloud. Preferably one with blockchain to disrupt blocking or throttling.
Most ISPs don’t actually care if you host a small server for a low number of users. As they shouldn’t.
Separately, you’re not going to fool anyone by calling a server something else. (Not that “home cloud” is a bad term, but I think everyone realizes that still constitutes a server.)
I had an Exynos 5422, and when it came out it was a great board; however, nowadays it's old generation - it consumes more power and performs worse than the latest architectures (A7x).
"Double the cores" is not a valid consideration - 4+ core configurations typically have 2/4 cores (the 5422 has 4) with a high-powered architecture, and the remainder with a low powered one.
Compare for example the XU4 with the N2 - the N2 is more powerful, and yet it has fewer cores (4 high-power + 2 low-power) and requires no fan.
The RPi is an interesting configuration - they have 4 high-powered architecture cores (4x A72) only. It seems it doesn't require any fan.
Of course if one requires specific chipset/components, we're talking about specific use cases, which is another story.
It does at least need a heatsink. Though it will function without it, you may get temperature warnings, and it will run hot enough to significantly reduce component life. I've also hooked an old PC case fan to the GPIO pins or a USB port, and it runs slower (5V vs the 12 it expects) but does the job fine.
To me, the RPi is the choice only because every other single board I've used had so much less support than RPi does... I have a 3B+ running retropie and it's doing okay, but if this one can also do a decent job with h265 under kodi, I'll be very happy indeed.
Ordered a starter canakit with a couple extras, and looks like I won't see it until August. :-( ... I'll probably forget I ordered it by the time it comes.
I performed a `git cherry -v` out of curiosity a month ago, and there were a few dozen commits (even reverts!).
While I think pretty much any programmer could reapply them, my guess is that in the long term one needs to know how device driver development works, in order to adapt to the kernel changes.
But I'm not a kernel dev, so the maintenance could be easier.
Are you sure? I've worked with a myriad of ARM devices, and I would say that despite their small size, Hardkernel are one of the most responsive companies when I have had issues.
Hardware vendors are hardly the best sources for OS images, except maybe for very new boards. When I shop for a board I always check whether it is supported by community-driven projects such as Armbian, DietPi or even plain Debian.
I'm excited to look at the WiFi on the RPi 4! Any word on the chipset used for that?
On a more personal note - thank you for your service.
You got any advice for pulling the aircrack_ng/rtl8812au driver from github and making it into a patch for building in-kernel? I really like having signed-module verification, but I also really want this driver.
The credits at the end of the post thank folks who worked on the CYW43455 integration, which matches up with the expectation of the Raspberry Pi using a (formerly Broadcom) part.
Hi, author of the piece here! You're right that WordPress wouldn't have given you a popup before you could read, for free, the article I spent a couple of months working on. It also wouldn't have provided me with any income to support creating the article in the first place.
Medium, on the other hand, does. I mean, it's not much - I get a slice of the revenue from paying subscribers, based on how much they 'applaud' my piece - but it's higher than zero. Despite this, Medium also makes it available to read free of charge for non-members - up to, I believe, a somewhat miserly three articles a month, though you can bypass this if you really must by using a private browsing window to get another three, and another three, and another three, and another three...
I've got kids to feed and bills to pay. If you really don't want to click an X on the login prompt and read it all for free, I can give you my payment details and sell you a PDF copy...
I wouldn’t know. It’s not like I’ve consumed my “fair free share” of your content, I’ve apparently consumed my fair free share of content across all of medium.
Would you accept being asked for documents when you enter the mall? Would you find it normal to be stopped by a security guard who says “sir/madam, you’ve browsed enough stores for free without handing in your ID card and personal data, please fill in this form or leave”?
I owe you nothing. If anything, you owe me. It’s my time that builds your audience, not the other way around.
- Would you mind sharing how much you actually expect in revenue from this article?
- Have you considered any other ways of monetizing it? Just an idea: if you had your own blog and registered on Brave as a content creator, you could be getting a few cents from me already.
Massively depends on performance. You get money from a subset of a subset of a subset: there's the set of the audience; there's the subset of the audience that are logged in to Medium at the time; there's the subset of the logged-in subset of the audience that bother to click the 'Applaud' button; there's the subset of the bother-to-click-Applaud subset of the logged-in subset of the audience who actually have a paying membership.
Then how much you actually get is totally up in the air. If mine's the only piece Reader A applauds that month, I get 100% of the revenue (minus Medium's cut, of course - the house always wins); if Reader B has applauded 1,000 pieces this month, I get 0.1 percent of the revenue (as do the other 999 authors.)
It's a model which is inherently insular: of the traffic that has visited the piece so far, 90% is external (and thus earns me nothing other than name-recognition) and 10% is internal to Medium. Only a tiny, tiny fraction of that 10% has applauded, and I won't know what that translates to in terms of Cash Monies until Medium calculates it and tells me. I'd be much better off promoting it to existing Medium members - such as by joining a 'publication' on Medium - and ignoring external traffic sources, but I don't want to do that.
As a ballpark, though, the answer - long in coming - is "not much, but considerably more than I'd get on Brave." The Raspberry Pi 3 B+ benchmarking piece I wrote on Medium has earned about $277 lifetime; if this earns the same, I'll have done very well indeed.
Thankfully, I'm not relying on the Medium income: I've pieces in various websites and magazines based on the same core data, which pay one heck of a lot better!
Thanks for your answer. In all honesty, and taking what you said into consideration, I still believe that the Medium model should die in a fire, I won't feel bad for not supporting you through it, and I hope you consider other alternatives.
I'm not sure you read what I wrote, but I have considered other alternatives: it's called "writing for magazines." If you'd like to support me without supporting Medium, you'll likely find me inside more than one bound collection of thinly-sliced dead tree at your nearest newsagent, supermarket, or bookseller.
I've even considered Brave. Hell, I've even tried Brave. According to my email archive, I signed up as a publisher in January 2018. Sadly, it's just not a sustainable model yet - which is why my piece is monetised by Medium, not Brave.
I wouldn't be keen on supporting dead tree magazines and their excessive ad-to-information ratio, newspapers that are only tangentially focused on providing good content and more focused on creating constant crises, or any kind of publishing industry with so many middlemen that need to be eliminated.
Sorry, it is really not my intention to pile on you. I am just really tired of the current state of affairs in regards to the publishing/authoring economy. I know it is easier said than done, but we need to have more content creators that are willing to take a principled stand and stay away from these actors and start creating exclusively on terms that are more ethical.
If you're using Brave, I'm assuming you're using the browser's main claim to fame: the ad-blocking/ad-switching functionality, yes?
So, you won't support content creators who publish on a website which uses advertising.
You won't support content creators who publish on a website which allows non-members and free-tier members access to a limited number of articles a month and charges a fee, distributed to the content creators, for unlimited access.
You won't support content creators who publish in print, in magazines or newspapers.
I'm sensing a theme, here: you won't support content creators.
I would love to host my own website (actually, I host several) and write the same kind of content I do now, but how exactly am I going to feed the bills and pay my children? This is literally my job - I'm not just dashing out a quick blog post as I Segway to the London office of my cryptocurrency startup for a day of find-and-replace in the whitepaper. If I'm not getting paid for my words I'm not getting paid at all.
Brave is not the answer, I'm sorry to say. Something like Brave may be - I used to play around with Flattr, which was the same kind of micropayments model as Medium but applicable to any third-party web content, and doesn't have the ethical issue of blocking everybody's adverts but its own - but Brave ain't it, at least as it stands.
You don't want to support content creators, you want to support Brave. That's fine, but don't frame it as wanting to support content creators but only in one very specific and questionably-ethical way.
Otherwise, put your money where your mouth is: pop me a payment across, in the currency or cryptocurrency of your choosing, and I'll publish the same piece on my main website. No adverts, unless you count the cover shots of the books I've published (hey, there's another way you could support me - and if you're worried about ethics, some of them are available for free download under a Creative Commons licence!) down the side.
I used to have about ~$15/month deposited on flattr* for quite some time, and the main reason that I've been using brave is not because of its anti-ad stance but rather their anti-tracking + the possibility of a way to fund content creators.
I am also contributing about ~10€/month on Patreon for different software projects and writers. I've written to more than one YouTube channel producer asking them to look into alternatives so that they could take my money. The Quillette model is also something that I do appreciate.
Believe me when I say that I am more than willing to support people that create content. And depending how much you are asking for me to send you, I'd gladly take on your offer.
* story time: I got a call from an Eyeo recruiter some months ago, who was looking for people in their ad-block/acceptable ads team. It turned into a most-of-the-time-friendly discussion about how acceptable ads does nothing about the tracking of the users, so I wouldn't be interested in joining their team and me asking him to call me back only if he had some position on flattr.
The problem with Patreon - and thus Quillette, which is 95 percent funded by Patreon - is that there's a massive gulf you can't cross. The Popular Content Creator who has a hojillion Patreon backers and gets $10,000 a month from 'em has no worries; the person who plays about with it in their spare time and gets $5 a month can buy a beer. Job's a good 'un.
But what about the person who wants to write full time, but hasn't built the audience yet? How do they go from $5 a month to paying the bills? In my case, I didn't have to: by the time I switched careers I had enough regular clients to cover all my outgoings, albeit only just. Quit the day job, picked up some more clients, and here I am doing it full-time to this day.
If I were relying wholly on Patreon - or Brave, or Flattr, or even Medium - I couldn't have done that. Patreon isn't going to give me $300 on spec to write an article that might not do well; Medium won't front me a few grand against royalties so I can take time to write a book.
D'you know who will? The traditional publishers.
I appreciate you have a personal stance on this, but so do I - and mine comes not from the perspective of "I'd like to read this but it's on a website I don't like" but from the perspective of "if I don't get paid for this I'm literally homeless."
Actually, I have a Patreon account - https://www.patreon.com/ghalfacree - I signed up just before the new fee scheme came in to lock in the old rates, but never launched it (hence the zero backers.) Don't really have time to give it the love it would need to gain traction, either - again, we're back to the problem of not having the cash to go from zero Patrons to I-can-feed-my-children Patrons.
I am sorry, but this is the point where we disagree. "I still need to make a living" is not something I would accept as an argument to justify all of the unethical issues that arise from the attention economy industry.
Yes, this means that I will actively find ways to accelerate the demise of these businesses. No matter how much I want to support content creators, that does not make me responsible for guaranteeing their jobs.
The OrangePi 3 at $40 is also pretty neat, PCIe 1x, 8GB onboard eMMC, 4x USB 3.0, Bluetooth 5, Wireless AC (pretty sure they beat Raspberry Pi to the punch on this...) and it has mainline kernel support: http://www.orangepi.org/Orange%20Pi%203/
I think this is to avoid the plentiful 12v adapters that use those sizes, as you could fry the board with a bad PSU. The 5v adapters that do use those more standard sizes are generally 500mA or 1A, which is enough to (unreliably) boot and run the board.
I have been using an Odroid XU4 as a home server (Home Assistant), personal CCTV, and controller for an IR blaster and other sensors for years; it's still running perfectly, going months without rebooting. I also have a Pi 3 for Pi-hole, but honestly my Odroid XU4 is more stable than the Pi 3.
I had an ODROID C2. I don't know how things are now, but I returned mine as it hung a lot and had lots of different issues. At the same time I had 2x Raspberry Pi and they worked fine. This was a couple of years ago.
The USB port on the Odroid HC2 is 2.0, but the SATA interface is connected to a USB3 bus, as is the Gigabit Ethernet.
Furthermore, SATA and Ethernet are connected to individual USB3 busses, as opposed to earlier RPi designs where everything shared the same USB2 bus.
I haven't checked the RPi 4 specs yet, but I can imagine it's still the same layout, just a faster bus, which can be "just fine" - it should be plenty fast to saturate Gigabit Ethernet as well as the SSD/HDD IO required to do that.
I'd love to run something like this, but I recently switched to ZFS which recommends having a lot of RAM (my NAS has 4GB). It's what kept me from going the route of the Helios 4.
I'm so happy they haven't removed the composite video out in the headphone jack! If you didn't know, you just need a 3.5 mm TRRS connector - the pinout[0] looks like:
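For reference, from memory (so treat this as an assumption and check it against the linked pinout before soldering anything, since plenty of TRRS leads use a different order):

    Tip    -> left audio
    Ring 1 -> right audio
    Ring 2 -> ground
    Sleeve -> composite video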
Playing your emulated video games on a real CRT TV so you don't have to use computationally-expensive CRT filters on your video output to get it to look even sort-of the way you remember? Can't think of another use but I assume they exist.
I have a raspberry with retroarch connected to an old CRT and an arcade stick. Old arcade games with a lot of dithering just don't look the same on an LCD.
(Some emulators now come with some decent CRT filters, but it's still not as good as a real CRT.)
The good-looking CRT filters take some serious hardware to run well, too. The cheap options like the "scanlines" mode on the NES Classic so little resemble the real thing that I'm not sure why they bother. Maybe people who didn't grow up with CRTs think that's a retro look?
It is analog video, meaning you can read the signal without needing an HDMI decoder.
It also has much longer range than HDMI (150 feet is cited, compared with 50 feet for HDMI). You can also use boost extenders to transmit the feed over almost unlimited distances for little additional cost.
It is less useful at home, and more useful in industry, scientific, and experimental applications.
Wouldn't it be better to just run Ethernet instead? You could use the PoE hat to get both power and digital data over a single cable 300 feet long. There are PoE repeaters to let you extend that distance without needing additional power cables. What situation requires you to be viewing a video feed from a Raspberry Pi a long distance away that wouldn't be better served with Ethernet?
I've seen Pi's used in the art world for video installations. It can be quite a bit cheaper (and a different aesthetic) to get a pile of old CRTs than to get HDMI-capable TVs/monitors for such things.
IIRC, if you're building a handheld gaming device on the RPi Zero, your two choices for driving a small display are composite output or the SPI interface. The SPI interface has lower bandwidth, so some people use composite to eliminate tearing.
I set up a friend with a LibreELEC Raspberry Pi media player in his ~5-year-old minivan, which only has composite video in for the built-in screen. It's not quite dead yet.
I'm very excited about these upgrades too (especially GigE), but as far as I can tell nothing on this news page specifies whether the Pi will also support HDR output as part of the 4K upgrade. That's most of the practical benefit of 4K - that 4K releases tend to come with HDR10 or DolbyVision support.
Anyone know if we can expect HDR output to work? If I knew it supported that I'd be purchasing one right now to upgrade my media center from my current Pi 3 setup.
Even the tech specs page says nothing about 10bit decoding, which is required for most real world 4K HEVC video.
"The 4B hardware is HDR capable, but software support has a dependency on the new Linux kernel frameworks merged by Intel developers (with help from Team LibreELEC/Kodi) in Linux 5.2 and a kernel bump will be needed to use them. Once the initial excitement and activity from the 4B launch calms down, serious work on HDR and transitioning Raspberry Pi over to the new GBM/V4L2 video pipeline can start."
The spec table says VideoCore VI. I really hope that is not a typo. I suspect it really is a VC6 because 2x4k is a big bump in pixel count and without a corresponding bump in fill-rate, perceived performance will drop.
[edit] looking at the benchmarks it's a modest boost. Now about twice the FPS of a PI-2 for Quake3 at equal resolution. Be interesting to see if something with more complex shaders changes the relative performance.
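Rough pixel-count arithmetic behind that worry (my own numbers, not from the benchmarks):

    # Back-of-envelope: how much more the compositor has to push for dual 4K.
    fhd = 1920 * 1080          # ~2.1 Mpix, a single 1080p framebuffer
    dual_4k = 2 * 3840 * 2160  # ~16.6 Mpix, two 4K framebuffers
    print(dual_4k / fhd)       # 8.0 - eight times the pixels per refresh

So without a matching bump in fill rate, desktop compositing at dual 4K would feel noticeably heavier.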
The VideoCore 4 (fuck the roman numerals, for exactly this reason) cannot output 4K and it is also not an ES 3.0-capable GPU. So this must be a 5 or 6.
I'm crossing my fingers here. This is really something they should have put in the specs if it's supported. With 10bit decoding and Rec. 2100 [1] support, this would make a fantastic platform to build a media center on. I'd certainly upgrade from the Pi 3 even though I don't currently have a 4K television.
Do we know for sure that all VC5/6 SoCs support HDR decoding, or is that still unknown? The Wikipedia page for VideoCore only lists one VC5 SoC and no VC6 SoCs. The Broadcom page for the VC5 SoC is pretty uninformative.
> The H.265 / HEVC decoder is a HEVCv2 Main 4:4:4 10 design supporting bitstreams up to profile 5.1
Sounds promising! (The documentation on the website should certainly be fixed if this is correct.) Now assuming Rec. 2100 is properly supported to allow connecting to HDR capable displays over HDMI 2.0, we should be good!
Sure, but the video decode IP is a different piece of hardware from the video scaler/compositor. In a typical SoC, you have the GPU and the video decode engines as separate blocks that feed into the video scaler/compositor which then blends, rotates, scales, .. etc the inputs and feeds the final signal to LVDS, HDMI or whatever.
Now does it make sense to have your video decoder support 10 bit when your compositor can't handle that format? I guess you could argue that 10 bit source material will still improve the quality even if you downsample it to 8 bit for display, and I think for example DVB-T2 has standardized on 10 bit so if you can't decode that, that's a large chunk of market gone.
Ultimately, from the little we know, the compositor hardware is the same as the old Rpi, and that couldn't do 4k60 and it can't do 10 bit depths. The Raspbian image currently doesn't use the Linux kernel implementation for driving the compositor (it uses the fkms or "firmware kernel mode setting") so there might well have been hardware tweaks.
(Ok, instead of making this comment thread any longer, I simply asked and the compositor hardware in the RPi indeed has HDR support:
> Sounds promising! (The documentation on the website should certainly be fixed if this is correct.) Now assuming Rec. 2100 is properly supported to allow connecting to HDR capable displays over HDMI 2.0, we should be good!
As hopeful as I am, that's quite a big assumption...
What are the odds that this will support 1080p streaming with Plex? I've tried Plex on a Pi 3 using a Roku as a player with 1080p and it's a complete failure. Plex tries to encode everything even if native playback to the Roku is possible using the Roku media player.
If encoding is required? Almost certainly won't work, it's just not powerful enough. But you should avoid reencoding if at all possible, and I imagine it will work if you can manage to disable it.
My Plex isn't on a Pi, but I don't have this issue with a Roku Express as a player. Could be other factors, like the video codec, and whether Direct Stream is enabled on the Roku.
That's really too bad. It makes the 4K support useless for building an HTPC, which is a common use for the Pi. As far as I can determine, several of their competitors already support 10bit decoding, although specifics (about stable HDR support) are sometimes hard to come by.
As a fifty-seven-year-old, I don't have 4K eyeballs. I agree you have a limitation, but it's not one holding me back, since OSMC does 720p just fine for olde telecine MP4s of B&W movies.
I'm in my 50s as well and my eyes are definitely showing their age, but if anything I find high-fidelity screens more important to me now than they were years ago. I use 1080p 23" screens at work and the slight blurriness from pixelation is definitely noticeable. Comparatively, the 5K screen on my iMac is dramatically sharper and easier on my eyes.
I find that when my own vision is slightly blurred, add in even more slight blurring on the screen and they compound each other. On the other hand if the source image is pin sharp, it makes it easier to cope with the flaws in my own vision.
I'm sure higher resolution is nice, but nowadays I'm more interested in HDR and larger colour gamuts (Rec.2020).
I don't doubt 4K and 8K look "better", but IMHO we're approaching the point of diminishing returns, and visual enhancements in other areas are worth exploring (even for 1080p).
I have a 4K-capable 40 inch TV in a standard living room (for Europe). 4K videos look slightly crisper than full HD ones.
While I could have a slightly larger TV, I think that for typical domestic use 4K's improvement is marginal because people are not sitting 50cm from the screen and room sizes are limited.
Really, so far the top benefit has been that I could tell my wife: "Look, if I move right up to the screen I can still see Jeremy Clarkson's individual hair strands!"
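For what it's worth, the usual 1-arcminute rule of thumb backs this up. A quick sketch with my own assumed numbers (40" 16:9 panel, 20/20 vision), not anything from the thread:

    import math

    # At what distance do individual pixels stop being resolvable (~1 arcminute)?
    diagonal_in = 40.0
    width_mm = diagonal_in * (16 / math.hypot(16, 9)) * 25.4   # ~885 mm wide
    one_arcmin = math.radians(1 / 60)

    for name, h_pixels in [("1080p", 1920), ("4K", 3840)]:
        pixel_mm = width_mm / h_pixels
        limit_m = (pixel_mm / math.tan(one_arcmin)) / 1000
        print(f"{name}: pixels blend together beyond ~{limit_m:.1f} m")

Roughly 1.6 m for 1080p and 0.8 m for 4K on a 40" panel, so from a normal sofa distance the extra resolution really is marginal.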
I've got a 4K TV, and it's placed in a location where I can tell whether or not something is 4K if I sit a bit closer, but the core problem is that it's still very difficult to feed it true 4K content, since I've not been willing to drop the money on 4K Blu-rays.
I've got a variety of content that is, technically, a video stream that decodes into a framebuffer that is a "4K" framebuffer according to the metadata on the video, but without enough bits for it to truly be "4K"; the same number of bits dedicated to a 1080P video would look just as good.
For all the bragging about how streaming is the future, it seems to me that the companies providing those streams at scale still have a lot of incentive to cut the bitrate back so far that it's not practically a 4K stream anymore, because 95%+ of their audience can't really tell.
For those of us who aren't there, can you clarify how big a standard European living room is? This has nothing to do with the resolution stuff you mentioned, just curious.
I'm in the UK in a Victorian terrace, and our front room is perhaps on the small side at about 3x4m. I'd guess somewhere between that and 4x5m is about average. For us, a 32-inch screen can still feel imposing. And that size seems to be being phased out.
We do have another room, that's slightly larger but it's a bit bleak. And weirdly it's a more fussy room to furnish. The smaller size is better in the winter. The old brick terraces aren't that warm. In large houses people can gravitate to smaller rooms for that reason. Personally I'd like a large room that I could bring the furniture in from the sides. Lucky to even have a house to live in to be honest, so can't really grumble.
> I wasn't aware of a major homeless problem in Europe.
Homes in the UK can be expensive, so there's a large rental market.
On top of that many people are vulnerably housed - living in emergency temporary accommodation (which may be for many months) or on friend's sofas.
The introduction of the benefit "Universal Credit" has increased homelessness and people who are vulnerably housed. In the UK a landlord can apply for eviction if the tenant hasn't paid for two months. (This is for shorthold tenancy agreements where tenants have most rights - other tenancies have less protection.) There is a minimum wait of 5 weeks before Universal Credit claimants get paid, which pushes some people very close to this limit. The bizarre sanctions regime tips many people over that limit.
That's larger than mine. Sizes are getting smaller. Can't comment for the rest of Europe but the UK has quite a bad homeless problem, it's very visible at the moment. Rents are prohibitively high, and getting onto the housing ladder is very difficult. Small houses - even semi-detached bungalows around our way are at least £300k (edit: oops missed a 0 originally!). The housing stock is shockingly bad in the UK, and very expensive.
I have to comment. That's huge! Way bigger than most UK Victorian 2 up, 2 down terraces. More like the size expected in an older 4 bedroom house if you ignore the extra space the photographer is clearly standing in.
That's slightly smaller than ours, probably, all in all. But I've seen smaller - like the one-up, one-down back-to-backs in Leeds/Woodhouse - and many modern flats/apartments are just broom cupboards. But then you look at downtown Japan and this feels palatial.
But joking aside, I'm surrounded by glass on all sides in a car. That makes a great deal of difference. I probably wouldn't feel confined in a room that size if I had an entire wall of glass.
In a UK terrace, even given a large bay window, your view will likely be spoiled by a large work van, as the terraces seldom have off-street parking.
I'm still struggling a bit with large screens. For something like playing games I'm guessing you can comfortably sit back. High resolution screens for computers feel attractive, but getting up close and personal can feel a bit much with light levels. I'm guessing OLED might be nicer in that regard.
I know we're talking video here but I wanted to share something. My father just got a new set of 'ears', he's been going deaf. They are titanium studs that are implanted into the bone just behind the ear, about the size of pencil erasers. They work through bone conduction. The hearing part are replaceable electronics that snap on and off the studs.
The current gen are about the size of a quarter and are bluetooth capable, so he can sync to his devices, watch movies, etc with these little guys tucked behind some hair.
It got me thinking, I wonder if audio guys have started to look at some of this sort of thing to really "hear" music perfectly. Very interesting tech.
There are some earphones that use bone conduction, but this sounds like a special case.
If hearing is down to tiny hairs in your ear resonating with the audio frequency, I would have thought being hard of hearing was down to those hairs not being able to function properly. How does bone conduction audio get around that? Does it use a different sense?
My eyesight is also going - I have to rub my eyeball on the phone screen like one of those roll-on deodorant balls - so I daresay my hearing will be next.
homo cyberia
(That's probably nonsensical latin, if anyone with a modicum of latin knowledge wants to go all Life of Brian on it, please feel free. What is latin for augmented human?)
I'm not sure I'd use a Pi as an HTPC for true Blu-ray-type 4K. Not to ding it, and I'm very excited (and happy to be proven wrong), but I'm not sure it will have the performance for that. I expect quite a bit will depend on whether someone puts in the software optimization work - I remember how, a few years back, even powerful PCs couldn't decode what they can with today's codecs. But we'll see.
I guess maybe if you added a heatsink and/or fan? I'd be a bit concerned about component life, taxing it that hard.
Oh yeah, and I wanna see how this thing overclocks.
I haven't been able to get HDR working on my desktop amdgpu + X11. I don't think it's supported in Wayland yet either (let me know if I'm wrong) and the devs in #mpv on freenode said they don't have HDR10 output support either (although mpv can do HDR10 tone-mapping).
For HDR videos, I still play them via my Windows box. I think the current MacOS supports HDR too (and if not, it will get support soon as they have that crazy new $6k HDR screen).
don't want to derail the RPi4 celebration, but for this specific purpose (media centre) I've been super happy with Nvidia Shield. You can often pick them up for $150 on a sale, less on ebay. 4k HDR, runs Android TV (some people have managed to get Ubuntu running on it too), VLC/Kodi work well, Moonlight works well for game streaming from another gaming PC with NVidia card. It's super zippy, too, and quite small (though bigger than RPi).
Yeah, the Shield seems to be the standard here. If we can get confirmation that the Pi works, we'll have an alternative that is much cheaper and is more useful as a general purpose platform!
> The power savings delivered by the smaller process geometry have allowed us to replace Cortex-A53 with the much more powerful, out-of-order, Cortex-A72 core; this can execute more instructions per clock, yielding performance increases over Raspberry Pi 3B+ of between two and four times, depending on the benchmark.
Looks like the Pi 4 will be vulnerable to Spectre. That's unfortunate, since it seems like this is quite an upgrade otherwise.
> ARM has reported that the majority of their processors are not vulnerable, and published a list of the specific processors that are affected by the Spectre vulnerability: Cortex-R7, Cortex-R8, Cortex-A8, Cortex-A9, Cortex-A15, Cortex-A17, Cortex-A57, Cortex-A72, Cortex-A73 and ARM Cortex-A75 cores.
In most cases an RPi will not be used as a host on which many VMs from different customers run. Many embedded systems will not have a browser with JavaScript enabled. And so on...
>ARM has reported that the majority of their processors are not vulnerable, and published a list of the specific processors that are affected by the Spectre vulnerability: Cortex-R7, Cortex-R8, Cortex-A8, Cortex-A9, Cortex-A15, Cortex-A17, Cortex-A57, Cortex-A72, Cortex-A73 and ARM Cortex-A75 cores.
So, "most not vulnerable", then they proceed to list almost their entire lineup as vulnerable.
I'm super excited as well. I figured out the secret sauce for a low-latency kernel build and was able to get down to 20ms latency, but I'm excited to see if this can't get down to 10ms, even though 20ms is acceptably low audio latency. Already, with only 1 GB of RAM on a Pi 3, Ardour is usable for multitrack recording; excited to see how this does with 4 GB.
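For anyone wondering where numbers like 20ms come from, it's mostly the usual JACK/ALSA buffer arithmetic. A sketch with illustrative settings (not the parent's actual config):

    # Rough latency from buffer settings: periods * frames_per_period / sample_rate.
    sample_rate = 48_000

    for frames, periods in [(256, 3), (128, 3), (128, 2)]:
        latency_ms = periods * frames / sample_rate * 1000
        print(f"{frames} frames x {periods} periods @ {sample_rate} Hz ~= {latency_ms:.1f} ms")

    # 256x3 ~= 16 ms, 128x3 ~= 8 ms, 128x2 ~= 5.3 ms - so getting to ~10 ms is mostly
    # a question of whether the CPU can service the smaller buffers without xruns.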
I recently connected a midi keyboard to our Pi3 and ran Timidity for synthesis. It worked, but unfortunately the notes happen about half a second after you press a key, making it rather useless. Strange, since if you dump the midi events themselves to the console they appear to happen immediately.
I tried tweaking ALSA and PulseAudio settings, no luck. Also installed JACK but never got sound out of it. Eight hours on the weekend wasted with nothing to show for it. Any ideas on how to fix this?
I installed a low-latency kernel on my desktop machine with a MIDI keyboard over USB and got horrible performance until I got off of PulseAudio.
I had some difficulty with JACK, but ALSA worked great. The main thing is you can't run ALSA and JACK at the same time. Once I was no longer using PulseAudio, a lot of my problems went away - I think it's just a really slow interface.
In audio that's still acceptable. In our games we usually run at either 40ms or 60ms audio frame sizes (which means you are getting at least that much latency before hearing the sound for your action).
>Yes. VideoCore 3D is the only publicly documented 3D graphics core for ARM‑based SoCs, and we want to make Raspberry Pi more open over time, not less.
This stated intention and rpi.org's actions are simply not isomorphic. If they want to make Raspberry Pi more open, firstly, why do they publish only abridged (read: fake) schematics? [1]
Secondly, why was rpi.org caught adding DRM chips to their optional camera addon board? [2]
Other SBC makers don't have an issue offering full schematics, yet, due to their smaller size, have relatively more to lose from being cloned. Moreover, I hope nobody believes that a lack of full schematics is going to stop people from making clones. Schematics can be reversed with not too much effort. Due to their large audience and brand recognition, rpi actually has less to fear from clones than smaller SBC vendors.
If anything, it seems like they've judged that their large size and brand recognition is something that they can coast on to avoid having to offer what other SBC vendors do.
I'm not complaining about rpi existing, I'm pointing out that what they offer is in certain aspects inferior to the offerings of smaller SBC vendors, which is actually surprising, given that their large size should make them much better resourced to match or exceed the offerings of those vendors. This causes me to question whether these values are actually a priority at all, even if they claim so.
It's very unlikely releasing schematics would change anything.
The RPi as a board is clearly simple enough that there would be clones if you could get components at competitive prices. The only conclusion, then, is that you either can't get the CPU, or you can't get it at a competitive price.
RPi uses Broadcom chips, which you rarely find on other hobbyist boards.
The most likely cause, from what I've heard from people in the industry, is that you usually can't get Broadcom chips at all - or at acceptable prices, or even documentation for them - when you are only interested in the small quantities you'd need for introducing a new/clone hobbyist board. Not unless you've got some serious connections.
How are you a non-profit if you pay a salary to your employees and allow your suppliers to make a profit? They act as a 'low profit' company, but they do make profits.
Do you realise that non-profit means that you just don't have leftover cash at the end of the year in your balance sheet? Non-profits pay their employees and their suppliers, their financial situation has absolutely nothing to do with what you are implying.
I think the gp’s point is that to a lot of entities (employees, suppliers) there is little difference between for profit and non profit. And if your nonprofit is passing profit along to a for profit ... you can see how the lines blur. Maybe a way to interpret the above comment is that the incentive structure for many people involved is not significantly impacted by the nonprofit status. And non profits can have money left over at the end of the year, they just don’t distribute it to shareholders.
The Raspberry Pi Foundation is a charity registered in England and Wales. "The object of the charity is to further the advancement of education of adults and children, particularly in the field of Computers, Computer Science and related subjects." They have a trading subsidiary.
Anyone sufficiently cynical (not me) can read the accounts of both entities:
I thought being a non-profit meant you had goals that weren't profit; there are non-profits with billion dollar endowments that presumably have leftover cash on their balance sheet...
They've specifically said that the camera module DRM chip is there because they make money from selling the camera modules and they don't want third-party cloners undercutting their pricing and eating into their profits, as happened with the non-DRMed version 1 of the camera module. The thing is, the third-party camera modules aren't just much cheaper; they're also offered in a whole bunch of useful variants that aren't available officially. So in order to ensure their continued profits, they're using DRM to actively make their platform less useful.
> So in order to ensure their continued profits, they're using DRM to actively make their platform less useful.
But that is demonstrably not true. The CSI interface and driver are open source (https://patchwork.kernel.org/patch/9951525/); anyone can create a camera that piggybacks onto that port. In fact, there are a number of aftermarket cameras that do. Some have built-in infrared, some are tiny.
What they are attempting to do is stop counterfeit "official" cameras.
Also, all that money goes to either developing more boards or running educational outreach. So I think one can forgive them the urge to protect revenue. It's not like they are Microsoft in the 90s, or Oracle.
The CSI interface is open source; the image processing hardware that takes the data and turns it into actual images is not - it's run by a proprietary, undocumented blob on a proprietary, undocumented core. So you can interface with whatever camera you like, you just can't use it as a camera. Even that part wasn't open source until well over a year after they added the DRM chip.
Every one of those aftermarket cameras - the tiny ones, the big ones with replaceable lenses, the IR ones, the funny fisheye ones, all of them - works around this by using the same sensor as the official v1 camera and looking enough like it that the existing code will talk to it. This is precisely what the Raspberry Pi Foundation added the DRM chip to the v2 to stop people from doing. They can't do anything about the v1 clones, but they can stop anyone from doing the same with the better sensor in the v2, and they have.
(In theory the open source CSI driver is useful for non-camera hardware though - for example, there's one obscure third party board that uses this for HDMI capture. I think this may be the main intended purpose. It came too late to save the Kickstarter campaign a few years back promising such a board though.)
Is 1) really valuable? There's plenty of computer hardware you could put a fully open software stack on that's as or more powerful than an RPi and would otherwise just end up in a landfill or whatever. Open hardware would be significantly more interesting.
Yeah, I know. The people who stock the vending machine at work throw out 'expired' stuff all the time that I swipe. Your point? We absolutely should not be wasting this stuff, it's bad for the environment, and it's bad for our wallets.
There is merit to ecological reasoning, but sometimes people like nice things; and concern over the use of pi’s over existing arbitrary computers is so low on the list of ecological priorities that perhaps it’s better to simply recognize the value it provides to people who want to buy one.
No droplet feels it is responsible for the flood. These things add up.
I'm not saying Raspberry Pis don't have a place, I just think a lot of things people seem to want them for would be better suited by just reusing old hardware, which has the nice side effects of meaning that hardware isn't wasted.
Yes, given the recent discussions about how some projects were surprised that many companies are putting non-copyleft licenses to good use for their investors, while being fully compliant with what the license requires of them.
I believe the point was that some people don't think that viewpoint is fine. No one has yet taken ownership of that position, apart from possibly the great-grandparent.
As a previous line manager once said to me, "yes, and I want a pony". That they want to be more open does not mean they are currently fully open - in fact, it requires them not yet to be fully open. Their primary goal (last I checked) is to be cheap; being fully open is at best their secondary goal.
That doesn't follow. Firstly, I don't think I've seen any other SBC vendor fail to publish full schematics. This is a peculiarity of rpi's offering.
Secondly, nobody is forcing them to add DRM chips to anything. That's not something that can be written off as "a contractual obligation made me do it." They did it of their own free will.
The camera PCB was made by rpi for specific use with the rpi. The DRM chip is not part of the camera vendor's assembly. It is a separate chip placed on the PCB, therefore placed there by the PCB designer, i.e., by rpi.
> You can plug any CSI camera into the Raspberry Pi. There's even a kernel driver for the CSI receiver, which we paid for (https://patchwork.kernel.org/patch/9951525/). What you can't do is clone the official v2 camera module and use the default ISP tuning, which we also paid for.
That's a fair point. Perhaps when camera v3 is released they'll have the option of updating start.elf and removing the restrictions on using this hardware.
Well, it's optional in the sense that if you can find a way to use the camera with some board other than a Raspberry Pi then you don't have to use it. If you're doing the intended thing of connecting it to a Raspberry Pi, though? Not optional.
> Vendor: “Certainly. We sell $thing for ten dollars per unit if it doesn’t have DRM, or five dollars per unit if it does.”
Hah - sounds like the "boneless vs bone-in steak" sales model - take the bone out of the steak, and suddenly it costs more per pound than it does with the bone left in. Somehow, less costs more - and people sadly pay for it.
Citing the VideoCore part is just nonsensical. The Raspberry Pi is the only SBC out there with a fully open-source, production-level graphics driver.
> Secondly, why was rpi.org caught adding DRM chips to their optional camera addon board?
To stop counterfeit boards. There are many _other_ cameras that you can use with the RPi, so it's not like they are trying to block out competition. It's to stop people making illegal clones and ripping people off. (Cough - I'm looking at you, Amazon.)
Thanks to you and @kingosticks above for digging up these official accounts.
Cursory examination of rpi firmware suggests that this DRM logic is implemented in start.elf, one of the proprietary blobs required to boot an rpi. This is, in itself, significant, since opening this blob would obviously make this restriction easy to circumvent. In other words, rpi have taken design decisions which essentially motivate them not to open up these boot blobs, but rather use them as a chokepoint of control over the platform, see also [1].
From this, we can infer that rpi.org will probably never get rid of these boot blobs (as opposed to simply not having found the time/resources to do something about them).
You are right, they probably won't open the blobs if it means circumventing their own DRM, which I think has been shown to be used reasonably. The mpeg license has almost the same argument as the v2 camera, I think we have covered that.
But that aside, "as a chokepoint of control over the platform"? What platform are you specifically talking about?
I don't think they got rid of the blob, so much as moved it from the SD card to an onboard SPI flash more like you'd find on a typical PC. It's still there and even more required to boot the board than on previous Pis, but people distributing images no longer have to worry about the licensing and other headaches that were caused by having to include it.
There's a mainline driver that, according to their own statements, is now used for the GPU. Whether there's a firmware blob underneath it is a whole other story, and as long as it's not something that has to be distributed with the OS, I don't see how it should be treated any differently than, e.g., the AMD or Intel GPUs.
IIUC, you're saying that, for the Raspberry Pi designers, being open hardware is not only not a requirement, but there are actually requirements to be aggressively closed hardware in some regards?
I don't think one can paint attempting to stop counterfeit cameras (which were aggressively counterfeited) as being aggressively closed hardware.
Look, RPi have a clear goal: to get kids coding. Everything, and I mean _everything_, they do is to further that goal. One part of that is to have a platform that is safe, easy, expandable and _cheap_. A side effect of that is that it's mostly open.
Now, is it as open as, say, fabbing your own RISC-V SoC? No. But then that would cost a boatload of cash, and it would then be cloned and resold by third parties. Thus, no money for education. (Worse still, dangerous counterfeits might be bought by schools and cause injury, leading to legal action.)
So, being pragmatic, and noting that the ecosystem is far more valuable to me, and almost everyone else, than the hardware, I will accept this behaviour.
It's fine if those are their requirements, but that wasn't clear originally.
It seems that people who care about technology open standards, open source, and open hardware need a clear understanding of Raspberry Pi requirements, and that they're actually getting hardware that is more closed than a typical PC.
(Related: Starting over a decade ago, some IBM ThinkPad models infamously whitelisted mini-PCIe cards, so that only a small set of particular cards could be used, which was very unusual for PC hardware based on open standards. One of the biggest practical reasons to put Coreboot on those ThinkPads is to get rid of the awful whitelisting, so that people could use whatever cards they wanted, including using WiFi cards supporting later standards and working without having to download closed firmware blobs.)
It was always the requirement. It was designed as a machine to replace the BBC micro.
Most PCs have a BIOS, which is closed source and difficult to get access to. Yes, there are open-source alternatives, but they are not entirely practical.
Most modern PCs have a TPM, EFI and a whole host of other bits that make open-source drivers exceedingly difficult. Then there is the graphics card, where if you want decent speed it means closed-source drivers (I'm talking Nvidia/ATI; Intel doesn't really count, as they are adding GPUs as a value-add, not a core business).
Hard drives have closed-source and obfuscated software that actually does the writing and reading.
Then there is the out-of-band management that is shoehorned into a lot of Intel and AMD's kit.
So, to say that it is more closed than a PC is either ignorance or hyperbole.
Also, you have to remember that yes, the PC was based around open standards, but not open source. You still had to pay to get certified, or even get access to the spec.
Considering that you can attach any hardware either in the form of a HAT, or via the CSI, I think the argument is flawed. Again, there is nothing stopping people using a clone, if they so wish. But all the rpi clones appear to be mediocre at best.
Although PCs have traditionally had closed-source firmware, this firmware has not typically been used to implement malicious functionality (aside from cases such as the Thinkpad example above), making it historically less of a concern (though this is changing). Vendor lockin doesn't magically become acceptable because kids (nor is it ethical to teach kids that DRM is okay/normal).
Also, define rpi clone - rpi did not invent the SBC genre. Many more open products precede it.
> not typically been used to implement malicious functionality
So Raspberry Pi, in an effort to be more open than, say, Dell/HP (where you had/have to pay to download drivers, BIOS and other updates), is _more_ malicious than, say, Apple/Google, who are competing to create a walled garden with remote lockout, where nothing is owned, only rented?
ya.
I really don't understand this viewpoint.
Look, I've used embedded 386s. I've seen the evolution of ARM SBCs first hand. I've used Gumstix, Sheevas and custom-rolled jobbies. You know what united them all? They were bloody expensive and difficult to use. Pis are a joy to use. Plus, if you accidentally blow the board, you're not down £500.
Adding DRM so that people can't make knock-off cameras, in exchange for a £10 full-Linux SBC that is _easy_, is a worthy trade. Even if they weren't a charity. The fact that they are single-handedly dragging the pathetic excuse for IT education in the UK ("here is how to use Microsoft Office") into something approaching usefulness is a massive, massive bonus.
The ecosystem that Raspberry Pi supports - the OS, magazines, training, teaching, classroom outreach and research - can only be run with cold hard cash. That cash comes from Pi sales.
Don't get me started on DRM. Look I know you think that everything should be free, but that means I can't earn a living.
If you think teaching kids that copy/pasting someone's work, making a half-arsed cheap copy and selling it for loads of profit, with no support, is a good moral choice, you might want to reassess your world view.
Yes open source is good. Yes you should give back and contribute. But, to move forward, money has to be invested, and it has to come from something.
Yes, removing ownership from the people is exceptionally bad. The trend towards the "sharing" economy undermines workers rights to the point that we are entering digital feudalism. But that is another topic.
I don't own a smartphone/any Apple/Google products, or Dell/HP machines for that matter (not surprising to me that they're bad). My frame of reference is typical PCs, which aren't in the habit of applying vendor lockin with regards to what peripherals you can use - and other SBCs.
I don't understand the premise that not applying vendor lockin to a camera module would make SBCs unreasonably expensive. Competing SBCs are more open, many predate the RPi, and despite probably selling fewer units than the RPi (due to less brand exposure, not cloning) and thus having lower economies of scale, I've never seen one which costs "£500".
>Don't get me started on DRM. Look I know you think that everything should be free, but that means I can't earn a living.
I think this is clearly untrue from the success of other, more open SBCs.
>Teaching kids that copy/pasting someone's work, making a halfarsed cheap copy and selling it for loads of profit, with no support is a good moral choice, you might want to re-asses your world view.
But I never claimed that kids should be taught that. What I do claim is that kids should not be taught that it's ethical or normal/acceptable to try and use malicious functionality to institute vendor lockin, creating an artificial monopoly in compatible peripherals, because it isn't.
Moreover, it appears this DRM has had a chilling effect on third-party cameras with novel functionality (i.e., not competing for the same applications as the official camera), according to a comment posted above: https://news.ycombinator.com/item?id=20261787
So there's real evidence of harm done by this.
>Yes, removing ownership from the people is exceptionally bad. The trend towards the "sharing" economy undermines workers rights to the point that we are entering digital feudalism. But that is another topic.
It isn't another topic. There is a systemic trend of hardware vendors using firmware to advance and prioritise their own interests over those of the device owner, essentially using technological measures to undermine the first sale doctrine. Of course, this isn't illegal, though it ought to be.
> But I never claimed that kids should be taught that. What I do claim is that kids should not be taught that it's ethical or normal/acceptable to try and use malicious functionality to institute vendor lockin, creating an artificial monopoly in compatible peripherals, because it isn't.
I'm sorry, I didn't buy a Raspberry Pi for my little brother in order to teach him about vendor lock-in. I bought him a Pi so that he'd learn Python and explore the boundary between the electronic, software world and the real physical one that so often seems shrouded in mystery. I did not buy the Pi because I like supporting non-open companies- I bought it because A) it has a great value proposition, especially once you factor in the community support, and B) I like the Raspberry Pi people. They have hearts. They care. I understand them DRM'ing a $10 part when it's not crucial to standard operation (nor a lot of projects) and it's not like you can't just get an old PC and stick a webcam on it.
The way I see it, a Raspberry Pi is more than the sum of its parts- while commercial companies care about the chips and the complexity a solution like a Pi can remove, I personally bought a Pi for my brother to learn on.
In 2006, a Gumstix with Bluetooth was $180, but that was a very small system to dev for. Embedded 386s in volumes of 1 were ~£380, CF flash (certified) £80-200. Software was extra, as was the debugging interface, which may or may not have been a fancy serial port. Ethernet, RF: all extra. WiFi? Naaa.
The ARM boards - I don't know how much they cost, because they were custom made. Hundreds of thousands, I suspect.
Then we have the tools that supported it, not exactly kid friendly.
This doesn't add up either, since they seem to be effectively walling themselves into not opening up some of the software needed to boot rpis, which remains proprietary. See https://news.ycombinator.com/item?id=20261549
The assumption that they would prefer to handicap their main product because of some unknown deal they have on an accessory seems biased to me. The SoCs they use are Broadcom ones, a company known for being hostile to open source. Perhaps the fact that they have now sold over 25 million units of what are essentially their old designs that weren't selling anymore is opening their eyes to the money that can be made. The developer of the new open-source GPU driver is a Broadcom employee.
The point is that for some unknown to us reason they deemed it a good idea to add DRM on a side-project they have. But that doesn't mean they can't make a new camera at some point.
I agree that if rpi decides not to do camera DRM anymore, this motivation not to get rid of the boot blobs in the main product disappears.
Currently they are defending this practice, however. It logically follows that as long as they institute this policy of using DRM on the main product to prevent use of third party camera modules, the boot blobs will remain...
I think people misunderstand what this DRM is about. The Pi has a special connector for a camera. Anyone can use this connector to build their own camera module, and they can use the kernel driver that the RPi foundation paid for. But you also need to develop an app for it. The RPi foundation made their own camera accessory and an accompanying app for it. What the DRM does is only allow their own accessory to work with that app. So it's more like brand protection than market control. There are plenty of other cameras on the market that you can buy. Why they chose the camera, out of all the peripherals, to DRM I don't know, but the only reasonable place they can stick it is the blob that Broadcom provided.
Btw, the situation with the binary blob has been going on for years, and from what I've gathered, what happened is that the foundation members signed an agreement not to reverse engineer anything on the SoC when they made the first Pi. Since then they offered a prize for an open-source GPU driver but didn't follow it up with more competitions. So either Broadcom shut them down, or it paid off through internal pressure, since Broadcom paid for and provided the mainline driver used in this version.
> This stated intention and rpi.org's actions are simply not isomorphic. If they want to make Raspberry Pi more open, firstly, why do they publish only abridged (read: fake) schematics?
Why do people tend to dismiss the openness of the Raspberry Pi with completely unrelated things? This is a classic case of whataboutism[0].
Just because you can't replicate the entire product in your garage, it doesn't mean that it isn't quite open already.
I don't agree with the parent but it isn't an example of whataboutism.
He's taking issue with the stated aim of openness, with examples he feels aren't open.
They're possibly thinking about something like the Amlogic S922X. A 12nm SoC with quad A73 and dual A53, it's definitely more modern and more efficient, though "better" is pretty subjective. As implemented in the Odroid N2 [0], it's not cheaper at $79 with 4GB RAM, compared to the RasPi 4's $55 (or $63 vs. $45 for the 2GB versions). Hard to say whether, if they cut the board down to remove all the things the RasPi doesn't support and scaled up to RasPi volumes, they could match the price. If not, it'd likely be close.
Missing: an SD card performance benchmark. Previous models could only read about 20 MB/s or so, while modern SD cards can do 10x that (maybe even more).
Is there any improvement in this? It's pretty important as RPi usually boots off it.
"The Pi4B has a dedicated SD card socket which suports 1.8V, DDR50 mode (at a peak bandwidth of 50 Megabytes / sec). In addition, a legacy SDIO interface is available on the GPIO pins."
Finally! First RPi SD card performance upgrade ever.
Doubled, so I guess around 40 MB/s. Still a far cry from fastest SD cards that can both read and write over 250 MB/s. Hopefully RPi4+ will improve on this. :-)
People have spent however many years since the release of the last pi suggesting improvements. It's announced today offering basically everything that people were asking for, and you are already planning for the next one?
Huh? I'm not allowed to hope for future improvements?
I'm very happy for RPi4 SD improvements. Until now, RPi has had same lacking ~20 MB/s SD performance from the start. AFAIK, this is the first time ever there's been any hardware improvement in this regard.
I've had a microSDXC card with 90 MB/s reads and 80 MB/s writes since early 2014. Some current cards can read & write more than 250 MB/s, so there's still room for improvement.
As 95-99% of all RPis boot from (micro) SD and use it as primary storage, I'd say SD performance is a rather fundamental (often ignored) aspect of RPi performance.
Correction: it seems like RPi4 doesn't boot from USB or Ethernet yet; they will release a firmware update to turn it on.
"Support for these additional bootmodes will be added in the future via optional bootloader updates. The current schedule is to release PXE boot first, then USB boot."
My young son got a 3B+ for Christmas and it has provided no end of entertainment. He discovered Minecraft early on, and that has led to him starting to learn to code, so I think it was a great investment, probably the best ever.
What does he do with it? From what I've seen, most people throw these in a drawer instead of finding educational value in them. I'd be interested in anecdotes where it led to something positive and what approach worked.
You can edit posts but only within a limited time window, I think it's something like an hour. If you don't see an edit button you can't do anything about it.
I'm most excited about the modern A72 cores, upgraded hardware decode, and up to 4 GB RAM. They really listened and delivered what most people wanted in a next gen RPi.
HDMI to micro-HDMI is just a straight passive adapter - only the physical shape of the port is different, and feature support isn’t really a thing. In this case it’s obviously done because the HDMI connector is annoyingly bulky if you want two of them, though I’m surprised a stacked connector wasn’t a better option.
Stacking is risky, because the ends of cables might be unreasonably large, so two might not fit that close on top of each other. Also, HDMI cables tend to be pretty bulky; they might simply strain the board too much.
Sure, I can just buy an adapter, but then I have to keep track of the adapter and always deal with it hanging off the board, making the whole thing more awkward to handle.
You make it sound like they are being unreasonable, but just like USB-Micro to USB-C and X to headphone jack, it's a huge inconvenience when you can't find/don't have it to hand.
I used to lose the Micro to C adapters all the time, I lost the C to headphone for my OnePlus6T while travelling and was unable to find a replacement so no headphone use for me.
Also, I have about 12 Raspberry Pis, as many display devices, a dozen or so other HDMI devices such as consoles etc, and tens of normal sized HDMI cables lying around and they're all compatible.
You're not wrong, but it feels like an unnecessary problem to have, stacking a couple of full size HDMI ports would have been nice, or just putting a single port on since having two on a Pi is kinda unnecessary.
If I remember correctly (and that's a big if), DisplayPort started royalty-free but then they introduced royalties. From my blurry memory: I checked once many years ago and it was free, but then I checked again a couple of years later and it wasn't exactly free any more. Did it change again? Or am I completely wrong from the start? If someone remembers the history better than I do...
More common in monitors perhaps, but I don't think I've ever seen a TV with a DP input. Since AFAIK the target market of the RPi is "plug it into a TV, plug a cheap USB keyboard and USB mouse, and you have a working computer", having HDMI output is a requirement.
Exactly, you can solve the dual-display problem via daisy-chaining a couple of DisplayPort monitors instead. Using Micro HDMI was a bad decision in my opinion.
Gigabit Ethernet? I'm tempted to grab a few and try running Pi-Hole for my whole organization (~1200 users) and see how that goes. :) I know you can set up Pi-Hole on Ubuntu VMs. But there's something so attractive to me about running it separate from your hypervisors and closer to the core switch on very very cheap hardware.... Been curious about Pi-Hole on my bigger work scale for a while, this may very well tip me to trying it out.
Pi-Hole is a wrapper around dnsmasq, so you could just run that on your base OS with the same configuration files. Add gravity.sh if you want automated blocklist updating:
It shouldn't be a problem even with older Pis. Pi-Hole only answers DNS requests, which are comparatively tiny. The actual web traffic goes through your regular layer-3 network.
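If you do try it at that scale, a couple of scripted lookups make a quick sanity check that the box is answering and blackholing. A sketch assuming the dnspython package and a Pi-Hole at a made-up 192.168.1.2 (dig @192.168.1.2 doubleclick.net does the same job):

    import dns.resolver

    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = ["192.168.1.2"]   # placeholder - your Pi-Hole's address

    for name in ["example.com", "doubleclick.net"]:
        try:
            answer = resolver.resolve(name, "A")
            # Pi-Hole usually answers blocked names with 0.0.0.0 or its own IP.
            print(name, "->", [r.address for r in answer])
        except dns.resolver.NXDOMAIN:
            print(name, "-> NXDOMAIN (blocked)")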
This seems like it could start replacing cheap firewall hardware. I believe pfsense no longer requires aes-ni (?), so it seems like a good choice for that.
AES-NI for x86 CPUs was going to be required for the next release of pfSense (2.5), but that requirement was dropped when the API that necessitated AES-NI was pushed back from the 2.5 release. AES-NI will probably still be a requirement at some point in the future.
AES-NI won't be a requirement for ARM CPUs - Netgate sells first-party ARM-based pfSense appliances that have non-AES-NI hardware crypto acceleration that they've confirmed wouldn't be affected by the AES-NI requirement. Not sure how that applies to third-party ARM systems.
they're not a lot less expensive by the time you have an equivalent system.
The SG-1100 is based on the V7 ESPRESSObin board. A 2G version of that is $99 by the time you have a case, heat sink, eMMC and power supply. We sell them for $159.
A 2G RPi 4 with case, heatsink, and power supply is $76.30. You have WiFi, but no second (or third) Ethernet.
Specifically, I'd like to find software that makes the cluster appear as a single memory address space and N identical CPUs/cores.
When I'm in that OS/VM/kernel, whatever you want to call it, I want to be able to experiment with running stuff like Elixir or Go and have the runtime handle the virtual memory and cache coherency stuff between the nodes. I don't want to deal with any manual memory management at all. I just want rules of thumb regarding rough latency between N nodes and M routers for whatever network topology it uses.
Icing on the cake would be if I could run something like IPFS (or another hash tree) and have data distribution handled under the hood and appear as a single directory structure.
The goal being to play around with stuff like neural nets and ray tracing without having to use any proprietary frameworks. It should just appear as say a 256 core computer with however many GB or ram and however many TB of hard drive space I give it.
It provides MPI bindings in Go, but they really should have provided an MPI layer internally so that the Go metaphors of things like channels and goroutines "just work" with no special syntax.
I think this is where I'm getting stuck. So much cluster software provides interfaces to send/receive data and get the current thread's CPU id and total number of CPUs. But I'm not finding much info on doing this directly in the kernel or language runtime so that the client can be written in a topology-agnostic fashion.
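For the curious, this is the explicit style being described - ask for your rank and the world size, then send/recv by hand. A sketch with mpi4py rather than the Go bindings (assumes an MPI runtime and mpi4py installed on every node, launched with something like "mpirun -n 4 python work.py"). All the topology awareness lives in user code, which is exactly the part a single-system-image setup would hide:

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()   # "which CPU am I"
    size = comm.Get_size()   # "how many CPUs are there"

    if rank == 0:
        # Rank 0 hands a chunk of work to every other rank, then gathers results.
        for dest in range(1, size):
            comm.send({"chunk": dest}, dest=dest, tag=0)
        results = [comm.recv(source=src, tag=1) for src in range(1, size)]
        print("gathered:", results)
    else:
        work = comm.recv(source=0, tag=0)
        comm.send(work["chunk"] ** 2, dest=0, tag=1)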
> Specifically, I'd like to find software that makes the cluster appear as a single memory address space and N identical CPUs/cores.
This is called "single system image". A couple of coworkers played with openMosix on our desktops several years ago, it was fun seeing bash processes moving on their own from one desktop to the other. But I haven't heard anything about that in a while, it seems single system image clusters have fallen out of fashion.
It supports it in the sense that it provides the extra taps on the Ethernet magnetics for the actual PoE converter to connect to. That's all the onboard support they have, as far as I know - just the few extra connections required to feed the PoE hat which contains all the extra support electronics.
I needed to build a low-power NAS with ZFS, and the only board that could do the job (gigabit Ethernet, SATA IO, powerful 64-bit processor, 4GB of RAM) was the RockPro64. Sadly I discovered after receiving it that it can't run regular Linux distros out of the box. I had to download an image from some guy on GitHub. He seemed reasonably trustworthy, since it was linked from the manufacturer's page, but there's still a small chance that my NAS is part of some botnet now. That's something that can't happen with a major manufacturer like Raspberry Pi.
Damn, I read the article a bit too fast. Neither does the RockPro64, but it has PCIe and the manufacturer sells a SATA card.
A USB3->SATA adapter is OK if the USB bus is reasonably fast. I care more about being able to update my board and trusting what runs on it than about pure speed.
How exactly do you plan to connect hard drives to a Raspberry Pi? Use drives in external USB enclosures just lying around? Looks a bit hacky to me.
I am planning on doing a NAS too, but decided to forgo the low-power aspect and use older PC hardware. Not too old, though, as FreeNAS nowadays apparently requires 64-bit and 8+ GB of RAM for ZFS.
The book 'Kubernetes Up and Running' has a description of how to set up a Kubernetes cluster out of Raspberry Pis. I guess a setup with the new Pis would look a bit more real.
I have a PicoCluster 3S [1] kit I bought to make a clean Kubernetes cluster. I just ordered 3 4GB RPi 4s for it. I'm very excited. I use an ODROID-C2 as a home server for most tasks, and while the ARM cores are sufficiently fast, with some work I run out of memory and swap a lot. I will test to see if one of these could be a suitable replacement vs. an ODROID-XU4/XU4Q.
Does the 4 have a composite output, or is there a hat for it? My game room has a CRT and while I'd love to stick to original hardware, the price of getting it to read games from non-original media (which is getting harder and harder to come by) is pretty high. An RPi 4 could be reasonable alternative.
The RPi was already exceptional for its price point, and this version seems to address the few problems it had (lack of Gigabit, USB speed and RAM capacity) and add onto it even more features. It almost seems too good to be true.
The only issue I have with the RPi is the microSD lifespan. Of course you can make it read-only, but at that point I would pay $10 more to have even just 1GB of quality flash memory on board. I've heard microSD will always die at some point. Of course, you can also boot from USB.
I wish there were some improvements in the I/O aspect of the board. Having tried to use an RPi for a small automation project, I felt limited by the single ADC input and single PWM output. I was faced with using an Arduino daughter board to do the actual IO or going with a BeagleBoard.
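For what it's worth, the usual workaround for the shortage of hardware PWM channels is software PWM from the GPIO library - too jittery for servos, but fine for dimming an LED. A sketch with RPi.GPIO (pin 18 is just an example; this only runs on a Pi):

    import time
    import RPi.GPIO as GPIO

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(18, GPIO.OUT)

    pwm = GPIO.PWM(18, 1000)   # 1 kHz software PWM
    pwm.start(0)
    try:
        for duty in range(0, 101, 10):
            pwm.ChangeDutyCycle(duty)   # ramp the duty cycle in 10% steps
            time.sleep(0.5)
    finally:
        pwm.stop()
        GPIO.cleanup()

Anything needing analogue inputs still means an external ADC (an MCP3008 over SPI is the usual suspect) or, as you say, an Arduino or a BeagleBone.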
Does anyone know if it can boot from USB? I know one of the last models was able to after changing some firmware settings and the Raspberry Pi foundation mentioned adding better support for that in the future. MicroSD cards are just too prone to corruption to be running your OS off of.
"Support for these additional bootmodes will be added in the future via optional bootloader updates. The current schedule is to release PXE boot first, then USB boot."
Anyone have a good NAS setup guide using the Raspberry Pi? I've been wanting to get a Synology but balk at the crazy prices for such limited specs. I'm thinking a DIY solution will be much cheaper.
Not for the Pi specifically, but if you want a simple file server, then a Debian (Raspbian) Samba setup is pretty much done in a few minutes: https://wiki.debian.org/SambaServerSimple
Storage can be attached via USB 3, or you can just use a big SD card (maybe ~200GB).
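If it helps, here is roughly what that boils down to as a script (a sketch only: it assumes Raspbian/Debian defaults - /etc/samba/smb.conf and smbd under systemd - and a drive already mounted at /mnt/storage; the share name and guest access are placeholders). Run it as root and adjust to taste:

    #!/usr/bin/env python3
    # Sketch: add a public Samba share on a stock Raspbian/Debian box and
    # restart the daemon. Assumes /etc/samba/smb.conf and systemd's smbd;
    # /mnt/storage is a placeholder for wherever your USB drive is mounted.
    import subprocess

    SHARE = (
        "\n[storage]\n"
        "   path = /mnt/storage\n"
        "   browseable = yes\n"
        "   read only = no\n"
        "   guest ok = yes\n"
    )

    # Install Samba if it isn't there yet (no-op when already installed).
    subprocess.run(["apt-get", "install", "-y", "samba"], check=True)

    # Append the share definition to the stock config.
    with open("/etc/samba/smb.conf", "a") as conf:
        conf.write(SHARE)

    # Sanity-check the config, then restart the daemon to pick it up.
    subprocess.run(["testparm", "-s"], check=True)
    subprocess.run(["systemctl", "restart", "smbd"], check=True)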
The Raspberry Pi has an ARM processor, while macOS requires an x86_64 processor, so you could not build a Hackintosh. One could conceivably run iOS, as enough is understood to emulate the OS on similar hardware (see https://alephsecurity.com/2019/06/17/xnu-qemu-arm64-1/).
I'm using Rancher's k3s[1] on a Rock64, and it works perfectly. I maxed out its capacity recently and was looking to add more nodes to my cluster. Looks like the new RPi 4 would be a good addition!
Radarr, Sonarr, Bazarr, Plex, a small Nginx, and soon a Pi-hole and VPN. It's a great companion for an iPad Pro, so you can also use it as a development machine when coder.com finally releases an ARM Docker image! (https://github.com/cdr/code-server/issues/35)
Thanks! I was especially interested in the cooling situation with RPi, and the article you linked to seems to provide some valuable information & further links in this regard (section "Heating and Cooling": https://blog.hackster.io/benchmarking-machine-learning-on-th...)
Pros:
+ USB 3: Very nice! Finally a pocket-sized, fast USB host.
+ Dual HDMI: Could be useful as a projector computer.
+ 1.5 GHz: Good, it might be fast enough for some real work.
+ Gigabit Ethernet: Excellent for those using it as a NAS.
+ USB-C for power: Not surprising, it's the standard now.
Cons:
- MicroHDMI: Incompatible with the 800x480 HDMI 3.5" screen [1]. Also different again to the MiniHDMI on the Pi Zero (will there be a new Pi Zero soon? Who knows.)
- Power consumption! They recommend a 15W power supply, which means I'm pretty sure this won't run on batteries.
Certainly a good idea for stuff/locations that see regular writes; it would just be nice to use a more durable boot disk. Hopefully booting via USB storage will be well supported.
I stand corrected. Turns out it's implemented quite differently, and neither netboot nor booting from mass storage is supported on the Pi 4 yet. Hopefully we won't have to wait long.
Yes, there are adaptors, but it won't fit with the same loopback HDMI board that's included with the screen. It'll need a wire, which effectively doubles the space it'll take up on a desk.
If there are any other high-res small (<= 4") screens, please let me know - the highest-DPI one I know of is an iPhone 4S 960x640 3.5" screen with a Creotech adaptor.
Looks like the ethernet is connected directly through RGMII - meaning the USB 3.0 controller is likely completely out of the picture. That would mean you can run full throttle ethernet without affecting the USB speed at all.
Only if both drives are saturated at the same time. But since the context here is NAS usage, there's a max of 1 Gbps upstream/downstream in the first place, so you could comfortably run a four-drive mirror NAS off of this and not have any bottlenecks on the USB 3.0 -> SATA side of things.
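Back-of-the-envelope numbers for that (my own rough figures, not from the announcement; per-drive throughput is an assumption):

    # Rough check: what's the bottleneck for a USB 3.0-attached mirror NAS?
    # All figures are approximate assumptions, not measurements.
    GBIT = 1e9 / 8                      # bytes per second in 1 Gbit/s

    usb3_uplink   = 4 * GBIT            # one PCIe Gen 2 lane, ~4 Gbps shared
    ethernet      = 1 * GBIT            # Gigabit Ethernet
    hdd_per_drive = 150e6               # ~150 MB/s sequential, typical HDD

    drives = 4                          # the four-drive mirror from above
    disk_aggregate = drives * hdd_per_drive

    print(f"USB 3.0 uplink : {usb3_uplink / 1e6:.0f} MB/s")
    print(f"Ethernet       : {ethernet / 1e6:.0f} MB/s")
    print(f"Disk aggregate : {disk_aggregate / 1e6:.0f} MB/s")
    bottleneck = min(("ethernet", ethernet), ("usb3", usb3_uplink),
                     ("disks", disk_aggregate), key=lambda t: t[1])
    print("Bottleneck     :", bottleneck[0])

Gigabit Ethernet tops out around 125 MB/s, so the shared ~500 MB/s USB 3.0 uplink has plenty of headroom for NAS duty.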
I have several 4GB machines that are used in production (ZFS on Linux); however, it's not problem-free and requires lots of tuning, the right workload, and smaller disks.
Does it use ECC memory? If not, you should not use ZFS. Without ECC memory, ZFS carries the nasty risk of writing good data with a bad checksum, leading to data loss which would not occur in other filesystems (those that do not try to correct errors on the fly like ZFS).
As I understand it, this is just a myth. Here's a post [0] from Matthew Ahrens, one of the co-founders of ZFS who has remained active in its development:
> There's nothing special about ZFS that requires/encourages the use of ECC RAM more so than any other filesystem. If you use UFS, EXT, NTFS, btrfs, etc without ECC RAM, you are just as much at risk as if you used ZFS without ECC RAM. Actually, ZFS can mitigate this risk to some degree if you enable the unsupported ZFS_DEBUG_MODIFY flag (zfs_flags=0x10). This will checksum the data while at rest in memory, and verify it before writing to disk, thus reducing the window of vulnerability from a memory error.
> Actually, ZFS can mitigate this risk to some degree if you enable the unsupported ZFS_DEBUG_MODIFY flag (zfs_flags=0x10).
Ooooo, I hadn't heard about that one. I have one non-ECC box running ZFS where I might want to enable this. If the performance impact is negligible, it'd be worth trying.
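On ZFS on Linux it's exposed as a module parameter, so enabling it at runtime is something like this (a sketch only; the path assumes a ZoL build, the flag is explicitly unsupported, and you'd want an options line in /etc/modprobe.d to make it survive a reboot):

    # Sketch: turn on the (unsupported) ZFS_DEBUG_MODIFY flag on ZFS on Linux
    # by OR-ing 0x10 into the zfs_flags module parameter. Run as root.
    PARAM = "/sys/module/zfs/parameters/zfs_flags"

    with open(PARAM, "r+") as f:
        current = int(f.read().strip())
        f.seek(0)
        f.write(str(current | 0x10))    # preserve any flags already set

    print(f"zfs_flags is now {current | 0x10:#x}")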
> Without ECC memory, ZFS carries the nasty risk of writing good data with a bad checksum, leading to data loss which would not occur in other filesystems
Assuming a memory error somewhere:
On ZFS you'd be writing bad data with a bad checksum, which would be caught by ZFS later on.
On (most) other filesystems you'd be writing bad data and no checksum, and you'd be none the wiser until garbage comes back.
It's a myth that ZFS requires ECC memory; ZFS is safer when running with ECC, but without ECC it will still save your data in a lot of places where most other filesystems won't.
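A toy illustration of the difference (nothing ZFS-specific, just why an end-to-end checksum turns silent corruption into a loud error):

    # Toy example (not ZFS internals): a block is checksummed, then a memory
    # error flips a bit before it is written. A checksumming filesystem
    # detects the mismatch on read; a plain filesystem returns garbage silently.
    import hashlib

    def checksum(block: bytes) -> bytes:
        return hashlib.sha256(block).digest()

    data = b"important file contents" * 100
    stored_checksum = checksum(data)        # checksum taken over good data

    corrupted = bytearray(data)
    corrupted[42] ^= 0x01                   # simulated single-bit memory error
    written_block = bytes(corrupted)        # this is what actually hits the disk

    if checksum(written_block) != stored_checksum:
        print("checksum mismatch: corruption detected, read fails loudly")

    # Without data checksums there is nothing to compare against:
    print("plain filesystem returns good data?", written_block == data)  # False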
In the case of a traditional filesystem, similarly unfortunate memory corruption would just affect the actual data directly. Forgoing ZFS throws out the happy path where the checksum does catch a data error and repairs it transparently.
Edit: To add on to this, ZFS doesn't do parity-based repair at the checksum level. In the mentioned case (good data, bad checksum) no copy of the block will match the checksum and there will be no supposedly-good block to copy over.
On the other hand, it carries the advantage of being able to correct bad data (or at least detect it) if it writes bad data and a good checksum, which other filesystems won't do. Seeing as the data for a checksummed block is much larger than the checksum itself, this seems like a net win: assuming a uniform probability of memory errors for each byte of memory, an error is far more likely to occur in the data (though I guess it depends on the specifics of the implementation).
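Putting rough numbers on that (assuming ZFS's default 128 KiB recordsize, a 256-bit checksum field, and a uniformly random single-bit flip):

    record_bits   = 128 * 1024 * 8       # default 128 KiB recordsize
    checksum_bits = 256                  # 256-bit checksum field
    print(record_bits / checksum_bits)   # ~4096x more likely to hit the data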
I have the same setup on 2 pis, and even on a 3b+ I've found it's easier to just put a cheap little microSD card in to hold the bootloader than to try to get USB boot working.
The Pi 3B+ can boot off USB without any SD card. You can also do this on previous models: https://www.raspberrypi.org/documentation/hardware/raspberry...
But as you can read, enabling that is a permanent change via a one-time-programmable bit, and it removes the ability to boot from an SD card.
Though as you have the 3B+, you have none of that dilemma and it will just boot; I've done it myself.
Ah, because of the out-of-order instruction processing with SMP. Hadn't thought about that, but the speed bumps all round will more than absorb any OS mitigation overhead.
This was my first thought when I saw "out of order" in the post. Would have been fair to mention this introduces customers to a whole new class of vulnerabilities, but I guess that's not exactly promo material...
Going by the "3x faster than Pi 3" claim it should be roughly on par with desktop CPUs from around 10 years ago (Core 2 Quad Q6600, Athlon II X4 600e) or the modern quad core Atom CPUs like say the x7-Z8750.
I've been rocking a Core 2 Duo 6600 for the past 11 years, so I'll let you know, as that is what it will be replaced with. I've been looking at an ARM solution for a while; whilst the current 3B+ boards are good, the RAM limitation and IO speeds, as well as being a bit short on graphics grunt, did limit what they could do.
Whilst there's no SATA, the USB 3 should be enough, and the boost in memory alone makes desktop replacement utterly viable.
There are solutions out there with SATA and even PCIe slot(s), but support is a factor, and with Raspberry Pi you have a support base that tips the balance. After all, extra features with bugs, versus fewer features plus solid support that deals with any bugs in a timely manner and a user base that can put plenty of eyeballs on an issue? That's priceless, as it will save you so much time, hassle and stress.
Geekbench puts the Q6600 and existing A72 SoCs (that are clocked a bit higher than the Pi 4) directly on par at ~1500 single core score but it would certainly be interesting to see some detailed real world benchmarking between them.
I wonder why they don't price the 4GB model at $50. And get rid of the 2GB Model.
I know they are a charity, but the pricing structure seems odd. Could it be that the 2GB and 4GB models are making some money and the 1GB is a loss leader?
I wonder how much is the actual BOM Cost for the $35 model.
And in terms of VideoCore, how does it compare to other commercial GPU like Adreno and Mali?
Nice upgrade. I chuckled at the pricing of the 1G, 2G, and 4G models at $35, $45, and $55 (at least in the US), and that they are pushing it as a desktop. They might start looking like a computer company if they aren't careful.
At some point I think they really should consider offering an official case or something :-).
So on one hand, I get the excitement. This is a very popular platform that has a ton of community support and does a bunch of stuff. On the other hand, it isn't great at anything (besides the community - which shouldn't be underestimated, sure). It's also using a pretty proprietary chip - if you build a project off of a Pi, it's hard to migrate to a custom PCB.
As a NAS, you probably would want ECC and a bunch of SATA ports (I'm using a Helios4 for this purpose). You can't use it as a router / pfSense without a second gigabit ethernet. As a media box, you'd want a SATA port and more display out options (and maybe beefier GPU).
But on the other hand, everyone including me has one. (I got one for flashing some SPI chips with Coreboot BIOS). Perhaps the versatility is the killer feature.
I think that aside from versatility and community, the other big thing these have been great at is long term availability. Just look at how many other Pi "killers" have come and gone pretty rapidly, or never ended up getting good OS support.
I will say, I suspect that for a media box, USB3 storage may be good enough.
That is a third of it. There is little it does massively better than other options; in fact, for everything I can think of there are better options, but it is powerful enough to do most things well enough.
Another third is the cost. There are not many options with a similar price/utility ratio, particularly when you count support (see point three).
The third third is support: up-to-date Linux builds supporting the hardware (a common complaint with other devices is old and/or buggy drivers that are a faff to build), community size & momentum, commercial add-ons, ...
> As a media box, you'd want a SATA port
Not for a media display box, which is what my currently active pair are used for. Local media storage is on the network in a box hosting many drives and doing other jobs too, and other media is remote anyway. And if you are using something for storage you want multiple SATA ports (I can't be the only one paranoid enough to apply RAID1+ to anything intended to survive the month!).
The Pi 3 (and 2, for that matter) does admirably as a Kodi box, though it struggles with x265 (720p is fine, though it drops frames on some encodes; 1080p is sometimes surprisingly OK in the winter but causes the thermal throttle to kick in after a while when the ambient temperature is higher), so I'm quite interested in the fact that the 4 seems to support this in hardware. I'll be keeping an ear open for news that Kodi can use the hardware decoding for that codec (and whether it handles the commonly used format options well, not just the baseline).
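If you want to watch exactly when that throttle kicks in, a quick loop like this works on stock Raspbian (the thermal sysfs path and vcgencmd are the standard ones there; run it over SSH while Kodi is playing):

    # Print the SoC temperature and the firmware's throttle flags every couple
    # of seconds, e.g. while Kodi chews on an x265 stream. Raspbian paths.
    import subprocess
    import time

    TEMP_PATH = "/sys/class/thermal/thermal_zone0/temp"

    while True:
        with open(TEMP_PATH) as f:
            temp_c = int(f.read().strip()) / 1000.0

        # Per the Raspberry Pi docs: bit 2 = currently throttled,
        # bit 18 = throttling has occurred since boot.
        out = subprocess.run(["vcgencmd", "get_throttled"],
                             capture_output=True, text=True).stdout.strip()
        flags = int(out.split("=")[1], 16)

        print(f"{temp_c:5.1f} C  throttled_now={bool(flags & 0x4)}  "
              f"throttled_since_boot={bool(flags & (1 << 18))}")
        time.sleep(2)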
> router / pfSense without a second gigabit ethernet
You can use an external device, but yeah, that does wreck the nice small form factor somewhat, adds to the cost, and you have the hassle of finding a reliable, well-supported one. Though the main complaint I've seen about using simple SoC systems like this as a router (and one of the reasons I've not got around to trying it myself yet) is not that 100 Mbps is a limitation for most home users (anecdote: I have ~76 Mbit down, ~17 Mbit up, and I know few here with much better) but that they don't have the oomph to keep up with that level of traffic, especially in both directions, with any degree of extra processing (i.e. being a VPN endpoint for a chunk of that traffic).
Wow, they solved one of the big problems with Raspberry Pis; the Ethernet was terrible.
For me there is now another very important one for things that stay connected 24 hours a day: cooling. There is a need at least for aluminium cases with cutouts for WiFi.
Right now only Chinese vendors make those, so it takes a long time to get them.
> The BCM2835-based chip in Raspberry Pi 1 to 3 provided just one native USB port and no Ethernet, so a USB hub on the board provided more USB ports and an Ethernet port. The 3B+ added a dedicated LAN chip, which gave it Gigabit Ethernet, but this was limited to USB2 speeds. The Pi 4 has dedicated Gigabit Ethernet, and because it's no longer throttled over USB, its networking speeds are much faster.
"The Ethernet controller on the main SoC is connected to an external Broadcom PHY over a dedicated RGMII link, providing full throughput. USB is provided via an external VLI controller, connected over a single PCI Express Gen 2 lane, and providing a total of 4Gbps of bandwidth, shared between the four ports."
Every year we upgrade our autonomous sailboat controller to the latest Raspberry Pi. This year we face a choice between the Jetson Nano and the RPi 4. Even though the decision is hard to make, now is an exciting moment for robot makers.
Interesting that they added dual HDMI but no hardware clock. I don't think I've ever owned a device with dual HDMI outputs. Gigabit Ethernet will be great; I might finally get around to setting up a FreeNAS server.
Depends entirely on how you want to use it. You're not going to be opening Slack, but it's more than plenty for a KODI player, home server, VPN server, seedbox, piHole, robot controller, home automation, etc.
Hi, thanks for running this. It's helpful information. Something strange is that in your cpuinfo the SoC is detected as BCM2835, while in all the spec sheets online it's supposed to be BCM2711. Do you know if you are possibly working with a different version of the hardware?
One reason this is important is that some of us in this thread are trying to work out what the video decoding and 4K HDMI capabilities are with this new hardware. In particular, the specs say the BCM2711 is supposed to be a VideoCore VI SoC, but your dmesg is showing that the vc4 driver blob is being used.
If you could add any information that would help that would be awesome!
I bought an NVIDIA Jetson Nano (4GB memory) because I liked the Raspberry Pi so much, but the Pi 3 does not really work as a development machine. The Nano does pretty well with the addition of a fan, swap space, and a good power supply (Adafruit). The BIG problem with the Nano is that it is arm64, and the availability of arm64 Linux things (like Docker images) is limited.
I ordered a 4GB Pi 4, which seems to have advantages, and I assume the availability of arm64 images will skyrocket?
Last year VMware demoed ESXi on ARM. With the Raspberry Pi getting 4GB of RAM, that would be usable. Hopefully they release it; it would make a great low-power homelab.
I really don't need to upgrade from my rPi3 since I only use it as a Pi-hole, but I'm considering a rPi4 just for the fact that it's going to be an insane performance increase over the 3. Not to mention the move to 28nm might make it run cooler/etc as well. Maybe I can find a different use for the 4, who knows. At this kind of price, it's a no-brainer, though
I'm a bit disappointed that they added type-D (micro) HDMI instead of just two more USB-C ports. If I need to use a dongle anyway (since I can't just plug in normal HDMI), why not USB-C?
Preferably I'd like a version with four USB-C ports, one normal HDMI, and two normal USB ports. That way I could have it set up with just USB-C, but could still plug in legacy connectors when needed.
MicroHDMI-to-HDMI is passive, as mentioned by another poster. The issue here is the software: two standard HDMI (electrically) ports can be really well supported by all the software, meaning it can all be multi-monitor friendly. USB-C video would need extra hardware support, so it wouldn't get the same software support; see the issues with HDMI-to-VGA when the first Pi came out.
As an aside, they probably went with microHDMI because the connectors are smaller (and hence cheaper in cost and board area) and put less strain on the board from heavy connectors. Lastly, many Android tablets have the microHDMI connector, so they're easy to get hold of (in the first world, anyway).
I'm happy to see the 4GB RAM and better overall performance now available for a mere $55. I admin over a dozen RPis at work for signage and sometimes they barely keep up, but as a non-profit, we don't have the budget for much else. We can afford the new Pis, however, so I will likely be upgrading them this year.
I'll likely buy one for home to replace my aging Pi-hole.
We use Screenly. It's rather basic and managed from a cloud account that can see the devices. There is no SSH or other "admin" access beyond the Web GUI that allows you to name/rename/remove and group screens based on their onsite locations for common signage. We've had issues with them and I'm looking for software to replace Screenly that will run on Raspbian or other Unix-like ARM OS.
I'm working on this. I have a solution about ready to launch, and I have a question for you: what do you wish you had SSH to do? I decided not to include it for security reasons; what is not available in the web GUI that you would find useful?
> Full-throughput Gigabit Ethernet
> Two USB 3.0 and two USB 2.0 ports
Does that mean no more shared bus, and full bandwidth for Ethernet and USB?
> We’ve moved from USB micro-B to USB-C for our power connector. This supports an extra 500mA of current, ensuring we have a full 1.2A for downstream USB devices, even under heavy CPU load.
For each USB port? Also, does this mean USB-C is soon to be ubiquitous?
I fell for one of those videos last week... Didn't notice it was April fools until I went back this morning, confused. Guess I need to stop letting Youtube pick the videos I watch.
Great!! What about a GPU upgrade? I have a client with a project which only 'kind of' works on a model 3B; he needs about 2x the GPU encoding capability. Is there a chance the model 4 will be able to encode a single full HD (1080p/30fps) stream, or two 720p/30fps video streams, into H.264, even on the easiest encoding settings?
Comparing the shipping prices across the official retailers, Chicago Electronic Distributors was the cheapest at ~$6 whereas CanaKit and Element14 were both more than $10. Also bought a USB-C charger for it since the various old phone chargers I have don't supply enough amperage to the board.
For the 4GB model, my very usable if somewhat low-end IBM Thinkpad ~15yrs ago had about 1/10 as much memory. Probably similar on the processor—I bet one core on this is something like 3x as fast as the single-core Celeron I had in that thing. Video capabilities are so much better it'd be hard to even compare them. IO's much faster—5400RPM spinning rust on that laptop, the SDCard's probably a lot faster even, let alone the USB3. The only reason this thing might not make an excellent desktop these days is all the bloated web shit we use.
So all those many hundreds of thousands of VM hosts out there deployed in production, running their hypervisors from USB must be failing left right and centre... or not.
Too bad the USB-C slot is only used for supplying power instead of being a proper Thunderbolt slot with support for acting as a USB device or attaching external PCIe cards. But well, that's a thing for the 4+ then I guess...
It says the USB-C connector supports On The Go (OTG), which means it can act as both a host and a device.
I could imagine being able to power the RasPi4 and communicate with it over Ethernet using just a USB-C cable if the Raspberry Pi 4 supports USB Ethernet 'gadget' mode, which I'm guessing it will if that USB-C port is fully USB OTG compatible.
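The usual recipe for that on the Pi Zero is the dwc2 overlay plus the g_ether gadget module; whether the Pi 4's USB-C port exposes OTG the same way is still unconfirmed, but the setup would presumably look similar. A sketch of the Pi Zero-style version (stock Raspbian boot files, run as root, then reboot):

    # Sketch: enable the USB Ethernet "gadget" the Pi Zero way. Whether this
    # carries over to the Pi 4's USB-C port is an assumption at this point.
    CONFIG = "/boot/config.txt"
    CMDLINE = "/boot/cmdline.txt"

    # 1. Enable the dwc2 USB controller overlay (idempotently).
    with open(CONFIG) as f:
        config = f.read()
    if "dtoverlay=dwc2" not in config:
        with open(CONFIG, "a") as f:
            f.write("\ndtoverlay=dwc2\n")

    # 2. Load dwc2 and g_ether early in boot (cmdline.txt is a single line).
    with open(CMDLINE) as f:
        cmdline = f.read().strip()
    if "modules-load=dwc2,g_ether" not in cmdline:
        cmdline = cmdline.replace("rootwait",
                                  "rootwait modules-load=dwc2,g_ether")
        with open(CMDLINE, "w") as f:
            f.write(cmdline + "\n")

    # After a reboot the Pi should appear as a usb0 network interface on the host.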
Hm yeah, many people are still very critical of USB-C. (It obviously takes some time to become mature and well-adopted...) But I'm totally buying into it; all my new gadgets have USB-C and I totally love it.