Raspberry Pi Connect (raspberrypi.com)
199 points by vquemener 58 days ago | hide | past | favorite | 78 comments



Had to dig to the bottom of https://www.raspberrypi.com/documentation/services/connect.h... to find confirmation, but it's using wayvnc. So it's a little funny to me that they only support Wayland; x11vnc is easy to use.

EDIT: Actually for that matter I can't see why this is arm64-only; AFAICT everything it needs should run fine on 32-bit. I wonder if this is just a minimal first version?

Also, I can't find any confirmation - is all of this open source or not?


All else equal, targeting and supporting one thing is easier than two, so limiting your target to (wayland, aarch64) makes sense just in terms of resource allocation.

Secondarily, I would wonder whether older/32-bit devices might end up being too resource-constrained, leading to a bad customer experience (driving more support load) and/or making development too costly.


It would need to be tested and properly integrated with x11vnc. As the goal is to move away from X11, it doesn't seem likely that they would invest time into that.

Besides, if you have x11, you can still use RealVNC.


> It would need to be tested and properly integrated with x11vnc.

True, although if you already have wayvnc that's like a day of work; AFAICT x11vnc and wayvnc are nearly drop-in replacements.

> As the goal is to move away from X11, it doesn't seem likely that they would invest time into that.

It may be the goal, but is it the present? All the articles I can find about the pi transition to wayland say it's the default for the pi 4 and 5 but not anything else. Which is consistent with

> First of all, Raspberry Pi Connect needs your Raspberry Pi to be running a 64-bit distribution of Raspberry Pi OS Bookworm that uses the Wayland window server. This in turn means that, for now, you’ll need a Raspberry Pi 5, Raspberry Pi 4, or Raspberry Pi 400.

but raises the obvious point that supporting X11 would be helpful to support earlier models. (Along with 32-bit support, which I also am surprised at them not including.)

> Besides, if you have x11, you can still use RealVNC.

And if you use wayland, you could just use wayvnc directly. The whole point is to wrap it up in a nice package and add just enough central server to deal with connectivity.


X11 is legacy, and the faster we bury it, the better. A display server that predates Perl and Windows 2.0, and has some of the worst code quality in the world, has no place in the future of Linux desktops.


Getting rid of something just because it's old does not seem to be a valid justification. Sure, it's great to rewrite something and make it better, but unless the new thing supports all of the legacy display devices, modes, and protocols, you'll lose something when you "bury" the legacy project.


Well, how about the following justifications:

1. It's ancient, and was made for the graphical requirements of computers from a time before Windows 2.0 even hit shelves. Go back and look at Windows 1.0 - that's the kind of graphics this was made for.

2. Almost nobody understands the code. The contributors have openly said they are probably the only dozen people who could ever work on it.

3. Those contributors hate the job, and have basically abandoned Xorg since 2018, with only one minor release in 2021. Xorg is nowadays abandonware; it still works, but it's abandonware all the same.

4. X11's design was feature creep from the very beginning. At one point it even handled printing, via the Xprt print server, before that part was ripped out once CUPS existed. The fact that it tried to be many things at once, then was "simplified" into "only" being a graphics server, has caused the code to be abysmal [https://www.x.org/archive/X11R6.9.0/doc/html/Xprt.1.html].

5. Nobody knows how many security vulnerabilities are in X11. When you have a decades-old codebase in C, anything can happen - especially when for most of the time, it was never fuzzed or testable. In 2013, just one security researcher found over 120 bugs in just one part of X11 (GLX) [https://media.ccc.de/v/30C3_-_5499_-_en_-_saal_1_-_201312291...]. Just last year, two major security bugs were found, both dating back to February 1988 [Note 1, https://lists.x.org/archives/xorg/2023-October/061506.html].

6. Criticism of Xorg as an extremely flawed design is not new, nor does it only stem from modern hardware. Even in 1994, scathing reviews were well known. https://web.archive.org/web/20091111071410/http://www.art.ne...

It's time to move on. Yes, some things will be lost. That's the cost of progress. We lost the ability to run 16-bit DOS programs decades ago. Ironically, X11 is older than many DOS programs.

[Note 1] Living proof that "open source" does not necessarily mean "more secure," especially when the source code is so complex that "security by obscurity" becomes the actual security strategy. 35 years is older than many people in this forum.


> Those contributors hate the job, and have basically abandoned Xorg

In fairness to X, Wayland contributors seem to hate their jobs as well.


One more thing, even though this is going to be a controversial point:

Some might say, "But Wayland breaks XYZ, or can't do XYZ!"

I'm just going to quote Adam Jackson, who was the project owner for X.org at Red Hat:

"I'm of the opinion that keeping xfree86 alive as a viable alternative since Wayland started getting real traction in 2010ish is part of the reason those are still issues; time and effort that could have gone into Wayland has been diverted into xfree86."


Controversial is underselling it. "Just break things for users to try and get them to give up the working thing and move to our half-baked replacement" is everything that's wrong with Wayland.


I think you're missing the point. Xorg would have broken in every way Wayland has, and far worse, had it not been consuming all of the resources that should have gone into Wayland. Xorg is like a 1987 Hyundai Excel held together with duct tape, being replaced with a 2008 Toyota Yaris. With people complaining the Yaris has smaller cargo space, so we clearly need 7 more rolls of duct tape.


I don't think so; Xorg would have (and arguably has in fact) broken in a completely different way than Wayland. Xorg sucks in that 1. its underlying model of display hardware doesn't really map to how modern computers/GPUs work, and 2. its entire protocol has 40 years of cruft. Wayland sucks in that it declared everything beyond drawing pixels an optional extension, and then took 16 years to implement enough extensions to actually compete with X on features (hence my "half-baked" dig). Or more succinctly, X sucks on the backend, Wayland sucks on the frontend. In your analogy, X is the 1987 Hyundai Excel, and Wayland was born a motoped - fast, fuel efficient, and useless as a car replacement.

What I firmly believe the X devs should have done (with 16 years of hindsight) is put all their initial effort into replacing the graphics backend while keeping Xorg for users - basically, make rootful XWayland the only Xorg server on Linux - in order to quickly burn a lot of the parts that were painful to maintain with minimal impact to users. Then, if the rest of X really needed to go, they should have written all the protocols needed to implement at least GNOME and KDE before releasing anything, so we didn't spend years on stupid "we have 3 different incompatible screenshot APIs" games. Instead, they shipped a minimum "viable" product and got upset when users didn't want to switch.


Now we have ageism for software too, eh? Is "vi" legacy? Is "ls"? Do we need a modern electron-based directory-listing application?

It is not "old"! It is "complete", "settled", and "stable". Not everything needs to be replaced and rewritten just because it is old.


If you ask an Xorg contributor, Xorg is anything but "stable" or "settled." A more correct analogy would be "collapsing under its own technical debt."

"Programming X is like reading one of those French philosophers where afterwards you start wondering whether you really know anything for sure." - Thomas Thurman (GNOME developer)

"Wayland wasn’t designed to be a drop-in replacement for X11 any more than Linux was designed to replace Windows. Expectations need to be adjusted to reflect the fact that some changes might be required when transitioning from one to the other." - Nate Graham (KDE developer)

"Three people on this earth understand X input, and I wish I wasn't one of them." - Daniel Stone (Freedesktop Project)

"Let me summarize every wayland discussion on the internet: I'VE SEEN A WINDOW SYSTEM SO I KNOW HOW THEY SHOULD WORK PAY ATTENTION TO MEEEEEE" - Adam Jackson (former Xorg project owner, Red Hat)

And the classic 1994 criticism:

"If the designers of X-Windows built cars, there would be no fewer than five steering wheels hidden about the cockpit, none of which followed the same principles -- but you'd be able to shift gears with your car stereo. Useful feature, that." - Marcus J. Ranum, Digital Equipment Corporation


Sounds like X11 needs its own equivalent of LibreSSL. It looks like the will is there (many supporters of X continuation), but not the manpower.

In days of yore (when Windows 3 was young), there were even commercial X servers (Hummingbird, I think it was), so plenty of other people have implemented the protocol.


I'm sure it will only be a matter of time before somebody builds an open-source app for this service and turns it into a general-purpose remote desktop service. Hopefully the RPi Foundation doesn't get so much illegitimate traffic that they have to shut this down, as it's a neat idea.


Apache Guacamole is actually FLOSS, and RustDesk as well; while not widely known, they already offer such a service: Guacamole as a web desktop, RustDesk as a classic desktop-sharing app.

But honestly, we do not need this paradigm; we need file sharing and syncing in a classic, real desktop paradigm. Remote desktops are a new take on the dumb-terminal-and-mainframe model, useful for people who can't really work remotely.

Just as a personal experiment I've tried a different distributed enterprise work model:

- employees receive a new desktop at home (not a laptop), with empty storage media AND a USB stick with a self-installing encrypted live distro; they get the key via other media, with various options like "after you get the iron, write down the serial on it and we reply back with the key," or direct paper mail, and so on;

- they set up their work desktop on their own desk, plug in the USB stick, and boot for the first time. The live image auto-installs and offers a recovery desktop environment with SSH (reverse proxy) and remote desktop (RustDesk, for instance), so in case of trouble they can receive support even at this stage;

- they boot their newly installed system and it starts syncing relevant data from company servers;

- they work locally as much as possible, syncing data back to the company as frequently as possible; of course, certain datasets can't be local because of their size etc., but most users do not work on such large/high-bogomips stuff, and the rest is typically some web app. Local systems are full-disk encrypted and demand a smart card and its PIN to log in;

- a spare machine can be delivered next business day and similarly deployed; the broken one, being full-disk encrypted, can be sent back without issue. Data sync keeps the employee from running away with valuable stuff. Oh, yes, nothing can stop them from copying company data and doing nasty things with it, but... it's not really different in an office. If you can't trust your employees and still need to give them complete, usable data, there is no IT protection; you can only act at the human level.

Well, the above is a simple, dumb paradigm, but the purpose is just to show that:

- we need to have IT match company structures/human life

- we need to be resilient, not creating more SPOFs

- it can be done with what we already have; it's more a matter of habit than tech


They’d still need to host servers somewhere.


With WebRTC, aren't the servers just used for initial handshake/signaling? The rest of the traffic would be P2P so not a lot of capacity needed except for the initial handshake.


FTA:

> "Our intention is that Raspberry Pi Connect will remain free (as in beer) for individual users with non-relayed connections, with no limit on the number of devices. We don’t yet know how many people will need to relay their traffic through our TURN servers; we’ll keep an eye on the use of bandwidth and decide how to treat these connections in future."

So yes, it is possible to provide servers only for the initial connection (negotiating the best direct path between server and client); but they expect some non-zero percentage of connections to need to route everything through their servers, and open-source software that doesn't offer a relay would therefore not be able to support that portion of usage.

edit v2: my original edit was too verbose to be worth keeping, I think, so I'll just write a TLDR of the idea below and if you want to read my rambling 600 word elaboration that I'm too lazy to rewrite more concisely it's here: https://pastebin.com/67iQQvtC

A dynamic DNS service of some kind could be responsible for making sure the client can always reach the server's latest public IP, with the following options:

1. DDNS hosted on server by the open source tool (i.e. abandoning the no server needed hope)

2. Using an existing free DDNS service (potential trust issues)

3. Users providing their own domain for DDNS, hosted by a domain provider with API functionality for changing A records so that the new remote desktop tool can do that directly

4. Finding a domain provider who offers such granular API access that the open source tool could own a single domain, and allow many people access to update over the API their individual subdomains, or sub-subdomains (but even if you can find one that technically offers this kind of API access, they might not be happy with someone paying for a single domain and then letting thousands of users all have individual API keys sending updates any time their public IP changes...)
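Whichever of those options you pick, the client-side logic is the same check-and-update loop; here's a minimal sketch, where the three callables are placeholders for a hypothetical provider API rather than any specific service:

```python
def sync_ddns(get_public_ip, get_dns_record, update_dns_record):
    """Push the current public IP to the DNS A record, but only when it changed.

    The callables stand in for whichever DDNS option is chosen, e.g.
    get_public_ip via a STUN query or an echo service, update_dns_record
    via a provider's A-record API. Returns True when an update was pushed.
    """
    current = get_public_ip()
    if get_dns_record() == current:
        return False  # record already up to date; avoid hammering the API
    update_dns_record(current)
    return True
```

Run something like this on a timer (cron, systemd timer) on the Pi, and the remote-desktop client then only ever needs the hostname.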


They offer a tunnel when connections cannot be established. But beyond that someone has to host it. Why not RPi foundation? I have no reason not to trust them, especially when they make the device itself.


> Our intention is that Raspberry Pi Connect will remain free (as in beer)

So not free as in Freedom, no source code?


I asked about self-hosted relays on the forum [1]. Right now it doesn't look like they have plans to open the source code behind the actual service. Not sure if that's a 'no, never', or a 'we didn't think about it', but it would be good to ask on the Pi Forums maybe.

Otherwise... the service uses wayvnc for the Pi server part, and you could replicate the setup with TigerVNC easily enough over local LAN or through a VPN. The Pi Connect service is the hosted backend; not sure what they're using there.

[1] https://forums.raspberrypi.com/viewtopic.php?t=370380


Apparently not.

.deb file: http://archive.raspberrypi.org/debian/pool/main/r/rpi-connec...

It contains two Go executables in its /usr/bin path: rpi-connect and rpi-connectd.


How is beer free? I never understood that


It makes more sense to state it as "Free as in free beer", as opposed to "Free as in free speech":

https://opensource.stackexchange.com/questions/620/what-is-t...


Free as in "free beer": you don't own the rights to the beer, you don't have the ingredient list, but it costs no money.


> You don’t own the rights to the beer, you don’t have the ingredient list

But a beer recipe is open source; anyone can brew their own. It's a really bad analogy; it should be free Coca-Cola or something.


The idea of "free beer" is that if I'm giving away free beer at my establishment during an event, there are restrictions around that free beer. I'm not gonna fill up a tanker truck for you, I'm gonna kick you out if you start trying to resell it, I'm gonna cut you off if you've had too much, you can't get any if you're underage, etc, etc, etc.

It's free, but you can't do anything you want with it. Really it's free to drink on my terms - and that's certainly "free", but it's not "freedom" (as in free speech).


The recipe for soda is also open source - anybody can make their own carbonated soft drink. But I think if someone offered you "free soda" it would be pretty clear that they are offering you a specific soda whose recipe you almost certainly don't know, not the umbrella concept of "soda".


Coca-cola is a type of soda so... I think you're agreeing with me? Kind of hard to tell.


"Free soda" and "free beer" are analogous, if that helps.

Or, since it seems to need explaining, the point is that "beer" is not one thing with one recipe and if somebody offers you "free beer" it is pretty obviously a specific kind and batch of beer.


You hang out with me? You get free beer.


Can someone ELI5 for me how STUN and TURN work to make peer-to-peer happen? I get basic web protocols, but peer-to-peer stuff has always been a little confusing for me.


STUN is a way of getting through NAT, via a technique often called hole punching.

What I mean is: you don't have a public IP; you likely reach the internet via a router. That router is stateful and allows traffic destined for some other internet address to return to you, even though your device is not technically routable on the internet.

So, what a STUN server does is give each party the information needed to initiate connections to the other; that allows traffic to go through each of your routers.

    CLIENT1 <-> STUN    // (what ip/port combo is needed for CLIENT2 ;;; there is nothing in the table)
    CLIENT1 <-> CLIENT2 // (initiate a connection attempt that will fail, but will be remembered by the stateful NAT/firewall for return traffic)
    CLIENT1 <-> STUN    // (CLIENT1's incoming info for CLIENT2, this combo will only work for CLIENT2, so it requires CLIENT2 to ask about it)
    CLIENT2 <-> STUN    // (what ip/port combo is needed for CLIENT1 ;;; information is now in the table and will be fetched)
    CLIENT2 <-> CLIENT1 // (direct connection based on previous incoming connection attempt *from* CLIENT1)

NOTE: this is not required for ipv6; this is a hack we needed to bypass NAT because we ran out of ipv4.

TURN is the same idea, but instead of coordinating a peer-to-peer connection, it routes traffic via itself; it's just a neutral relay.
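For the curious, the wire format involved is tiny. Here's a hypothetical Python sketch of building a STUN Binding Request and decoding the XOR-MAPPED-ADDRESS attribute a server would return (layout per RFC 5389; this is illustrative, not the code RPi Connect or any STUN library actually uses):

```python
import os
import struct

MAGIC_COOKIE = 0x2112A442  # fixed value from RFC 5389

def build_binding_request():
    """Return (message, transaction_id) for a STUN Binding Request.

    Header layout: type (0x0001), attribute length (0), magic cookie,
    then a 12-byte random transaction ID -- 20 bytes total.
    """
    txid = os.urandom(12)
    return struct.pack("!HHI12s", 0x0001, 0, MAGIC_COOKIE, txid), txid

def parse_xor_mapped_address(value):
    """Decode an XOR-MAPPED-ADDRESS attribute value (IPv4 only).

    The server XORs your reflexive address/port with the magic cookie
    so NAT boxes can't rewrite it in transit; we XOR it back here.
    """
    family, xport = struct.unpack("!xBH", value[:4])
    if family != 0x01:
        raise ValueError("only IPv4 handled in this sketch")
    port = xport ^ (MAGIC_COOKIE >> 16)
    ip = ".".join(
        str(b ^ m) for b, m in zip(value[4:8], struct.pack("!I", MAGIC_COOKIE))
    )
    return ip, port
```

Send the request over UDP to a STUN server's port 3478 and it answers with your public IP/port as seen from outside; that's the "information about how to initiate connections" mentioned above.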


Excellent explanation—thank you. Great example of a handshake too.


Device A sends a request to the STUN server. The STUN server responds with the public IP address, port, and other NAT details that it is able to see. Device A forwards this info to device B, and periodically sends keepalive packets so the connection remains active. Device B is now able to hit device A's public IP/port directly (A's router/firewall already has a mapping for that port, so the packets are let through).

If the NAT is more restrictive then a TURN server can act as a middleman to relay the packets between device A and device B.
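That direct-when-possible, relay-when-forced behavior is essentially WebRTC's ICE candidate selection. As a toy sketch (the names mirror the real ICE candidate types; the weights are illustrative, not the spec's priority formula):

```python
# ICE-style preference: a host (direct/LAN) path beats a server-reflexive
# (STUN-discovered) path, which beats falling back to a relayed (TURN) path.
PREFERENCE = {"host": 3, "srflx": 2, "relay": 1}

def best_path(working_candidates):
    """Pick the most-preferred candidate type that actually connected."""
    return max(working_candidates, key=PREFERENCE.__getitem__)
```

So a TURN relay only carries traffic when it's the sole candidate type that worked, which is why RPi Connect expects only some fraction of connections to hit their relay.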


> we establish a secure peer-to-peer connection between the two using WebRTC

So it's a Pi Foundation hosted STUN. Basically a Zerotier clone, neat.


ZeroTier does way more: from what I read, RPi Connect uses WebRTC to wire up a VNC server and client. That sounds more like remote desktop than an L2/L3 overlay (like ZeroTier/Hamachi/Tailscale/etc.)?


Ah yeah you may be right, they do say it's only from a web browser. Makes sense why it's webrtc then, and not something less convoluted.


Reverse SSH on a $4/mo VPS should give you something similar, under your own control, too.


> At the moment, the Raspberry Pi Connect service has just a single relay (TURN) server, located in the UK.

Wow, one relay server! A company of RPi's size should be able to do better. The problem is that the latency might render it useless; just use WireGuard.


Note, this is a beta version of the software; as such, it doesn't have all the functionality and has purposely been limited.


Has anyone found the source code for this tool?


It's based on wayvnc and noVNC with WebRTC transport instead of Web Sockets. The WebRTC transport layer has not been made public.


This is nice from a usability perspective; for me, the remote access problem has been solved by WireGuard/Tailscale.


I know that a Raspberry Pi is not where you run projects that take massive resources, but does the VNC protocol give good enough performance? I was really hoping they had come up with a solution that works very well on the Pi, which of course would mean it runs amazingly on desktop machines.


I'd love a dead simple, secure, in-browser way to push through NAT and SSH into a Pi.


Not sure what you mean by in-browser, but Tailscale meets the first two requirements: https://tailscale.com/download/linux/rpi


Enter a link and get a terminal emulator on that page, already logged in to your Pi.


You can use this c3rl.com/roxy


> choose “Sign in” to get started

That implies tracking. I'm not touching it.


Does this have any advantages over Tailscale?


Looks interesting.

  * Does it not work on Raspberry Pi Zero W?
  * Can I install and set it up in Pi Imager?


Not on the Zero W or Zero 2 W at this time. From what I've heard, they intend to add it to Imager so you can pre-configure it when flashing the OS, but I'm not sure when. They would also need to have the package installed in the default OS image before doing that.


Both a shame.

Can this even be set up on a Pi over SSH without a graphical view of the Pi?


Yes; in my blog post and in the docs, the headless setup method is mentioned. Assuming you have a GUI-enabled installation of Pi OS, you can SSH into the Pi and install rpi-connect, then run:

    rpi-connect signin

This will give you a URL to copy and paste into a browser. Authenticate with your Pi ID, and then the Pi will be connected.


Ah great. Thanks Jeff.

Do you think Raspberry Pi Zero Ws might get support eventually?


All I need is SSH to a remote RPi.


Same here, I only ever see any Pi graphics during the initial setup.

I guess the use case here is something like a remote workstation easily accessible from any device.


You can configure a lot of the initial setup in the Raspberry Pi Imager flashing tool, so even initial setup doesn't need graphics or video out.


Yeah; I think this service is more targeted at beginners / GUI users, not at people running Pis remotely hosting little services. For those of us who do that, a self-managed VPN or Tailscale is the better option.


I think I may be old-man-yells-at-cloud-ing here, but maybe making products to solve all these use cases for a beginning user is just what an educational tool like raspberry pi shouldn't be doing. Inconveniences like this are what really spurred on my own education in operating systems and networking, making me ask "ok, well, how can I do this myself?"




Well this is just cool as can be!


People complaining about the Raspberry Pi being too expensive versus a used and abused Intel NUC being superior are going to swarm this thread in 1..2..3!

On an unrelated note, this is a pretty cool service, considering how much effort it takes to properly set up a VNC server together with noVNC and nginx in reverse-proxy mode, configured with LE TLS certificates, a local firewall, and a port forward on the router.


> People complaining about Raspberry Pi being too expensive versus a used and abused intel NUC being superior going to swarm this thread in 1..2..3!

You're the only one talking about it... but I mean yeah used x86 boxes do tend to beat a pi at price, performance, and software compatibility, and only lose at power consumption (unreliably, at that) and physical size.


> only lose at power consumption

Not even that. The J4125 box I have runs circles around my Pi4, was only 40€, and has lower power consumption (though at those levels the difference is mostly meaningless). The size is a huge difference, but as they are both in the living room cupboard (the Pi4 now runs Proxmox Backup Server), that doesn't really make a difference in my case.


There's a reason I said the Pi only won power consumption unreliably :) It's very dependent on workload (the Pi idles low, but under load that doesn't go as well) and the exact competition (x86 includes boards that happily run <10W, and also machines that idle in the tens of watts :]).


But the trouble is they are too expensive. Not sure I would buy a NUC, but there are tons of gently used/new overstock thin-client machines from Lenovo (ThinkCentre) and Dell (OptiPlex) that cost $75-200 depending on spec and give you as good or better bang for the buck, more reliability, etc.

(I would, however, carve out an exception for unique applications of the RPi Zero, given the form factor.)


At the risk of continuing this off-topic thread — the Zero 2 W is probably the sweet spot for value for the Pi right now, in terms of what you get for the price, and the utility of the device. (I'm setting one up as a PiSCSI emulator for my old Macs this week!)

Pi Connect doesn't work on the Zero 2 W though ;)


Do you know why that is? The Zero 2 W (and Pi 3) support the 64-bit version of Raspberry Pi OS. Don't they also run Wayland? Not enough RAM?

EDIT: I should have taken a minute to check the Pi OS release notes:

> Desktop now runs on the Wayfire Wayland compositing window manager on Raspberry Pi 4 and 5 platforms; on X11 using the openbox window manager on older platforms


Yeah, I think it's just a resource thing, maybe on the GPU side.


Still cheaper than NUCs, and they usually come new with a warranty, a much smaller form factor, and lower power requirements.


Browse some government auctions near a university and you can buy dozens of older Dell OptiPlexes for $200.


They have older CPUs, that consume more power, support limited RAM, and have power supplies that may fail anytime.


> But the trouble is they are too expensive.

Are they really?


I think we can all agree that instead of using a Raspberry Pi, you should use a computer you found in a dumpster somewhere.



