The short of it seems to be that Linux used to perform floating point emulation on MIPS by writing instructions to run to the usermode stack. The stack therefore had to be executable. This is known to be a bad idea.
A couple of years ago, this code was moved from the stack to a new segment, but this segment is still writable and executable, and the address is fixed. This is known to be a bad idea.
And of course the stacks are still executable because the commonly-used compilers haven’t been updated to request non-executable stacks.
The paper does not propose a solution.
MIPS never really got out of its gate-count-sensitive niche, the kind of market where an FPU is anything but a drop in the ocean.
Ubiquiti and a few others still use older Cavium Octeon network processors that are MIPS-based and lack built-in WiFi, which is driving their obsolescence. However, I think 2.5Gb/5Gb Ethernet will eventually push those products to the newer ARM-based Octeon processors, leaving MIPS very dead in the networking world.
Why is this necessary? I see no reason why you can’t emulate floating point arithmetic in software with no writable and executable segments at all.
MIPS floating point support requires that any instruction the FPU cannot execute directly be emulated by the kernel. Part of this emulation involves executing non-FPU instructions that fall in the delay slots of FP branch instructions. Since the beginning of MIPS/Linux time, this has been done by placing the instructions on the userspace thread stack and executing them there, since they must be executed in the MM context of the thread receiving the emulation.
I.e., the kernel needs to find somewhere in the userspace process to put the emulation code, because that code has to run in the context of the userspace process (memory access permissions, etc.). Around 20 years ago, a convenient place the kernel could write to in a userspace process was the stack, so it pushed things there, and nobody realized that was weird. Historically the stack was executable everywhere. When we realized that was a bad idea (exploits can put shellcode on the stack) and stopped doing it on most architectures, we weren't able to stop on MIPS for this reason.
If we were rearchitecting this from scratch today I bet there would be something like a vDSO for this, i.e., the kernel maps a read-only executable page at a random address when the process starts, and when you try to execute a floating-point instruction it sets some registers (or pushes data and only data onto the stack) and updates the instruction pointer to point to that page. But the simplest fix to the existing architecture was apparently to specify a non-stack area for the kernel to use, which still involves a W|X segment, but at least it isn't the stack.
This is what I was getting at with my question. There is a way to do this; sure, branch-delay slots are annoying but they’re not impossible to work around.
> But the simplest fix to the existing architecture was apparently to specify a non-stack area for the kernel to use, which still involves a W|X segment, but at least it isn't the stack.
I mean, you protect yourself against trivial buffer overflows, but a good write primitive is still enough to get code execution. It’s more of a band-aid than an actual fix.
NX stacks are a comparatively new idea, from an era when even embedded CPUs have lots of spare cycles. It hasn't always been that way.
The lesson I took away from this isn't about architecture, it's that Linux on MIPS has a pretty severe shortfall in maintainer bandwidth for this to have gone unnoticed and unfixed. I mean, the first question in review of this patch should have been "OK, who's going to fix the stack permissions now?"
The issue is that the emulated FP code can include FP branch instructions, and those FP branch instructions have branch delay slots, and the instruction in the branch delay slot can be an ordinary integer instruction like a load or store - that has to be done with user permissions - and it can even be an instruction from some weird extension to the ISA that the kernel might not have heard of.
It's executing those branch delay slot instructions, in the branch-taken case, that uses this W|X page.
That's just a guess though, I read sideways through the paper and I don't think they really explain that. They link to this page but it doesn't really give any details: https://www.linux-mips.org/wiki/Floating_point#The_Linux_ker...
In particular I'm not sure why you'd want to put it in the stack instead of some allocated page dedicated to that endeavor (besides "it was already there so we used it").
On the other hand, my Nest thermostats were bricked after a software update this week, so maybe today I'm starting to see a crack in my "auto update is best" dogma...
It works best when the vendor can be trusted to only push security updates and occasional quality of life improvements. In general, automatic updates tend to be a vector for bloat, user-hostile features (e.g. spyware), and user-hostile business practices (e.g. remote bricking).
I am not aware of any, except—somewhat ironically—for some open source projects.
None. Hence, personally, I dislike auto updates.
For me that crack came when an automatic update to Android removed Exchange server support from my tablet (around 2015). I no longer had a good workflow for keeping up with work communication, which ultimately, as a sometimes-remote dev at the time, cost me a lot of productivity and reputation at that job.
Now, anything that involves my productivity or quality of life (e.g. thermostat) is on a manual update process as much as possible.
I've tried playing around with DHCP but nothing changed; maybe it's my ISP's router, which is a PITA to work with... (Vodafone).
If you've got less square footage (or don't need top speeds everywhere), want to spend less money, and want something really easy to set up, try Ubiquiti AmpliFi. Here's Troy's writeup on that: https://www.troyhunt.com/how-i-finally-fixed-my-parents-dodg...
Personally, I am amply served by a quad-core HP thin client ($70 on eBay) running pfSense (w/ Intel server NIC, $25 on eBay) and a Ubiquiti UniFi UAP-AC-LR ($75-90 used on eBay). My living space is 1500 sqft. Internet speed is 150Mb/s up and down, and I get that speed over WiFi nearly everywhere in the house. Total cost was less than $200.
But I was rather surprised by the lack of almost any configuration options in the UniFi. No dedicated WebUI, just a few basic configuration options (SSID + WPA2 password) in the mobile app.
Do I need to procure a Windows machine and download the Windows app to configure and take full advantage of the AP? What extra options can I set up with the desktop app in addition to the basic configuration options in the mobile app? As said though - it works really well as it is.
There is a dizzying amount of information and options in the web UI, but it's designed well so the basics are well-presented and accessible.
The UniFi line is rather easy to configure with the UniFi Controller. The EdgeMAX line can use UNMS (which can run in a VM or Docker) for configuration, as well as HTTP(S). EdgeMAX devices are much more powerful, allowing fine-grained configuration via SSH when HTTP(S)/UNMS doesn't suffice. For the UniFi line, you should be fine with the UniFi Controller software.
TL;DR: The UniFi line has a lower barrier to entry.
Something else of note: it appears Ubiquiti wants users to be able to use UNMS for UniFi products in the future. That's good news, because right now you need two separate controller applications for two product lines from the same company.
I gather they have a bug bounty which is a good start, but so do Netgear and their routers are still full of bad vulns.
The real irony of how vulnerable these devices are is that they're often based on FOSS that already has fixes for those issues, but it's been carefully packaged up inside a black box the consumer doesn't get to control, and therefore doesn't get those updates.
Once again, why we need a "right to root".
For national security!
Apparently you’ve not really looked at EdgeOS. https://www.theregister.co.uk/2017/03/16/ubiquiti_networking...
Vyatta was acquired by Brocade in 2012. Ubiquiti forked after that.
The internet-facing side of my router does nothing interesting (I haven't enabled the onboard VPN, remote management, etc.) and I trust everyone with access to the LAN side of my home wifi not to attempt to exploit my router in the first place. That said, I do wonder how much attack surface is exposed to websites making permissible cross-domain requests (blind GETs from image loads, blind POSTs from <form action="http://192.168.0.1">, etc.).
I'm personally running OpenWRT on an EspressoBIN with an Atheros card, but I keep eyeing the Turris pretty hard.
Could even encourage security by using read-only flash, only a reboot is needed to clear any viruses.
To clean up the odd PCIe behavior on the Espressobin with mainline 4.17, there are 6 kernel patches required. Arch and OpenWRT both already ship them, and I'm in the process of getting them in to Buildroot. I'd expect they'll land in the mainline eventually.
edit: sorry; 3 PCI patches. The other 3 mitigate other board/chip oddities.
Likewise, OpenWRT (and similar open-source router firmware) is a big step up in quality from pretty much anything any router manufacturer ships.
Here's the thing... <rant>
You and I -- and, I'd wager, the overwhelming majority of HN readers -- are easily capable of replacing our stock firmware, locking it down, keeping it up-to-date, and so on. Unfortunately, the average person (who likely just buys one of the cheapest routers they can find on the shelf at Walmart or Best Buy or similar) isn't.
The average consumer simply doesn't understand that many of these devices they buy -- especially all of the new "Internet of Things" devices that have been popping up the last few years -- are completely insecure pieces of trash. Hell, many of them don't even care -- well, until it directly affects them personally, at least -- so long as "it works". They have no desire to learn a ton of stuff about computing, networking, or security -- they just want the ability to monitor what's going on in their house while they're away or whatever -- and they cannot understand why they should be required to (and I, personally, don't blame them). They don't know about the whole "convenience versus security" continuum or just how far away from the "security" side of that continuum that these devices they're buying to make life more convenient are.
The average consumer (rightfully) expects that these devices that are available for them to purchase and install in their homes are (reasonably) "secure". They simply aren't aware of the sorry state of (in)security in the software industry.
I think that within the next few years we'll begin to see (in the U.S., at least) some regulation with regard to security and software. I don't think any of us really WANT this to happen (it would be much better if the industry were "self-policing", of course) but it has become apparent that those who are producing these devices simply aren't going to devote the resources required to improve the security of their products until they are forced to. </rant>
(Related: for the last seven years (until very recently) I worked for a small ISP. I was amazed at how very little many of the employees -- including the ones responsible for all of the networking gear! -- knew or even cared about security. With the exception of myself (and a recent college graduate who we hired as, basically, my "junior") nobody even thought about security unless or until "something happened" that required them to. Having experienced that, it became clear to me that the average person REALLY isn't gonna give a damn.)
I really care about security, but I think it's mostly because the aesthetics of insecure code bothers me, and not because of a carefully considered cost-benefit analysis.
If $VENDOR has a reputation of being hacked, it might deter people from buying their products.
I hope a new round of BrickerBots continues killing unsupported, incredibly insecure devices. It is a public service (albeit a felony, so don't get caught).
I personally like the summaries. Papers are typically written to sound smart, not to read nicely. HTML scales, users can configure the font, you can click pictures to enlarge them... but instead we choose PDFs. Why? I speculate it's because everyone else who sounds smart does it. This particular paper is not as bad as most (no serif font, no 8pt text, no small unreadable diagrams, no bare references instead of practical links), so this definitely isn't all directed at it, but I do question the usefulness of linking to the longer text instead of to a useful summary that links to it.
GP is right in that the first 15 words of the tweet told me more than the whole 269 word blog post. I also recognized the author on Twitter but not on the blog. I'm happy the tweet is still linked in the comments.