ORWL – The first open source, physically secure computer (crowdsupply.com)
342 points by kungfudoi on Sept 29, 2016 | 184 comments



Having some physical security in an OSS-hacker-compliant form factor is really quite nice. This is not going to replace a proper HSM, and it's almost certainly a less secure place to store your data than an iPhone; but it's a good start for those unwilling to give up on (the performance of) PCs.

It's worth noting that QubesOS, which is supported by this system, protects against e.g. USB-based attacks by running a virtualized Linux for just the USB port (simplified). This has its limitations, but should be pretty decent.

If at all possible, try to ensure that this device is powered down when an attacker gets it. Several attacks are easier if that's not the case (e.g. USB-based attacks, but also cold boot attacks on the encrypted disk - the security monitoring should trigger when one opens the case, but if an attacker can still extract your disk password from RAM before the RAM fades, you're in trouble...)


ORWL will go into standby if the user is farther than 10 meters from the device; if moved while the user is away, it will shut down. If the hardware is tampered with, or chilled, the SSD encryption key is deleted within milliseconds. The iPhone and other consumer products currently have less, or no, physical protection. ORWL's level of physical protection is taken from the payment industry standard and applied to a consumer device. Scanning of RAM contents, EM scanning, and side-channel attacks are all covered in ORWL's secure section by the Maxim secure controller. Nothing is impossible, but we tried to make it really, really hard to get hold of the SSD encryption key and what we refer to as the Root of Trust. Qubes is really helpful with many other attack vectors.


If you guys succeed, you should think about making phones.


We are actually talking about this. Thanks for your interest.


If you made such a phone I would buy it. I hope you guys have a great success because it seems like an awesome product.


Alternatively, you could consider http://neo900.org/.


If I can get a word in on this subject, I'd like to say this:

If Android is stripped completely back to minimum and rebuilt fully from source, it's arguably a trustworthy platform, but who has the time to basically do the equivalent of Gentoo on their phone, potentially weekly?

Even if the binary blob problem was miraculously solved, a rooted android device is basically a sitting duck (https://www.reddit.com/r/netsec/comments/3hr9f0/i_am_john_mc...), but an unrooted device isn't sufficiently hackable (flexible) to ensure its continued security (ie, unrooted + depending on a central service for Android updates = no thanks).

I personally don't equate "Android" with "secure" in any way, shape, or form; I consider the platform practicably unsecurable.

I was just doing a bit of thinking. How about: use the 100MHz secure processor to run Linux or some other open-source lightweight kernel, and add a second CPU (something run-of-the-mill but decent, 1GHz+) that the first one can switch on and off. Both CPUs can see the GPU (running a low-res display, to make it easier, and cheaper), controlled only using open drivers, and the CPUs arbitrate for control of the GPU.

I see two software use cases for such a model.

First, you could use the 100MHz secure processor to actually run the phone. The resulting UI would be pretty basic, but open and verifiably secure (this is not currently possible with any other device AFAIK, and would get you a noteworthy demographic). You could use the 1GHz+ secondary CPU in lieu of hardware GPU decode - as in, you set up the fast CPU with libx265 on a unikernel, and feed it data via DMA. That sidesteps the blob problem, and lets people securely chat via/watch video.

Second, you could use the secure processor to do basic system tasks (again, providing a minimal UI), and provide an option to boot Android on the second processor. Caveat emptor, but that would cater to the people who only want to go so far, and all on the one piece of hardware.

Hmm. A modem is just straight CDC with no weirdness, right? Also, are there cellular-class Wi-Fi chipsets with open drivers out there?

I absolutely envisage a secure phone as a secondary device. I might not want it in my possession all the time. Under certain circumstances it might make sense for me to do a lot of activity on another phone so I do generate decipherable noise. I might want different/unusual notification policies (e.g., maybe calls shouldn't even vibrate under certain circumstances).

The above is just me in stream-of-consciousness mode - but I've personally wanted a truly secure communicator for a very long time, not because I actually have anything to hide, but because I find the idea of being able to achieve near-perfect security (in particular, secure boot) really compelling.

I can understand why x86 was the only viable solution for the desktop, and kudos for just going ahead and making the effort with that design. I think that for mobile communications, being able to send and receive simple text messages using a secure hardware design that's running carefully vetted software would just be really, really cool.

PS. Host USB would be incredibly useful. I really like your idea of having the port lock down though.


what are you doing about offline attacks against the SSD crypto?


Offline, the SSD is protected with AES 256-bit encryption. Simple brute-force attempts would take WAY too long to make any sense. Does this answer your question?
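For what it's worth, the "WAY too long" claim survives a quick back-of-the-envelope check. The guess rate below is a deliberately generous hypothetical (a made-up trillion-keys-per-second cluster), not a real benchmark:

```python
# Back-of-the-envelope: expected time to brute-force an AES-256 key.
# GUESS_RATE is an assumed, deliberately generous attacker capability.
KEYSPACE = 2 ** 256
GUESS_RATE = 10 ** 12                    # hypothetical keys tried per second
SECONDS_PER_YEAR = 365.25 * 24 * 3600

# On average you find the key after searching half the keyspace
expected_years = (KEYSPACE / 2) / GUESS_RATE / SECONDS_PER_YEAR
AGE_OF_UNIVERSE_YEARS = 1.38e10

print(f"Expected search time: {expected_years:.2e} years")
print(f"That's about {expected_years / AGE_OF_UNIVERSE_YEARS:.2e} "
      f"ages of the universe")
```

Even with absurdly optimistic attacker hardware, the expected time stays dozens of orders of magnitude beyond the age of the universe, so offline brute force of the key itself is a non-issue; weak passphrases feeding the key derivation are the realistic target.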


AES-256 ! Excellent.


Why is it potentially less secure than an iPhone? Attackers can't scan the RAM because of the enclosure, whereas they might do so on an iPhone, correct?


Correct me if I am wrong, but if that was possible, wouldn't the FBI have had a lot less of a challenging time getting the data off of that guy's phone?


They didn't have a challenging time, they just paid some hackers to do it[0].

What they wanted was to set a legal precedent[1] which would allow any law enforcement agency to force manufacturers to subvert the security of their devices on demand.

[0]http://www.reuters.com/article/us-apple-encryption-fbi-idUSK...

[1]https://www.theguardian.com/technology/2016/feb/25/fbi-direc...


> almost certainly is a less secure place to store your data than an iPhone

How can this be assessed when we don't know the code that runs on an iPhone?


He covered himself by using the word "almost". :)

I agree with you, I'd choose to rephrase the quote to:

> most uncertainly an iPhone is a secure place to store your data


In a rare event, I actually like what I see here. They've clearly studied prior designs in high-security space, likely HSM and smartcard mitigations. The mesh enclosure strategy was adopted by older HSM's. There were potential bypasses that led to even more features, esp membranes and radiation sensors. The best ever made, per Ross Anderson's team of talented IC breakers, was IBM's 4758 whose protections and potential attacks are described here:

https://www.cl.cam.ac.uk/~rja14/Papers/SEv2-c16.pdf

Best route is just to clone that thing somehow. IBM themselves have already deprecated it in favor of a new product. They might still try to patent-sue you or pull some other crap, but the worst case should be Chinese clones becoming available after the new design is published. :) The designers of ORWL should try to copy more of the IBM device's techniques to close the gap between the two.

Far as design itself, I like that it's relatively simple, leverages a secure IC, easy to disassemble, will allow low-level modifications like firmware, and can run standard software. The next step will be a model that replaces the Intel chip with OpenSPARC, OpenPOWER, or RISC-V multicore with added components for trusted boot or I/O protections. Some are available with some coming online. Next step is using crypto to protect confidentiality & integrity of anything leaving SOC boundary so RAM is untrusted. There will be a lot of money involved for initial development and prototyping of even the first, open chip. So, I understand if they're taking it one step at a time. That's cool as long as they keep the advertising honest about risks they're keeping in for compatibility, etc.


How do they deal with the intel management engine in all intel chips? https://libreboot.org/faq/


FTA:

This project is about having a standard, physically secure computer that anyone can use – as open as we can make it. All these concepts are important, and they mean that x86 and flawless out-of-the-box Windows support are not optional. There are reasons everyone is using x86, even in the security community and in governmental agencies around the world: compatibility, performance, and security. Make no mistake, some of us own Yeeloongs, and others are veterans of the silicon industry. We would love to ship a completely free and usable desktop processor, but we know very well that there is no alternative. Some people seem to think that switching to AMD can solve problems related to Intel’s Management Engine (ME), microcode, or SMM. It doesn’t, as there are equivalents of these technologies in all recent x86 processors.


That's all well and good, but they should not advertise it as secure if it's not.


There's no such thing as "secure", and frankly complaining that anything that has better security than regular products shouldn't advertise as such is ridiculous.

How else is there any progress when the community's just pulling everyone down with this "it's no good if it isn't completely perfect" crap?


This product makes full disk encryption a bit more convenient, but that's about it. Even that turns something you know into something you have, which you could argue is easier to coerce someone into handing over.

The parent comment is right to point out this computer has a fully functioning Intel ME, running its secret, unaudited, possibly backdoored firmware on a co-processor which runs even when the machine is switched off, and can interact with the rest of the system undetected. Any "secure" system with this foundation isn't really secure.

IMHO a product which focused more on this (like libreboot laptops) could make a stronger claim to doing something for security than this one.


I respect your opinion on this. We sought to build hardware that is a significant step up from what is available on the market today in terms of access control and tamper protection. We also open-sourced the BIOS and customize it as much as we can afford to, to minimize the ME's capability, thanks to Eltan's help. Secure cannot be an absolute state; it can only be temporary, until the first person finds a way in. Execution of non-auditable code is what we deal with on nearly every machine on the market. We are minimizing this, while still staying compatible with people's use models. We are as open and transparent as we can legally be. In our plan, this is only one step in the direction of taking back control of the machines we all are working on and want to trust completely. https://www.orwl.org/wiki/index.php?title=File:SowDESIGN-SHI...


[flagged]


> Frankly I'm not sure it is any better than regular products.

Care to elaborate?

> It still has a backdoor baked in.

True, but this does not explain or validate your previous statement. You are saying: "A has these pros and this con, B has the same con but none of the pros, hence A and B are the same". This sounds like a poor argument to me.

> Keep on downvoting because you disagree, though.

I believe that you are being downvoted because your last two comments are poorly articulated.


They advertise it as "physically-secure" in the text. I didn't watch the video. They explicitly call out many risks, why they're there, and what they do about them.


We have built a list of hacks that we addressed and how. Please feel free to read up on all of these and more in our Product Description in the section "Potential attacks prevented" : https://www.orwl.org/wiki/index.php?title=File:ORWL_PRD_v0.6...


It's secure, "trust us".


In order for the Management Engine to really do much, you need to have a network card that the management engine knows how to talk to. If you don't have such a network interface, the ME can't do all that much, and any adverse security risks are near zero. Add to that things like the firmware write line being controlled by a completely separate microcontroller, and the big things that are discussed are completely infeasible. I'd be much more worried about the lack of microcode updates that libreboot users shun in the name of "freedom".


In order for the Management Engine to really do much, you...

...have to trust Intel's publicly available documentation. In other words, this has to be taken on pure faith that it does exactly what they say it does in exactly the way they say it does it, and no more.

The problem with ME is that it consists of unauditable code that could be doing literally anything on the computer, completely transparent to the user. Furthermore, even if there is nothing untoward happening when the chip leaves the fab (again, must be taken on faith), a documented threat is having hardware interdicted, modified, and sent on its way.


Interdiction could happen with open source hardware too. Swipe the SoC, and no one would be the wiser. Given everything that has to be in place for vPro/the ME to work, looking at the PCB would give enough information to tell exactly how much information, if any, it could steal, if all the malicious pieces were in place. All of which could be easily undone by reflashing it, because the write line on the BIOS is not controlled by the CPU. A lot of steps have to be taken, physically and in software, in order to exploit it, and that exploit would melt away on the first BIOS flash. I'd be much more concerned about the security of the software running on the system than any theoretical hack on the ME.


>Swipe the SoC, and no one would be the wiser.

It would have to get swiped on the way from the manufacturer to the OEM. Once the OEM has sent it out, it's protected against this exact kind of attack. And while it may make sense for interdiction of a single package to a known target, doing the same with an entire batch of chips seems prohibitively expensive.

>Given everything that has to be in place for vPro/the ME to work

Again, per Intel's documentation. For all anyone here knows, data exfil begins the moment a certain sequence of bits crosses the right registers - and it's not like this is beyond the capabilities of what ME lets you do.

There is no good reason that the entire subsystem can't be disabled by the user, permanently. But, come to find out, the chips are configured to shut the system down within 30 minutes if the ME firmware doesn't pass its checksum.

That, in my mind, puts it uncomfortably close to malware territory. Every one of these concerns evaporates if the ME area could be wiped or dumped; it's not as if remote management is some secret competitive advantage.


No, this is not per the documentation, this is per the physical specifications, the circuitry that needs to be in place, the support that needs to be in each component. The Management Engine is not as all-seeing as you make it out to be.


It's all seeing enough to poll the installed system for info on installed software, to have keylogger rootkits installed in it, and so on. This is all per the (exhaustively sourced) Wiki article.

The point I'm trying to convey here is that every piece of information available on this thing comes straight from the horse's mouth, and the horse is not necessarily a trustworthy actor.


First time I hear this. Can you elaborate or give a source for this?


"In order for the Management Engine to really do much, you need to have a network card that the management engine knows how to talk to. "

For both ME and debugging purposes, Intel's chips are wired through and through in ways that could do interesting things in the hands of an attacker. The ME gives an attacker the ability to act. What you just said is that you believe Intel's technical statements and marketing claims about that. In reality, you can't know about any digital, analog, or RF backdoors it creates unless you get the whole thing torn down at the transistor level, with analysis by people who understand all those categories. It's normal in ASIC work, for trade secret protection and dodging patent suits, for firms to use tricks to hide IP in IP. They also reduce NRE & mask costs by putting circuitry in whole families of products while only visibly enabling it in some at the factory. It's still there, though. The guy that originally taught me about this stuff gave an example where one component they used had wireless connectivity, because it was a mobile SOC where they visibly, but only temporarily, disabled some components to make it look like a microcontroller and I/O combo. They were trying to score extra ROI off it without building a dedicated product.

There's lots of stuff like that in ASIC design. Hell, the firewall industry should've already taught all of you this lesson. Grimes' reviews of firewalls showed they often had all kinds of undocumented stuff running that wasn't advertised, but was the result of poor quality or some internal benefit. He rarely ran into one that did what it advertised, and only what it advertised. That wasn't even open-source testing. ;) Intel's stuff is a combo of their specs, their implementation, the analog circuits, the effects of the materials involved on those, and the interactions with other things on the board if you're talking EMSEC. There's no way for you to verify these are secure by reading their public claims. This is neither a new nor an uncommon problem.


According to their documentation, the network module is an "Intel Wireless Module". I assume Intel's ME knows how to talk to that one?

https://www.orwl.org/wiki/images/9/95/SowDESIGN-SHIFTORWLPUB...


Only for the m7, because they enable vPro there. For the m3, vPro's not available, so the management engine won't be able to talk to the network card. Alternatively, you could just plug in your own USB wifi device, because it definitely wouldn't have the components for vPro to work.


Didn't know about this. They should have gone with AMD.


If you want to get away from that kind of thing, right now I think your options are POWER8: https://www.raptorengineering.com/TALOS/prerelease.php

AMD has something similar to the Intel Management Engine: https://libreboot.org/faq/#amd


If only I had ~8000 dollars to spend on a desktop I'd be waist-deep in porting Linux packages.


AFAIK IBM's spent a good amount of engineering effort on making linux stuff run on power.


Yes, in practice the only things that really need "porting" to POWER are low-level compilers, languages, runtimes, tooling, etc. that may have arch-specific code. Sometimes IBM does this (e.g. Google v8 + Node.js), sometimes not... yet? (e.g. rust).


There is also this, which is also still pre-release, but looks like it might be a bit more affordable.

http://www.lowrisc.org


Not SPARC?


Specifically, the OpenSPARC T2:

http://www.oracle.com/technetwork/systems/opensparc/openspar...

The ASIC implementation was nice:

https://en.wikipedia.org/wiki/UltraSPARC_T2

On a 65nm node (a very outdated one), it gets 8 cores with 8 threads each at 1.6GHz, with a hardware RNG, crypto accelerators, and hypervisor support. That's the kind of implementation that would be useless for an open-source, secure workstation, server, or HPC node. ;) Especially if one slightly increased single-core performance or cache when porting it to 45nm.

Forget that, though. Let's see what IBM will charge for their admittedly-faster pile of complicated silicon that only certain people can see which they've already turned into non-backdoored chips for you. ;)


Sadly the T1/T2 are ancient now. The non-open SPARC is up to T7, which is 20nm process, 256 threads on 32 cores.


Sort of. I run the bloated Web, movies, IDEs, servers, and VMs on a Core Duo with 2 cores under 2GHz, done on 65nm. I'd probably lose some single-threaded performance with an OpenSPARC T2, but otherwise it should handle my modern workload. Doesn't quite feel ancient. ;)

The cool thing about open-source CPUs is that one can always improve on them to, say, add an extra dozen cores on more recent nodes. Like Oracle does, but probably less impressive with less money.


Some anecdata:

Somebody tried to use a university department's T2 because it was unused and reasonably parallel for some experiment on hashing.

Single-core SHA-1 speed on a T2 was ~300KB/s or so. Even with its 32 threads, I recommended going for any ancient desktop that happened not to be used for a couple of days, because the experiment would finish much faster there. Those desktops were about Core Duo class and managed 50-100MB/s SHA-1 throughput (I think; it's been a couple of years).
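If anyone wants to reproduce this comparison, a rough single-threaded measurement is only a few lines of Python with the stdlib hashlib. Absolute numbers obviously vary by machine and hashlib build, so treat the result as a ballpark figure:

```python
import hashlib
import os
import time

# Rough single-threaded SHA-1 throughput measurement, in the spirit of
# the anecdote above. 16 MiB is enough to amortize per-call overhead.
data = os.urandom(16 * 1024 * 1024)

start = time.perf_counter()
hashlib.sha1(data).hexdigest()
elapsed = time.perf_counter() - start

mb_per_s = (len(data) / (1024 * 1024)) / elapsed
print(f"SHA-1 single-threaded: {mb_per_s:.1f} MB/s")
```

On anything vaguely modern this lands orders of magnitude above the ~300KB/s figure for the T2, which is what made the "use any old desktop" advice sound.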


What the hell...??? OK, it's looking like I'm going to retract that recommendation for workstations. Maybe still for servers that are I/O bound.


As a 2011 MBA user, I totally understand sticking with old tech that works. T1/T2 isn't really apples-to-apples, though. I came across this random quote from the libgmp devs that made me laugh:

"SPARC chips before T4 under-perform on GMP. This is not because the GMP code is inadequately optimised for SPARC, but due to the basic v9 ISA as well as the micro-architecture of these chips. The T1 and T2 chips perform worse than any other SPARC chips; they compare to a 15 year older 486 chip."

ouch!


Ouch indeed. That's pretty bad. It's believable when I think back to the purpose of these chips: handling workloads that were more I/O- and concurrency-bound than CPU-bound. So, probably best to identify and eliminate these problems early on, even if the developer has to drop some threads or cores.

Note: There's always my other recommendation of turning the open-source Leon3 into a multicore. I'm curious what it or the Leon4 would get on such a benchmark versus Intel's chips on similar process nodes.


My Rockchip Chromebook doesn't have proprietary microcode, is libreboot-supported, and costs $200. The only proprietary issue is the 3D acceleration.


That is the case with many ARM processors. There is also a lot of work currently being done on iGPUs for ARM processors, such as freedreno; it will hopefully be possible to have a good, fully open source experience with ARM Linux computers in a few years.


ARM has TrustZone.


TrustZone alone is much closer to a Secure Boot or TPM-ish feature than to the ME. It's possible it would allow implementation of an ME-like software stack running inside the TEE.

It's worth mentioning that there is at least one OSS implementation of a TEE software stack, OP-TEE, making this even less of an issue. Assuming your device/soc mfg doesn't lock you out.

http://www.linaro.org/blog/core-dump/op-tee-open-source-secu...


TrustZone is, for practical purposes, a little more open if you're the one designing the board. Most of the time the crypto is opt-in, and you, as the final assembler, control the keys.


Any plans to offer a 16GB RAM version? The commentary from Qubes users in the 3.2 release thread [1] seems to indicate 8GB would be borderline for Qubes.

[1] https://news.ycombinator.com/item?id=12604417


The current Intel chipset we selected only supports up to 8GB of RAM. We heard the Qubes users' request for 16GB loud and clear. We will only be able to offer this in a future revision, though.


Since the monitor is external, I wonder if they've considered monitors as attack surface: https://github.com/RedBalloonShenanigans/MonitorDarkly


This is an example of someone reprogramming the monitor, not using a monitor to attack the computer that it's connected to the video-out of. I'm not clear if there's an attack against the ORWL itself you have in mind here.


Once you reprogram the monitor, you can store or exfiltrate all of the data the user sees, and do clever things like erase and redraw the mouse pointer, or draw new prompts, to induce the user to click on the things they wouldn't have otherwise.


An attacker swapping the monitor with one that records or displays other content isn't very different from the possibility that an attacker replaces your keyboard with one that keylogs or inserts crafted sequences of keypresses. I don't think ORWL tries to do anything about these possibilities, and it's difficult to imagine good fixes that don't massively change the scope of the project. And even if you do make the keyboard and monitor tamper-proofed and securely paired with the ORWL, it can't prevent an attacker from hiding a video camera in the room or using skimmer-like devices between the user and the devices.


Or the one I co-invented and described before the leaks:

https://www.schneier.com/blog/archives/2014/03/ragemaster_ns...

There's no direct solution to the subverted monitor problem that I'm aware of. You basically get them from random places under different names or from people unlikely to be spies then use I/O protection both ways. Same with most hardware you can't produce yourself. There's potential to market something here where the monitor is immune to code injection, does I/O filtering, can't store anything, and does these with visually-inspectable chips & board. Add TEMPEST shielding while you're at it since that's an existing market that will drop lots of cash on improving security. See EMCON's products for examples.

EDIT: Forgot to mention that spectrum analysis is often used to try to catch radio emissions. There are techniques to tell if a monitor sends out stranger-than-usual signals. That's just kind of limited, and doesn't help if it's a black bag job where a person can show up twice (e.g. a maintenance person).


The VGA/DVI connectors have DDC pins for getting the monitor information. I've seen these connected to the SMBus in the computer, so it might be possible to gain access by hacking through the DDC pins on the video connector.
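For the curious: DDC is plain I2C, and what normally travels over it is the monitor's 128-byte EDID block, i.e. structured binary data that the host must parse (which is exactly where bugs live). A minimal sketch of the fixed header fields; the sample bytes are hand-constructed for illustration, not read from real hardware:

```python
def parse_edid_header(edid: bytes) -> dict:
    """Parse the fixed header fields of an EDID block."""
    # Every EDID block starts with an 8-byte magic sequence
    assert edid[:8] == b"\x00\xff\xff\xff\xff\xff\xff\x00", "bad EDID magic"
    # Manufacturer ID: three 5-bit letters (1 = 'A') packed into bytes 8-9
    word = (edid[8] << 8) | edid[9]
    mfg = "".join(chr(((word >> shift) & 0x1F) + ord("A") - 1)
                  for shift in (10, 5, 0))
    # Byte 16: week of manufacture; byte 17: years since 1990
    return {"manufacturer": mfg, "week": edid[16], "year": 1990 + edid[17]}

sample = (b"\x00\xff\xff\xff\xff\xff\xff\x00"   # magic
          + b"\x10\xac"                          # 0x10AC decodes to "DEL"
          + b"\x00" * 6                          # product code + serial
          + bytes([2, 24]))                      # week 2, 1990 + 24 = 2014
print(parse_edid_header(sample))   # {'manufacturer': 'DEL', 'week': 2, 'year': 2014}
```

The point isn't this parser; it's that the host side has to run a parser like it over attacker-controllable bytes, so a malicious "monitor" on the DDC/SMBus wires gets a free shot at whatever code does that parsing.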


Correction. The temperature monitor is INSIDE the secure shell.


He's talking about display monitors AKA the screens which can be exploited via the i2c bus over the graphical interface (e.g. HDMI).

The GP is 100% correct: if you can't trust your keyboard, mouse, and monitor, the "secure computer" concept in this case is problematic. While it does reduce the attack surface somewhat, it just focuses the adversary's attention onto a different vector.

If we take their "cleaning man/evil maid" scenario, then while implanting the computer might not be possible, implanting the keyboard, mouse, or screen would be very possible, and in fact somewhat easier than implanting a regular computer with decent security measures such as an encrypted drive.

Add a USB storage device with a microcontroller to the keyboard and you own the computer once it's connected; a monitor today comes with a CPU powerful enough to run custom code, which can be used to exfiltrate data as well.

Additionally, both the keyboard and the monitor could potentially be used to exploit flaws in the software running on the ORWL unit.

The concept is interesting, but this is mostly "security theater": any adversary sophisticated enough to require these measures would likely be able to circumvent them, and against everyone else these measures don't really do anything. If you use this for day-to-day operations or on-net activity, you'll get pwned via the network; if you keep secrets on this thing worthy of sending someone into your home to implant your PC, then they'll implant something else that's connected to it.

Oddly enough, the only "high tier" adversary this might thwart would be law enforcement, since their computer forensics SOP would pretty much melt down when encountering something tamper-resistant.

But hey, you gotta start somewhere.


I'm a bit surprised that, in 2016, there is no standard way for a computer to authenticate its keyboard and monitor. Has anyone even thought about how that could be done?


HDCP is arguably the standard for authenticating the monitor, but it's not quite intended for this purpose. I'm not aware of a standard for authenticating input devices, but disabling USB HID and relying solely on tamper-evident PS/2 input devices goes a long way.
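On the mechanics: the cryptographic side of authenticating an input device isn't the hard part. A shared secret provisioned at pairing time plus challenge-response on every connect would do it; the hard part is keys and tamper-resistance in a cheap keyboard MCU. A purely hypothetical sketch (none of these names correspond to a real standard):

```python
import hashlib
import hmac
import os

# Hypothetical pairing scheme: host and keyboard share a secret provisioned
# at pairing time; the host challenges the device on every connect.

def device_respond(shared_secret: bytes, challenge: bytes) -> bytes:
    """What the keyboard's microcontroller would compute."""
    return hmac.new(shared_secret, challenge, hashlib.sha256).digest()

def host_verify(shared_secret: bytes, challenge: bytes,
                response: bytes) -> bool:
    """Host-side check; compare_digest avoids timing leaks."""
    expected = hmac.new(shared_secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

secret = os.urandom(32)      # provisioned once, during pairing
challenge = os.urandom(16)   # fresh nonce per session, prevents replay
response = device_respond(secret, challenge)
assert host_verify(secret, challenge, response)
assert not host_verify(secret, challenge, b"\x00" * 32)
```

Of course this only proves the device knows the secret; it does nothing against a pristine-looking keyboard whose firmware was reflashed after extracting that secret, which is why the tamper-evidence half matters as much as the crypto.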


Even if you can, you just implant a keylogger into the keyboard and some malware/implant into the screen, and you get a full readout of every keystroke and every pixel displayed.

If you are going to prevent physical attacks from adversaries that can circumvent basic protections (e.g. FDE), you have to make sure that every device is equally secure, because the system is only as secure as its weakest link.

If your adversary is just a random person who might steal your PC, then any full disk encryption, even a cryptographically insecure one, would be sufficient, because the people who end up dealing with these devices won't have the know-how or the resources to attack even bad encryption.


Yes, but since such an application is DRM, the hacker groupthink decided this was a doubleplusungood thought, and so no one should think it lest evil happen.


GP probably means display monitor, not temperature monitor.


I do mean display monitor, not temperature monitor.


Thanks for the correction. I was not sure at the time.


It's an interesting concept, for sure – but could someone more knowledgeable than me explain whether this leaves the system vulnerable to the potential, alleged backdoors present in Intel's chips via the Intel Management Engine?


Our long term plan is to limit ME capabilities using the BIOS configuration. We just released the SOW of the 1st BIOS with Eltan on the WiKi. https://www.orwl.org/wiki/index.php?title=File%3ASowDESIGN-S... We are planning to investigate how to further limit ME capabilities with Eltan, and we will update the SOW as we make progress. We also believe that the current secure microcontroller implementation severely limits the ME's capability, through power management and SSD key management.


No need for that; you just pwn it via USB.


The proximity-based lockdown is interesting, but it won't prevent the likely scenario of being grabbed while you are using the computer. Silk Road is a famous incident, but I think that's the only option in any scenario where the attacker knows you are using an encrypted disk.

I'm curious about your supply chain risk mitigation. Given that the project is open source, can you publish a list of all of your suppliers and the country of production?

Great work overall, looks like a nice design.


Thank you for the feedback! This is still a desktop, so if the device is grabbed, the power won't last long. You need NFC to restart, and then you enter your password. We are still working on opening up as much of the design as we can. The Bill of Materials and drawings will be detailed on www.orwl.org/wiki


If your device is grabbed by the FBI from the Glen Park library, I don't think they'll be turning it off. Wouldn't they use some sort of device[0] to maintain the power supply? I assume they would also be aware a key fob was in use, at least if you were using reasonably well-known hardware like the ORWL.

I noticed the campaign details indicated the power supply voltage is monitored. Will this protect against a hot-plug?

[0]: http://www.cru-inc.com/products/wiebetech/hotplug_field_kit_...


If anyone's wondering, here's the important bit: https://youtu.be/erq4TO_a3z8?t=259

I suppose this technique could be modified for most UK plugs too, but I've no idea how you'd manage it for a recessed EU-type socket.


I was very recently looking into physically secure computing solutions, and the "industry standard" seems to be the SafeNet Network HSM, formerly Luna SA Network-Attached HSM (for example, it's what Amazon uses for their CloudHSM service: https://aws.amazon.com/cloudhsm/), which costs like $30,000. With that number in mind, the ORWL price of $700 is quite enticing!


Yes, HSMs are very expensive, and after talking to a few people in this industry we got the impression that some of the ORWL features are not present in HSMs, like the ability to geo-lock the device using the key (mounted in the ceiling). Walking away with the device will render it useless as the keyFOB is missing.


Appreciate the fully secure boot process, even if the Intel situation isn't fully secure. Wish I had an external uC with burned-in firmware on my machine that I trusted to verify my BIOS firmware and orchestrate the boot process.


Thanks. We agree with your statement fully. We are raising the threshold of entry to your personal data substantially, but we are still far from perfection. We did a number of steps up the security ladder. We are also taking steps to minimize ME abilities in ORWL. Our long term plan is to limit ME capabilities using the BIOS configuration. We just released the SOW of the 1st BIOS with Eltan on the WiKi. https://www.orwl.org/wiki/index.php?title=File%3ASowDESIGN-S... We are planning to investigate how to further limit ME capabilities with Eltan and will update the SOW as we make progress. We also believe that the current secure microcontroller implementation severely limits ME capabilities through power management and SSD key management.


Enjoy spending the next 10k years auditing the security of the chipset with your scanning tunneling microscope.


Even if it doesn't protect yourself against some transistor level NSA backdoor, that doesn't mean it can't thwart other attackers who would usually take advantage of physical access.


Now this is really cool! Though as with all things wireless, I'd worry about the key fob getting interfered with while I'm working and, boom, the computer locks up; or a sensor thinking I'm moving the computer when it's really just an earthquake, or maybe even my cat jumping on the table, and then the encryption key is deleted.

So I love the idea but am not sure of the practicality. Those sensors have to essentially work perfectly at all times, and I'm not convinced until it's released and reviewed.


The SSD encryption key will only be deleted in case of a tamper event, NOT when the unit is moved. Tamper events are:

* freezing the unit

* drilling the secure enclosure or otherwise breaking the traces on it

* prying the enclosure off the PCB

So I don't think you have to be too worried about a false trigger erasing the key.


Is there a specific temperature about which I should be concerned? I live in a cold climate and would hate to lose data (even if it is backed up) in the event that I lose heating.


what temperature (approximately) is "freezing" (or less likely, overheating)?


We currently spec the trigger temperature for a "freezing event" at just above freezing: 33-35F (roughly 0.5-2C).


So if I live in a cold place and take it outside, it might erase my data? Or, as yellowapple said about losing heating: if pipes can go below 0C due to the cold, why should I expect that ORWL won't?


Can a tamper event be triggered through software? It would really simplify the ability to create trap passwords which wipe the device.


Particularly by overheating the processor or other components like one might do if melting stuff off the outside with flame or acid application.


I was hoping for something a little more direct, deterministic, immediate, and non-damaging.


> The battery itself is projected to last about six months without being connected to power.

It seems like a lot of the security of the device depends on active scanning (e.g. the LDS clamshell mesh, the IMU, the temp sensor, etc.), which stops working after 6 months. Is the vector of a malicious actor taking the device and waiting 6 months before breaking in considered not worth protecting against?


The webpage says it zeroes the key material when the battery runs low. So it'll fail "secure" in that case, presumably.


Ah, I missed that bit! Thanks :)


> If someone has physical access to your computer with secure documents present, it’s game over!

Err, why? Is AES encryption not sufficient? And the key is secure in my head - not something someone could steal.

So, why is this even a thing?


Unless you also want to perform the AES operations in your head, you have to rely on the hardware and software of your computer to perform them. An attacker could then replace the AES routine you use with one that stores a shadow copy of your key, or exfiltrates it over some covert channel.


Well yeah, but this assumes I don't know that the physical access took place and proceed to enter the passphrase afterwards, unwittingly revealing the key. If this is what "having physical access" means, then yes, I agree, game over, and ORWL solves it.

But if I know the attack took place (FBI broke into my house, the computer is locked in a safe etc.) - the data should be secure.


What if the crypto side of things is removable and carryable on your person? Or what if it could be subdermally implanted so you know no one can pickpocket you and replace it?

Just a thought.


> What if the crypto side of things is remove-able and carry-able on your person?

You might as well have the entire computer removable and portable.


And that's pretty much what ORWL is.


The key isn't in your head: you know the passphrase used to decrypt the key which is then kept in system memory. Techniques like the cold boot attack[0] or row hammer[1] can be used to retrieve the key and access your data. In the case of non-hardware-TPM secured encryption schemes the kernel or bootloader which must remain unencrypted for the system to boot can be backdoored to record the passphrase and/or the key.

0: https://en.wikipedia.org/wiki/Cold_boot_attack

1: https://en.wikipedia.org/wiki/Row_hammer


There is a section at the end of the page outlining mitigation techniques against these sorts of attacks. I'm not knowledgeable enough to determine whether these are sufficient measures, but I just wanted to point it out since it sounds like you didn't see it.

They specifically cover cold boot attacks for example.


Yes, correct. We think we cover the cold boot vector. Here is a section from the product description:

* The security mesh physically protects DRAM from both the freezing of specific components and removal. Breaching the mesh will trigger a reactive root key delete and system power down.

* The secure microcontroller protects against chilling the full device: temperatures below -37C will trigger a tamper event, deleting the root key and removing power from the system.

Link to the complete product description: https://www.orwl.org/wiki/index.php?title=File:ORWL_PRD_v0.6...


Right, OP was asking why having physical access to a "normal" machine means pwnage: cold boot is one of those ways. The ORWL page describes how they mitigate that.


Oh I see, thank you.


If someone has physical access to your computer with secure documents present, and then you use it again, it's game over.


Yes, physical access is in most cases a point of no (good) return. That is why we place so much emphasis on preventing this access. We have detailed the measures we have taken to prevent it here: https://www.orwl.org/wiki/index.php?title=File:ORWL_PRD_v0.6... Look at the section "ORWL Primary Key of Security or Root of Trust".


In the past I've heard about a couple of cases where people had a startup idea involving some clever application/algorithm that would have had to be deployed on customers' premises but kept secret.

With slight modifications (*), this sounds like a low-cost but easy-to-deploy solution to the problem. Some of the security would be lost, but you would still get a reasonably tamper-proof computer capable of running a standard software stack for <$1K.

With the same kind of changes you could also build a low cost HSM solution based on this.

(*) Instead of controlling booting, the keyfob could be used to enable/disable console access, and the device should be able to recover from short power losses.


By the way, making other people secure is big business. For that reason I can see a pointy haired decision maker buying loads of these for the functionaries to use. I wouldn't really want one of these for myself though.


This doesn't seem all that secure. Against an Evil Maid attack, your best mitigation is to be able to keep everything, OS and all, on a portable drive which is self-encrypting; essentially an encrypted PE.


Why not? Seems secure against evil maid to me (barring hardware backdoors like Intel).


They explicitly address several attacks here: https://www.crowdsupply.com/design-shift/orwl#specific-attac...


None of those are evil maid attacks.


Let me quote the section of our product description detailing the way we approach "Evil Maid" attacks:

* USB volume boot is blocked at the BIOS

* BIOS access is controlled by security key + PIN

* The Intel TPM is enabled, and we do not enter a passphrase to unlock encryption (unlike software-based full disk encryption)

* In addition, attacks that don't rely on booting to a USB device are protected against by powering off the USB interface when the user keyfob is out of range

More details here: https://www.orwl.org/wiki/index.php?title=Resources#Resource...


Does that work? Can't the evil maid install a malicious hypervisor to dump interesting pieces of memory every few minutes?


That hypervisor would have to be at the hardware level, I'd imagine. A read-only PE for boot (WORM drive) with all your typical programs pre-installed, and then a separate encrypted removable data drive would be the mitigation against that.


And the BIOS?


BIOS is not part of an evil maid attack.


ORWL was designed specifically to prevent undetected tampering with any of its electrical components, including the entire motherboard and storage drive. When tampering is detected, ORWL immediately and irrevocably erases all your data, even if it is unplugged at the time.

and...

Upon any tampering, the secure microcontroller instantly erases the encryption key, causing all data on the SSD to be irrevocably lost.

If only the key is deleted, wouldn't that leave the drive susceptible to brute force?


There are reasonable issues that could be raised about various metadata leaks with full-disk encryption. For example, in a completely naive per-file encryption scheme, the (approximate) file sizes would be visible. But I don't think "brute force" is a concern for reasonably modern encryption schemes. Of course, if they are using weak/short PINs with a key derivation function, then that is vulnerable to brute force.
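A toy illustration of that last point, assuming a deliberately weakened, hypothetical scheme (the `derive_key` function, the single-hash "KDF", and the PIN are all made up for the example):

```python
import hashlib

# Hypothetical weak scheme: the disk key is derived from a short PIN,
# so the effective search space is tiny no matter how long the
# derived key itself is.
def derive_key(pin: str) -> bytes:
    # Assumption: a single SHA-256 stands in for a real KDF like PBKDF2.
    return hashlib.sha256(pin.encode()).digest()

target = derive_key("4831")  # the victim's (unknown) 4-digit PIN

# Brute forcing all 10^4 candidates takes milliseconds on any laptop.
found = next(p for p in (f"{i:04d}" for i in range(10_000))
             if derive_key(p) == target)
print(found)  # → 4831
```

A real KDF like PBKDF2 or Argon2 slows each guess down, but with only 10^4 possible PINs it just changes milliseconds into minutes.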


Uhh.. yes but enjoy brute forcing a 256 bit key.

See you in a few trillion years.


Quite a lot more than a few trillion.


You have to account for Moore's Law within the few trillion years GP mentioned.


Bruce Schneier and others[1] have done the math on brute forcing 256-bit keys: even with a perfectly efficient computer using the least amount of energy possible, you would have to deplete the entire energy content of the Sun just to iterate over a 225-bit keyspace once, let alone do anything meaningful with those keys.

Moore's Law doesn't really factor into it.

[1]http://security.stackexchange.com/a/6149
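For anyone who wants to redo that arithmetic, here is a rough sketch of the energy argument using the Landauer limit (the solar-output and temperature figures below are my own assumed round numbers, not taken from the linked answer):

```python
import math

# Back-of-the-envelope Landauer-limit estimate.
# Assumptions: Sun's output ~3.8e26 W, ambient temperature ~3 K,
# and each key guess costs at least one bit flip.
k = 1.380649e-23                  # Boltzmann constant, J/K
T = 3.0                           # ambient temperature, K
e_per_flip = k * T * math.log(2)  # minimum energy per bit flip, J

sun_per_year = 3.8e26 * 3.15e7    # Sun's total output over one year, J
flips = sun_per_year / e_per_flip
print(math.log2(flips))           # ≈ 188 bits' worth -- nowhere near 256
```

So even a full year of the Sun's output, spent at the thermodynamic minimum, buys you a counter on the order of 2^188, which is why Moore's Law is irrelevant here.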


It's estimated there are 10^80 atoms [1] in the observable universe, so 2^256 is definitely a huge number. I hadn't realized 256-bit brute force wasn't even close to feasible with only a solar system's worth of energy.

I'm a bit surprised the quantum algorithm only gives a polynomial speedup.

[1] https://en.wikipedia.org/wiki/Observable_universe#Matter_con...


10^80 = (10^3)^(80/3) = 1000^(80/3) ≈ 1000^26.67

2^256 = (2^10)^25.6 = 1024^25.6

These numbers seem very close.
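Python's big integers make this easy to check directly (just a sanity check of the exponent juggling above):

```python
import math

keyspace = 2 ** 256
atoms = 10 ** 80   # rough estimate of atoms in the observable universe

print(math.log10(keyspace))  # ≈ 77.1, i.e. 2^256 ≈ 1.2e77
print(atoms // keyspace)     # the atom count is larger, by a factor of ~860
```

So the two are within three orders of magnitude of each other, which for numbers this size really is "very close".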


Sure it does. It just happens to necessitate our transition to Kardashev III.


https://www.reddit.com/r/theydidthemath/comments/1x50xl/time...

tl;dr if all the matter in the whole universe was a computer, it'd still be unlikely.


Isn't brute force down to chance, though? Shouldn't it be "see you anywhere from the next minute to a few trillion years"?


It's all down to probabilities. Yes, our hypothetical attacker could guess your key correctly the first time, or within a few years, but the chances are so tiny that they approach zero for practical purposes on practical timescales.


NSA Engineer: Hey boss, this one's using a 256 bit key.

NSA Manager: Connect it to the quantum computer that doesn't "exist".

Five minutes later..

NSA Engineer: We now have access.


Quantum computers, at best, divide the bit-strength of a symmetric key like AES in half[1]. Brute forcing a 128-bit key is theoretically possible (in the sense that if you marshaled the entire world energy output to the cause, you could crack about one key per year), but it is not a five-minute process.

[1]https://en.wikipedia.org/wiki/Grover%27s_algorithm
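The scaling claim, stated as code (order-of-magnitude only; constant factors and the inherently serial nature of Grover iterations are ignored here):

```python
# Grover's algorithm needs on the order of 2^(n/2) quantum queries
# for an n-bit symmetric key, versus ~2^n classical guesses in the
# worst case -- hence "divides the bit-strength in half".
def classical_queries(bits: int) -> int:
    return 2 ** bits

def grover_queries(bits: int) -> int:
    return 2 ** (bits // 2)

# AES-256 under Grover costs about what AES-128 costs classically:
print(grover_queries(256) == classical_queries(128))  # → True
```

Which is also why the standard advice is simply to double your symmetric key length if you are worried about quantum adversaries.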


That is assuming there is no better quantum algorithm for AES specifically. Grover's algorithm is only optimal if brute force search is the only possible approach and there are no other exploitable properties.

Considering that there are already theoretical attacks that are (marginally) faster than brute force on classical computers, who knows how much more one could squeeze out with quantum algorithms.

Of course, those are fairly speculative concerns.


It's very obvious how special structure exists in cryptosystems that use finite cyclic groups, such as in discrete log cryptosystems.

But in AES? That sounds unlikely, and would be really unfortunate.

I think it's more likely that large quantum computers would aid in mathematical exploration that uncovers currently unknown vulnerabilities that could be exploited by classical systems.


I assume the encryption is strong so brute force would be useless.


I wonder how realistic it is to take the simplest ARM design and make "your own" chip? I mean, there should be blueprints somewhere, if you could ask someone to make a small batch of these chips? The run would be too small to be worth it for someone to inject a backdoor into it. Or you could put that on an FPGA... Am I talking nonsense?


If you're going to go through the process of taping out your own chips, RISC-V is probably your best bet at the moment.


This appears to be a good solution to the wrong problem. Maybe if they team with someone working on secure computer software...


I thought that, too, but decided to give them credit for an open solution to a hard, neglected problem. Let's face it: we need parallel developments in each of these areas since nobody is going to do all of them. It's good so long as the pieces can be securely composed by an integrator later. Just like the old incremental MLS paper or Karger's smartcard project. A piece at a time, even at extra time or cost on individual pieces, increases the odds that a high-security product will emerge in the long term when each party's interest (or funding or management) is often short-term.

If they were serious, I'd say work with Genode team for FOSS or partner with Sirrix to port their TrustedDesktop system to it. Both are using models with low TCB's and trusted paths architecturally similar to B3/A1 systems. Turaya used by Sirrix is basically Perseus Framework with pre-built drivers, VPN, disk crypto, management software, etc. Batteries-included. Here's Perseus:

http://www.perseus-os.org/content/pages/Overview.htm

I think a port of TrustedDesktop to ORWL-like solution would be a nice start on secure desktops for businesses or individuals willing to pony up dough. Can use something like Genode once it gets mature enough with necessary components. I've moved on from separation kernel stuff to HW-centric security but combining a thing that works with one that might seems like a good default for now.

So, a Turaya-like product combined with ORWL. I'll add requirements of parsers auto-generated LANGSEC-style with any TCB code run through SAFEcode or Softbound+CETS at a minimum. Kernel on bottom is seL4 or Muen (SPARK). Drivers done statically in subset of C or SPARK amendable to thorough analysis. Would that strategy for rapidly getting a high-security product out the door address most of your expectations or exceed them?


Kudos to the open source design, but they should not encourage users to run Windows on it.


The promotional video shows someone using Linux. Do they mention Windows somewhere else on their site?


They mention Windows compatibility as an explicit and mandatory goal of the project in the Crowdsupply project page.


Thanks, I missed that bit.


why not?


It's great that the hardware privacy front is progressing.


"open source and secure", powered by a potentially backdoored Intel processor (i.e: through the Intel Management Engine).


Where does the name come from? When pronouncing it I can't help but notice it's very close to "Orwell"


In their post on the Ubuntu blog [0] they say it is pronounced "or-well". Presumably ironically.

[0] https://insights.ubuntu.com/2016/09/29/meet-orwl-the-first-o...


The name is from Orwell, as you guessed. The idea is that we need an open source computer that protects privacy to enable freedom of speech ;-)


Considering their video starts off with a quote from George Orwell, I would say the pronunciation is intentional.


Great project! But unless the computer is used only for offline tasks, having a physical key is as useful as locking a door and leaving the keys in it (hardware-level backdoors, zero-day attacks, lack of end-to-end cryptography, etc.).


Made a product like this before that passed FIPS 140-3. Same idea, i.e. a battery-backed MCU + mainboard.


Did you think about manufacturing or designing your own ARM CPUs and releasing HDL for the CPU?


We thought long and hard about the type of platform we wanted to use for this project. While the x86 platform has many shortcomings, it also provides a big ecosystem of OS and SW, as well as support. The fact that we have two subsystems, with the secure microcontroller in charge of supplying (and denying) power to the x86, gives us a lot of leverage and protection.


I'd be very interested to see this applied to phones.


Remember to put glue into the USB ports.


Not sure why ubuntu.com is linked instead of the Crowd Supply page, which has all the information, including the ability to purchase one. See https://www.crowdsupply.com/design-shift/orwl



"When tampering is detected, ORWL immediately and irrevocably erases all your data"

IMHO that is beyond stupid, it's criminally irresponsible.

Well, to be fair, perhaps there are some use cases, just not many. I'd rather go with tamper-evident seals instead.


How is it "criminally" irresponsible on a personal computer? I should be able to delete my own data whenever I want to, unless ordered by a court not to. Also, OpenBSD has had the ability to wipe the system on failed password attempts for many years now.


It's criminally irresponsible to sell such a computer, because it will easily result in data loss, and not all users are educated enough to understand the consequences of such a flawed "security" design. Of course, you can claim that it's ultimately the customer's fault in this case, and I agree, but they should nevertheless expect some lawsuits.

There is always a tradeoff between security and data integrity, something which the people who downvoted my post apparently don't understand. When even your mom can mount a 100% successful denial of service attack with a screwdriver, then you're screwed.

If you disagree, I challenge you to show me a reasonable use case that couldn't also be solved by actual physical security, or by locking down booting and the BIOS and using tamper-evident seals.


It's not illegal or irresponsible to sell a computer that deletes data when tampered with when "deletes data when tampered with" is _advertised as one of the primary features of the machine_.


I agree "criminally irresponsible" was the wrong choice of words and I rightly got flamed for it.


People should learn what a computer is. Those who don't will get screwed anyway.


The same argument could be applied to nearly any product. Knowing how to use the product is the user's responsibility, and helping educate users is the manufacturer's responsibility. If you don't have data backups, regardless of the type of computer, then you're setting yourself up for disappointment.


True, maybe I overreacted. What worries me is that this is apparently supposed to be sold as a general computing device. Tamper-resistant hardware may be used in the military to protect implementations and keys stored in hardware that will eventually get stolen. For other types of stored data? Probably not so much. As I said, reasonable uses for this kind of device are limited.

For ordinary users, deleting everything immediately when someone tampers with it is a recipe for disaster. Sure, they can back everything up in encrypted form, but then the data is not really deleted when somebody tampers with the machine, is it?

Regarding the security, well, apart from software-based attacks, how about installing a tiny USB keylogger inside a USB cable that is already used by the user? Or in the keyboard itself? Or a camera that records your keystrokes?

That's what the <insert agency or special interest group of your choice> would be doing in such a case.


> Knowing how to use the product is the user's responsibility

Not all products are the same. It's irresponsible, but maybe good business strategy, to presume a complicated computer, which runs software no human can hold in their head, is the same as other products people are accustomed to.


> tamper-evident seals.

Doesn't help when the tamperer isn't hiding


I scowled when I read about the Intel chip, and I stopped reading when they mentioned USB. Assuming for a moment that there's no hidden backdoor in the Intel chip (which seems exceedingly unlikely from all that I've read regarding IME, not to mention the un-auditable microcode), all this fancy hackery is still going to get pwned by BadUSB.

Secure computing cannot and will not move forward until we have a way to mitigate against this.


They address this a bit. Their CPU supports device virtualization and the default OS install dedicates a VM to just the USB ports, and the USB data lines are electrically disconnected from the HCIs when the machine is in locked mode.

I'd be more concerned about the wifi and bluetooth chips' firmware.


You simply can't have a secure computer if untrustworthy chips are in the TCB. They did say open-source, physically-secure computer. They didn't say the whole thing was secure. The use-case is whatever security you usually have plus tamper-resistant case, drive encryption, better authentication, and whatever Intel's extensions bring to the table. Better than regular computer in terms of defending against many more attack vectors. Assuming design works.

Also, such a device can be combined with a secure, open computer to divide risk up among software and physical attacks.


I had a very similar thought. Also, HDMI can double as Ethernet, and DVI uses a serial bus to negotiate features, if I'm not mistaken. There's probably room for exploits over that port too.

Seems like using a built-in input and display and giving up some external ports would be the only reasonable strategy if you were as serious about physical security as this wants to be.


Isn't the phrase Evil Maid a bit off-key? I'm sure this must have been discussed at great length elsewhere.

We could express the same idea without the power and gender relations implied.


It's the common name of the attack that people get. So it's useful whether PC types, who are a tiny minority of many audiences, like it or not. I'm not sure how the "Evil Maid" label was derived. I do know from espionage reading that the most common form of the attack came from maids in hotels where business or government people were passing through. Janitors and maintenance types, too, but they were in a trusted, protected building instead of a third party's. French intelligence is particularly notorious for using maids or other hotel employees. Calling it a(n adjective-here) Maid attack, given all the maids involved, makes sense for the historical and current significance of that attack vector. As in, the label is also a reminder to watch your ass and never leave gear unattended in hotels. ;)


DSK conspiracy theories are a fun example of French intelligence services in hotels. Certainly, the phrase has a titillating James Bond aspect to it.

If you haven't heard the phrase before it sounds weird, I do understand it's not intended maliciously.

I don't think I'd use the term without quotes in my own writing.


I Googled it. I don't know if it's true or not, but it was entertaining. :) My memory is fuzzy here, but I think one of my sources on it was the UK's leaked MOD Security Manual, which talked about what various countries pull the most on their agents or diplomats. I know it was in a few places way before that DSK story. The one on Russia was worse, though. It said they not only would bug your hotel when they knew you were coming but might create a way to ensure you landed in a bugged one even if you switched at the last minute. Very determined professionals over there haha.


HAHAHAHAHAHAHA

Why would you even think of this as an issue? I had to read your comment, then read the replies, then read your comment again, twice, in order to actually understand what you were saying because it made literally no sense to me that anyone would have a problem with this.

Why does it matter? There are plenty of words that used to mean something and now mean something completely different when in context.


Sure, but it's an industry trope now. The same way we use Alice, Bob, and Eve as everypersons when talking about communications.


Evil maid or the "cleaning man scenario" is a pretty standard term.

Also, technically, whilst it does originate from "maiden", it is mostly gender-neutral today, just like "busboy" or "bodyman" are.



