Hacker News
Thunderbolt 3 Unblocker (github.com)
100 points by okket on Feb 4, 2018 | 80 comments



From the TB3 enabler GitHub:

"For unknown reasons, Apple decided to block the support for some categories of Thunderbolt 3 peripherals under macOS and when you connect those Thunderbolt 3 peripheral, you simply get an "Unsupported" message under Thunderbolt Device Tree. After digging around, it turns out the block is implemented in software level and it is possible to bypass the block by patching the related kext. This patch modifies IOThunderboltFamily and allows "Unsupported" Thunderbolt 3 peripherals to work under macOS Sierra."

I wonder why they did that. Stability, probably?


Security issues, most likely. Like FireWire, Thunderbolt gives connected devices direct memory access in order to achieve its high speeds. In other words, any connected device can read and write your RAM directly, bypassing any kind of access control built into the OS.

https://en.wikipedia.org/wiki/Thunderbolt_(interface)#Vulner...
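As a toy illustration of why this bypasses the OS (a sketch, not real driver code): normal I/O goes through a kernel permission check, while a DMA-capable peripheral addresses RAM directly, so the check never runs.

```python
# Toy model: the OS mediates normal writes with a permission check,
# but a DMA "device" writes straight into RAM, so the check never runs.
ram = bytearray(64)

def os_write(offset, data, user_has_permission):
    """Normal path: the kernel checks access control first."""
    if not user_has_permission:
        raise PermissionError("access denied")
    ram[offset:offset + len(data)] = data

def dma_write(offset, data):
    """DMA path: the device addresses RAM directly; no OS check involved."""
    ram[offset:offset + len(data)] = data

try:
    os_write(0, b"secret", user_has_permission=False)
except PermissionError:
    pass  # the OS blocked the normal path

dma_write(0, b"pwned!")  # nothing blocks the device
print(ram[:6])           # bytearray(b'pwned!')
```

An IOMMU changes this picture by interposing address translation between the device and RAM, which is what the rest of the thread ends up debating.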


Thunderbolt is basically external PCIe, so that's not surprising.

Quite frankly, I'm not worried about physical access being a "vulnerability" --- I'm of the "if you physically own it, you should logically own it" mindset.


While I mostly agree with this, I don't agree here. Maybe you need to connect your notebook to a projector and they hand you a Thunderbolt-to-HDMI cable.

They don't physically own your device, and yet you're possibly fucked.


I think the attack vector is even bigger than that. In an open office it should be relatively easy to swap out one of your devices with an identical-looking, tampered device. I can only imagine how much harm you could do leaving a modified USB-C-to-other-stuff adapter somewhere in an office.


I mean sure, but in that scenario you can swap a keyboard for one with a built-in keylogger; no need to bother with some ultra-complex RAM-dumping TB3 device. If the attacker has physical access to your machine, all bets are off anyway.


Right, but in an office environment we could be talking about drive-by attacks where, yes, someone has physical access, but not enough time to disassemble the machine and replace the keyboard without anyone noticing.


Couldn't a simple OS "permissions" system mostly fix that loophole (as much as it can be fixed)?

"[device name] is requesting [scary sounding permission] Allow?"

It won't protect users from themselves, but it will protect against "bad-USB"-style attacks.
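A minimal sketch of what such an OS-level gate could look like (entirely hypothetical; the vendor/device IDs and the prompt are made up, and a real design would have to deal with ID spoofing):

```python
# Hypothetical device-authorization policy: unknown DMA-capable devices
# require explicit user approval before the OS attaches them.
ALLOWED = set()  # persisted allowlist of (vendor_id, device_id) pairs

def prompt_user(name):
    # Stand-in for a real OS dialog; deny by default here.
    print(f'"{name}" is requesting direct memory access. Allow?')
    return False

def attach(vendor_id, device_id, name):
    if (vendor_id, device_id) in ALLOWED:
        return True           # previously approved, attach silently
    if prompt_user(name):
        ALLOWED.add((vendor_id, device_id))
        return True
    return False              # user declined; device stays detached

assert attach(0x8086, 0x15d3, "TB3-HDMI adapter") is False
```

As later replies point out, the weak spots are that users click through prompts and that IDs can be faked, so this only raises the bar rather than closing the hole.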


I'm a pretty technical user, and I'm not sure how that would help me at all. What am I supposed to do with such a prompt?

A conference coordinator hands me a T3-HDMI adapter (as in the above scenario), I plug it in to my laptop, and it says "This device is requesting direct memory access. Allow Y/N?" My talk starts in an hour. Should I ask for the source code to the adapter firmware so I can do a quick audit?


Yep. If the UAC prompts have taught us anything, it's that users will click on anything that gets their program to run, no matter what the prompt actually says (and in most cases, they don't even read it).


No, because when you have full access to RAM, there's no actual distinction in those messages for thunderbolt devices.

Unless maybe you meant just the same scary message for every thunderbolt device, in which case people become acclimated to ignoring the message. And that may be after they've wasted support time asking why some device wants this scary access.


IOMMU? You still get to lock down access per device...or the Mac laptops don’t have an IOMMU?


Many laptops ship with a disabled IOMMU or without one.


I think they do; at least mine does.


I meant more the second one, although depending on how widespread thunderbolt devices are, it might become useless like you say.

But are thunderbolt devices really expected to be that common that a message like this would be treated like the windows UAC dialog?


I definitely wish for USB-C to become very widespread, because it converges so many functions across different device types. For me, Thunderbolt adds another layer to that, enabling extra features like super-fast data transfer, however at the cost of security.

A security prompt seems very reasonable to me here; however, I'm not sure Intel or Apple are very interested in that, because they would be admitting a weakness and hindering the adoption of their own technology in some ways.

However for user trust this would be a very big win in my eyes.


I agree, however I could absolutely see it getting away from us.

If Thunderbolt gives even a slight advantage over USB-C, vendors will want to use it over USB-C. And the layperson will want it as well, which leads to "creep" making the most insecure but fastest method the "main".

There would need to be a way to dissuade vendors from using Thunderbolt for everything... And sadly a warning message isn't enough.


This isn't an apples-to-apples comparison. Thunderbolt 3 is a protocol, and USB-C is a connector type used for a lot of things, including Thunderbolt 3.

There is Thunderbolt 2, but it never saw any adoption outside Apple, and it's basically dead.


I was comparing the protocols underpinning them, apologies if that was confusing.


I dock my laptop on a regular basis through Thunderbolt, as it's my workstation as well. It really depends on the person and their usage patterns.


I have yet to meet the user who will ever click “No” at one of those prompts.


Well please let me introduce you to some of our clients that click "no" to the "[b2b app] is requesting permission to use your camera to scan barcodes" about 40% of the time according to our analytics...

But in all seriousness, I agree that it's far from perfect, however I still think there is a way to allow the possibility without almost completely giving up on security.


>I'm of the "if you physically own it, you should logically own it" mindset.

I agree with the premise, but accepting that as a truism sure makes running an off-site server rough.


> Quite frankly, I'm not worried about physical access being a "vulnerability"

But it is. Imagine a dongle that installs a rootkit once you plug it in. And now give somebody your laptop to hold for a minute.

There were (still are?) very similar issues with Windows autorun on CDs and pendrives.


To execute a rootkit, it would need to create a process in memory. Windows Defender would scan it and probably catch it in the act.


The dongle actually doesn't have to install anything, just to know a bit about kernel memory structures or how to locate interesting portions of memory.

Full read access means you're pretty much screwed if the entity has bad intentions. No root kit needed.


Windows Defender or the OS can't do much to prevent a Thunderbolt device from directly accessing memory and doing whatever it wants.


Doesn't apple use IOMMUs? Are there actually motherboards out there with TB3 support but without IOMMU?


AFAIK Apple uses a custom motherboard for their machines, so I'm not sure where to look for specs on whether the mobos support IOMMU/VT-d, but from what I can find, IOMMU/VT-d is not supported on any of the CPUs used in TB3-capable MBs, and it needs to be supported by both the CPU and the mobo for it to work.


http://ilostmynotes.blogspot.com/2014/11/thunderbolt-dma-att...

Apple is using the IOMMU in macOS to protect against FireWire/Thunderbolt attacks.

Here's a developer working on xhyve who explains how each PCI device has its own VT-d mapper/domain: https://github.com/mist64/xhyve/issues/108#issuecomment-3592...


Interesting. I've been doing lots of searching, and all I can find are forum or blog posts from unverified people, some saying that VT-d is enabled, but most saying it is not supported/enabled. In fact, if you look at the presentation by Russ Sevinsky that is referenced in your first link, it specifically says that Apple hardware does not support the IOMMU [1]. So I really don't know what to think.

1: https://youtu.be/q0HthE3qDMw?t=415

edit: I finally came across the below link from Apple, which does seem to imply that IOMMU/VT-d is enabled on Macs that are 2012 and newer. On anything before that, though, DMA attacks could own you. So uh.. don't run MacBooks that are 6 years old, I guess.

https://developer.apple.com/library/content/documentation/Ha...


Pre-logon DMA attacks were addressed by Apple in Dec 2016, https://9to5mac.com/2016/12/19/mac-thunderbolt-password/


That doesn’t make sense. Attackers will just pretend to be one of the permitted devices.

There are also hardware mitigations to make DMA safe(r) now. https://en.m.wikipedia.org/wiki/Input–output_memory_manageme...


> That doesn’t make sense. Attackers will just pretend to be one of the permitted devices.

I don't have much knowledge on how the block actually works, but I would assume it's not trivial to "just pretend to be one of the permitted devices."

> There are also hardware mitigations to make DMA safe(r) now.

From what I can tell, IOMMU is not supported on any of the CPUs used by Thunderbolt-capable MacBooks.


> but I would assume it's not trivial to "just pretend to be one of the permitted devices."

Your assumption is incorrect. Even for devices that are designed with the express purpose of being hard to emulate (auth tokens, DRM chips, iPhone cables), it’s at most a simple matter of a grad student or Shenzhen resident with access to a fume hood and an electron microscope finding some burned-in private keys. For devices that aren’t designed to resist emulation, which thunderbolt devices generally aren’t, it’s trivial. This is essentially one of the core messages you should take away from the field of hardware security.

> IOMMU is not supported on any of the CPUs used by Thunderbolt-capable MacBooks.

They all do. Intel calls it VT-d.


>Your assumption is incorrect. Even for devices that are designed with the express purpose of being hard to emulate (auth tokens, DRM chips, iPhone cables), it’s at most a simple matter of a grad student or Shenzhen resident with access to a fume hood and an electron microscope finding some burned-in private keys. For devices that aren’t designed to resist emulation, which thunderbolt devices generally aren’t, it’s trivial. This is essentially one of the core messages you should take away from the field of hardware security.

Every source that I can find regarding the 2016 DMA vulnerabilities disagrees with you. Most of them actually specifically require that Thunderbolt security features be turned off because otherwise signed drivers are required to be installed before the peripheral will even connect.

>They all do. Intel calls it VT-d.

Got a source for VT-d being supported on MacBooks? I've been looking pretty hard to find a definitive answer, but all I can find are random unverified forum posts, stackoverflow questions, blackhat presentations, etc, and all of them say that VT-d and IOMMU are not supported/enabled on recent MacBooks and MB Pros.

edit: I finally came across the below link from Apple, which does seem to imply that IOMMU/VT-d is enabled on Macs that are 2012 and newer. On anything before that, though, DMA attacks could own you. So uh.. don't run MacBooks that are 6 years old, I guess.

https://developer.apple.com/library/content/documentation/Ha...


> Most of them actually specifically require that Thunderbolt security features be turned off because otherwise signed drivers are required to be installed before the peripheral will even connect.

You’re on the wrong page here. I’m not really sure what your thought process is. The (easy) attack is to simply trick the host machine and driver into thinking you’re an approved DMA-capable device. It has nothing to do with host-side checks on the drivers.

> Got a source for VT-d being supported on MacBooks?

Intel’s website and my memory.

> So uh.. don't run macbooks that are 6 years old, I guess.

This thread is specifically talking about thunderbolt 3, so this precludes any such machines.


According to Wikipedia [1], the current 15" MBP base config has an i7-7700HQ. According to Intel [2], that CPU supports VT-d. Every mobile chipset in the 100 series (for Skylake) supports VT-d [3]. The 200 series is supposed to be the one for Kaby Lake, but there are no mobile chipsets listed in that series [4]. Also, there are several MBP models with IOMMU support on the Qubes HCL [5].

[1] https://en.wikipedia.org/wiki/MacBook_Pro#Technical_specific...

[2] https://ark.intel.com/products/97185/Intel-Core-i7-7700HQ-Pr...

[3] https://ark.intel.com/products/series/98456/Intel-100-Series...

[4] https://ark.intel.com/products/series/98457/Intel-200-Series...

[5] https://www.qubes-os.org/hcl/
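Rather than relying on spec sheets, you can also check empirically whether an IOMMU is active. I don't know of a public macOS interface for this, but on Linux the standard sysfs tree exposes it (a sketch; an empty or missing directory usually means the IOMMU is absent or disabled in firmware):

```python
# Linux-only check: an active IOMMU exposes per-device groupings
# under /sys/kernel/iommu_groups.
import os

path = "/sys/kernel/iommu_groups"
groups = os.listdir(path) if os.path.isdir(path) else []

if groups:
    print(f"IOMMU active: {len(groups)} groups")
else:
    print("no IOMMU groups found (absent, or disabled in firmware/kernel)")
```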


I have a MacBook with a processor that supports VT-d according to ARK. I cannot use VT-d features.


> I don't have much knowledge on how the block actually works, but I would assume it's not trivial to "just pretend to be one of the permitted devices."

Why would you assume such a thing? There's no historical precedent for peripheral authentication in the PCI space, nor indeed for any of the common peripheral interfaces.

The closest you get is "you get the driver that claims to best service the thing you claim to be", which makes pretending to be something you're not less useful from a functional perspective, but does nothing for system security.


I'm not sure why you would think any of that. Peripherals have long had authentication in the form of requiring signed drivers to work properly. The published information on the 2016 MacBook DMA vulnerabilities specifically addresses this.


You’ve just changed your argument from “it’s probably difficult to spoof a vendor/device ID” to “drivers have historically required signatures”.


> any connected device can read and write directly from your RAM

Which means that unrestricted Thunderbolt can be used to access DRM-protected data...


Not for security reasons at all, but rather so you'll have to buy Apple-made devices like their own external GPU enclosure.

Their restriction does not prevent actual DMA attacks at all; it just specifically blocks high-end TB3 devices.


Apple does not make a GPU enclosure, and does support external ones. I have a sonnet one with an rx580 in it. Works fine for me.


>Apple does not make a GPU enclosure

Are you fucking kidding me? https://developer.apple.com/development-kit/external-graphic...

Apple has been specifically targeting competing TB3 devices: they do not whitelist, but rather blacklist, their competition.

And what is even more obvious is that they do not block TB2/1 devices at all which are the ones used for all the publicly available DMA attack toolkits like Inception.

This has nothing to do with security.


> Are you fucking kidding me?

This breaks the site guidelines. Please keep that sort of thing out of your comments here! The rest of your comment is just fine.

https://news.ycombinator.com/newsguidelines.html


That enclosure is made by Sonnet, and the card is made by AMD. They make neither. They just say that they support it specifically as they roll out support of eGPU generally. So, no, I am not kidding you.

That Apple dev kit costs the same as buying the components separately, or have you not looked at that possibility?


They began blacklisting 3rd-party enclosures shortly before its announcement.

They do not restrict TB1/2 devices at all (including a TB2 GPU enclosure like the one I’ve been using for over 3 years), which have the same DMA access as TB3; the only difference is the speed of the PCIe lanes.

But sure tell me how this is done for security.

If Apple cared about security they would implement the same port and DMA protection MSFT has with Win10. But this is purely for business gains.


These third party blocks have been in place since the first Thunderbolt 3 Mac launched. They didn't start blocking anything in particular 'shortly' before they announced this dev kit at all.

These third party devices that Apple blocked were not just enclosures, there were other TB3 devices (e.g. dual display adapters) that Apple to this day have not announced or endorsed a competitor to, and in fact those same companies (e.g. StarTech) then released Mac compatible versions, which suggests Apple were just enforcing a standard that they wanted OEMs to meet before allowing them to work by default.

The blocks have been found to be based on an earlier chipset version, and that devices made using later versions of the TB3 chipset work just fine.

This is what a peripheral manufacturer said (Plugable):

The version of OS X on the new MacBook Pros (late 2016) will not work with existing Thunderbolt 3 docks and adapters that were certified for Windows prior to the release of the MacBook Pro. These existing devices use first generation of TI USB-C chipset (TPS65982) in combination with Intel’s Thunderbolt 3 chipset (Alpine Ridge). Apple requires the 2nd generation TPS65983 chipset for peripherals to be compatible. Certification of solutions across different device types is still in-progress for this 2nd generation chipset. From the Plugable product line, our dual display graphics adapters for DisplayPort and HDMI (TBT3-DP2X and TBT3-HDMI2X) are affected… We’ve also postponed our TBT3-UD1 Docking Station to update to the TPS65983 chipset and re-certify to make this docking station MacBook-compatible. Our Flagship TBT3-UDV dock with Power Delivery/Charging was already planned to use the next generation controller chip from TI, and will be compatible with the 2016 Thunderbolt 3 MacBooks.

Does this still fit with your argument?


Yes: Razer and the GB enclosures use the TPS65983, and have since their earliest revision.


Not according to this:

https://egpu.io/forums/thunderbolt-enclosures/review-razer-c...

Which has photos of the chipset which is labeled TPS65982.


I originally thought blocking access could be due to a licensing thing, given that Apple and Intel collaborated on it [0]. However, I found [1] stating that the licensing fees and royalties were waived in order to ensure larger market adoption.

[0] https://en.m.wikipedia.org/wiki/Thunderbolt_(interface)

[1] http://appleinsider.com/articles/17/05/24/intel-making-thund...


> I wonder why they did that. Stability, probably?

Not malice anyway... From this project:

"Note there is likely a reason why IOThunderboltFamily considers a peripheral unsupported in the first place. Use at your own peril."


It could possibly be because of security issues?


Security? DMA is notorious.


The code for xnu_override (the library the author wrote to patch functions in the kernel) is wrong for a number of serious reasons.

It assumes (with no verification) that the first twelve bytes of any kernel function you could want to patch are all one basic block and do not contain a jump target or a relative branch. These reasons ARE why live patching is hard and why proper support for it is basically required in original code (making sure each func starts with a basic block at least twelve bytes long). Without that, it's quite a dangerous game to play in kernel space.

Furthermore, it assumes RAX is safe to clobber. This may not be so. Compilers are generally free to ignore standard ABI when calling provably-static provably-leaf functions. Who said RAX doesn't have a useful value on entry to patched function?

Additionally, it edits cr0 and cr3 with interrupts enabled, allowing it to be preempted during this action. If said preemption happens, another edit to those registers could happen in between. Scary, since what's being done is a non-atomic read-modify-write on CPU control registers.

I strongly advise against anyone running this on any system.


> I strongly advise against anyone running this on any system.

Oh come on. Your paragraphs 2-4 describe serious conceptual defects with xnu_override in general, but this is the kind of hacker shit you'd expect to find in something that will live-patch your kernel code. By the time you find yourself turning off SIP to load some code to hot-patch your kernel, you've long since crossed the Rubicon of caring about running particularly good code.

The odds are good that the functions this thing is set up to patch do hold with your first two invariants (RAX is scratch, there are 12 bytes in the entry BB). The third is the race, and you'll probably win this race most of the time.

"You'll probably win this race most of the time" shouldn't be in any serious engineering design document and is probably grounds for demotion or firing if you bring it up as a defense during a design or code review, but that isn't what's going on here. This is some hacker shit posted to github.


And if this project was one monolithic thing, I'd agree. But the page sells this xnu_override as a "reusable library" which might lead someone to try to reuse it in another place. This is scary and hence my warning. Hell, tomorrow's apple 10.13.0.0.0.0.0.0.minor.insignificant.0.1 patch could change these "invariants"


You're right, and I was reacting poorly to the "don't run this on anything" statement. It's some hacker magic off GitHub; caveat emptor. You're also right that future updates could break it. It's brittle. I won't be running it myself.

I bet, though, that 99% of the time, it works every time, and for its use case, that's "good enough." Hot-patching the kernel is scary, though, and hopefully no one walks away from looking at that GitHub page thinking "oh yes, I know so many problems that can be solved by hot-patching the running kernel," because honestly, getting the act of patching the code itself right is only the start of your troubles.


Author here. Yes, the issues you pointed out are mostly already captured via // TODO and // FIXME comments. The library was written in about a day. I could probably add the basic block length check (as it is, I just check for 0xC3), but I wasn't able to readily figure out how to disable interrupts during patching. If you have a suggestion, I'll implement it. The nice thing about a reusable library is that safety improvements can be made and shared with other clients.

(As an aside, can I suggest posting an issue to GitHub? I might not have seen this thread, and xnu_override users who want to know of its shortcomings should also be able to find your comments easily.)

I hope that someone who has the know-how to reverse engineer the kernel, devise a patch to add desired behavior, and write a kernel extension to implement that behavior, will understand that patching the kernel is inherently a dangerous thing to do—and will actually test that their change works.

I wrote this as an alternative to other patchers which provide a string search and replace on the system kexts. Reverting these patches, dealing with the code signing fallout, and updating them for a new release are significantly more challenging with this style of patching.


I don't GitHub, but here are some thoughts: you only need to disable ints when you mess with cr0 and cr3 and apply patches. There are APIs in the kernel for that.

Basic block detection isn't easy. You CAN find branches in the block, and even patch them to work when you copy them elsewhere (you'll need this since most x86 branches are relative). There is provably NO 100% certain way to make sure no branch anywhere else targets those 12 bytes though.

You can use heuristics to almost-reliably guess though. If there is a "call", the instruction after it likely is a target of an indirect branch (ret). Also you'll have to disassemble the entire function (breadth first search till every traversal path hits a "ret") and see if any jumps to these 12 bytes exist. This will cover 99.99% of cases. The remaining 0.01% is VERY hard.


FYI, this block was found back at the launch of the first Thunderbolt 3 Macs to be because Apple (for whatever reason) didn't wish to support the older Thunderbolt 3 chipset, according to peripheral manufacturer, Plugable:

The version of OS X on the new MacBook Pros (late 2016) will not work with existing Thunderbolt 3 docks and adapters that were certified for Windows prior to the release of the MacBook Pro. These existing devices use first generation of TI USB-C chipset (TPS65982) in combination with Intel’s Thunderbolt 3 chipset (Alpine Ridge). Apple requires the 2nd generation TPS65983 chipset for peripherals to be compatible. Certification of solutions across different device types is still in-progress for this 2nd generation chipset. From the Plugable product line, our dual display graphics adapters for DisplayPort and HDMI (TBT3-DP2X and TBT3-HDMI2X) are affected… We’ve also postponed our TBT3-UD1 Docking Station to update to the TPS65983 chipset and re-certify to make this docking station MacBook-compatible. Our Flagship TBT3-UDV dock with Power Delivery/Charging was already planned to use the next generation controller chip from TI, and will be compatible with the 2016 Thunderbolt 3 MacBooks.

I still don't think we exactly know why, but the allegations that they're doing it for profit or in some attempt to control the ecosystem don't seem to hold up, especially since most of these peripheral manufacturers have since updated their devices with the new chipset to work with macOS just fine.


Why would anyone (specifically developers) buy another apple product if they now start blocking hardware from working on PCs?

Them blocking battery extenders with headphone jacks on the iPhone is already bad enough, but doing it to PCs is a whole other level of walled garden.


> Why would anyone (specifically developers) buy another apple product if they now start blocking hardware from working on PCs?

Quite frankly, it doesn't affect me. The only hardware I plug into my MBP are 1.) the charger and 2.) sometimes a monitor. They're (usually) pretty good laptops.


What if the hardware doesn't work on the PC though? Isn't that the point?


FWIW, the current case seems to be "hardware works just as the spec says, OS blocks it anyway."

Mind you, I've seen many types of HW, some of which indeed wouldn't work "with a PC." The expected course of action was always "complain at peripheral vendor," this "we know what you want, but nope because FU that's why" has historically been reserved for vendors trying to keep out competition.

So, while it is theoretically possible that an OS-enforced soft block is there to protect the user, a statistically more probable version is that it exists to protect the OS maker from competition; IMNSHO the ball is in Apple's court to prove otherwise.


FWIW, the current case seems to be "hardware works just as the spec says, OS blocks it anyway."

I've noticed a disappointing and somewhat repulsive trend increasing over the past few years, one which not only Apple participates in (though they are one of the most notable): a move from "not supported" meaning "we won't help you with it" (fair enough) to "we will actively stop you from doing it" (FU for even trying to do something we didn't think of/don't want/etc.).

IMHO it's perfectly reasonable for a company to not offer support for literally every hardware combination out there, because that's a huge space; but to deliberately sabotage attempts at doing things outside of that boundary is incredibly hostile. I have no definitive evidence but this trend seems at least somewhat correlated with the rise of authoritarianism.

As a developer myself, and also someone who has done many "unsupported" things with no ill effect, I constantly try to fight this position from others, but it's difficult.


> FWIW, the current case seems to be "hardware works just as the spec says, OS blocks it anyway."

Based on what?

The comments from Plugable make a strong case that there are one or more un-fixable issues with certain combinations of peripheral silicon that lead to an unacceptable customer experience.

As can be clearly seen in this thread, given the opportunity a significant slice of the population will choose to blame Apple, so the only sensible response is to avoid the situation.

Put yourself in their shoes; given these two scenarios:

a) some set of customers' peripherals stop working with "this peripheral isn't supported"

b) some set of customers suffer random hangs, crashes, freezes, data loss

And given that you only get to choose one of those two, which one demonstrates a real commitment to the customer?

I realise this doesn't fit the "Apple is always out to screw you" narrative, but I guess you have to ask... if it doesn't, perhaps your story's not so true?


“Statistically more probable”.

Based on what statistics, exactly?

Apple has a long history of blocking many things to protect users, e.g. unrecognized app developers and unsigned kexts.

OTOH, blocking certain TB3 devices hardly seems like a move that puts their competitors at any disadvantage. Apple doesn’t make any TB3 peripherals.


That is great. I find Thunderbolt the best solution for using notebooks at home and at work (multiple monitors, USB ports). I only hope that in the future you can connect external GPUs in some way. I have tried via PCIe with an external board, but it is not the best solution (performance, unsupported); it is just a hack.


I've been using eGPUs for gaming (on Windows though) on my MacBook Pro for a few years now. macOS introduced official eGPU support and now there seem to be numerous compatible enclosures: https://egpu.io/macos-high-sierra-official-external-gpu/


A laptop with a powerful external GPU is also a nice way to have a quite portable VR setup.


Which GPU are you using?


Nvidia GTX 970. My setup is pretty DIY, with an external 200W power supply. Today I would get one of those nice enclosures mentioned in the article. You lose around 10% performance compared to an internal GPU.

BTW: Windows 10 also has pretty solid eGPU support on my MBP at least.


I use my 980 Ti via Thunderbolt on my laptop; it’s about a 3-10% performance hit. The main issue is that quite a few laptops struggle to do the full 40 Gbps error-free.

Nice thing though is my laptop gets power over the USB-C cable thunderbolt 3 uses, so it’s only 1 cable to plug in.



599 USD - ok that's crazy


Considering current GPU prices and the fact that a RX580 is included, it's actually a steal. :)

(No wonder it's unavailable...)


Oh, I didn't see they were including an RX 580.

The Razer Core, which looks like this kind of system, costs 499 without a GPU...



