It is difficult to believe that there is no backdoor.
DDR3 and DDR4 are a good example of this: the CPU's memory controller scrambles what is written to physical memory to bring the bit pattern closer to an even 50/50 mix of 1s and 0s.
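The idea can be sketched in a few lines. This is a toy illustration only, not Intel's actual scrambling algorithm (which is undocumented): XOR each byte with a pseudo-random keystream derived from the address, so all-zero or all-one data looks random on the bus, and applying the same XOR again descrambles it.

```python
def keystream(address: int, nbytes: int) -> bytes:
    """Cheap xorshift-style keystream seeded by the physical address (illustrative)."""
    state = (address * 0x9E3779B9 + 1) & 0xFFFFFFFF
    out = bytearray()
    for _ in range(nbytes):
        # 32-bit xorshift step
        state ^= (state << 13) & 0xFFFFFFFF
        state ^= state >> 17
        state ^= (state << 5) & 0xFFFFFFFF
        out.append(state & 0xFF)
    return bytes(out)

def scramble(address: int, data: bytes) -> bytes:
    """XOR data with the address-seeded keystream; the same call descrambles."""
    return bytes(d ^ k for d, k in zip(data, keystream(address, len(data))))

raw = b"\x00" * 8                        # all zeros: worst case for bus balance
on_wire = scramble(0x1000, raw)          # what would actually hit the DRAM bus
assert scramble(0x1000, on_wire) == raw  # XOR is its own inverse
```

Because the keystream depends only on the address, the controller can descramble any word on read-back without storing anything extra.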
In this case I doubt this is the primary reason though.
Here's an open-source controller, so you can see for yourself: https://github.com/enjoy-digital/litedram
1. Protecting their trade secrets. Even if it's eventually leaked or copied, the time between introducing the product and a clone happening might represent billions in sales.
2. Reducing patent damages. Hardware is a patent minefield, with suits and licensing deals fairly common. Companies that want to troll need to see clearly that their patents were infringed; making the product as opaque as possible reduces the number who will notice in the first place, and how certain their claim will look. Patent suits are the single greatest threat to open-source hardware companies that operate in an existing ecosystem.
i.e., it may reduce the total sum of patent damages across all potential cases by reducing the number that result in litigation, while increasing the likelihood of higher damages after an unsuccessful defense.
I can't help but think point 2 is a double-edged sword.
So, without even getting into the whole DRM argument, yes?
From the user, since SGX is also used/intended for DRM purposes.
This doesn't mean it isn't a deterrent, but doing things in an obfuscated and unusual way also means you're potentially treading on unexplored ground, so there may be other exploits, bugs, and inefficiencies that come with the territory.
You might have a point if someone had ever made a perfect scheme, but everything that seems to have attracted attention has been broken AFAICT.
So, a moat around a castle does help. Calling it good after digging a moat and not building any walls is probably less useful than doing nothing. The moat is only useful in the broader context.
I worked for a thin-client vendor years ago that was obsessed with encrypting our boot image and poured all our resources into that. This led to encryption that was still trivial to work around, while a lot of other real security issues were never dealt with because we were fixated on the ineffective obscurity layer.
It also made our images less secure, because it added no real security while making it harder to improve security where it mattered.
In general, to protect the hardware IP. ME and FSP both do a lot of low level stuff that is instrumental to the silicon working in the first place.
This is big news. It should be #1 on HN, instead of the failed pentest story.
Right now I am ordering a GB-BPCE-3350C to replicate their work and see how to extend it to other platforms. Intel will surely move to limit the vulnerability, so there's no time to wait.
A modern AMT free laptop would be of great value.
I remember this being very well received when it was first published on Twitter.
I get most news here now.
...but this leaves only a single conductor to do the entire thing?
This is a good example of how USB3.0 works almost like a separate, parallel bus to USB2.0.
> Like DAL, the OpenIPC library includes a command-line interface (CLI), written in Python and provided as a library for Python as part of Intel System Studio, which can be installed on the system with the help of pip.
Intel System Studio doesn't ship its own Python; it depends on the OS's Python version. So here we have an industry-infrastructure dependency on Python, and these typically seem to still be stuck in 2.7 land.
Python is being used here to actually wrap/coordinate the components that do the JTAG handshake. A neat approach, just using a whole programming language as the REPL. I like it. But this is one situation where I'm going to be very conservative with the tooling, as I don't want to break my system! :)
So, that's the real reason why.
Also, the repo has, right in view, a recent commit message of "fix for Python 3 compatibiliy". The authors are clearly just using 2.7 because ISS requires it, and given the bigger picture of tinkering with PoCs, mucking about with multiple Python versions seems like a waste of cycles and unnecessary management and verification of moving parts (I don't savor the idea of figuring out how to tell ISS where 2.7 is, and how to make absolutely sure it's avoiding 3.x).
HN, please endeavor to pursue higher quality discussion. Devolving into meta is an excellent way to encourage stagnation. (I know this sounds a bit of a self-righteous thing to say but it's tricky to objectively articulate.)
I'd love to know why this project specifically chose Python 2.7 too. Python 2 reaches its end of life in 2020, any new Python code should really be written in Python 3.
By the way, if you want to prevent getting a bunch of downvotes like your comment did, be polite! Not including any capitalization or punctuation is considered downright rude by many. Your text reads like a robot's printout.
I'm not a Python programmer, but I can find my way around without much issue. Still, from where I'm sitting, all that stuff about envs and the two, three, or four redundant, overridden, or deprecated solutions sounds strange, and I don't really want to buy into it just yet (I need Python about once a month, no more). For the casual programmer ("just want to change this line and run, please") it's a roadblock: virtually every README on GitHub tells you to use "pip install ...", but then you have to second-guess whether they mean "pip2" or "pip3", and whether they assume you'll be running v2 or v3. Maybe they tell you to run "pip install something" (which runs pip2), but then the script has a Python 3 shebang; the installed dependency won't be found, and now you have to understand this whole mess if you want to continue.
This is on Ubuntu, where the system's "python" is Python 2 and you need to explicitly add a "3" to all your commands to run v3. I'm sure there are clever ways to solve the issue, but I bet they'll break some use case. I'd rather none of this nonsense existed at all; then no funny workarounds would be needed.
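One common defensive pattern (my own suggestion, not from any of the READMEs in question) is to fail fast with a clear message when the wrong interpreter picks up the script, and to install dependencies via the interpreter itself rather than guessing which `pip` alias is wired to which version:

```python
import sys

def require_python3():
    """Exit early with a clear message if the script was launched under Python 2."""
    if sys.version_info[0] < 3:
        sys.exit("This script requires Python 3; run it as 'python3 script.py'.")

require_python3()
# From here on, Python 3 semantics are guaranteed.
# For dependencies, prefer "python3 -m pip install <pkg>" over a bare "pip":
# the package is then installed into the same interpreter that runs the script.
```

This sidesteps the pip2/pip3 guessing game entirely, since `-m pip` always uses whatever interpreter you invoked.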
I wonder how other languages are able to slowly introduce breaking changes over time (I'd swear Ruby was mentioned in some other HN thread as doing this) without causing this kind of mess (as perceived from my humble outsider's point of view)?
Most people will stop running python 2 when malware targeting it becomes rampant. So people being able to run your code is another benefit.
Python 3 might occasionally require some extra steps when consuming strings compared to Python 2, but the reality is that those steps were always necessary. Python 2 just hid those details in a way that was only really safe for english-exclusive development. That doesn't mean that Python 2 is easier to use or less brittle. In fact I would say it means the opposite.
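To make that concrete, here is the "extra step" in question. The decode was always logically required at the boundary between raw bytes and text; Python 3 just refuses to guess it for you:

```python
# Python 3 makes the bytes/str boundary explicit. Python 2 silently guessed,
# which only worked reliably for ASCII-only (i.e. English-exclusive) data.

raw = "Grüße".encode("utf-8")   # what actually travels over a socket or file
assert isinstance(raw, bytes)

text = raw.decode("utf-8")      # the step Python 2 hid (and botched for non-ASCII)
assert text == "Grüße"

assert len(text) == 5           # 5 characters of text...
assert len(raw) == 7            # ...but 7 bytes, since ü and ß each take 2 bytes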
The last Unicode issue I had involved German characters on one system, because some library assumed it had to explicitly perform encoding with a bad default setting. If the library hadn't tried to be smart, the program would have worked regardless of system or language; instead it failed on any non-English system by trying to convert a perfectly fine, system-specific encoding to UTF-8.
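That failure mode is easy to reproduce. A hypothetical sketch (Latin-1 standing in for the system encoding, since the original comment doesn't name it): data already encoded in the system's encoding gets treated as if it were UTF-8, and the conversion blows up on any non-ASCII byte:

```python
# Bytes encoded with the system's actual encoding (Latin-1 here, as an example).
latin1_bytes = "Grüße".encode("latin-1")   # b'Gr\xfc\xdfe'

# The library's bad assumption: "all input bytes must be UTF-8".
try:
    latin1_bytes.decode("utf-8")
    ok = True
except UnicodeDecodeError:                 # 0xFC is not valid UTF-8 in this position
    ok = False
assert ok is False

# Decoding with the encoding the system actually used works fine:
assert latin1_bytes.decode("latin-1") == "Grüße"
```

The program only fails on non-English systems because pure-ASCII bytes happen to be valid in both encodings, so the wrong assumption goes unnoticed until a ü or ß shows up.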
The NHS chose not to upgrade from Windows XP and Server 2003, and look where that got them last year: a massive cryptolocker infection.
All interchanged Unicode text should be UTF-8; never use another encoding without a really compelling reason.
No, storing it as an array of unicode characters isn't a compelling reason during interchange.
ALSO, never use a BOM; it will break things.
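For anyone who hasn't been bitten by this: a UTF-8 BOM is just the bytes `EF BB BF` prepended to the stream. A decoder that doesn't expect it leaves an invisible U+FEFF character at the start of the text, which then quietly breaks naive parsing downstream:

```python
bom_bytes = b"\xef\xbb\xbf" + b"42"       # "42" written by a BOM-happy editor

text = bom_bytes.decode("utf-8")          # plain utf-8 keeps the BOM as U+FEFF
assert text != "42"                       # looks identical when printed, isn't equal

try:
    int(text)                             # "\ufeff42" is not a valid integer literal
    parsed = True
except ValueError:
    parsed = False
assert parsed is False

# If you must consume files that may carry a BOM, decode with utf-8-sig,
# which strips a leading BOM if present:
assert bom_bytes.decode("utf-8-sig") == "42"
assert int(bom_bytes.decode("utf-8-sig")) == 42
```

The insidious part is that the text prints identically either way; only comparisons and parsers notice the stray character.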
The second answer in this (it should be anchor-linked) covers most of the advantages of UTF-8, but it doesn't capture that carefully chosen operations in otherwise completely Unicode //unaware// 'string' functions result in no change to string validity.
The only potential issue is when recognizing something across different normalization forms matters. However, for nearly all quick-and-dirty tasks (where a short script is most likely), it usually doesn't. For everything else, a different paradigm than the one Python 3 picked would be better: one where adding filters to a read file is OPTIONAL, and they can be invoked on individual byte strings as well.
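The Unicode-unaware-but-still-safe property comes from UTF-8's design: the bytes 0x00-0x7F appear only as ASCII characters, never inside a multi-byte sequence, so byte-level split/join/search on ASCII delimiters can't corrupt non-ASCII text. A small demonstration:

```python
# Raw UTF-8 bytes containing multi-byte characters (2- and 3-byte sequences).
line = "naïve,café,日本".encode("utf-8")

# Splitting raw bytes on an ASCII comma, with no decoding at all: safe,
# because a comma byte (0x2C) can never occur inside a multi-byte sequence.
fields = line.split(b",")
assert [f.decode("utf-8") for f in fields] == ["naïve", "café", "日本"]

# Rejoining round-trips perfectly: no character was ever cut in half.
assert b",".join(fields).decode("utf-8") == "naïve,café,日本"
```

This is exactly the kind of quick-and-dirty byte work that stays valid without the tool ever knowing Unicode exists; only operations that slice at arbitrary byte offsets (or care about normalization) need real Unicode awareness.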
Also, you don't have much longer until support drops entirely for 2.x, and then you really will have to port your projects. Why wait and port? Just write them in 3.
There is nothing wrong with 3. Just starting in 3 is easy and not an issue. It is basically the same language.
What if Python 3 was a separate project that someone other than GvR had put forward, would adoption have the same impetus? I don't think so.
My guess is they just went with whatever was most convenient and worked. I doubt if this kind of questions were even on the radar. It's security research, not a production website. In research you tend to go with whatever tool does the job.
Look at the overall complexity of this work. Maybe the author simply felt more confident with Python 2. You take whatever shortcuts that make things simpler, even if it means using 20 year old software.
It’s not just “perceived”; it is the actual reality, e.g. on CentOS. If you still need compatibility, the best option is 2, regardless.
I’m no Luddite but the simple fact today if you want to reliably deploy and can’t or don’t want to ship a container, Python 2 is the reality.
Additionally, expecting them to properly support all the new features and manually create (not backport) fixes for issues credits Red Hat with more resources than reality allows.
For that matter, given the inexplicable popularity of emojis, it isn't even a matter of international text anymore.
So yes, go ahead with 3 if you are into emoji.