Yeah, I've come to similar conclusions about Agile. It can be a great way for an already well-disciplined organization to think about the work they're doing. But many, many undisciplined organizations thought that Agile would be a catalyst for them to become disciplined. But the nitty gritty of "doing Agile" required EVEN MORE discipline than what all these organizations were already capable of exercising, so it just highlights all the frustrations everyone already has.
That does seem to be the downside of Agile. It's a collection of maybe a dozen different techniques and practices. But if one of those practices falters, whether it's TDD, or the business side still demanding a certain deadline, or nobody bothering to demo at the end of a sprint, then the whole house of cards falls like dominoes..... checkmate!
One big factor that I don't THINK the article touches on is how we are using 'space' metaphors (speed and distance) for our work that's very 'time' based (working productivity and duration). And I think when we estimate, we try to estimate distance/duration but we forget that we're really trying to estimate our speed/productivity.
Where that gets painful is, say you estimate you can get something done in two days. In reality, you're twice as fast (there's that speed metaphor), so you actually get it done in one day. Yay, you saved a day! Now assume you're twice as slow as your two day estimate. Boo, you spent TWO days longer. So in terms of duration, getting it incorrect in the painful direction seems like a bigger mistake.
I don't think this is the same phenomenon as the author's mean vs. median dilemma. I'll bet both the mean vs. median and the productivity vs. duration dilemmas are real factors though.
A counter-argument is that even someone like you who doesn't use facebook, amazon, and google is still irrevocably affected by their power and behaviors. You seem to think you've escaped them, but you surely have a Shadow Profile in facebook, you browse websites that show you AdSense ads (google), and oh yeah, your civic institutions are manipulable via popular opinion on facebook. Hmmm, you do have a strong point about Comcast and Verizon, but I think there's a false choice in there.
And how would breaking them up prevent any of that? Facebook doesn't have a monopoly on data collection, Google doesn't have a monopoly on display advertising, and Facebook doesn't have a monopoly on peer pressure.
The value and impact of commercialized surveillance depend critically on the ability to build large databases, both on any specific individual and on the number of individuals thus affected. Breaking up the data miners probably would help.
And if we take a step back: this isn't necessarily only about privacy, but about capitalism and competition. Data-miner customers and data-miner subjects might both get a better deal if there were competition between the middlemen, at least more than there is now.
They have monopolies on the political leverage that comes with all of the above.
Verizon and Comcast could play the same game, but they're still more about money than influence - which is why they've chosen to make their stand on killing net neutrality, not on influencing referendums and elections.
FB particularly is incredibly toxic to genuine democracy - not necessarily more toxic than some of the other monsters in the mainstream media shark tank, but certainly not a company that should be allowed to run riot without oversight.
Two months ago, Apple announced an ECG sensor for your wrist. A year before that, they announced face detection for the purposes of identity with enough accuracy that it can be used for financial transactions. Also, AirPods are incredible. I'm not saying Apple has a monopoly on invention, but to say they 'used to be an inventor'? That's weird.
Personally, I find FaceID to be vastly inferior to TouchID for many of my regular use cases.
Perhaps the worst one is that you cannot easily unlock your phone to see the contents of a message while it lies on a conference room table. You need to pick it up and point it at your face. Likewise when using the phone while it is in a stand/holder.
The one and only benefit I've found is that during the winter it's easier to unlock the phone with gloves on.
Please stop propagating this falsehood, or at least accept that it comes with caveats. Biometric ID on Apple devices is likely to be a significant improvement for many users.
It _always_ depends on your threat model. Most people need protection from snooping family members, or people who find your phone if you lose it. For these use cases Face/Touch ID both work great. If you are trying to secure your data from the NSA, well you have probably already lost, but by all means, turn off Face ID.
> If you are trying to secure your data from the NSA, well you have probably already lost, but by all means, turn off Face ID.
If you're trying to secure your data from the NSA, carry a flip phone and turn it off and throw it in the freezer before you have any sensitive in-person conversations. Also have all of your sensitive in-person conversations right next to a loud white noise generator (i.e. on the seashore). And memorize all of your confidential information. And always carry a highly reliable suicide method in case you get captured and interrogated.
I'm not being funny here, these are literally the precautions that people take against state-level espionage.
I find Face ID to be a vast improvement over Touch ID. It's much faster and less fiddly to unlock the phone than with the old Touch ID. I love that you can activate and unlock the phone just by looking at it. Apple Pay is much faster. And I like that it's smart enough to not dim the screen when you're looking at it, even if you haven't interacted with the device for a while.
My personal inconvenience is that I'm myopic, so I tend to hold my phone too close to my face. The camera doesn't see me, and I have to tap in the PIN every other time. TouchID doesn't depend on where my face is; my fingers are always near the phone.
I am also near sighted. I have to back the phone away from my face to get it to unlock when I am not wearing glasses/contacts. At least it will retry on a swipe-up.
I find that Face ID is only a mild win over Touch ID in situations where it's better, such as when you're taking the phone out of your pocket in one motion, or when the phone is propped upright and you get more details on a notification.
But in situations where it fails I find that it fails harder and repeatedly, which makes you want to choose a simple password (perhaps that's why Apple tucks away the alphanumeric option). When a phone is laying flat on a desk, you can lean your face over the phone. When your head is on a pillow, you can lift your head off the pillow. When the lighting conditions aren't good, you can just turn on the lights and position the phone at that "magic distance" until it unlocks. But if you don't it'll just fail again and again.
As a minor point, I'm surprised that you prefer to double press while looking at your phone, versus having a fingerprint reader on the back so you can unlock your phone in one gesture of hand toward the payment system.
"I'm surprised that you prefer to double press while looking at your phone"
The double-press for Apple Pay? In many situations you don't actually have to do this. Just place the phone up against the store's reader, then look directly at the phone and Apple Pay will activate without further interaction.
On London Underground and Buses, though, that's awkward and might hold up the queue, so I do double-tap to activate Apple Pay in advance before getting to the reader. But that's certainly no more difficult than with Touch ID, when you had to double-tap the home button.
(Also, you don't have to double-press and look simultaneously. The double-press will activate Apple Pay, then a quick glance at the phone will unlock it for payment. You then have a minute or so to actually touch it on the card reader).
I have an old iPhone, so I can't compare the two. But what I will say is that I REGULARLY have trouble with TouchID. Hands not dry enough, or it just fails to recognize the fingerprint for other reasons. If FaceID works better than, say, 2/3 of the time, then I'll have a better experience than with TouchID.
I have unusually greasy fingers, so TouchID never worked for me on iPhone. (It seems to work fine on my MacBook Pro, though, so maybe it got better). FaceID doesn't consistently recognize the particular smushed shape of my face first thing in the morning, but other than that, it's perfect.
Just a reminder, that under US law, the 5th amendment only applies to passwords (what you know), and not biometrics (what you are). If you choose to use FaceID, thumbprints etc, the government can force you to unlock your device.
Another reminder, you can temporarily disable Face/Touch ID by holding the volume up and power buttons for two seconds, something you could probably do while your phone is in your pocket without anyone noticing.
>something you could probably do while your phone is in your pocket without anyone noticing
that's easy to do if you're driving a car and you get pulled over, but what if a cop stops you on the street? reaching into your pocket is asking to get shot.
> A year before that, they announced face detection for the purposes of identity
Somewhat like Windows Hello, that also existed, based on technology from the Xbox Kinect?
> Two months ago, Apple announced an ECG sensor for your wrist.
Okay, so this is cool. But, putting on my paramedic hat for a moment, there is _SO_ much disinformation about what this does and what it is capable of detecting, what the difference is between FDA _clearance_ and _approval_, etc.
It can detect A-fib. This is a common, but usually not life-threatening, medical condition. It's good to have it diagnosed, but even undiagnosed, many people live happy lives blissfully unaware of it. Another way you can potentially recognize A-fib? It's not quite as fancy as the Apple Watch, though: put your fingers on your radial pulse by your wrist. Feel yourself skipping every fourth beat? That _could_ be a problem (though there are other diagnoses).
The Apple Watch does not and _cannot_ (despite ill-informed articles by Cnet and others) take the place of a "12 lead" ECG (random detail, in the medical field, ECG usually refers to an echocardiogram, an ultrasound imaging, and EKG, for electrocardiogram, is most commonly used for what the Apple Watch and other devices are doing).
From Cnet[1]: "Traditional EKG machines have 12 leads with electrodes that are attached all over your body to measure the electrical signals. Apple compares what the Apple Watch Series 4 does to a single-lead EKG, which research shows is just as effective at measuring the heart's electrical signals as a 12-lead machine."
This is flat-out factually wrong. The linked research shows nothing of the sort. It walks someone through using a single-lead system multiple times (up to 10) to approximate the results of a 12-lead (if you've ever wondered why a 12-lead EKG only requires 10 physical leads, think of them more as 'axes', measured multiple ways, i.e. from lead 1 to lead 4, lead 1 to 5, etc.), and then aggregating those readings manually. For one, this requires moving the end points of the leads multiple times, something you could not do with the Apple Watch (or, to be clear and fair, any other watch), unless you're planning on holding it against many different spots in sequence (which then makes it more of a time lapse than a snapshot).
What does that mean? It can't diagnose impending heart attacks, nor heart disease, valve problems, circulatory disorders, and it likely never will, especially with current hardware.
This is also why it's obtained FDA clearance, not approval. To use a metaphor, it's more like a fitness device on steroids, so to speak, with some minor overlap into general health. But not that much more.
Having said that, an incidental 1 lead EKG on a wrist that can be brought up to your primary care and cardiologist at a later date is a game-changer. In clinical practice we will often use a Holter monitor (4 lead wearable EKG) to look for paroxysmal atrial fibrillation, but having a passive monitor such as this on for months at a time will help doctors have more information regarding tachycardias like atrial fibrillation. I disagree profoundly with the assessment that the Apple Watch is just a glorified fitbit. (Source: medical resident)
ECG on your wrist is a gimmick, not a true innovation. Extremely impressive from a technical standpoint, but not actually relevant to many consumers.
The criticism always comes back to how impactful the original iPhone and iPod were. Apple has failed to live up to that standard ever since, but to be honest it's an impossible standard to be held to.
Yeah... Just yesterday I was driving at night, through road construction, with a torrential downpour, and nowhere to pull off, and it was freaking scary. And I remember just thinking, "This kind of thing happens to me maybe a few times a year and there's no way in 30 years that we're going to trust autonomy with this"
but isn't that part of the point? I doubt any automated system would consider driving in those conditions 'safe', so it would deal with the situation by pulling over to a safer spot and stopping. Humans make terrible risk decisions in cases like that - continuing to drive in horrible snowstorms, etc., when the risks are way higher than on our already-risky roads in normal conditions.
By not having the human make that decision, you save lives, even if some people arrive home late.
It does raise the point of 'rescue' in certain dangerous conditions like winter storms. Extreme rain in the dark can probably normally be waited out, but snowstorms and other road-closure type conditions probably warrant a different proactive rescue type response if we'll have riders with no driving ability in self-driving cars.
"Not driving in the insane conditions when humans are foolish to do so anyway" would IMHO allow routine drives in good weather in known terrain without roadworks, about half the year. Which is a great and magnificent improvement, in all honesty - that is, once we can get the marketing types to cool down from their current hype "it drives itself, full autonomy, everything and a pony*!!!!!!!"
I think this is a critical point. We need to rein in customers' expectations, which have already been set too high. They're already expecting to just get in and go anywhere while watching things on their phone or reading a book... soon. We already see this with people posting videos of sitting in the passenger seat while their Tesla rips down the highway in traffic.
Like anything it should be a graduated phase in. It will handle some of the conditions some of the time, and in time it will get better. It would be like me being frustrated I can't carry on a conversation about philosophy with my Google Home. "... but you said I could talk to it and ask it questions!!!"
Expecting a conversation on philosophy would be completely understandable - if the vendor sold it to you with the tagline "it has all the parts it needs for a philosophic conversation!" Google Home doesn't do that, Tesla does. (Musk doesn't even try to weasel around it: says "full self-driving features", Tesla marketing materials repeat. That is, in my opinion, a blunt lie.)
If you're already struggling financially, being forced to pony up 30% of this month's net revenue just to keep some public agency afloat so it can fine and fee more people later is the kind of bad break that kills people (https://www.thecut.com/2016/12/america-is-failing-the-bad-br...).
It's cruel and unjust and it has no place in America.
Because it creates an organization that deviates from its purported mission, abusing the public trust instead of serving a common need.
The self-funded USPTO has a bias toward approving bad patents to generate revenue and consequently enables the predatory behavior of NPEs. It becomes a net detriment to society.
If it rejected 99.9% of patents, the expected value of the typical application would drop to 1/1000 of its current value. So fewer people would pay the application fee.
Fewer patent clerks would be needed, so operating costs would also decrease. But presumably not all the way down to 1/1000 of what they are now.
If it gets a reputation for being stricter on granting patents, a lot of people won't waste their time or money in submissions that are likely to be rejected.
> To understand some of the distrust of police that has fueled protests in Ferguson, Mo., consider this: In 2013, the municipal court in Ferguson — a city of 21,135 people — issued 32,975 arrest warrants for nonviolent offenses, mostly driving violations.
> A new report released the week after 18-year old Michael Brown was shot and killed in Ferguson helps explain why. ArchCity Defenders, a St. Louis-area public defender group, says in its report that more than half the courts in St. Louis County engage in the "illegal and harmful practices" of charging high court fines and fees on nonviolent offenses like traffic violations — and then arresting people when they don't pay. The report singles out courts in three communities, including Ferguson.
Because it's a very plausible risk for incentive misalignment. If the department has a role outside of people doing finable activities but is only financed through catching finable activities, false positives are strongly incentivized.
The alternative would have been either no appropriation out of congress or an appropriation beholden to the evildoers, which is kind of where we are now anyway with Mulvaney running the wrecking crew.
It is though. The CFPB has to be able to impose fines large enough to balk the largest financial players in one of the largest economies in the world.
Imagine if the cop that writes your speeding ticket gets paid on commission...
But if that then becomes an incentive for self-dealing, it is very problematic. Instead that money should go directly to citizens in the form of remediation and barring that, deficit paydown or underfunded government services (the VA comes to mind...)
I wouldn't complain if that were the outcome. But I might also imagine using the fines cross-agency, like giving the FDA more operating budget to pursue cross-state food safety issues.
So is speculative execution just inherently flawed like this, or can we expect chips in 2 years that let operating systems go back to the old TLB behavior?
Yeah I was wondering this myself. Even if there's some fiddly hardware fix to make speculative execution secure, how much of its performance gains will we have to give up to get there?
As I read through the meltdown paper, it looks really difficult to have the security we want and the performance we want at the same time. It's pretty crazy, but here's my limited understanding:
There's a huge buffer shared between two threads: 256 * 4K bytes. One thread reads a byte of kernel memory, literally any byte it wants, and then uses that byte's value as an index to touch one of the 256 4K pages in the buffer, which pulls that one page into the cache. Then at some point the CPU determines that the thread shouldn't have been permitted to access the kernel memory location and rolls back all of that speculative execution, but the cached page isn't affected by the rollback.
The other thread iterates through those 256 pages, timing how long it takes to read from each one, and the page that Thread A touched will have a shorter timing because it's already cached. It now knows one byte of kernel memory that it shouldn't. That's just one byte, but the whole process is so fast that it's easy to just go nuts on the whole kernel address space.
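(Not from the paper, but to make the cache side channel concrete, here's a rough C sketch of the "receive" half: the 256-page probe buffer and the timing loop. The transient kernel read and the exception suppression are the hard parts and are left out; the names and constants are my own.)

    /* Sketch of the Flush+Reload "receive" side: which of the 256 probe
       pages ended up in the cache?  (Compile with gcc -O0 on x86-64.) */
    #include <stdint.h>
    #include <stdio.h>
    #include <x86intrin.h>          /* _mm_clflush, __rdtscp */

    #define PAGE 4096
    static uint8_t probe[256 * PAGE];    /* the 256 * 4K shared buffer */

    /* Flush every probe page out of the cache. */
    static void flush_probe(void) {
        for (int i = 0; i < 256; i++)
            _mm_clflush(&probe[i * PAGE]);
    }

    /* Time one access; a cached line reloads much faster than an uncached one. */
    static uint64_t time_access(volatile uint8_t *p) {
        unsigned aux;
        uint64_t start = __rdtscp(&aux);
        (void)*p;
        return __rdtscp(&aux) - start;
    }

    /* Scan all 256 pages and return the index of the fastest (cached) one. */
    static int recover_byte(void) {
        int best = -1;
        uint64_t best_time = ~0ULL;
        for (int i = 0; i < 256; i++) {
            uint64_t t = time_access(&probe[i * PAGE]);
            if (t < best_time) { best_time = t; best = i; }
        }
        return best;
    }

    int main(void) {
        for (int i = 0; i < 256; i++)
            probe[i * PAGE] = 1;     /* make sure the pages are mapped */
        flush_probe();
        /* Stand-in for the speculative kernel-byte access: here we just touch
           the page for value 42 ourselves to show the timing side channel. */
        (void)probe[42 * PAGE];
        printf("recovered byte: %d\n", recover_byte());
        return 0;
    }

In practice you'd compare against a calibrated threshold rather than just taking the minimum, but the principle is the same: the one page the speculative load touched comes back noticeably faster than the 255 flushed ones.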
So what would the fixes be? Disable speculative execution? Only do it if the target memory location is within userspace, or within the same space as the executing address? Plug all of the sideband information leak mechanisms? I dunno.
Keep a small pool of cache lines exclusive to speculative execution, discard them when the speculation is not taken, and rename the affected cache lines (like register renaming, so no copy) when it is taken.
In the simplest Meltdown case, the offending instruction is really executed and a General Protection Fault occurs. That is handled in the kernel which at that point could (simply?) flush all caches to remove the leaked information.
The real problem with Meltdown seems to occur when:
1) The offending instruction is NOT really executed because it is in a branch which is not actually taken.
2) The offending instruction is executed but within a transaction, which leads to an exception-free rollback (with leaked information left in cache though).
AFAIK neither is (or can be made) visible to the kernel (which could explain the very large PTI patch), but I do wonder if they are events that can be handled at the microcode level, in which case a microcode update from Intel could mitigate them.
The MELTDOWN one is the easy one (as evidenced by the fact that it's the one that only seems to affect Intel CPUs).
When a load is found to be illegal, an exception flag is set so that if the instruction is retired (i.e. the speculated execution is found to be the actual path taken), a page fault exception can be raised. To prevent MELTDOWN, at the same time that the flag is set you can also zero out the result of the load.
SPECTRE is the really hard one to deal with. Part of the solution might be providing a way for software to flush the branch predictor state.
Maybe separate BTBs. Or maybe disable branch target prediction when in kernel mode (but then some VM process may still observe some other process running inside a different VM via a side channel).
Not allow user processes to recover from a SEGV. The attack depends on a signal handler that traps the signal and resumes execution. If this is disabled then the attack will not work. This would affect two types of systems:
1. Badly written code where bugs are being masked by the handler.
2. Any kind of virtualization?
So, for cloud providers it looks like a 30% performance hit, but for the rest of us I would rather have a patch that stops applications handling the SEGV trap.
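(For concreteness, the trap-and-resume pattern being described is roughly the following. This is a minimal sketch of my own, not code from the paper, and it just faults on a null pointer to show the mechanics:)

    #include <setjmp.h>
    #include <signal.h>
    #include <stdio.h>

    static sigjmp_buf retry;

    /* Swallow the fault and jump back so execution can continue. */
    static void segv_handler(int sig) {
        (void)sig;
        siglongjmp(retry, 1);
    }

    int main(void) {
        struct sigaction sa = {0};
        sa.sa_handler = segv_handler;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGSEGV, &sa, NULL);

        if (sigsetjmp(retry, 1) == 0) {
            /* In the signal-handler variant of the attack, the illegal
               (kernel-address) load would sit here; we fault on a null
               pointer just to show the trap-and-resume mechanics. */
            volatile char c = *(volatile char *)0;
            (void)c;
            puts("not reached");
        } else {
            puts("SIGSEGV trapped; execution resumed");
        }
        return 0;
    }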
The attacks do not rely on recovering from SIGSEGV. The speculated execution that accesses out-of-bounds or beyond privilege level happens in a branch that's predicted-taken but actually not-taken, so the exception never occurs.
Ah, ok - then I read the paper wrongly. I'll go back and have another look.
Edit: yes, I missed the details in section 4.1 when I skimmed through. I'm not familiar with the Kocher paper, but I assume the training looks like this?
I can imagine some ways to armor the branch predictor, similar in principle to how languages like Perl have to include a random seed in their hash code (in some circumstances) to prevent attackers from pre-computing values that will all hash to the same thing [1]. There should be some ways to relatively cheaply and periodically inject such randomization into the prediction system, enough to prevent that aspect of the attack. This will cost something but probably not be noticeable to even the most performance-sensitive consumers.
But no solution leaps to mind for the problem of preventing speculative code from leaking things via cache, short of entirely preventing speculating code from being able to load things into the cache. If nobody can come up with a solution for that, that's going to cost us something to close that side channel. Not sure what though, without a really thorough profiling run.
And I'd put my metaphorical $5 down on someone finding another side channel from the speculative code; interactions with changing processor flags in a speculative execution, or interaction with some forgotten feature [2] where the speculation ends up incorrectly undone or something.
It's going to be really hard to give up the real-world gains from branch prediction. Branch prediction can make a lot of real-world code (read "not the finest code in the world") run at reasonable speeds. Another common pattern to give up would be eliding (branch predicting away) nil reference checks.
> short of entirely preventing speculating code from being able to load things into the cache
Some new server processors allow us to partition the cache (to prevent noisy neighbors) [1,2]. I don't have experience working with this technology, but everything I read makes me believe this mechanism can work on a per-process basis.
If that kind of complexity is already possible in CPU cache hierarchy I wonder if it's possible to implement per process cache encryption. New processors (EPYC) can already use different encryption keys for each VM, so it might be a matter of time till this is extended further.
It's possible to key the cache in the kernel on CPL so at least there should be no user / kernel space scooping of cache lines.
It's possible we can never fully prevent all attacks in same address space. So certain types of applications (JIT and sandboxes) might forever be a cat and mouse game since we're unlikely to give up on branch prediction.
AFAICT injecting any sort of delay that prevents this attack would also completely negate any benefit from caches and that would take us back to 2000s performance at best, even with 10-16 core Xeon monsters. The branch predictor is really just a glorified cache prefetcher so you'd not only have to harden the branch predictor but anything that could possibly access the cache lines that the branch predictor has pulled up.
"The branch predictor is really just a glorified cache prefetcher so you'd not only have to harden the branch predictor..."
I was just thinking of the part they were talking about where it was too predictable, not the rest of the issues. Instead of a single hard-coded algorithm we could switch to something that has a random key element, like XOR'ing a rotating key instead of a hard-coded one, similar to some of the super-basic hashing some predictors already do. Prefetching I just don't know what to do with. I mentally started down the path of considering what it would take for the CPU to pretend the page was never cached in the first place on a misprediction, but yeow, that got complicated fast, between cache coherency issues between processors and all of the other crap going on there, plus the fact that there's just no time when we're talking about CPU and L1 interactions.
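(To make the "XOR a rotating key" idea concrete, here's a toy software model of a keyed predictor index hash. The names and constants are made up, and the real thing would of course have to live in the predictor hardware:)

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Toy model of a branch-target-buffer index hash.  A real predictor
       hashes the branch address down to a small table index; mixing in a
       per-boot (or periodically rotated) random key means an attacker can
       no longer precompute which branches alias into the same slot. */
    #define BTB_BITS 12

    static uint64_t btb_key;    /* would be refreshed from a hardware RNG */

    static uint32_t btb_index(uint64_t branch_addr) {
        uint64_t x = branch_addr ^ btb_key;        /* keyed, like Perl's hash seed */
        x ^= x >> 17;                              /* cheap bit mixing */
        x *= 0x9E3779B97F4A7C15ULL;
        return (uint32_t)(x >> (64 - BTB_BITS));   /* top bits as the table index */
    }

    int main(void) {
        /* placeholder for a real entropy source */
        btb_key = ((uint64_t)rand() << 32) | (uint64_t)rand();
        printf("slot for 0x400123: %u\n", btb_index(0x400123));
        return 0;
    }

Rotating the key periodically would invalidate whatever aliasing the attacker has managed to learn, at the cost of some mispredictions right after each rotation.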
Timing attacks really blow. Despite the "boil the ocean" nature of what I'm about to say, I find myself wondering if we aren't better served by developing Rust and other stronger things to the point that even if the system is still vulnerable to timing attacks, it's so strong everywhere else that it's a manageable problem. Maybe tack on some heuristics to try to deal with obvious hack attempts and at least raise the bar a bit. More process isolation (as in the other links mtanski gives, you can at least confine this to a process). (What if Erlang processes could truly be OS processes as far as the CPU was concerned?) I'm not saying that is anything even remotely resembling easy... I'm saying that it might still be easier than trying to prevent timing attacks like this. That's a statement about the difficulty of fixing timing attacks in general, not the ease of "hey, everybody, what if you just wrote code better?", a "solution" I often make fun of myself.