This seems like the real-world analogue of a similar trend I've noticed in software development: to "improve code quality", humans are replaced with stupid tools that give false positives all the time. To placate their warnings and satisfy the metrics of the management imposing them, writing code gets perverted into figuring out how to stop the tool from complaining instead of focusing on the actual problem being solved.
But someone somewhere high up in the ivory tower ticked a box. I suspect the same here happened with Amazon and an insurer somewhere. Push the hell down to the bottom of the stack.
The funny thing in this case is that the process holes are still so enormous that you can drive a tank on fire through them and straight into production. The net gain is zero and the outcome is overconfidence.
The company was primarily using Java for the backend. The Application Security team decided to start scanning all our deployable JARs and WARs with a static analysis tool. Immediately, the tool started complaining about our libraries. For example, the official Elasticsearch library, for the version of the database we used, had multiple System.out.println calls. Those were all flagged, and the static analysis tool deemed our code unacceptable: we needed to use a logger, not stdout. So we had to go and take it up with AppSec and explain: no, that's not our code. Yes, it does run in production. No, we can't use a different library.
The whole thing was exactly as you put it: a Kafkaesque nightmare where I'm being punished for something I don't understand by people who don't understand. And no one can tell me why.
At some point I came across a tool (I think it was WhiteSource) that went through the GitHub page of each library we used and checked whether any bugs were reported.
Obviously, very often there were totally bogus tickets: support requests created as bugs, rants as bugs, duplicates of already-fixed tickets, etc., so investigating all that crap was really hard.
Another problem was licensing. A lot of libraries are dual-licensed, e.g. GPL and CDDL; the tool would obviously panic that GPL was being used, not noticing that there was a second, "business-friendly" license.
Those static analysis tools are not bad as a concept, but the ratio of false positives to real issues found is too high to make them truly useful unless you can invest a lot of resources into tuning them.
The seemingly very smart people designing these systems don't understand that their one-in-a-billion edge case happens hundreds of times every single day when you scale it to the world.
Of course, the most interesting option is to use counter-AI to generate video of correct behavior, without repetition and with a matching location/street view, in case the Amazon/Netradyne AI is smart enough to have geolocation-based street view integration.
Only if you know what the AI cam sees; if it's a black box to you, then I doubt there's anything you could do to fool it.
"Your camera doesn't get correct image"
"Why?! What is wrong?"
"See for yourself"
Uber drivers used to look up time slots for big planes arriving at the airport. They would then all go offline on the app to trigger "surge pricing" and then all benefit from it.
No one knows exactly how the algorithm works, but they know how to fool it.
The thing to consider is that no actions take place in a vacuum. The drivers gaming the system is not an isolated input. Instead, there are a number of different inputs putting downward pressure on the rates a driver gets to charge / how much money the driver actually takes home after costs, taxes, etc.
While it's not incorrect (or even unjust) to describe the behaviour as being potentially 'gray' (although the use of colour is a different matter, I understand the sentiment being expressed), by only highlighting and focusing on the least powerful members in the system, it unconsciously paints a picture that draws focus away from the other inputs into the system that have resulted in this situation occurring in the first place.
And even if we just limit ourselves to tech users - some companies have long been logging coders' Eclipse activity (mostly not in first-rate countries, of course), and I suspect these days that may get fed to AI. And some day it will make it into the US and Western Europe too.
While I acknowledge that some people will prefer to be a driver than to work in any of the places whose doors would open with this skill, I suspect this is a minority.
It sounds like a system that needs to be banned or thoroughly regulated. Naive driver distraction systems like this do not make traffic safer. The company should prove that this system is safe just like autopilot manufacturers have to.
Of course the unfairness is also a valid concern, and I'd love to see them both addressed.
You should interview at Netradyne. From your comment you seem like a good fit.
What the AI is attempting to solve is a complex problem. I’m willing to give the drivers here the benefit of the doubt ;)
I’ve worked on smart interfaces for cars by the way. My experience not only as a driver but also as a developer in this space tells me that AI described in the article needs further development and your presumptions in this discussion are BS.
TFA even points out drivers making excuses for not wearing a seatbelt. If they won't even take personal responsibility for their own illegal actions, of course their opinions on more subtle things like following distances aren't reliable.
They did mention it: not every driver has the AI installed.
Also, you’re assuming the bonuses are awarded like prizes to the top performers, so that if everyone is penalised equally, the same people still get awards. My impression was that those bonuses were awarded to anyone who passes specific milestones. So even if everyone is penalised equally, it’s still unfair to everyone.
> TFA even points out drivers making excuses for not wearing a seatbelt. If they won't even take personal responsibility for their own illegal actions, of course their opinions on more subtle things like following distances aren't reliable.
Yes, that is a fair argument. I interpreted it in a more charitable way: “they’re acknowledging some bad practices on their side, so at least they’re trying to meet the AI developers halfway”. But I will admit your interpretation is just as plausible, if not more so.
Even so, that doesn’t justify being ignored by Amazon and the AI developers. Any new tech needs debugging in the field. And this is even more true of AI and complex problems like driving assistants than it is of most other software specialties.
That’s the part of the article that stinks the most, and the original point I was making yesterday. I couldn’t give a rat’s ass who is to blame, because the biggest fault lies with the company for not working with the users to ensure the software is performing correctly. And this is especially important when people’s incomes are being directly affected by the software too!
> Use cases that involve public safety
> First, you should use confidence thresholds of 99% or higher to reduce errors and false positives.
> Second, you should involve human reviewers to verify results received from a face detection or comparison system, and you should not make decisions based on system output without additional human review. Face detection and comparison systems should serve as a tool to help narrow the field and allow humans to expeditiously review and consider options.
> Third, we recommend that you should be transparent about the use of face detection and comparison systems in these use cases, including, wherever possible, informing end users and subjects about the use of these systems, obtaining consent for such use, and providing a mechanism where end users and subjects can provide feedback to improve the system.
From Amazon's past behavior, I would side with the drivers and delivery company owners: it seems like an excuse for Amazon to deny payment, which Amazon uses as demerits and punishment, not actually a bonus.
CA tried to do this for individual contractors but I think we should think about having legislation tackle these cutouts which are ostensibly a part of the company.
I'm sure there are lots of edge cases I'm missing, like maybe franchises. But if there is no difference in how a contracting company operates - besides forcing cost savings from bad labor, safety, and liability practices onto working people and small business owners - it shouldn't be allowed.
Heck, an easy test is when you present your vans to the public branded as the parent company AND force those independent delivery company employees - who do not actually work for you - to dress like and follow policies as if they did.
Don’t get me wrong, though. I have considered selling a few times.
Well, that's a little tough. Is it "drivers hate seatbelts" or "drivers feel so much time pressure that extra few seconds eats into their tiny wage"?
Your head just explodes after the third infraction.
I stopped buying from Amazon literally decades ago, when they were just selling books, after their privacy bait and switch.
This decision looks better by the day.
> Your head just explodes after the third infraction.
Was that in the 1987 movie? I'm failing to remember anything like that.
The head explosion may come from the ad for a telecommunications company in which the protagonist shoots himself. Possibly that was my head-explosion memory.
As it is, since false positives cost amazon nothing (they actually /save them money/), there's zero incentive to work to reduce them.
That solution would solve a lot of the DMCA issues on things like youtube, come to think of it.
I've just seen an ad for one of them on social media this morning. It's mostly targeted to young drivers or people who've been caught DUI / recklessly driving.
Is this car insurance or a gacha game?
This sounds bad on the surface -- you are supposed to stop at stop signs. But I remembered that self-driving cars have to roll through stop signs too, because nobody remembers the right-of-way rules from driver's ed and so people will just take their turn early if it doesn't look like it will cause a crash. That means coming to a complete stop means you don't get your turn at all.
Very unfortunate that the letter of the law and what people do in the real world aren't aligned. This isn't so much of a camera / AI problem -- a cop could give you a ticket here too.
Here in Australia, the usual setup for uncontrolled residential intersections (that don't have roundabouts) is to give one axis (?) right of way, and put stop signs on the cross axis. If neither direction is substantially wider or busier than the other, it's usually just arbitrary.
This solves the problem of people being confused about right of way. When you come to a stop sign, you just give way to all traffic coming across the intersection.
When I think of conversions from 2 to 4 way stops, I think of an intersection in my town where the road was listed at 25mph but the enormously wide pavement easily supported driving 40-45mph comfortably, especially being at the bottom of a steep hill, and people drove accordingly.
Completely separate from that is the issue of neighborhood groups trying to get inappropriate stop signs added to calm traffic. Stop signs are only for managing right of way; traffic calming should be handled by changes to street design.
Disclaimer: Obviously any answer to this question will be a generalization that doesn't cover all circumstances
Where I am, they are certainly overused - of course, because people agitate for them because safety.
But you can read all about the traffic engineering justification for them in the Manual on Uniform Traffic Control Devices:
But won't. Or probably won't. Unless it's a speed trap. And if they do, they're not in the wrong to do so, but it doesn't mean you were 100% in the wrong to slow down to 10% instead of stopping. So it doesn't necessarily mean you need to change your behaviour, except to be wary of speed traps. And in case of danger, being able to stop is important, but there can be situations where slowing down is safer than slamming on the brakes. And you might be able to explain that to the cop, or the cop might see what happened and give you a pass.
Point is: situations can be complicated and nuanced, and AI is still very bad at this kind of social thinking. Personally I've come to the opinion that AI will never really be good at this kind of thing until it actually lives in and participates in society, whatever that may mean.
Just because it is "industry leading" does NOT mean that it is worth a pile of rat sh*t.
>> (in whiteboard photo) Signals that trigger... WITHOUT audio alerts: Hard Braking, Hard Acceleration, High G forces, Hard Cornering..
All of these are COMPLETELY AMBIGUOUS and depend entirely on the driver's skill. In inexperienced drivers with low situational awareness and poor car control, they indicate likely hazard. BUT in highly skilled drivers, higher values on every one of these, right up to the limit of adhesion, and the ability to maintain high values without breaking adhesion, SIGNIFY THE HIGHEST SKILL LEVELS.
The actual indication of skill is not the dynamic range of acceleration, braking, and cornering G forces, but smooth application, generally low, but when needed going right up to the adhesion limit and not going over, and thus not hitting something or being hit.
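The distinction between peak G forces and smooth application can be made concrete. Here is a minimal sketch (not Netradyne's actual algorithm; the sampling interval, units, and example traces are all invented for illustration) that scores an accelerometer trace by its jerk - the rate of change of acceleration - rather than by its peak value:

```python
# Sketch: score a driving trace by smoothness (mean absolute jerk),
# not by peak acceleration. All numbers below are invented examples.

def jerk_profile(accel, dt=0.1):
    """Finite-difference jerk (m/s^3) between consecutive samples taken dt seconds apart."""
    return [(b - a) / dt for a, b in zip(accel, accel[1:])]

def smoothness_score(accel, dt=0.1):
    """Mean absolute jerk: lower is smoother, regardless of peak G."""
    jerks = jerk_profile(accel, dt)
    return sum(abs(j) for j in jerks) / len(jerks)

# A hard but smooth stop: high peak deceleration, gradual ramp up and down.
smooth_hard_stop = [0.0, -2.0, -4.0, -6.0, -8.0, -6.0, -4.0, -2.0, 0.0]
# A mild but jerky trace: low peak deceleration, abrupt on/off braking.
jerky_mild = [0.0, -3.0, 0.0, -3.0, 0.0, -3.0, 0.0, -3.0, 0.0]

print(smoothness_score(smooth_hard_stop))  # lower score despite the higher peak G
print(smoothness_score(jerky_mild))        # higher score despite the lower peak G
```

A system keyed only to peak values flags the first trace and passes the second; a jerk-based metric ranks them the other way around, which is closer to what the comment above calls actual skill.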
But obviously neither Amazon nor this Netradyne company who supposedly specializes in driving metrics has a fkn clue about what they are doing.
>> (in whiteboard photo) Signals that trigger... WITHOUT audio alerts: DRIVER DROWSINESS
Not alert on Driver Drowsiness - WTAF!?! If there is one thing on which a driver monitoring system should make an alert, it is driver drowsiness - [wake 'em up] or [stop the vehicle]. How are they even stupid enough to consider this a silent event?
And people wonder why the formerly highly respected technology industry is rapidly losing its esteem in the general population.
"There is often a mentality in the workplace that with sufficiently detailed protocols and procedures, the village idiot can perform theoretical physics just as well as Einstein.
In fact, no amount of procedure will make that happen; quite the contrary, all that procedure ensures is that if you ever do hire Einstein, their output will closely resemble that of the village idiot."
We know this shit is coming.
Just send those company-town residents U-Hauls and luggage, so they can move to where the jobs are.
And like it or not (not, probably), housing is 40+% of low-income worker living expense, so even fixing rent could turn a worker's life around...
But they are not considered desirable places, and they often end up on "100 worst places to live in the United States" lists made by the same sort of people rationalizing their high cost of living. Amazon has apparently taken notice of this and decided to monetize it. But if it becomes a broken Orwellian dystopia like this camera system, that's a made-for-TV horror movie.
And if you suggest someone in an expensive place like San Francisco who is sinking into debt should move to a place like that and bootstrap you will hear an unending stream of profanities from them. Because the people who are willing to do that sort of thing have mostly already done so. Your priorities are not their priorities. They'll take the NIMBYs if they're close to their friends and family.
Look at all this anti-vax nonsense... Look, they are free to act the way they want, but it does scare away investment in those communities.
Low taxes could mean less red tape but it can also mean non-existent governance and investment into infrastructure. People are starting to realize this scam for what it is.
So I conclude people are unwilling to walk their talk. You're not wrong though, but all those really cool communities were built by pioneers who did something exactly like the above. The only way to beat the stupid is to infiltrate and vote their insane representatives out of office. As long as the educated are corralled and contained to the coasts, this will just keep getting worse IMO.
You are holding that person to a ridiculous standard. They talk to you about wanting to do something, and if they don't uproot their entire life just to shift a single vote around you're going to pretend they were speaking some enormous talk and now refuse to back it up?
But you're demanding an enormous act with a tiny benefit. It is completely unreasonable for you to complain that someone doesn't meet this standard you made up.
Should I do a dumb analogy? Imagine if they asked for advice on getting fewer underinflated tires and you suggested buying an entire new car with pressure sensors. And then declared they'll talk the talk but not walk the walk when they refuse that option.
That said, there are plenty of inexpensive used cars out there that would be both easier on the environment and safer to drive at no additional charge. For it must be a pre-2000-or-so car to lack even an idiot light for the tires, and unless they've treated it lovingly (which seems unlikely) it probably has one tire in the grave already.
Indeed, it costs almost nothing to suggest somebody should move to another state or city just to cast a vote.
The Sanders approach had hope: invest in all these communities to bring those people up to the same quality of society as the coasts, in the hope that enough would abandon Trumpism as their prospects improved. Instead we are repeating the mistakes of the Obama years, and I guess after Biden the only way forward is more pain and suffering when the next demagogue makes it to the White House.
This excellent cartoon video explains the seriousness of all the lesser known sources of carbon: https://www.youtube.com/watch?v=yiw6_JakZFc
Seems like Chairman Xi is serious about tackling Climate Change so he does not get deposed if the Gobi desert grows and eventually swallows up half of China.
The real problem is that there is no Amazon delivery and logistics equivalent. Workers would leave for it. There would be price competition on wages.
Perhaps antitrust split of Amazon should divide the company down the middle in each state, creating two logistics and delivery companies nationwide, evenly and randomly splitting their assets.
It's those same companies that laid people off that are now trying to tempt them back with increased wages. Chipotle, for example.
Why is there a shortage of labour if restaurants and shops are still mostly empty?
I think the real reason is that people took the opportunity (or were forced to financially) to try other careers.
Maybe you live in Europe or another country where growth is stagnant?
What we call 'anti-competitive' behaviour now is basically down to deep pockets being able to subsidize projects unrelated to the core business (whatever the hell that is).
So this is our Western version of a free society, expected to become the norm: sign consent to be fucked over and over, or else. And of course it was your "free" choice (never mind losing your job if you do not sign), so all is nice and dandy.
Some technocrat ran amok --hope Andy intercedes. (They could fix this by un-tying the cameras from the economic activity, though nobody would probably believe them if they tried now, so the whole System must go and be replaced someday with something New.)
Sure, they might eventually be sued, but it will take so long to get any results that by the time there's actually a decision on the case (if there ever is one), the damages will be so minuscule to a giant like Amazon that they will just be another line item lost in the rounding errors of its financial report.
Expected judgement amount (not very big in the grand scheme of things for Amazon) * probability of a judgement against them = an even smaller amount relative to the things Amazon cares about.
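The expected-cost arithmetic above can be made concrete with made-up numbers (both the judgment size and the probability below are purely illustrative assumptions, not actual litigation data):

```python
# Illustrative only: hypothetical figures to show the expected-cost calculation.
judgment_if_lost = 50_000_000      # assumed damages award in USD if Amazon loses
p_judgment_against = 0.10          # assumed probability of a judgment against them

expected_cost = judgment_if_lost * p_judgment_against
print(expected_cost)  # around $5M, a rounding error at Amazon's scale
```

Even with a headline-sized judgment, multiplying by the probability of ever losing (and implicitly discounting for the years of delay) shrinks the figure to something that barely registers on a financial report of Amazon's size.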
> In this kind of life,” Weil realized, “those who suffer aren’t able to complain. Others would misunderstand them, perhaps laughed at by others who are not suffering, or thought of as tiresome by yet others who, suffering themselves, have quite enough suffering of their own. Everywhere the same callousness, with few exceptions.” To complain to a supervisor was an invitation for further degradation. “It’s humiliating, since she has no rights at all and is at the mercy of the good will of the foremen, who decide according to her worth as a worker, and in large measure capriciously.”
I guess the dystopian part of the future got built.
I wonder if UPS has something similar. They've always been early adopters of technology, and they run a very smooth and equitable operation for customers and employees alike.
Your coat is such a thrill
But your coat won't pay my bills
I want money
That's what I want
Her story should unfold in the next 48 hours. She's got a meeting to attend about "the incident".
Amazon's culture is just excessive and harmful. I try to pick up as little work from their systems as possible.