Hacker News
Amazon’s AI cameras are punishing drivers for mistakes they didn’t make (vice.com)
230 points by echo_hessel 35 days ago | 127 comments



If I look into my mirrors to make sure I am safe to change lanes, it dings me for distraction because my face is turned to look into my mirror.

WTF.

This seems like the real-world analogue of a trend I've noticed in software development where, to "improve code quality", humans are replaced with stupid tools that give false positives all the time. To placate their warnings and satisfy the metrics of the management imposing them, writing code gets perverted into figuring out how to stop the tool from complaining instead of focusing on the actual problem being solved.


I am living in this hell at the moment. The situation originated from compliance box-ticking and trickled down into a Kafkaesque nightmare where you are punished for ancillary problems. All this while being hammered because delivery has slowed to a snail's pace, thanks to the numerous hoops that need to be jumped through and the general cruddiness of the process.

But someone somewhere high up in the ivory tower ticked a box. I suspect the same here happened with Amazon and an insurer somewhere. Push the hell down to the bottom of the stack.

The funny thing in this case is that the process holes are still so enormous that you can drive a tank on fire through them and straight into production. The net gain is zero and the outcome is overconfidence.


Reminds me of something that happened a few years ago to me.

The company was primarily using Java for the backend. The Application Security team decided to start scanning all our deployable JARs and WARs with a static analysis tool. Immediately the tool started complaining about our libraries. For example, the official Elasticsearch library, for the version of the database we used, had multiple System.out.println calls. Those were all flagged, and the static analysis tool deemed our code unacceptable: we needed to use a logger, not stdout. So we had to take it up with AppSec and explain: no, that's not our code. Yes, it does run in production. No, we can't use a different library.

The whole thing was exactly as you put it: a Kafkaesque nightmare where I'm being punished for something I don't understand by people who don't understand it either. And no one can tell me why.


Phew, this one at least spots a real issue (in some cases System.out.println can make a multi-threaded program get "synchronized" on console access).

At some point I came across a tool (I think it was WhiteSource) that went through the GitHub page of each library we used and checked whether there were any open bug reports.

Obviously there were very often totally bogus tickets: support requests filed as bugs, rants filed as bugs, duplicates of already-fixed tickets, etc., so investigating all that crap was really hard.

Another problem was licensing. A lot of libraries are dual-licensed, e.g. GPL and CDDL; obviously the tool panicked that GPL was used, not noticing that there was a second, "business-friendly" license.

Those static analysis tools are not bad as a concept, but the ratio of false positives to real issues found is too high for them to be truly useful unless you can invest a lot of resources into tuning them.


Just want to point out that Kafka based his stories in part on his experience working in the (Habsburg, I think?) bureaucracy, in particular the idea of a faceless and impenetrable organization that can't be reasoned with and is full of catch-22s (which I haven't read). So I find it funny that we call it Kafkaesque when Kafka was just describing how bureaucracy is. We should call it bureaucratic and admit that this is what all big orgs end up becoming.


Taser the camera. "Dunno, hoss, didn't notice it wasn't working. "


Speak softly and carry a can of spray-paint. (Hairspray might work just as well and be more defensible.)


Sure, so the AI knows you're driving and it can NEVER see your face.


I think we're in a huge bubble in terms of automation/AI these days. People are convinced they're very close to being able to automate most things, while in reality we're faaaar away from automating even simple tasks. We're high on tech, and the comedown will hit hard.

The seemingly very smart people designing these systems don't understand that their one-in-a-billion edge case happens hundreds of times every single day when you scale it to the whole world.


We're in for a few decades of poorly implemented automation, to the degree that many will feel they live in a dystopian nightmare. The arrogance of our management class, thinking society can be automated without a society-wide analysis of that automation's unexpected side effects, is equal to the colossal disaster we're going to see. I seriously feel sorry for everyone who is going to live through this period. We're going to be creating misery for a few decades, until the assholes on top figure out that this needs massive pre-planning and that the seat-of-the-pants methods being used now are just setting up a planet-wide disaster.


Can the driver just put a photo of himself in front of the camera and score a total bonus for always paying attention? Or (using the Moscow street-surveillance contractor recipe), if the system is smart enough not to be fooled by a photo, record one ideal day (or even just an hour) and put a smartphone with the video on repeat in front of the camera.

Of course the most interesting option is to use counter-AI to generate video of correct behavior, without repetition and with a matching location/street view, in case the Amazon/Netradyne AI is smart enough to have geolocation-based street view integration.


> can the driver just put a photo of himself in front of the camera and score total bonus for being always on attention. Or (using the Moscow street surveillance contractor recipe) if the system is smart to not be fooled by a photo -

Only if you know what the AI cam sees; if it's a black box to you, then I doubt there's anything you could do to fool it.


After partially obscuring the lens with, say, an accidental drop of ketchup:

"Your camera doesn't get a correct image"

"Why?! What is wrong?"

"See for yourself"


I suspect the people who understand AI well enough to fool it also know they can get more money not being a driver.


You would be surprised how quickly people can learn how to fool something, without knowing exactly how it works.

Uber drivers used to look up time slots for big planes arriving at the airport. They would then all go offline on the app to trigger "surge pricing" and then all benefit from it.

No one knows exactly how the algorithm works, but they know how to fool it.


Yeah, that’s fair. Cargo cults got famous because they worked, despite the mocking way they are often reported.


An interesting aspect here is that otherwise honest people, under enough unjust pressure, start to allow themselves to cross into that gray zone. Done without shame, it ends up happening in the open inside the affected social groups, and thus becomes the acceptable social norm there.


> interesting aspect here is that otherwise honest people at some point of unjust pressure put on them start to allow themselves to cross into that gray zone, and without feeling shame it leads to that being done in the open inside those specific social groups affected and thus becomes the acceptable social norm there.

The thing to consider is that no action takes place in a vacuum. The drivers gaming the system is not an isolated input. Instead, there are a number of different inputs putting downward pressure on the rates a driver gets to charge and on how much money the driver actually takes home after costs, taxes, etc.

While it's not incorrect (or even unjust) to describe the behaviour as potentially 'gray' (the choice of colour is a different matter; I understand the sentiment being expressed), highlighting and focusing only on the least powerful members of the system unconsciously paints a picture that draws focus away from the other inputs that produced this situation in the first place.


I think people are able to segment moral behaviours very effectively. Uber's ride allocation system would be very far down the list of things I would feel bad about gaming.


I think deepfake tools will soon be available to everyday users, just like filters on Instagram or basic video editing tools.

And even if we limit ourselves to tech workers: some companies have long been logging coders' Eclipse activity (mostly not in first-rate countries, of course), and I suspect these days that may get fed to AI. And some day it will make it into the US and Western Europe too.


What makes you think people who take a job driving are incapable of understanding things as well as you?


As soon as someone has developed that understanding, they get a chance for a massive pay rise and better working conditions.

While I acknowledge that some people will prefer to be a driver than to work in any of the places whose doors would open with this skill, I suspect this is a minority.


It's hard to believe, or every single driver would be getting the same consequences. This reminds me of drivers who complain about speeding tickets right after the speed limit changed. Well, slow down before the speed limit changes and you'll be fine. Some people just have bad habits without realising it.


Automatically analysing safe traffic behaviour from a camera is a hard problem. From the article, it sounds like this system is incredibly naive and doesn't really understand real world traffic situations, so it ends up punishing people for doing the right thing while demanding they do the wrong thing (like not looking in the mirror), and on top of that, distracting them and adding extra stress.

It sounds like a system that needs to be banned or thoroughly regulated. Naive driver distraction systems like this do not make traffic safer. The company should prove that this system is safe just like autopilot manufacturers have to.


I'm talking about the unfairness, not safety. The article doesn't say that some drivers are allowed to do it while others aren't but still presents the whole thing as unfair.


Sure, but I'm talking about safety. And safety is also an important issue. And one that's at least as much subject to regulation. And regulating this infringement on driving safety would also help with the unfairness, even if it doesn't address it directly.

Of course the unfairness is also a valid concern, and I'd love to see them both addressed.


“When I get my score each week, I ask my company to tell me what I did wrong,” the driver told Motherboard. “My [delivery company] will email Amazon and cc me, and say, ‘Hey, we have [drivers] who'd like to see the photos flagged as events,’ but they don't respond. There's no room for discussion around the possibility that maybe the camera's data isn't clean.”

You should interview at Netradyne. From your comment you seem like a good fit.


This comment reminds me of a, thankfully small, subset of developers who believe their code is infallible and it must be the users fault if the application has bugs.

What the AI is attempting to solve is a complex problem. I’m willing to give the drivers here the benefit of the doubt ;)


This comment reminds me of a, thankfully small, subset of drivers who believe their driving is infallible and it must be the other drivers' fault if they're penalized. What the driver is attempting to do is a complex task. I’m willing to give the AI here the benefit of the doubt ;)


You clearly didn’t read the article then. The drivers are open to dialog. It’s the maintainers of the software and/or Amazon who are assuming the system is perfect.

I’ve worked on smart interfaces for cars, by the way. My experience, not only as a driver but also as a developer in this space, tells me that the AI described in the article needs further development and your presumptions in this discussion are BS.


You're missing the point. If the AI is consistently penalizing drivers for looking in the mirror, then it'll affect them all equally and not be unfair. If it treats some drivers worse than others, why didn't they mention that instead of leaving us to assume that there really are drivers who are never penalized for looking in the mirror?

TFA even points out drivers making excuses for not wearing a seatbelt. If they won't even take personal responsibility for their own illegal actions, of course their opinions on more subtle things like following distances aren't reliable.


> You're missing the point. If the AI is consistently penalizing drivers for looking in the mirror, then it'll affect them all equally and not be unfair. If it treats some drivers worse than others, why didn't they mention that instead of leaving us to assume that there really are drivers who are never penalized for looking in the mirror?

They did mention it: not every driver has the AI installed.

Also you’re assuming the bonuses are awarded like prizes to the top performers. So if everyone is penalised equally then the same people still get awards. My impression was those bonuses were awarded to anyone who passes specific milestones. So even if everyone is penalised equally it’s still unfair to everyone.

> TFA even points out drivers making excuses for not wearing a seatbelt. If they won't even take personal responsibility for their own illegal actions, of course their opinions on more subtle things like following distances aren't reliable.

Yes, that is a fair argument. I interpreted that in a more charitable way saying “they’re acknowledging some bad practices on their side so at least trying to meet the AI developers half way”. But I will admit your interpretation is just as, if not more so, plausible.

Even so, that doesn’t justify being ignored by Amazon and the AI developers. Any new tech needs debugging in the field, and that is even more true for AI and complex problems like driving assistants than it is in most other software specialties.

That’s the part of the article that stinks the most, and the original point I was making yesterday. I couldn’t give a rat’s ass who is to blame, because the biggest fault lies with the company for not working with the users to ensure the software is performing correctly. And this is especially important when people’s incomes are being directly affected by the software too!


Yeah, I'm not excusing Amazon ignoring their requests, blindly using AI results to decide compensation, encouraging dangerous driving to score points, or the fact that the system is probably inaccurate. Overall awful, but it might be a fun game if you didn't depend on the job for your livelihood.


Ironically, if you skim through the documentation for Amazon Rekognition [1], you'll find this:

> Use cases that involve public safety

> First, you should use confidence thresholds of 99% or higher to reduce errors and false positives.

> Second, you should involve human reviewers to verify results received from a face detection or comparison system, and you should not make decisions based on system output without additional human review. Face detection and comparison systems should serve as a tool to help narrow the field and allow humans to expeditiously review and consider options.

> Third, we recommend that you should be transparent about the use of face detection and comparison systems in these use cases, including, wherever possible, informing end users and subjects about the use of these systems, obtaining consent for such use, and providing a mechanism where end users and subjects can provide feedback to improve the system.

[1] https://docs.aws.amazon.com/rekognition/latest/dg/considerat...
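To make the first recommendation concrete, here is a minimal sketch of applying the documented 99% confidence threshold to a DetectFaces-style response. The response shape mirrors the Rekognition API's FaceDetails/Confidence fields, but the sample values are made up for illustration:

```python
# Sketch: filtering a DetectFaces-style response by the >= 99% confidence
# threshold Amazon recommends for public-safety use cases. The structure
# mirrors the Rekognition API; the sample values below are invented.

def high_confidence_faces(response, threshold=99.0):
    """Keep only face detections at or above the confidence threshold."""
    return [
        face for face in response.get("FaceDetails", [])
        if face.get("Confidence", 0.0) >= threshold
    ]

sample_response = {
    "FaceDetails": [
        {"Confidence": 99.7},  # survives the recommended threshold
        {"Confidence": 87.2},  # dropped; per the docs, defer to human review
    ]
}

flagged = high_confidence_faces(sample_response)
```

Per the second recommendation, anything falling below the threshold (or anything acted upon at all) is supposed to go to a human reviewer rather than drive a decision automatically, which is exactly what the drivers say is not happening.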


Should means "probably will not". It's just there to look nice to everyone else :)


AI is applying a gradient (loss of bonus) of a loss function to organic neural networks (human drivers) to train them to optimize against the AI’s value function (how the AI thinks is the optimal way to drive to minimize running stop signs, accidents, etc. for the company). Each paycheck is a training iteration. The organic neural networks (human drivers) are complaining that the valuation function (a machine neural network) has poor performance. The irony! It sucks to be a slave to the machine.


The article mentions how many small delivery companies there are, through which Amazon gets to skirt liability and push cost savings onto working people.

From Amazon's past behavior, I side with the drivers and delivery company owners: it seems like an excuse for Amazon to deny payment, which Amazon uses as demerits and punishment, not actually a bonus.

CA tried to address this for individual contractors, but I think we should consider having legislation tackle these cutouts, which are ostensibly part of the company.

I'm sure there are lots of edge cases I'm missing, like maybe franchises. But if there is no difference in how a contracting company operates, besides forcing cost savings from bad labor, safety, and liability practices onto working people and small business owners, it shouldn't be allowed.

Heck, an easy test is when you present your vans to the public branded as the parent company AND force those independent delivery company employees, who do not actually work for you, to dress like and follow policies as if they did.


It's time to regulate Amazon. Fine them until they squeal. Kick Bezos and his awful labor practices all the way back to 1890 where they belong. The bottom line is the only language Amazon management will understand.


[flagged]


Guilty as charged. I mean, look at it.


I like capitalism. I'm also disgusted by this. 100% agree with grandparent: time to turn the thumbscrews. Until sniveling bureaucrats are incentivized to do meaningful things, they will continue to do bureaucracy. This is one of the results.


I imagine a somewhat similar issue happening with Tesla soon. To get access to the beta FSD next week, you have to be deemed a “good driver” by their metrics confirmed via sensors and telemetry. There are so many false positives I get in a day it’s ridiculous. Whoever came up with the policy must not drive a Tesla. I’ve had my car engage emergency braking on the freeway because (I think) it got spooked by a bridge’s shadow. To this day, I’m still afraid of passing through that area.


If you distrust the vehicle to the point where you experience fear, why are you still driving it?


$40k in depreciation. Cautious optimism that they’ll get it right eventually. Lifetime supercharging. All cars are dangerous and despite that one slightly irrational fear of passing that bridge, Teslas have the highest safety ratings. I was making more of a point on how their system to rate drivers isn’t as reliable as they think it is.

Don’t get me wrong, though. I have considered selling a few times.


Those improvements in the safety numbers (e.g. "Since Amazon installed Netradyne cameras in its vans, Miller claims that accidents had decreased by 48 percent") are surprisingly large. I don't really trust Amazon to study and report them properly, but if they're real and if they correspond to significantly fewer serious or fatal accidents it justifies the system a bit.


I'm willing to bet those numbers are fudged. For instance, Amazon will tout how robots improve worker conditions at warehouses, when an external study says the opposite.

https://www.dailymail.co.uk/sciencetech/article-8800139/Amaz...


The article implies that delivery drivers frequently drove without seatbelts on because taking them on/off was a hassle on short distance drives. It also says seatbelts and stop signs were the most commonly flagged "incidents". Make your own conclusions from that.


You don’t need a dystopian AI camera monitoring system to measure seatbelt usage. Plug into OBD-II and detect the car moving faster than 20 mph while the seatbelt is not buckled.
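The proposed check fits in a few lines. This is a hypothetical sketch: `speed_mph` and `seatbelt_fastened` stand in for values read off the OBD-II bus (vehicle speed is a standard PID; seatbelt state is typically available via the body-control module rather than a standard PID), and the 20 mph cutoff is the one suggested above:

```python
# Sketch of the suggested OBD-based check: flag seatbelt non-use only when
# the vehicle is actually moving. The inputs stand in for values read from
# the vehicle bus; the 20 mph cutoff is the commenter's suggestion.

SPEED_CUTOFF_MPH = 20.0

def seatbelt_violation(speed_mph: float, seatbelt_fastened: bool) -> bool:
    """True when the van is above the cutoff with the belt unfastened."""
    return speed_mph > SPEED_CUTOFF_MPH and not seatbelt_fastened
```

The point of the cutoff is precisely the short-hop delivery case: creeping between houses with the belt off never triggers, so there is no incentive to game anything.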


In Germany, this is actual law - door-to-door delivery services (postal, parcel, couriers) are exempted from the seatbelt mandate (per §21a StVO, https://www.gesetze-im-internet.de/stvo_2013/__21a.html).


> Make your own conclusions from that.

Well, that's a little tough. Is it "drivers hate seatbelts" or "drivers feel so much time pressure that those extra few seconds eat into their tiny wage"?


Having a seatbelt on or off probably wouldn't prevent an accident, so how could that reduce accidents by 40%?


Yup. I'd like to see the yearly fatality rates before and after, though, since they'd likely find those harder to manipulate or cherrypick.


You could almost copy-paste the premise into a Black Mirror episode.


I was more thinking about Robocop.

Your head just explodes after the third infraction.

I stopped buying from Amazon literally decades ago, back when they were just selling books, after their privacy bait-and-switch.

This decision looks better by the day.


> I was more thinking about Robocop.

> Your head just explodes after the third infraction.

Was that in the 1987 movie? I'm failing to remember anything like that.


Well, this scene from Robocop is pretty relevant I think

https://www.youtube.com/watch?v=xsuo3FnG4g0


Driver, you have 20 seconds to comply!


I may be mixing it up some. There were those spoof ads. I remember that the car anti-theft device electrocuted the thief.

The head explosion may come from the ad for a telecommunications company in which the protagonist shoots himself. Possibly that was my head-explosion memory.


aka the news


I like AI and its promises, but a lot of these real-world implementations seem so half-baked that it baffles me that they're even deployed, or allowed to be deployed legally speaking, especially when they affect cars, driving, and jobs. It's crazy. I hope we somehow manage to get rid of this before it becomes even more widespread and damaging.


The only way to make something like this acceptable is if the company pays the drivers an equal amount of money/"performance points"/whatever for every false positive the camera produces.

As it is, since false positives cost Amazon nothing (they actually save Amazon money), there's zero incentive to work to reduce them.

That solution would solve a lot of the DMCA issues on things like youtube, come to think of it.
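The proposed symmetry can be sketched in a few lines. Everything here is hypothetical (the point values and function names are invented); the idea is just that an overturned false positive credits the driver the same amount a confirmed violation deducts, so false flags stop being free for the company:

```python
# Sketch of the proposed incentive: confirmed violations deduct points,
# overturned false positives credit the same amount back. All names and
# point values here are hypothetical illustrations.

POINTS_PER_EVENT = 10

def weekly_score(base_points: int, confirmed_violations: int,
                 false_positives: int) -> int:
    """Deduct for confirmed violations, credit back overturned flags."""
    return (base_points
            - POINTS_PER_EVENT * confirmed_violations
            + POINTS_PER_EVENT * false_positives)
```

Under a rule like this, a week with three overturned flags and two real violations leaves the driver better off than a clean week, which is what would finally give the vendor a reason to drive the false-positive rate down.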


I wonder when insurers will offer discounts if you install one of those cameras voluntarily. And then, when governments feel the need to make "drivers and communities more safe"...


There are lots of products like that already. Idk about cameras, but some black-box-type devices register your acceleration/deceleration patterns, how many Gs you take through corners, &c.

I've just seen an ad for one of them on social media this morning. It's mostly targeted at young drivers or people who've been caught for DUI / reckless driving.


I worked with the founders of https://www.insurethebox.com/ on the project they did after that business exited. There are many similar startups.


> You can buy additional Top Up Miles that can be rolled over to the following year at renewal.

Is this car insurance or a gacha game?


What I find puzzling is that tech workers at Amazon apparently don't have any problem with being a cog in a wheel that is consistently making life more and more miserable for the working class.


Incentives. Amazon pays well. They give out stock options. Many people in the world will do terrible things for money. Being a tech worker doesn’t exempt you from the power of incentives.


> "It’s impossible to stop at stop signs every time like they want you to."

This sounds bad on the surface -- you are supposed to stop at stop signs. But I remembered that self-driving cars have to roll through stop signs too, because nobody remembers the right-of-way rules from driver's ed, so people will just take their turn early if it doesn't look like it will cause a crash. That means coming to a complete stop ensures you never get your turn at all.

Very unfortunate that the letter of the law and what people do in the real world aren't aligned. This isn't so much of a camera / AI problem -- a cop could give you a ticket here too.


Genuine question, why are four-way stops a thing at all?

Here in Australia, the usual setup for uncontrolled residential intersections (that don't have roundabouts) is to give one axis (?) right of way, and put stop signs on the cross axis. If neither direction is substantially wider or busier than the other, it's usually just arbitrary.

This solves the problem of people being confused about right of way. When you come to a stop sign, you just give way to all traffic coming across the intersection.


I'm guessing your pictured intersections have lower speed limits and tighter geometry. As speeds increase, the amount of physical space needed to enter or cross traffic drastically increases. Even beyond that, as speeds increase, the estimation problem moves further away from human perception into something that would require the perception of a fast predatory bird, so even more space is needed just to be safe.

When I think of conversions from 2 to 4 way stops, I think of an intersection in my town where the road was listed at 25mph but the enormously wide pavement easily supported driving 40-45mph comfortably, especially being at the bottom of a steep hill, and people drove accordingly.

Completely separate from that is the issue of neighborhood groups trying to get inappropriate stop signs added to calm traffic. Stop signs are only for managing right of way; traffic calming should be handled by changes to street design.

Disclaimer: Obviously any answer to this question will be a generalization that doesn't cover all circumstances


> why are four-way stops a thing at all?

Where I am, they are certainly overused - of course, because people agitate for them because safety.

But you can read all about the traffic engineering justification for them in the Manual on Uniform Traffic Control Devices: https://mutcd.fhwa.dot.gov/htm/2009/part2/part2b.htm#section...


Left turns out of a busy street work better if you have a turn to yourself, instead of waiting for traffic to break. There are some arrangements of stop lights that can make that better or much, much worse for one-lane traffic (I have one of each near me).


> a cop could give you a ticket here too.

But won't. Or probably won't. Unless it's a speed trap. And if they do, they're not in the wrong to do so, but it doesn't mean you were 100% in the wrong to slow down to 10% instead of stopping. So it doesn't necessarily mean you need to change your behaviour, except to be wary of speed traps. And in case of danger, being able to stop is important, but there could be situations where slowing down is safer than jamming the brakes. And you might be able to explain that to the cop, or the cop might see what happened and give you a pass.

Point is: situations can be complicated and nuanced, and AI is still very bad at this kind of social thinking. Personally I've come to the opinion that AI will never really be good at this kind of thing until it actually lives in and participates in society, whatever that may mean.


None of this thinking needs to be done in the moment. Someone looks at the rules, considers how they actually work out in practice, and decides on a policy. Then the AI can easily follow the policy. Not that you need anything anywhere close to AI to comprehend a stop sign braking policy.


There's a similar story with speed limits --- the AI that sticks to the limit may become a danger to itself and others if the normal traffic goes faster, and likewise if some weather conditions slow down everyone else, it might be the idiot trying to stay at the limit and being a hazard to others.


>>"One of the safety improvements we’ve made this year is rolling out industry-leading telematics and camera-based safety technology across our delivery fleet," Alexandra Miller, a spokesperson for Amazon told Motherboard. "This technology provides drivers real-time alerts to help them stay safe when they are on the road."

Just because it is "industry leading" does NOT mean that it is worth a pile of rat sh*t.

>> (in whiteboard photo) Signals that trigger... WITHOUT audio alerts: Hard Braking, Hard Acceleration, High G forces, Hard Cornering..

All of these are COMPLETELY AMBIGUOUS and depend entirely on the driver's skill. In inexperienced drivers with low situational awareness and poor car control, they indicate a likely hazard. BUT in highly skilled drivers, higher values on every one of these, right up to the limit of adhesion, and the ability to maintain high values without breaking adhesion, SIGNIFY THE HIGHEST SKILL LEVELS.

The actual indication of skill is not the dynamic range of acceleration, braking, and cornering G forces, but smooth application, generally low, but when needed going right up to the adhesion limit and not going over, and thus not hitting something or being hit.

But obviously neither Amazon nor this Netradyne company who supposedly specializes in driving metrics has a fkn clue about what they are doing.

>> (in whiteboard photo) Signals that trigger... WITHOUT audio alerts: DRIVER DROWSINESS

Not alert on Driver Drowsiness - WTAF!?! If there is one thing on which a driver monitoring system should make an alert, it is driver drowsiness - [wake 'em up] or [stop the vehicle]. How are they even stupid enough to consider this a silent event?

And people wonder why the formerly highly respected technology industry is rapidly losing its esteem in the general population.


This reminds me of a nice quote I once read on slashdot (originally in the context of Agile Programming), which I liked enough to save in my anki quotes collection, paraphrased below:

"There is often a mentality in the workplace that with sufficiently detailed protocols and procedures, the village idiot can perform theoretical physics just as well as Einstein.

In fact, no amount of procedure will make that happen; quite the contrary, all that procedure ensures is that if you ever do hire Einstein, their output will closely resemble that of the village idiot."


Wait till the AI becomes your parole officer like in Elysium

https://youtube.com/watch?v=flLoSxd2nNY

We know this shit is coming.


Speaking of shit, wait until Amazon starts tracking your bowel movements. Not only how often you go for number 2, but also how hard you squeeze, how often you wipe and how many revolutions of TP you consume.


Naturally - how else would their ML models personalize the default value for your TP subscription upsell?


The rate at which Amazon is burning through its workforce vs. its rate of automation is starting to look like the balance is diverging fast. They'll collapse before they automate.


Which is probably why they want to build company towns where you won't have any alternative except for Amazon.

https://www.bloomberg.com/opinion/articles/2021-09-16/amazon...


Sam Kinison solved this problem decades ago.

https://www.youtube.com/watch?v=P0q4o58pKwA

Just send those company towns U-Hauls and luggage, so people can move to where the jobs are.


As much as I loved Sam Kinison, I doubt anyone chose to live in a place that is sand today and will still be sand in 100+ years. But when someone chooses to live in SF, Seattle, or now Austin in exchange for taking a low-wage job, they need to own that they are creating their own personal hell to subsidize the lifestyle of the top 5% or so. You can do better than that. This is not a binary problem; it's analog. If enough people leave SF, there will be much furrowing of brows and bellyaching about wages, but things will improve. The FOMO that keeps things at the status quo is exactly why that's not going to happen. Too bad. I passed on SF for exactly these reasons myself, and I'm a techie.


America is literally composed of people who came here for work. The trope that people have their feet nailed to the ground and can't move is just not reality.


Man sitting in comfortable home: those people shouldn’t have comfortable homes; it’s bad for them.


Entity that stereotypes based on content stereotypes based on content. Sad.


Given how bad most US cities are at building sufficient housing, it wouldn't actually shock me if a well-planned company town was short-term financially much better for their employees than average... at least Amazonville won't have NIMBYs complaining about new high-rise condos ruining the neighborhood.

And like it or not (not, probably), housing is 40+% of low-income worker living expense, so even fixing rent could turn a worker's life around...


There are plenty of places in the United States where there is little resistance to building more housing and the local government would love for people to come there and start businesses.

But they are not considered desirable places and they often end up on 100 worst places to live in the United States lists made by the same sort of people rationalizing their high cost of living. Amazon has apparently taken notice of this and decided to monetize it. But if it becomes a broken Orwellian dystopia like this camera system, that's a made for TV horror movie.

And if you suggest someone in an expensive place like San Francisco who is sinking into debt should move to a place like that and bootstrap you will hear an unending stream of profanities from them. Because the people who are willing to do that sort of thing have mostly already done so. Your priorities are not their priorities. They'll take the NIMBYs if they're close to their friends and family.


Those undesirable places are partially that way because a community is not just housing. It is good infrastructure, good governance, good society among many other things.

Look at all this anti-vax nonsense... look, they are free to act the way they want, but it does scare away investment into those communities.

Low taxes could mean less red tape but it can also mean non-existent governance and investment into infrastructure. People are starting to realize this scam for what it is.


I remember an earnest soccer mom sort asking me what she could do to stop Trumpism in 2017. And I replied that she should move to a blue city in a red state before the 2020 election. She bristled and responded with profanity at the very thought of that. And then walked away in a huff.

So I conclude people are unwilling to walk their talk. You're not wrong though, but all those really cool communities were built by pioneers who did something exactly like the above. The only way to beat the stupid is to infiltrate and vote their insane representatives out of office. As long as the educated are corralled and contained to the coasts, this will just keep getting worse IMO.


> So I conclude people are unwilling to walk their talk.

You are holding that person to a ridiculous standard. They talk to you about wanting to do something, and if they don't uproot their entire life just to shift a single vote around you're going to pretend they were speaking some enormous talk and now refuse to back it up?


Talk is cheap. Lowering standards is how you get craptastic AI like the camera system here deployed. But it's even worse than that because it's obvious they could improve its false positive detections, but seemingly given they already got paid, they don't care. Now imagine an entire city designed around that principle. That's either a British sitcom or a made for TV horror movie depending on where you go with it IMO.


Talk is cheap, sure.

But you're demanding an enormous act with a tiny benefit. It is completely unreasonable for you to complain that someone doesn't meet this standard you made up.


They asked me what to do not the other way around. I told them something that would make a difference, even if tiny. No fate but the one you make and all that.


They asked, and then you set up an extreme scenario, and then you blame them for not taking it.

Should I do a dumb analogy? Imagine if they asked for advice on getting fewer under-pressure tires and you suggested buying an entire new car with pressure sensors. And then declared they'll talk the talk but not walk the walk when they refuse that option.


It's not an extreme scenario at all. We just see things differently. Don't ask my opinion if you can't accept my observation, which in your strawman would probably be exactly what you suggest, because in my experience getting a person who asks a question like that to use a tire gauge is pulling teeth.

That said, there are plenty of inexpensive used cars out there as well that will be both easier on the environment and safer to drive at no additional charge. After all, it must be a pre-2000 or so car to not have at least an idiot light for the tires, and unless they've treated it lovingly (which seems unlikely) it probably has one tire in the grave already.


> Talk is cheap

Indeed, it costs almost nothing to suggest somebody should move to another state/city just to cast a vote.


Exactly! She was being cheap as usual and she got the best free advice no money can buy! Pay up chumps.


It is happening, but the people who are moving to these places seem like those who cannot make it in the more competitive markets. At the same time, you only live one life, so people who can afford to thrive in the expensive markets don't want to spend their time not living their best lives. So this catch-22 exists.

The Sanders approach had hope: invest in all these communities to help bring those people up to the same level of quality society as the coasts, and the hope was that enough would abandon Trumpism as their prospects improved. Instead we are repeating the mistakes of the Obama years, and I guess after Biden, the only way forward is more pain and suffering when the next demagogue makes it to the White House.


If we fall as a nation, just under 7.6B other people get a shot at running the show and fix what's broken to whatever extent we can. Good for them, bad for us.


I can't see any other nation becoming the world superpower any time soon with the exception of China. The US is unique in that it has a trifecta of components necessary to maintain its power. It has the natural resources, it still has a great talent pool and one thing it seems to have over everyone else is a level of net immigration that allows it to mask the population implosion that is happening in all other western nations and China. China may very well surpass the US or at the very least rival them but I just don't see how they are going to overcome the population implosion that is coming for them in a few decades.


A loose confederation of 1.8 billion Muslims or approximately 1.4 billion Indians might have a thing or two to say here in the long run. It's not all about China and the US. Even more so as they both pursue policies leading to their own irrelevance in that same long run. But also: "Prediction is hard, especially about the future." - Niels Bohr


Well, none of us will know what happens in the long run. Maybe climate change will wipe most of us out? The best we can guess is short term (the next few decades). That time frame is what my comment was really aimed at.


The next few decades will determine the impact of climate change. It doesn't seem like a particularly hard problem to solve: embrace solar and nuclear power ASAP globally, but nothing is easy when people get involved. But even China seems to have read the memo now. But also, if someone even blurts out "clean coal" point and laugh.

https://www.nytimes.com/2021/09/22/world/asia/china-coal.htm...


You are not representing the seriousness of this situation adequately (in my opinion).

This excellent cartoon video explains the seriousness of all the lesser known sources of carbon: https://www.youtube.com/watch?v=yiw6_JakZFc

Seems like Chairman Xi is serious about tackling Climate Change so he does not get deposed if the Gobi desert grows and eventually swallows up half of China.


Amazon has adverts on TV saying how wonderful it is to work for Amazon. My first thought when seeing them is, that's a PR campaign, and the truth is almost certainly the opposite.


Will they, really? Their workers would not work at Amazon if they had other choices.

The real problem is that there is no Amazon delivery and logistics equivalent. Workers would leave for it. There would be price competition on wages.

Perhaps antitrust split of Amazon should divide the company down the middle in each state, creating two logistics and delivery companies nationwide, evenly and randomly splitting their assets.


There are plenty of retail jobs. If you haven't seen the news recently, there's a record shortage and large wage increases happening in that entire sector. The reason people still work Amazon is because Amazon pays better than the competition. For example, a lot of restaurant workers have left that sector for better choices. You don't see that happening in Amazon but I would think that workers in both sectors are somewhat overlapping (unskilled labor).


I'm not sure where you have been for the last year and a half, but a lot of retail workers and restaurant workers have been laid off or suspended on reduced pay. Shops were empty, restaurants were empty. That's still mostly the case. That's why there is a shortage.

It's those same companies that laid people off that are now trying to tempt them back with increased wages. Chipotle, for example.


> Shops were empty, restaurants were empty. That's still mostly the case. That's why there is a shortage.

Why is there a shortage of labour if restaurants and shops are still mostly empty?

I think the real reason is that people took the opportunity (or were forced to financially) to try other careers.


There is only a shortage of people willing to go work for the same companies that shit on them when COVID happened.


My local Chipotle is advertising $17.50/h wages and $750 signing bonuses. My local Amazon starts at $19/h. The market for entry level unskilled labor has never been better in the last decade.

Maybe you live in Europe or another country where growth is stagnant?


When was the last time you stepped outside? The shops and restaurants are not empty anywhere in the country.


I keep feeling companies should only be permitted to operate in a single trade category.

What we call 'anti-competitive' behaviour now is basically down to deep pockets being able to subsidize projects unrelated to the core business (whatever the hell that is).



>"The Netradyne camera, which requires Amazon drivers to sign consent forms to release their biometric data..."

So this is our Western version of a free society, expected to become the norm: sign consent to be fucked over and over, or else. And of course it was your "free" choice (never mind losing your job if you do not sign), so all is nice and dandy.


Lemme get this straight: They created a deep learning system that is provably flawed in a way that makes it easy for drivers to show economic damages and they have resources with which to pay those damages? What am I missing here? They hope that they won't be sued because of ________?

Some technocrat ran amok; hope Andy intercedes. (They could fix this by untying the cameras from the economic activity, though nobody would probably believe them if they tried now, so the whole System must go and be replaced someday with something New.)


"They hope that they won't be sued because" Amazon has enough resources to tie up anyone attempting to sue them in court for 20 years so that any drivers affected by it have long since died or run out of resources themselves to continue the suit.

Sure, they might eventually be sued, but it will take so long to get any results that by the time there's actually a decision on the case (IF there's ever a decision) that the damages will be so minisculely small to a giant like Amazon that it will just be another line item that's lost in the rounding errors of their financial report.

Expected judgement amount (not very big in the grand scheme of things for Amazon) * % probability of judgement against them = an even smaller amount relative to the things Amazon cares about.


If you haven’t, please watch the movie Brazil, as this type of thing would exactly fit in the narrative.


I see a future where there is a writer like Hannah Arendt describing Amazon and Jeff Bezos.


I haven't actually read her works, and she's from back in the 1930s and 1940s, but I think Simone Weil is the closest thing we have to that [1]:

> “In this kind of life,” Weil realized, “those who suffer aren’t able to complain. Others would misunderstand them, perhaps laughed at by others who are not suffering, or thought of as tiresome by yet others who, suffering themselves, have quite enough suffering of their own. Everywhere the same callousness, with few exceptions.” To complain to a supervisor was an invitation for further degradation. “It’s humiliating, since she has no rights at all and is at the mercy of the good will of the foremen, who decide according to her worth as a worker, and in large measure capriciously.”

[1] https://www.zocalopublicsquare.org/2018/06/29/one-frances-su...


Wasn't there a scene in The Fifth Element where Bruce Willis had some talking box mounted under his rear-view mirror that monitored his driving and verbally announced when it was taking points off his license?

I guess the dystopian part of the future got built.

I wonder if UPS has something similar. They've always been early adopters of technology and run a very smooth and equitable operation for customers and employees alike.


How is this even legal? I don't think an employer would be allowed to have CCTVs that can surveil the personnel like this if this were in Sweden.


If it's legal, it's because it's not illegal. (Laws say what you can't do, not what you can, in a system like the United States' )


Lol


"which determined whether he received prizes, such as rain jackets,"

Your coat is such a thrill

But your coat won't pay my bills

I want money

That's what I want

That's what I want

That's what I want


The Amazon Flex Reddit has a story of a girl whose anal fissure exploded en route, causing... a situation.

Her story should unfold in the next 48 hrs. She's got a meeting to attend about "the incident".

Amazon's culture is just excessive and harmful. I try to pick up as little work from their systems as possible.



