The risk isn't so much that your employer gains access to your email (though you do risk the contents of emails you view on that machine getting saved and read by someone at the company). It's more that you've legally entangled things. If your employer is sued or investigated, a judge can issue a subpoena for them to turn over records. If those records show that employees accessed external accounts from work systems, those accounts, and any other devices that have accessed them, can now be subpoenaed too. I've seen this happen to friends. Their employer got sued, and as part of discovery they had to hand over all of their personal devices because they hadn't kept church and state separate. It took them many months and significant legal expenses to get their stuff back. If you never access personal stuff from your work devices and never access work stuff from your personal devices, you'll never be in that position.
2FA only protects login. If you're already logged in, someone with access to the computer can just copy the session token. Or instruct the email client that is already running to dump all your emails to a local file.
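To sketch the point (the webmail URL and cookie name here are made up, purely for illustration): once a valid session cookie exists on the machine, anyone who can read it can replay it, and the server never asks for a second factor again.

    # Hypothetical sketch: a copied session cookie is enough to act as the
    # logged-in user. 2FA only gated the original login, not this request.
    import requests

    # Value lifted from the already-logged-in browser profile on the work machine.
    session_cookie = {"SESSIONID": "abc123"}

    resp = requests.get(
        "https://mail.example.com/api/messages",  # made-up webmail endpoint
        cookies=session_cookie,
        timeout=10,
    )

    # The server sees a valid, already-authenticated session and returns mail
    # without any second-factor prompt.
    for msg in resp.json().get("messages", []):
        print(msg.get("subject"))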
I asked this last time about the one in London, and was told that one of the checks is that the image has changed since the last run. Otherwise the data isn't used.
That prevents not only technical issues but also attacks like someone blocking the camera or putting a static photo in front of it.
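A minimal sketch of that kind of gate (my guess at the shape, not how their system is actually written): hash each frame and refuse to use it if the hash matches the previous one.

    # Illustrative only: reject a camera frame as an entropy source if it is
    # byte-identical to the previous frame (blocked lens, static photo, frozen feed).
    import hashlib
    from typing import Optional

    _last_digest: Optional[bytes] = None

    def entropy_from_frame(frame_bytes: bytes) -> Optional[bytes]:
        global _last_digest
        digest = hashlib.sha256(frame_bytes).digest()
        if digest == _last_digest:
            return None    # image unchanged: discard, don't feed the pool
        _last_digest = digest
        return digest      # image changed: OK to mix into the pool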
Interesting - sounds like it would have some negative effect then. Thanks for sharing.
Now I wonder about periodic offsets. E.g. if the lights are off at night, or the skies are overcast in winter, does it skew the results in some significant way? I seriously doubt it, though.
Not really. It's just one source of randomness among many. The entire point of having multiple sources is that they're redundant; you don't need them all.
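Roughly (made-up source names, just to show the shape): all available sources get hashed together, so a dead source simply contributes nothing, and the mix stays unpredictable as long as at least one input is.

    # Illustrative entropy mixing: missing/rejected sources are simply skipped.
    import hashlib
    import os
    from typing import Optional, Sequence

    def mix_entropy(sources: Sequence[Optional[bytes]]) -> bytes:
        h = hashlib.sha256()
        for s in sources:
            if s is not None:      # e.g. camera frame rejected as unchanged
                h.update(s)
        return h.digest()

    seed = mix_entropy([
        None,                      # camera source dropped this round
        os.urandom(32),            # OS entropy
        os.urandom(32),            # stand-in for another hardware source
    ])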
If there’s one thing about AI, it’s that you cannot avoid it. The idea that individuals can just “opt out” of plastic, sugar, artificial ingredients, factory farms, social media and all the other negative externalities the corporations push on us is a fantasy that governments and industry push on individuals to keep us distracted: https://magarshak.com/blog/?p=362
On HN, people hate on Web3 because of its limited upside. But really look at the downside dynamics of a technology! With Web3, you can only ever lose what you voluntarily put in (at great effort and slippage LOL). So that caps the downside. Millions of people who never got a crypto wallet and never sent their money to some shady exchange never lost a penny.
Now compare that to AI. No matter what you do, no matter how far you try to avoid it, millions will lose their jobs, get denied loans, be surveilled, possibly arrested for precrime, micromanaged and controlled, practically enslaved in order to survive and reproduce, etc.
It won’t even work to retreat into gated communities or grandfathered human-verified accounts, because defectors will run bots on their accounts, their Neuralink cyborg hookups and their Meta glasses to gain an edge and capture at least some of the advantages of the bots. Not to mention, of course, that the economic power and efficiency of botless communities will be laughably uncompetitive.
You won’t even be able to move away anywhere to escape it. You can see an early preview of that in the story of Ted Kaczynski, the Unabomber (google it). While the guy was clearly a disturbed maniac who sent explosives to people, as a mathematician following things to their logical conclusion he did sort of predict what would happen to everyone once technology reaches a certain point. AI just makes it so that you can’t escape.
If HN cared about AI’s unlimited downsides like it cares about Web3’s lack of large upsides, the sentiment here would be very different. But the time has not come yet. Set an alarm to check back on this comment in exactly 7 years.