> find a seemingly unrelated reason to terminate you (or
> make you leave).
That doesn't make sense. The guy who is going around harassing people is creating the liability, not the victim, and will probably create more liability in the future.
I would be very surprised if there was any jurisdiction in the world where terminating an employment contract terminates employer liability for harassment; a terminated former employee who has been harassed is a much bigger threat to company profits than a current one.
The only reason any reasonably competent HR department would fire someone for making a valid complaint of harassment would be if they were ordered to by their superiors to protect themselves or their management colleagues (which is, sadly, a risk in some companies).
Obviously, going to HR should not be someone's first move for a one-off comment. The person probably doesn't realise that they are being offensive, so a reasonable escalation is first to tell them you are offended (and why), and if it continues, then go to HR. If it still continues, and HR doesn't do anything meaningful to address the problem (or wrongfully fires you in retaliation), then escalate by hiring an employment lawyer to bring a lawsuit against the company or, depending on your jurisdiction, by complaining to the relevant government body. This is fair to the person making the remark (they have a chance to learn that their conduct is offensive), to the company (it has a chance to know about and address the problem before facing any external action), and to the person who was offended (the problem is fixed one way or the other).
In this case, both sides were potentially in the wrong - throwing your drink on someone could be viewed as an assault, and is arguably more serious than a single offensive remark. I think that had Kelly Ellis followed a reasonable pattern of escalation, there would be less harassment at Google now, all three parties involved (i.e. the two people and the corporation) would be better off, and we wouldn't be hearing about this now. Obviously it would be even better if the remark had never been made in the first place.
It makes me wonder whether Google HR gives employees training on harassment - both on not harassing people and on how to respond to it. That is pretty much standard at big companies of Google's size, and probably pays for itself in reduced liability: if they get sued out of the blue, they can point to the training to try to shift the blame to the person doing the harassing rather than the company (obviously that won't work if the plaintiff complained to them and they did nothing).
> That's a failure at the management level, not the developer's level
Management doesn't necessarily know how much effort the work will take. They can ask developers to estimate it, but even then the estimates will not always be accurate.
The crux of movements like agile (and management processes like scrum within it) is to make things more process-oriented - management ensures employees work for x hours a week following a process, and adapts based on the output they produce (increasing resources if it wants more output).
Product-oriented processes (where managers tell employees they have to produce y output, after which they can slack off or go home if they want) might work if someone's job is to sit on a production line doing a well-understood process, but they do not work well for software. I think both businesses and developers are better off for the trend of abandoning such processes when it comes to software.
In a process-oriented environment, it is the prerogative of management to make sure everyone works as close as possible to their contracted number of hours - someone who is working less should rightly get called out on it. They may have finished their current task, but then they should help someone else with theirs, or work on improving processes, infrastructure, or tech debt to make likely future work easier.
Not everyone needs to work the same number of hours in a process-oriented environment, but anyone working fewer hours should have that in their contract (most likely for less pay than they would get for working longer), rather than deciding it unilaterally.
If you physically protect your book sufficiently, don't let anyone who is a threat see it, and choose strong passwords (which baNana3 isn't for most purposes - it's only 7 characters long and based on a dictionary word with minor modifications), then yes.
If someone willing to put in the effort to do some cryptanalysis obtains a copy of your book, then no, you are most likely not safe. Firstly, the Vigenere cipher is extremely vulnerable to a known-plaintext attack on the key: if the person who obtained your book knows your password to just one site (for example, because it was lost in a compromise and published on the Internet), they can work out your master key and then recover all your other passwords. Even if they don't know any passwords, if your passwords are not made up of characters selected uniformly at random (and especially if they are dictionary words), an attacker will usually be able to exploit that bias to work out the master key. For example, the attacker might cycle through every word in the dictionary, derive the key that decrypts aykwmy to that word, and try each candidate master key on other entries in your book until one yields a lot of other dictionary words.
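To make the known-plaintext attack concrete, here is a minimal sketch (assuming the classic Vigenere scheme over lowercase a-z; the function names and sample password/key are my own, purely for illustration):

```python
# Classic Vigenere over a-z: c[i] = (p[i] + k[i mod len(k)]) mod 26.
def encrypt(plaintext, key):
    return ''.join(
        chr((ord(p) - 2 * ord('a') + ord(key[i % len(key)])) % 26 + ord('a'))
        for i, p in enumerate(plaintext)
    )

# Given one known plaintext/ciphertext pair, each key character falls
# straight out: k[i] = (c[i] - p[i]) mod 26. No search required at all.
def recover_key(plaintext, ciphertext):
    return ''.join(
        chr((ord(c) - ord(p)) % 26 + ord('a'))
        for p, c in zip(plaintext, ciphertext)
    )

ct = encrypt('hunter', 'secret')   # the entry the attacker sees in the book
print(recover_key('hunter', ct))   # prints "secret"
```

The recovered key is as long as the known password, so one leaked password of reasonable length typically exposes the entire master key.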
It is better than using the same password everywhere, but not by much.
When password databases are leaked, there have been instances of people or groups taking passwords from those leaked databases and trying to log in on other sites (for example, to steal money or data, defraud customers, or plant back-doors to enable future criminal activity).
Suppose that after this becomes popular, there are leaks of at least two plain text password databases from popular websites (not that unlikely, unfortunately). These websites might be relatively low value - with the password, someone might be able to comment as someone else, or change their preferences on the site, or something like that. Suppose some people believed this card was safe, and so put a password generated by this card into two of these low-value sites that don't put much effort into security (they don't even bother hashing their passwords with bcrypt / scrypt or the like), and also into a high-value site (a bank, a domain name registrar, a GitHub account that hosts puppet scripts, an important e-mail account).
Using the two low-value site password databases, I could easily and automatically identify likely candidates for these types of passwords that are common between the two databases - they both start with the same 8 'spacebar' characters. I could build a set of likely pre-substitution endings for the passwords in each database, and this would let me use something like the EM algorithm to work out a distribution over the most likely partial substitution table, common word, and spacebar values, which I could then combine with likely 'identifier' plaintexts to prioritise the order in which I try passwords against the secure site.
All of this would likely be completely automated - and if a significant number of people are using these cards, for certain types of criminal enterprise there is a good chance that it would be cost effective.
All in all, people using this card are taking a very real security risk that is completely unnecessary when better alternatives exist (like using a password manager and generating a completely different, secure random password for each site). Encrypting the database with a strong password and an expensive key derivation function also complicates other types of attack (for example, someone secretly going into your wallet and photographing the card). An attacker could still try to install a keylogger on the phone or computer holding your password database, as well as copy the database itself, but that probably takes longer and carries more risk of getting caught than photographing a card.
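A minimal sketch of that alternative, using only the standard library (the alphabet and length here are arbitrary choices, not a recommendation from any particular manager):

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length=20):
    # secrets draws from the OS CSPRNG, so every password is independent:
    # a breach of one site tells an attacker nothing about the others.
    return ''.join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())  # different every run
```

This is essentially what any decent password manager does for you, with the encrypted database handling the storage side.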
The person who gets to decide how to break down policy into binary decisions and when to ask questions is the one with all the power here.
Suppose they want to suppress a 49% minority who care far more than the 51% majority and will pay far more per vote. Assume that if the question (e.g. 'do you support gay marriage?') were asked once, the minority could afford enough votes for it to pass. If the person deciding which questions are asked wants to suppress this, they simply formulate the policy so that for the minority to get what it wants, it has to answer 'No' to n binary questions (e.g. each of the n questions is a measure that bans gay marriage in a slightly different way). The minority can afford to overcome the majority on one question, but for some n it can't afford to defeat the majority repeatedly. Asking the same question more than once therefore changes the outcome.
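To put hypothetical numbers on this (I'm assuming the quadratic cost rule such papers usually use, where buying v votes costs a voter v^2; all the budgets below are made up for illustration):

```python
import math

MINORITY, MAJORITY = 49, 51
B_MIN, B_MAJ = 100.0, 10.0   # per-voter budgets: the minority cares ~10x more

def side_votes(voters, budget_per_voter):
    # Under quadratic cost, a budget of b buys sqrt(b) votes per voter.
    return voters * math.sqrt(budget_per_voter)

# Asked once, the minority wins comfortably: 490 votes vs ~161.
print(side_votes(MINORITY, B_MIN), side_votes(MAJORITY, B_MAJ))

# Asked as n = 20 near-identical measures, the minority must win every one,
# so it splits its budget n ways, while the majority needs only one win and
# can concentrate its whole budget on a single measure: ~110 vs ~161.
n = 20
print(side_votes(MINORITY, B_MIN / n), side_votes(MAJORITY, B_MAJ))
```

The asymmetry is that the minority has to win all n questions while the majority only has to win one, so splitting the agenda hurts only the side defending the status quo ante.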
Bundling decisions would also allow manipulation of binary preferences - for example, by mixing popular and unpopular measures (e.g. cutting taxes and re-establishing slavery) in a single decision so that just enough people considered it worth supporting, even though they don't support all line items.
The mechanism is therefore useless as a voting mechanism without some way of controlling how things get on the ballot.
Of course, the bigger issue (assuming, as the paper does, that a real currency is used and not an artificial one) is that the laws in place are never perfect, and measuring how much influence a group should have over making new laws by how wealthy it became under the current laws will likely lead to dynamic evolution towards a solution that benefits a tiny minority.
For example, suppose we live in a fictional world of 100 people where the currency is apples. A person needs 1 apple a day to live (which is consumed, destroying it). The world has enough trees to produce 125 apples a day (and no more land to plant more trees). Due to an archaic and unfair law, people numbered 0-49 get 1.5 apples a day, while everyone else gets 1 apple a day. People 50-99 perform services for people 0-49 and get a little extra apple in exchange. People 50-99 never vote, because they can't afford to (or if they do, it is the minimum - they always vote for everyone to get 1.25 apples per day), while people 0-49 put forward a bit over the minimum and easily win to retain the archaic law.
One day, people 0-48 decide they want more apples, so they propose to change the law so that person 49 gets only 1 apple per day. Person 49 puts in all their savings, but it is not enough, and the law is changed. Person 49 is now impoverished and in the same state as people 50-99. Gradually this continues, until one or two people have virtually all the surplus apples - and everyone else has to work hard for that small group just to get the one apple they need to survive.
> This website is simply _enabling_ people who intend to
> violate this (potential) term of their airline ticket
Which might, in some jurisdictions, open the website operators to claims of tortious interference (https://en.wikipedia.org/wiki/Tortious_interference) if the airline can prove that the site operator knew about the contractual term and incited people to breach it.
Hopefully a term requiring someone to actually take a flight is unenforceable - that would give the site a defence even if such a term exists.
The article doesn't mention anything about clinical trials, or anything beyond anecdotes and some n=1 experimentation that Cronise did on himself - and yet he has already started selling a product. That is the sort of thing that needs more testing first, especially if it might have adverse effects (hopefully it was tested for safety and efficacy and the journalist just doesn't mention it).
They've also been tested and licensed to do that. If GE, LG, et al. decide to come out with a new microwave oven that leaks so badly it is effectively a jamming device, then it won't be licensed and can't legally be sold.