I suspect the author of the article is American. Everyone thinks that their own accent is neutral and measures everyone else relative to that.
As a non-American, the mentioned singers don't sound American to me, although I would also agree that they sound neutral.
I suspect that a more reasonable explanation is that the phonemes used while singing are more universal than those used in normal speech, and so everyone perceives singing to be closer to their own accent.
> But what if the otherwise loathed real name policy could
> be turned to service this particular need?
The link between a real person and a Facebook account isn't secure - I could make an account with your name today without too much stress (no need to provide ID unless Facebook thinks your name isn't a real name).
I think the grandparent got the wrong end of the stick by relating this to "famous" people, which, in turn, threw you off.
Sure, you can register an account in my name, but there are quite a number of people who will not be fooled: people who actually know me. People who know me in real life can tell whether an account is real or not, because they can tell whether I post about things I do, whether I post pictures that are...well, me.
In that case, they can be reasonably sure that the account in question is, in fact, my account. If I attach my GPG key to this account, they can thus also reasonably assume that the GPG key belongs to the account that belongs to me. This essentially gets you the online equivalent of a key-sharing party.
While it seems (based on other comments) that the product mentioned here doesn't do that, if the phone has an alternative data link (e.g. GPRS / 3G / 4G) to the server that stores the credentials, it would be possible to make this more secure.
For example, suppose Alice wants to connect to Bob's 802.11g wifi hotspot using 802.11i-2004 (WPA2) authentication in PSK mode. Charlie and Bob have the password; neither wants to give it to Alice, but Charlie wants to help Alice access Bob's network.

The first step of the normal WPA2-PSK process is to take the password and generate the Pairwise Master Key (PMK) by putting the password through a key derivation function (PBKDF2-SHA1, with the SSID as salt). However, the PMK is essentially just a stretched version of the password, so we will assume Charlie also doesn't want to share that.

To continue with the connection, Alice needs to compute the same Pairwise Transient Key (PTK) as Bob. The PTK is derived with a keyed hash function (the 802.11i PRF) from the PMK, Alice's random nonce, Bob's random nonce, Alice's MAC address, and Bob's MAC address. Alice could send the latter four pieces of information to Charlie over the existing link, and Charlie could send the computed PTK back, allowing Alice to complete the 802.11g connection without the PMK or password ever being revealed to her.
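The key schedule described above can be sketched in Python. This is a minimal illustration of the standard WPA2-PSK derivation (PBKDF2-SHA1 with the SSID as salt for the PMK, and the 802.11i PRF for the PTK), assuming CCMP's 48-byte PTK; it is not code from the product being discussed:

```python
import hashlib
import hmac

def derive_pmk(password: bytes, ssid: bytes) -> bytes:
    # PMK = PBKDF2-HMAC-SHA1(password, SSID as salt, 4096 iterations, 32 bytes)
    return hashlib.pbkdf2_hmac("sha1", password, ssid, 4096, 32)

def derive_ptk(pmk: bytes, mac_a: bytes, mac_b: bytes,
               nonce_a: bytes, nonce_b: bytes, length: int = 48) -> bytes:
    # 802.11i PRF: iterated HMAC-SHA1 over a label, the two MAC addresses,
    # and the two nonces. The inputs are ordered min||max so both sides
    # derive the same PTK regardless of which role they play.
    data = (min(mac_a, mac_b) + max(mac_a, mac_b) +
            min(nonce_a, nonce_b) + max(nonce_a, nonce_b))
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(pmk,
                        b"Pairwise key expansion\x00" + data + bytes([counter]),
                        hashlib.sha1).digest()
        counter += 1
    return out[:length]
```

In the scheme above, Charlie's server would run `derive_pmk` once (it needs the password and SSID), then answer each of Alice's requests by running `derive_ptk` on the two MAC addresses and nonces she sends, returning only the PTK.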
A similar way of transferring enough information to allow the connection to continue is likely possible for other authentication modes, like the various 802.1X options.
Of course, implementing this would probably require, at least, a rooted phone (for example, to replace the stock wpa_supplicant on Android).
Also, the server would need to store the password or PMK itself (either in plaintext or encrypted with a key that is kept available at all times to process incoming requests), so it would have a huge database of credentials that could be compromised.
> find a seemingly unrelated reason to terminate you (or
> make you leave).
That doesn't make sense. The guy who is going around harassing people is creating the liability, not the victim, and will probably create more liability in the future.
I would be very surprised if there was any jurisdiction in the world where terminating an employment contract terminates employer liability for harassment; a terminated former employee who has been harassed is a much bigger threat to company profits than a current one.
The only reason any reasonably competent HR department would fire someone for making a valid complaint of harassment would be if they were ordered to by their superiors to protect themselves or their management colleagues (which is, sadly, a risk in some companies).
Obviously, going to HR should not be someone's first move over a one-off comment. The person probably doesn't realise that they are being offensive, so a reasonable escalation is to tell them you are offended (and why) first, and if it continues, then go to HR. If it still continues, and HR doesn't do anything meaningful to address the problem (or fires you wrongfully in retribution), then escalate by hiring an employment lawyer to bring a lawsuit against the company or, depending on your jurisdiction, complaining to the relevant government body. This is fair to the person making the remark (they have a chance to learn that their conduct is offensive), to the company (they have a chance to know about and address the problem before facing any external action), and to the person who was offended (the problem is fixed one way or the other).
In this case, both sides were potentially in the wrong - throwing your drink on someone could be viewed as an assault and more serious than a single offensive remark. I think that had Kelly Ellis followed a reasonable pattern of escalation, there would be less harassment at Google now, all three parties (i.e. the two people and the corporation) involved would be better off, and we wouldn't be hearing about this now. Obviously it would be even better if the remark had never been made in the first place.
It makes me wonder whether Google HR gives employees training about not harassing people and how to respond to harassment - that is pretty much standard at big companies of their size, and probably pays for itself in reduced liability: if they get sued out of the blue, they can point to the training to try to shift the blame onto the person doing the harassing rather than the company (obviously that won't work if the plaintiff complained to them and they did nothing).
> That's a failure at the management level, not the developer's level
Management don't necessarily know accurately how much effort the work will take. They can ask developers to estimate it, but even then the estimates will not always be accurate.
The crux of movements like the agile movement (and management processes like scrum within it) is to make things more process-orientated - management ensures employees work for x hours a week following a process, and adapts based on what output they produce (increasing resources if they want more output).
Product-orientated processes (where managers tell employees they have to produce y output, and after that they can slack off or go home if they want to) might work if someone's job is to sit on a production line performing a well-understood process, but they do not work well for software. I think both businesses and developers are better off for the trend of abandoning such processes when it comes to software.
Given a process-orientated environment, it is the prerogative of management to make sure everyone works as close as possible to their contracted number of hours - someone who is working less should rightly be called out on it. They may have finished their current task, but they should help someone else with their task, or work on improving processes / infrastructure / addressing tech debt to make likely future work easier.
Not everyone needs to work the same number of hours in a process-orientated environment, but if they work less, that should be in their contract (and most likely they will get paid less than they would have if they worked longer) and not just the employee unilaterally deciding that.
If you physically protect your book sufficiently and don't let anyone who is a threat see it, and choose strong passwords (which baNana3 isn't for most purposes - it's only 7 characters long and based on a dictionary word with minor modifications), then yes.
If someone willing to put in the effort to do some cryptanalysis obtains a copy of your book, then no, you are most likely not safe. Firstly, the Vigenere cipher is extremely vulnerable to a known-plaintext attack on the key - if the person who obtained your book knows your password to just one site (for example, because it was lost in a compromise and published on the Internet), they can work out your master key and then get all your other passwords. Even if they don't know any passwords, if you use passwords that are not made up of characters selected uniformly at random (and especially if they are dictionary words), the attacker will usually be able to use that bias to work out the master key. For example, the attacker might cycle through every word in the dictionary, derive the key that would decrypt aykwmy to that word, and try each candidate master key on other entries in your book until one yields a lot of other dictionary words.
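The known-plaintext attack really is this trivial: with a Vigenere cipher, subtracting the plaintext from the ciphertext letter-by-letter yields the key directly. A minimal sketch (the strings here are made-up examples, not the ciphertext from the comment above):

```python
def recover_key(ciphertext: str, known_plaintext: str) -> str:
    # Vigenere encryption is c[i] = (p[i] + k[i mod len(k)]) mod 26,
    # so each key letter falls out as k[i] = (c[i] - p[i]) mod 26.
    return "".join(chr((ord(c) - ord(p)) % 26 + ord("a"))
                   for c, p in zip(ciphertext, known_plaintext))

# If the attacker knows one site's password was "banana" and the book
# entry for that site reads "teprrt", the master key is recovered instantly:
print(recover_key("teprrt", "banana"))  # -> secret
```

Every other entry in the book then decrypts with that key, which is why one leaked password compromises the whole scheme.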
It is better than using the same password everywhere, but not by much.
When password databases are leaked, there have been instances of people / groups who take passwords from those leaked databases and try to log in on other sites (for example, to steal money or data, defraud customers, or to plant back-doors to allow future criminal activity).
Suppose that after this becomes popular, there are leaks of at least two plaintext password databases from popular websites (not that unlikely, unfortunately). These websites might be relatively low value - someone might get permission to comment as someone else, or change their preferences on the site, or something like that, if they had their password. Suppose some people believed this card was safe, and so put a password generated by this card into two of these low-value sites that don't put too much effort into security (since they don't even bother hashing their passwords with bcrypt / scrypt or the like), and also into a high-value site (bank, domain name registrar, GitHub account that hosts puppet scripts, important e-mail account).
Using the two low-value site password databases, I could easily and automatically identify likely candidates for these types of passwords that are common between the two databases - they both start with the same 8 'spacebar' characters. I could have a set of likely endings prior to the substitution cipher for the passwords in each database, and this would allow me to use something like the EM (expectation-maximisation) algorithm to work out a distribution over the most likely partial substitution-cipher table, common word, and spacebar values, which I could then combine with likely 'identifier' plaintexts to prioritise the order in which I try passwords against the secure site.
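The first step of that attack can be sketched simply. This assumes (per the description above) that card-generated passwords begin with a fixed 8-character run; the data layout and field names are hypothetical, and the statistical follow-up (the EM step) is omitted:

```python
def likely_card_users(leak1: dict, leak2: dict, prefix_len: int = 8) -> list:
    # Find accounts appearing in both plaintext leaks whose two passwords
    # share an identical prefix of prefix_len characters - the signature of
    # a card scheme that prepends a fixed run of characters to every password.
    hits = []
    for email, pw1 in leak1.items():
        pw2 = leak2.get(email)
        if pw2 and len(pw1) >= prefix_len and pw1[:prefix_len] == pw2[:prefix_len]:
            hits.append((email, pw1, pw2))
    return hits

# Hypothetical leaked data: the first account reuses a card with prefix "qqqqqqqq".
leak1 = {"a@example.com": "qqqqqqqqHello7", "b@example.com": "password1"}
leak2 = {"a@example.com": "qqqqqqqqWorld9", "b@example.com": "hunter2"}
print(likely_card_users(leak1, leak2))
```

Filtering leaks down to a small set of likely card users like this is cheap, which is what makes the rest of the attack plausible to automate at scale.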
All of this would likely be completely automated - and if a significant number of people are using these cards, for certain types of criminal enterprise there is a good chance that it would be cost effective.
All in all, people using this card are taking a very real security risk that is completely unnecessary when better alternatives exist (like using a password manager and generating a completely different secure random password for each site). Encrypting the database with a strong password and an expensive key derivation function also complicates other types of attacks (for example, someone secretly going into your wallet and photographing the card) - obviously, they could try to install a keylogger on the phone or computer holding your password database, as well as copy the database itself, but that probably takes longer and carries more risk of getting caught than photographing a card.
The person who gets to decide how to break down policy into binary decisions and when to ask questions is the one with all the power here.
Suppose they want to suppress a 49% minority who care far more than the 51% majority and will pay far more. Assume that if the question (e.g. do you support gay marriage) was asked once, the minority could afford the votes for it to pass. If the person deciding what questions are asked wants to suppress this, they simply formulate the policy so that for the minority to get what they want, they have to answer 'No' to n binary questions (e.g. each of the n questions is a measure that bans gay marriage in a slightly different way). The minority can afford to overcome the majority on one question, but for some n, they can't afford to defeat the majority repeatedly. Therefore, asking the same question more than once would change the outcome.
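The exhaustion effect can be shown with a toy model. This is an assumption-laden sketch, not the paper's mechanism: each side spends from a fixed budget, the higher spend wins a round, and the minority must win every repeat of the question. A more realistic cost function (e.g. quadratic) would change the numbers but not the effect:

```python
def rounds_minority_can_win(minority_budget: float, majority_spend: float) -> int:
    # Each time the question is re-asked, the minority must outspend the
    # majority again; with a finite budget it can only do so finitely often,
    # so asking the same question n times for large enough n defeats it.
    wins = 0
    while minority_budget > majority_spend:
        minority_budget -= majority_spend  # spend just enough to win this round
        wins += 1
    return wins

# The minority can defeat the ban asked three times, but not a fourth:
print(rounds_minority_can_win(minority_budget=100, majority_spend=30))  # -> 3
```

Whoever sets n therefore controls the outcome, which is the point being made above.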
Bundling decisions would also allow manipulation of binary preferences - for example, by mixing popular and unpopular measures (e.g. cutting taxes and re-establishing slavery) in a single decision so that just enough people considered it worth supporting, even though they don't support all line items.
The mechanism is therefore useless as a voting mechanism without some way of controlling how things get on the ballot.
Of course, the bigger issue (assuming, as the paper does, that a real currency is used and not an artificial one) is that the laws in place are never perfect, and measuring how much influence a group should have over making new laws by how wealthy they became under current laws will likely lead to dynamic evolution towards a solution that benefits a tiny minority.
For example, suppose we live in a fictional world of 100 people where the currency is apples. A person needs 1 apple a day to live (which is consumed, destroying it). The world has enough trees to produce 125 apples a day (and no more land to plant more trees). Due to an archaic and unfair law, people numbered 0-49 get 1.5 apples a day, while everyone else gets 1 apple a day. People 50-99 perform services for people 0-49 and get a little extra apple in exchange. People 50-99 never vote, because they can't afford it (or if they do, it is the minimum - they always vote for everyone to get 1.25 apples per day), while people 0-49 put forward a bit over the minimum and easily win, retaining the archaic law.
One day, people 0-48 decide they want more apples, so they propose to change the law so that person 49 gets only 1 apple per day. Person 49 puts in all their savings, but it is not enough, and the law is changed. Person 49 is now impoverished and in the same state as people 50-99. Gradually, this continues until one or two people have virtually all the superfluous apples - and everyone else has to work hard for that small group of people just to get the one apple they need to survive.
> This website is simply _enabling_ people who intend to
> violate this (potential) term of their airline ticket
Which might, in some jurisdictions, open the website operators to claims of tortious interference (https://en.wikipedia.org/wiki/Tortious_interference) if the airline can prove that the site operator knew about the contractual term and incited people to breach it.
Hopefully a term requiring someone to actually take a flight is unenforceable - which would give the site operators a defence even if such a term exists.