My typical answer for a security question is something like "39arsrc uyrsrsaulsr8832r" and that's saved in a password manager
Security questions weaken the security of an account; their answers are easily found information that people can just guess.
The problem with this is that the "security" question will often be asked over the phone. At that point an answer of "Oh, I just mash the keyboard for those" is probably going to get an attacker access to your account.
I used to do this and then lost my password file. Fast forward to a call with AT&T. I told them I forgot my secret answers. They offered that it was "a super weird answer," which let me use the "mashed keyboard" line and get in. TL;DR: I think this system is less safe than just making up cars, cities, et cetera.
Still, I expect "oh, it's a random word not related to the question" would clear the human phone-screening layer of verification a good percentage of the time.
I'm still bitter about that. I put garbage in the answer to the secret question because I planned not to forget my password. I didn't forget my password, but Blizzard nevertheless locked me out of my account, for the crime of using a payment card that was listed on my account, but wasn't listed as my "preferred" payment option.
These are supposed to be the very last line of defense for security, including if you lose your password manager. As an exaggerated analogy, imagine that being unable to answer these questions meant your house, car, and life savings were taken from you. That is how important these answers are, except you're "only" losing one online account at a time.
Of course, it's terrible to use personal information that can be known to 3rd parties. It's also bad to reuse the same answers across multiple companies, as a compromise at one means you're at risk everywhere. The reason security questions exist is a good one, but they don't offer enough security when used as intended (memorable, non-random data). The problem is there is currently no better alternative, short of requiring you to tie your legal identity to every account, and having to show up in person with photo ID to regain control of an account you've lost access to.
Anything relying on tech (like a password manager) is a bad idea for the general public. The average person does not have multiple off-site backups to guarantee that the information is physically impossible to lose.
Where they stand in the security line is irrelevant, because their mere presence on a site is already a symptom of a deep level of incompetence and an almost sure predictor of a compromised system. Besides, security is usually chain-like (compromise one node and it's broken), not army-like (compromise one node and you'll have to fight the next).
Besides, most people do not have a favorite color, do not remember the name of their 3rd grade teacher, and have severe doubts about what counts as their "first" pet. Yes, they are intended to solve a real problem, but nothing about them survives any amount of questioning.
For things like house, car, and life savings, I'm perfectly glad to go somewhere with physical ID. Heck, I'd love to see police stations offering this as a municipal service. Lying via internet form is pretty easy. Walking into a building with 100 cops bearing fake ID is a whole different level.
This is a great idea. Not only can the police verify that a given photo ID matches the person in front of them, they can also verify that the ID is valid and unaltered by checking that the details on the ID match the details in the DMV's database, preventing fake IDs from being an issue. This wouldn't be 100% perfect -- maybe a really determined ID thief could get the DMV to issue them an ID in someone else's name -- but it would dramatically increase the risk and make ID theft much harder to scale.
A federal effort to standardize an identity verification service across federal and local offices nationwide would be helpful. The service should be available to any entity (not only banks or financial entities) who wishes to verify the identity of a counterparty. The process and fee should be standardized nationwide, with the fee being break-even and paid by the entity requesting the verification.
Post offices are a good candidate to offer such a service, but would need some work to set up (unlike police agencies, I presume post offices don't have access to DMV databases).
This is much more common than you might think. I believe in Illinois there was some sort of ongoing problem with people at the DMV selling licenses to truckers who didn't actually pass their tests. I'm sure any criminal with a wad of cash could get them to make a fake ID.
Driving a truck is generally legal. Stealing somebody's life savings generally isn't.
This matters because once an underqualified truck driver is on the road, they're going to be hard to distinguish from a normal truck driver. You have to issue a lot of licenses before the pattern of fake licenses becomes obvious enough to trigger an investigation.
Granting fake licenses for serious theft, though, is another matter. Every single one of those will trigger a police investigation. It's much higher risk, meaning it'd be very hard to sustain an ongoing business in fake licenses for theft.
With a password manager such as LastPass or 1Password you only need one very strong password that you, as a human, can remember. The passwords it manages don't need to be human-rememberable; they can have as much entropy as the site allows.
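As a rough sketch of the manager-generated case, assuming Python's standard `secrets` module (the 20-character length is an arbitrary choice for illustration):

```python
import math
import secrets
import string

# 94 printable ASCII characters: letters, digits, punctuation
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def random_password(length=20):
    """Pick every character uniformly with a CSPRNG; no human choice involved."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

pw = random_password()
bits = len(pw) * math.log2(len(ALPHABET))
print(len(pw), round(bits, 1))  # 20 131.1
```

At ~6.55 bits per character, even a modest manager-stored password dwarfs anything a human would memorize.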
> Anything relying on tech (like a password manager) is a bad idea for the general public. The average person does not have multiple off-site backups to guarantee that the information is physically impossible to lose.
2FA of the strong password plus a physical OTP device (like a YubiKey) with one backup key is more than sufficient. Sure, it's not three-letter-agency proof. They can easily break into your house and steal your backup key temporarily, whilst recording you typing in your password, or catch you on the go. But against most criminals (a much more common vector for the general public) this is going to work just fine.
Security questions aren't for security, they're against it. They're a tradeoff between security and usability, in the direction of usability. Assuming you answer security questions truthfully, they weaken the security of your account. It's like having multi-factor authentication, but instead of requiring all the factors, they just require any one of them. That's not necessarily a bad thing, as long as it doesn't weaken the security so much that it's easy to break.
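That OR-versus-AND point can be made concrete with a toy calculation (the per-factor compromise probabilities below are invented purely for illustration):

```python
from math import prod

# Hypothetical independent per-factor compromise probabilities:
# a strong password vs. a guessable security question
p = [0.01, 0.20]

p_all = prod(p)                      # attacker must break every factor (AND)
p_any = 1 - prod(1 - x for x in p)   # attacker may break any one factor (OR)

print(round(p_all, 4))  # 0.002 -> requiring both factors
print(round(p_any, 3))  # 0.208 -> accepting either factor alone
```

Accepting any single factor makes the account roughly as weak as its weakest factor; requiring all of them makes it stronger than either.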
> Of course, it's terrible to use personal information that can be known to 3rd parties. It's also bad to reuse the same answers across multiple companies, as a compromise at one means you're at risk everywhere.
And here's the problem. Many/most sites that use security questions have a dropdown list of acceptable questions and don't let you enter your own. Often the only thing you can do to avoid making your account easily compromised is to make up answers to some of the questions.
The downside is, of course, the usual downside with security tradeoffs that favor the security side of the equation: you may be completely unable to access your account again if you screw this up. And that's also not necessarily a bad thing, if you believe compromise to be a really bad outcome. I think it might be OK to do this for, say, a bank or brokerage account. If you manage to fully and truly lock yourself out online, you'll likely still be able to prove who you are and gain access through some means like visiting a physical branch and showing them your ID. A hassle, to be sure, but if it means that much to you, it might be worth it.
In the end, social engineering is still the biggest problem: other posters in this thread have claimed that they've gotten past the security questions by saying things like "oh, I just mashed the keyboard, that's why my answer is gibberish", or something like that. So there's no way to win, unless perhaps you invent plausible (but incorrect) answers to the questions. "Mother's maiden name? Well, it's actually Jones but I'm going to put in Smith." I imagine a talented social engineer might still be able to get past that, but at some point you just have to acknowledge you've done the best you can.
And it's a shame to lose that feature, but they compromise your security so terribly that you're far better off not using them.
> it is possible to lose them - and that is unacceptable
Ten steps forward, two steps back. I find that acceptable.
The entire security question situation makes me incredibly pessimistic that we will ever get good security. The idea of security questions is so mind-numbingly stupid to me, yet it's widely used. One would have thought that after the Sarah Palin hack years ago everyone would have realised that, but it seems like nobody did. The support agent didn't see my security question and go "oh, that's clever". And that's despite him being a person who deals with these all day; he of all people should realise the overwhelming stupidity.
In a sane world, companies that tell their users to use special characters etc. in their passwords and rotate them, but then encourage them to mess it all up by storing information from their Facebook page as a replacement for the password, would have to pay massive fines. Yet hardly anybody even sees a problem with this.
This situation is so demotivating to me because it makes me think that whatever security mechanism we come up with, well-meaning people will undermine it.
The only way I can think of that somebody could steal only the first few characters of your security answer is by looking over your shoulder at a very unfortunate time. That seems unlikely, and most of the questions they use are predictable from the first few characters when answered genuinely anyway (surnames, car names, streets and towns).
> This was not the last encounter between Bobby Shaftoe and Goto Dengo
Other than being pronounceable I see the exact same requirements for security questions as for passwords. If anything they need to be stronger.
log2(6^5) = log2(7776) ≈ 12.9 bits
Since we pick the words by literally throwing dice, English grammar has nothing to do with it.
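Those dice numbers are easy to verify (assuming the standard Diceware list of 6⁵ words, one word per five dice rolls):

```python
import math

WORDS = 6 ** 5                  # 7776 entries on the Diceware word list
per_word = math.log2(WORDS)     # bits contributed by each dice-chosen word

print(WORDS, round(per_word, 1))  # 7776 12.9
print(round(5 * per_word, 1))     # 64.6 -> total for a five-word passphrase
```

Because each word is chosen by physical dice, the full ~12.9 bits per word are guaranteed regardless of English grammar or word frequency.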
You could probably improve on it considerably by selecting fewer books, and only taking quotes starting at some punctuation mark.
For a naturally throttled attack like here (on the phone) that's fine, but for an offline attack (where the attacker has access to the password hash) that can be cracked within days.
I'm guessing that having every book loaded into a password cracking database, subdivided and indexed by each leading phrase word, is still computationally infeasible for non-government actors.
If I walk into a library, pick a floor, aisle, shelf, book, and page at random (just walk, don't think about it), and use a phrase that is a minimum of 12 words long -- is that more random than what I presume happened here, where someone knew that their target liked that style of poetry and was able to concentrate their search on that genre? ( a "crib" in Bletchley Park terms)
The comments about English grammar are correct - classes of words (nouns, verbs, adverbs, etc) do fall in certain positional order and frequency analysis becomes important. A brute-force attacker would have to work through four types of passwords - the commonly used passwords like "12345" and "letmein", language-based phrases (like my not-great idea), language-based phrases with letter substitution (leet-speak, etc), and then truly random letter sequences.
The logic of passwords is simple, once you realize that all humans are terrible random number generators.
When you allow any part of your password to be chosen by a human, i.e. yourself, you have to assume that the human-chosen part is known to an attacker. The solution is to generate passwords with enough random bits to satisfy current demands. And by “generate” I of course mean to allow a real random number generator (either a computer, or dice, or anything truly random; i.e. something a casino would accept) to choose the password for you. Without any restrictions except a desire to minimize length, you get the classic unmemorable 0vT2GVlncZ4pZ0Ps-style passwords. If you add the restriction “must be a sequence of English words”, you get xkcd-style “correct horse battery staple” passwords. Both are fine, since they contain enough randomness not generated by a human.
But if you yourself choose, either old-style “Tr0ub4dor&3” or passphrase “now is the time for all good men”-style, you have utterly lost, since nothing has been randomly chosen, and “What one man can invent, another can discover.”.
Note: this also applies if you run a password generator and choose a generated one that you like. Since you have introduced choice, you have tainted the process, and your password now follows an unknown number of intuitive rules (for instance, there was a story here on HN some time ago about how people prefer the letters in their own name over other letters of the alphabet), and these rules can be exploited by an attacker.
I'm sure there's some math that could be applied here to determine how much entropy a user loses by selecting from one of n generated passwords. Human intuition in cases like this can often be wrong, as human psychology hasn't evolved to solve problems like this, so please correct me if I'm wrong, but mine tells me that a user choosing a password from whole cloth has much less entropy, when the user is taken into account, than a user choosing a password from a small set of those generated with high entropy.
While the latter is less than leaving it to be chosen purely at random, I think it's much closer to pure random than it is to the one created by the human. It's likely not your intent, but your note comes across as not acknowledging this. Am I reading it wrong? Or are my intuitions wrong? If one were to choose between (a) human-generated or (b) human-chosen from a set of non-human-generated, how much stronger do you think (b) is than (a), and how much weaker is (b) compared to (c) randomly chosen from non-human-generated?
I'm not trying to catch you out here. I'm trying to see how far my intuition works in this case and how to read your note in the context of the rest of what you've said.
So, to answer your questions: your intuition is correct – since user-chosen passwords do not contain any guaranteed randomness, generated passwords are better. How much better depends on the values of X and Y in the formula above. The value of X can, strictly speaking, only be said to depend on the generating algorithm for the passwords, and not on any specific property like length or presence of special characters. Yes, I try to always force myself to choose the first of the generated passwords if many are available. The importance of doing that, i.e. preserving those bits, depends on the size of X; with a large value of X you might stand to lose log2(Y) bits without any real downside.
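That formula can be sketched directly (x is the size of the password space, y the number of candidates the user chooses among; the 94-symbol, 12-character example is my own, not from the thread):

```python
import math

def guaranteed_bits(x, y=1):
    """Worst-case entropy when a user picks one of y offered passwords
    drawn uniformly from a space of x possibilities: log2(x) - log2(y)."""
    return math.log2(x) - math.log2(y)

# 12 random characters over a 94-symbol alphabet, offered one at a time
print(round(guaranteed_bits(94 ** 12), 1))      # 78.7 -> nothing lost
# same generator, but the user picked a favourite out of 20 candidates
print(round(guaranteed_bits(94 ** 12, 20), 1))  # 74.3 -> log2(20) bits gone
```

Choosing one of y candidates can leak at most log2(y) bits, so with a generous x the loss is tolerable.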
The default pwgen(1) password algorithm appears to generate a display of 8 columns by 20 lines of passwords, each 8 characters long, like so:
Uvee5exo aiXae6mi OoR5eiph thoo1Mo3 Ac0quiep woo5Ing7 uh2AiXei poh1Aigh
ab1Mayai aeHaing4 eip0Wae1 Ho0jaeku Ahxah4Ec Kei4daez Gohmaib6 Chisaib3
eiphim5U jiepai8C aeXohN3u SeiDahy2 cee9oiVu kei1Eel2 foht6iuY Kievei6o
Eequ6Aeb eeng9wuS Kog6cie3 sapi7ooP ek9Aitie ohX6eese Eez5oth8 evaeL3oo
gae1caeF io8EiNga ceaxaY6t eiZ1Lee1 Wagh2Bee maPh0een zoBi0Pee Kou8iel9
ahj7Ooph eB9beGhe MieV6pe1 loGhae0F ughueTh1 eBohHae2 Eiv1aaQu ahRohv7b
Iehoo7qu Ga6Buwuh We0UK9Ee gu8ahSoh Ahn2ash8 pee7Airo ey1Faish aeFaiQu1
Einge6ai vi6uWeir eine8ooK Bae0lugh hewu5Hol hohd1nuH ohn2aeVa nei3oo4L
Oob6aira Aij4Gila hieNgih7 Ax5iej7O lohLood6 thoo2ahG Thie6aeh Cee7Aajo
zoot0Ief VaeN4uL5 SaiLa6ie Fii8Xeer uPhoo7os Iew7roh8 Kootu6ei Ohngue7e
xah4aiPh OVeiT0th Ca3ohjae uiCohs0N Quei9eet Xoh5oobo eicaRae2 ahp1Joom
Eequeer5 deiZ5uZa ApooSah4 Ca2wuale Xei1aifa qua1jooR oo9haiJo ie2rei2K
sah4Kai7 Aiphoos3 Di7naip5 uo4sooG3 Aiw7luph ooL6xir0 seo2ooBo shib8eeL
aem7kieJ aphei9Ie uo1ohF9A choh4Noo EijuF5Uy DohmieJ8 op5cieSh Barauk1o
EePhi2el oFabee9i AiGhoP8G yaeZa6ah ca6ooTh8 Houc2ro4 Pi9phee5 Ahng1ief
Eew2Eewu Vu3Wahm6 niep7Wei Gezai2no loR7noh5 aiph0aeT eiW2ap7o aiD6MeSu
ahgh5Uaf ahse4Aid Yaenei5t ooV4mooc HauYey3r pho1uSah uZuy8fie aiTiek8B
osh8Chae ee1Ju2Uo eet4Xo4U cheaw6Ee Ri2eoyei eesooh7X du3Pee0a hi8chohV
ung6Ju7u thahMai1 Cho5ahs0 beipam6A ooSeich0 pohx5Eiy Iene0me8 eBo7aegi
ohn6uaT7 iami8Aef Nooh6yai vaPhae7u aipai6Oe yaiPh0ue apohSh7i aiNgu8zo
These assumptions give us all the information we need to calculate the actual number of guaranteed random bits in a password chosen from this output. There are 7 letters in a password, each a-z, which gives 26⁷ combinations. Then one of the 7 characters is made upper case, which multiplies the number of possible passwords by 7. Then a random digit (0-9) is inserted in a random place (1-8), which multiplies it again with 10 and 8, respectively. The resulting number is
26⁷×7×10×8 = 4497813698560
Now, 4497813698560 possible passwords is equal to log2(4497813698560) bits; i.e. 42.03236104393261 bits.
The number of password choices is 8×20; i.e. 160 different passwords. Our formula above thus gives us
log2(26⁷×7×10×8)−log2(8×20) = 34.71043294904525 bits of randomness if the default options for pwgen(1) is used, and one of the displayed passwords is chosen by a user.
Now, whether 34.7 bits or 42 bits is to be considered high or low is not my area of expertise, and I am given to understand that this changes rapidly over time as computing technology advances.
7 letters a-z which are either upper or lower case, plus an unknown digit at an unknown location, gives:
(26+26)⁷×10×8 = 82245736202240 possible passwords, giving log2(82245736202240) = 46.225006121875005 bits. Subtracting the bits for the 8×20 choices of passwords gives
log2((26+26)⁷×10×8)−log2(8×20) = 38.90307802698764 bits as an upper bound of the security of a password chosen by a user from the default output of pwgen(1). This is a bit more than the 34.7 bits I first thought it was, but not much more. And this is an upper bound; since I can see that the source code does not choose each character completely randomly and does, as you say, seem to prefer lower case letters, the correct number of bits is guaranteed to be lower than 38.9.
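The arithmetic in the last few paragraphs is easy to reproduce; this just re-runs the comment's own model of the output, not pwgen's actual algorithm:

```python
import math

choices = 8 * 20  # 160 passwords displayed by default

# Claimed structure: 7 letters a-z, one made upper case (7 ways),
# one digit 0-9 inserted at one of 8 positions
exact = 26 ** 7 * 7 * 10 * 8
print(exact)                                            # 4497813698560
print(round(math.log2(exact), 2))                       # 42.03
print(round(math.log2(exact) - math.log2(choices), 2))  # 34.71

# Upper bound: 7 letters of either case, plus digit and position
upper = 52 ** 7 * 10 * 8
print(round(math.log2(upper), 2))                       # 46.23
print(round(math.log2(upper) - math.log2(choices), 2))  # 38.9
```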
Or they could go through a few things like that, always giving the excuse that they give false answers until they stumble on the right one.
For sites that force you to set them (and where I care - otherwise they just get random nonsense), and for my bank, I have a set of plausible but false answers I use. Not bulletproof of course, but definitely not googleable and avoids the "I just set it to something random" attack.
"You .. give real answers for your security questions? Seriously?"
I do the same thing: real birthday if it's financial or employment related, but for everything else, I'm a few years older on another date. I often pick a security question that I don't have a real legitimate answer to as well.
City you were born? Just pick any (random/unrelated) city instead of 2DXSDGREDV@#!
It's easier if you have to go through a person (who is usually forced to follow a script), and it's also easier on the phone.
As such, most helpdesk employees will accept the answer "Oh I forgot, I do remember I put some random characters in there"... and your random password ends up not helping you after all.
Nah, "well, it kinda looks like random characters" is information a support rep will give you.
Welcome to social engineering and info escalation.
The random character thing isn't great for this use, it seems, as a result.
There are ~35,000 cities and towns in the U.S., but if you start weighting those by population (and by birthing hospitals and centres), you're going to reduce that count considerably.
There are a lot of lovely and easy to remember names in other countries ;)
There are about 300 in the U.S. of over 100k population (corollary: the other 34,700 locations have fewer than 100k people each, or are at most 10% of the population). A 1/300 chance of cracking a security question on any given transaction is pretty good odds. Particularly if the crack is then reusable.
Another 10% of the U.S. population (roughly) lives in the 10 largest cities alone. That's a 1% likely success rate based on just ten values.
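A back-of-the-envelope sketch of why "plausible fake city" is weak (the 10% population share is the comment's rough figure, not census data):

```python
# Toy model: roughly 10% of the US population lives in the ten largest
# cities, so blindly guessing one of them against a truthful
# "city of birth" answer succeeds about 1% of the time.
top10_share = 0.10
per_city = top10_share / 10    # average hit rate of one blind guess

print(round(per_city, 2))      # 0.01 -> ~1% per attempt
print(round(5 * per_city, 2))  # 0.05 -> ~5% across five attempts
```

Against a throttled phone channel that may sound small, but the attacker picks the odds, not the defender.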
The point being that "legitimate sounding but fabricated" may still not be a particularly good option.
You don't have to answer the challenge with a 100% truthful, legitimate, accurate response, because the point is to NOT provide an answer that could be guessed by framing the response in truth, or even reality. So long as you pick one that matches what you've pre-seeded, use a random word/phrase as your response.
q: What is the name of your favorite teacher?
a: bumble bees in the desert
The amount of stupidity needed to build such a system is staggering.
Still, if that helps in one case per thousand, it's still better than none.
> Do NOT give ANY hints; only accept an EXACT answer; I will NEVER say I "forgot" this answer. 2DXSDGREDV@#!
Maybe add an "I test you occasionally." :D
If there's a length limit, trim and remove parts of that as you see fit. For example:
> NO hints! EXACT answer! NO exceptions! 2DXSDGREDV@#!
I'm going to do this at a few places, then call to test them :D.
Likewise DOB and SSN have been long established as auth secrets.
They never should have survived the transition to the internet.
Changes from "Jane Doe" to "Jane Smith"
Good times. Except some of my friends actually sent out some money. I'm pretty sure I know who did it.
Since then I enter garbage in these security questions. Better lose my account than that.
I didn't believe you when I read this, but you are right. => https://krebsonsecurity.com/wp-content/uploads/2016/08/unite...
> Yes, you read that right: The answers are pre-selected as well as the questions. For example, to the question “During what month did you first meet your spouse or significant other,” users may select only from one of…you guessed it — 12 answers (January through December).
> The list of answers to another security question, “What’s your favorite pizza topping,” had me momentarily thinking I was using a pull down menu at Dominos.com — waffling between “pepperoni” and “mashed potato.”
Source: United Airlines Sets Minimum Bar on Security => https://krebsonsecurity.com/2016/08/united-airlines-sets-min...
Video => https://www.youtube.com/watch?v=vmrdLAp7wSw
Fortunately, my bank (Banc Sabadell in Spain) doesn't disable pasting. Instead, the password is restricted to a maximum of 6 digits for login. Yay banks!
One place I had an account has a password input that restricts all of those, so it's like an 8-10 character string of all capital letters. I don't understand it at all.
"What city were you born in" == "city"
"what was the name of your first pet?" == "pet"
Mom's maiden name: InfectedPussyPimple
How she got dad, I'll never know!
Granted, this most likely was caused by that other Doug providing my email address to the airline, but the airline is at fault too for assuming that access to a given email address is proof of identity. That's a very common mistake, often made intentionally to provide a more "user-friendly" experience. Had I been malicious, I could have caused that other Doug a lot of un-friendly grief.
I was not able to see any contact information on the reservation, and I didn't have full access to his account. (I don't know if a "Forgot Password" request would have given me that, though it probably would have.) I contacted the airline customer support to tell them they had the wrong email address on the reservation and they should contact their customer through some other means if they could. I think I got a form-letter thank you and never heard from them again, but I did get a few more boarding passes for a while.
I also get a lot of online shopping order/shipment confirmations, and plenty of personal correspondence. I try to tell the senders to fix their address books, and when I get a CC with the real address I contact the other Dougs too, but most of the time there's no response. I've had to set up a filter that puts all email with TO addresses that aren't the one I use into an "Other Dougs" folder, which I treat like spam.
I get mail from a bank for someone who misspelled their email but their name is very close to mine.
I called the bank, reported that I was getting their email and they tried to sell me their identity theft service. ( Give us your SSN to check to see if you ... )
American Express didn't care that one of their subscribers personal information wasn't getting to their customer, but wanted to sell me service.
This is one of my repeat-offenders. I see a lot of email out of Kingston with this same variation on my email address, and I've tried many times to reply and get people to tell him he's using the wrong email address, but to no avail. This has been going on for years.
This is most likely intentional.
Most business travel gets booked by assistants / travel agencies / client reps / etc. They are going to use their own account when booking tickets, and then forward reservations or boarding passes to the actual passenger. That passenger then wants to for example reschedule in a hurry when a meeting overruns, or change seats or meal choice without having to explain their seating preferences over the phone (is 25C still available? No? Then get 27A).
Security-wise it would be better to have some sort of delegated-permissions system, where the travel agent can add email addresses that are allowed to access the booking; you then have to create an account with the airline and prove that you own that email... but I don't see the airlines pissing off their most profitable customer segment with extra hassle to add protection against misforwarded emails.
- Thailand holiday itineraries and airline tickets
- A PayPal money request for $1800
- Congratulations from someone's godfather that I am now able to play the opening riff of AC/DC's "Hells Bells"
- South African real estate quotes
- A bar mitzvah invitation
- A reply to a Thanksgiving invitation sent by someone else
- Inquiries about racehorse sponsorship
- South African Taser training course booking confirmation
- British Heart Foundation cycling team invitations from a BBC reporter
- Complaints from an Ebay purchaser that I'd sent them a Nutribullet with a broken blade
- Confirmation that my NJCAA hardship application had been granted
- Pictures of 5th graders riding trail bikes in Eagle Lake, Maine
- Solicitations from the Greater Palm Harbor Area Chamber of Commerce to run a stall at the 13th Annual Palm Harbor Parrot Head Party
- Sports tipping results
- House painting estimates
I'd be living a much more exciting life if all of these had been intended for me.
Whenever somebody registers on any website using it, I use the recovery options from the emails they send me to disassociate my email address from their accounts (I never keep access to those accounts).
For direct / personal emails (usually in Spanish) or anything else with some customer support involved, I just send a short reply in English stating that they've got the wrong person and email address. Then I usually spam-flag everything not in English (I'm only a little sorry for doing that).
There was this one day recently when somebody kept re-registering on this one site about a dozen times, and I kept resetting the password because they used my email every time. I have to guess that they eventually figured out their mistake, because it stopped. I hope...
He lives in Texas and teaches a sport. I got a reminder that he had to visit the doctor a while back. I replied, got a real human, and asked her to tell him he was giving out the wrong email address. I don't think the message got through; something new showed up later.
I had never considered doing anything to mess up something he had done (like canceling his appointment) to get his attention.
Overall it's not that big of a hassle. It peeves me a bit, but I guess I'll let it continue.
Something like a QR code saying "this stuff in that position relative to this code is sensitive", giving the user a prompt saying "this was redacted; undo?"
They obviously didn't know the barcode contained the precise house address of the recipient (presumably the user's home address). Anonymization is hard!
Large mailers (billions of pieces per year) get a postage discount by applying such barcode to all the pieces. (edit: any mailer can get the discount. it just adds up for the larger mailers) Those pieces are delivered to USPS facilities, dumped into the auto-sorters and end up at the local post office with no human handling.
It should not be used for anything else except handling mail.
It would be simple to run barcode detection over any posted image and blur the result (maybe prompting the user in case they actually wanted to post one).
Almost any barcode is assumed to be private information, even a barcode on a store receipt can be used for return fraud in certain circumstances.
Saying 'don't post barcodes online' is all well and good, but that message will never reach the general public.
You don’t print a paper with all the information you need to hijack accounts. You don’t use ‘secret questions’. You don’t treat birthdays as secrets. You don’t use a number as a secret if it’s on the ticket.
Edit: An hour later, driving and thinking about it, I think it is the right move by the airline. The risk is small because identity theft and authentication hacking are not practical in this case. The airport is a highly controlled environment, and thus someone pulling this off has a higher chance of getting arrested. In contrast, you can't just take anonymous IPs on the Internet at their word. You have to carefully authenticate them, and even then you can still have issues.
A friend of mine was once travelling to Bali and she posted pictures of the boarding pass on Twitter. It was a few weeks after the CCC talk by Karsten Nohl and Nemanja Nikodijevic (https://media.ccc.de/v/33c3-7964-where_in_the_world_is_carme...), so I warned her that it might be not the best idea to post these images. She was very self-assured and replied that she's almost in the plane so there's not much risk.
I asked if it would be OK for me to test it and she was fine with that. I could log in to her booking without problems (the booking code and her name, which I knew anyway, were on the images). In the system I saw the other person she was travelling with. I could change seats and names of passengers. I think I could even change the date of the flight back (but I'm no longer sure about that).
But this is why I'm pretty sure that if you've booked together, this might have been visible in the booking system.
It seems to me that it would be trivial to squiggle on your boarding pass yourself, and then claim that you've been checked already. I wonder how much security theatre is happening there, too.
Having spent some time working on staff management systems in airports, I can say with some confidence that (at least in Australia) most ground staff will immediately flag someone not at least offering their passport, and/or trying to talk their way out of needing to do so, as suspicious.
And let's not forget that if your entire plan was to get on a plane under a fake name, it's a hell of a risk to just hope that you end up in a situation where some chap is squiggling on boarding passes.
If the flights were booked together, I don't think this is out of line.
But that doesn't mean a smile and polite word won't get you around that...
The risk of someone doing real harm there is quite low ...
Not that I have any expertise in this particular situation, but not every 'threat' when armchair analysed in isolation is a threat when put into its correct domain and context.
The system is not set up for security, only for convenience, and assumes the world of 80s-90s regulated travel, with never-full planes and no change penalties. At the time, (US) airlines were even honoring competitor tickets at the gate (assuming they had space, which they almost always did) -- show up with an AA ticket at a United gate and get it swapped for a United flight by the agent on the spot. Gratis.
The system had lots of problems, but malicious changes were not one of them.
No, the problem as outlined in the post is people not thinking through what they are sharing on social media.
I don't think that's really the case; I've deliberately embedded QR codes in images on Facebook. Your feature would be very annoying if it could not be toggled off.
Something like “This image contains the following info: <Sensitive info you didn't mean to share>. Would you like us to blur that out? (Y/n)”
User: srsly fb? OK
You're basically evaluating the cryptographic merits of CSV.
I am not. I am weighing features vs unintended harm. Yes, the airlines shouldn't be including this data in the barcodes. It is improper to expose end users to this liability. And simply telling them not to expose them isn't a solution.
But if FB can detect harmful barcodes in an image, by all means they should remove the photo.
This is no different than Github scanning for AWS creds or MongoDB passwords in repos.
But Github doesn't do that either.
Amazon pays a contractor to scan Github repos for keys.
Case in point: app sandboxing. I, for one, don't want it, but it's everywhere.
Stuff like this should be configurable or overridable, especially when it has legitimate uses.
There will always be a balancing act between features, security and usability; to ram the needle one way and say 'tough luck' to everybody else is not a solution, because then people will try to find ways around the block.
For one feature that means we have
1/0 (two states).
For two features, 1/0 × 1/0 (four states).
For ten features, 2^10 (1,024 possible states).
Now as a programmer that frightens me, because the number of possible paths through the system has become incredibly large to handle, and it's a UX/UI disaster unless handled very carefully: you end up with features that interact with other features (set a do-not-back-up flag on a file, then a separate always-back-up-all-files flag) in ways that are unpredictable for us and for users.
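The arithmetic above is easy to check directly: each independent on/off flag doubles the configuration space. A tiny illustration (the flag counts match the examples above; nothing here is from a real system):

```python
from itertools import product

# Each independent on/off feature flag doubles the configuration space.
for n_flags in (1, 2, 10):
    states = list(product((0, 1), repeat=n_flags))
    print(f"{n_flags} flag(s): {len(states)} states")  # 2, 4, 1024

# A test matrix covering every combination grows just as fast, which is
# why interactions like "do not back up this file" vs "always back up
# all files" tend to slip through untested.
```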
You see this complexity in things like hierarchical role based permission systems and the like.
Not sure what the solution is, but I can understand why programmers and users push back on adding features -- not least because, as a programmer, I know that doubling the complexity for 1-5% of users just seems like a poor trade-off in general. There are of course specific cases where it makes sense, e.g. when that 5% of users is roughly the percentage who are paying for your product.
Thank you for pointing this out, it is a very important thing to realize and it applies to configurables, global variables and feature switches alike. The more you have seen of the guts of complex systems the more amazed you will be that they work at all.
Facebook already scans images, probably even for QR codes; they could prevent users from harming themselves. And airlines shouldn't expose this info in the first place.
Do you really believe the problem here is FB? Do you really believe FB should be the arbiter of what incidental information their users' pictures can and cannot convey?
And even if they did parse pictures for sensitive data, do you believe that FB, given what we know about them, would simply redact that information from photos and then discard the sensitive data? I think we can safely assume that FB doesn't discard data on individuals.
Since there's no obvious single entity to blame (and even if there is, so what?), we should be working together to prevent and reduce attacks like this. Apart from anything, Facebook popping up a warning about a barcode would go a long way to making people realise that they contain easily readable, and potentially private information.
Also, given how well image classifiers work these days, how hard is it to do the same for photos of (physical) keys, bank cards, and other commonly posted things?
Aren't they already doing it for other stuff they don't want to see online?
Surely a nipple isn't a barcode, and the legal implications aren't the same. And people sharing personal stuff ARE responsible for sharing that stuff.
So I guess it shows us again that FB is not our friend :)
It's a hilarious perversion of the technology to use computers to blur the thing we created so computers could read.
Wouldn't the most common barcode be the EAN-13, which is not private information?
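Right -- an EAN-13 carries nothing personal, just a 12-digit product number plus a check digit anyone can recompute. A small sketch of the standard check-digit scheme (odd positions weighted 1, even positions weighted 3); the example number is the one commonly quoted in references:

```python
def ean13_check_digit(first12: str) -> int:
    """Compute the EAN-13 check digit from the first 12 digits.
    Digits in odd positions (1st, 3rd, ...) are weighted 1,
    even positions are weighted 3."""
    total = sum(int(d) * (1 if i % 2 == 0 else 3)
                for i, d in enumerate(first12))
    return (10 - total % 10) % 10

# 4006381333931 is an often-quoted valid EAN-13 example.
print(ean13_check_digit("400638133393"))  # -> 1
```

Nothing in that payload identifies a person, which is why product barcodes and boarding-pass barcodes are very different privacy problems.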
And what the OP article is basically copying: https://www.theverge.com/2017/1/10/14226034/instagram-boardi...
I don't see this changing anytime soon (although there are some tests to move towards facial recognition).
I told her numerous times it's not a good idea, but she never listened! Then I told her publicly, on a photo of her car, that she should at least blur out the plate number, which created a long trail of comments where basically all her friends thought I was weird and creepy, and wondered why I would be warning her (perhaps I want to commit some crime??). No amount of explaining helped. Even saying that cops would tell her the same thing got me a bunch of her "friends" answering "you ain't a cop, bro". And then one fine Friday I saw her posting that they were leaving for another state to visit family. Boy, what a discovery when they came back Monday morning: their house had been cleaned out of every possible valuable belonging. And the thieves must have come with a truck large enough to fit that 85" TV screen.
Not long after, she removed me from her FB, even though I never said "told you so".
The bottom line is I don't believe people will learn not to give away clues online, and I think in this day and age there should be a mandatory one-hour lesson at school on what NOT to post online.
The real question: can we politely convince these services to display safety warnings and blur the sensitive bits? Want to be proactive about it? Help develop a plug-and-play library that services can use to accomplish this.
Doesn't seem to stop them from trying to find naughty photos and block them.
This is not their job and not their responsibility, period.
Relevant xkcd: https://xkcd.com/463/ ("You're doing it wrong")
Imagine you break into your friend's car and rewire the stereo system so the left speaker doesn't work. Then you say, "Yo, I broke into your car and rewired things. The locks on this car are faulty, better let the car manufacturer know. I should contact them myself and collect my bug bounty." And when your friend, a decent chap, thinks you're joking, and finds out you're not kidding, is his response supposed to be "Oh shit, you're right. You could have just [rewired my speaker system]. This is crazy."? Or would he no longer be your friend, and probably report you to the police?
Bad comparison. Breaking into a car is a locally constrained high-risk attack vector.
This is a low-risk unconstrained attack vector. A bored person anywhere in the world could fuck their shit up with no risk or consequence.
I always feel that pointing out vulnerabilities is okay. Penetrating to point it out is another thing altogether. Continuing the analogy here would be pointing out to your friend that they shouldn't leave their car unlocked rather than entering and making a mess of things.
And sure, a bored person anywhere can do lots of damage, and maybe your damage won't be as bad, but just the act of going through someone's belongings is unwelcome.
Also, I feel there's a huge difference between penetrating the systems of orgs that have dedicated security teams... and picking on a private individual to make a point.
Someone could write a bot to scrape Instagram for photos with #airport #[name of airline] #[airport code], identify photos with tickets, and steal information that way.
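The hashtag-filtering step of such a bot is trivially simple, which is part of the worry. A hedged sketch (the watch lists and post data are invented; a real bot would also need image download and barcode decoding, e.g. via a library such as zxing or pyzbar, which are omitted here):

```python
# Sketch of the hashtag-filtering step of the hypothetical bot.
# Posts are plain dicts; real scraping and barcode decoding omitted.
AIRLINE_TAGS = {"#qantas", "#united", "#klm"}   # invented watch list
AIRPORT_CODES = {"#syd", "#sfo", "#ams"}        # invented watch list

def looks_like_boarding_pass_post(tags):
    """Flag posts tagged #airport plus an airline or airport code."""
    tags = {t.lower() for t in tags}
    return bool("#airport" in tags
                and (tags & AIRLINE_TAGS or tags & AIRPORT_CODES))

posts = [
    {"id": 1, "tags": ["#airport", "#KLM", "#holiday"]},
    {"id": 2, "tags": ["#beach", "#bali"]},
]
hits = [p["id"] for p in posts if looks_like_boarding_pass_post(p["tags"])]
print(hits)  # -> [1]
```

From there, every matching photo just needs one barcode decode to yield a name and booking code, so the attack scales without the attacker ever touching a specific victim.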