I'm always curious how a bug like this ships. I mean QA & Testing should catch it, sure. But even before then. Some engineer wrote code for FaceTime that has it open the microphone before the call is accepted. And transmit the audio over the network before the call is accepted. Who did that? And why? I'm not suggesting malice but I do wonder at the lack of defensive programming.
My guess is that it’s an unfortunate combination of several problems:
- audio and video capture has to start before the call is actually established at the signaling level, in order to minimize call establishment delay. Audio may be going through Bluetooth, for example, and waking up the handsfree mode of BT may take 1-2 seconds
- most of the group calling functionality was developed by a separate team, and group calling signaling may be loosely integrated at the UI level: once the UI triggers a switch to a group call, a whole new library may kick in internally and have the current 1-1 call state transferred to it
- when this “transfer” happens, the state of the first 1-1 call gets affected at either the local or remote side (due to signaling), which leads to either the remote side thinking the call was answered (a lack of protection in the call signaling state machine to ensure it was the user's UI action) or the local side thinking it's OK that the remote user answered the call (in which case FT must have streamed audio even during the 1-1 call establishment phase)
- lack of a check for your own phone number being added to a call. Having the same IDs/tokens twice in a group call may lead to an unexpected call signaling state machine transition
- lack of manual testing focused on edge cases (the flow described to repro the bug may not be the main flow by which users start group calls on FT)
I never worked at Apple, but I built VoIP stuff for the past 20 years.
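The guard I'd expect at the signaling level can be sketched roughly like this (Python with entirely made-up names, nothing from FaceTime's actual stack): capture may start early, but nothing leaves the device until the state machine says the call was accepted.

```python
from enum import Enum, auto

class CallState(Enum):
    DIALING = auto()
    RINGING = auto()
    ACCEPTED = auto()

class Call:
    """Toy 1-1 call: capture may start early, transmission must not."""
    def __init__(self):
        self.state = CallState.DIALING
        self.capturing = False

    def start_capture(self):
        # Start the mic/camera (and even the encoder) early to hide
        # hardware warm-up latency, e.g. waking a Bluetooth handsfree device.
        self.capturing = True

    def remote_accepted(self):
        # Should only be reachable via an explicit user action on the
        # remote side -- the protection that may have been missing
        # from the group-call transfer path.
        self.state = CallState.ACCEPTED

    def send_frame(self, frame: bytes):
        # The defensive check: "capture is running" is NOT enough to transmit.
        if self.state is not CallState.ACCEPTED:
            return None  # drop locally, nothing leaves the device
        return frame

call = Call()
call.start_capture()
assert call.send_frame(b"audio") is None   # still ringing: dropped
call.remote_accepted()
assert call.send_frame(b"audio") == b"audio"
```

The point is that the early-capture optimization and the privacy guarantee can coexist, as long as the transmit path checks the state machine rather than the capture flag.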
latency of local hardware is much lower than cellphone/internet latency... so even if you bring local hardware latency to 0, you still have major network lag... I know you are kidding, but this is ridiculous
They supposedly have these chips that make your phone much more secure and they can't get stupid stuff like this right? LOL ... GO APPLE. My trust level was already close to zero before this...
> leads to either remote side think that the call was answered
Maybe that's triggered by adding your own number? Since you're clearly on the call already, your own number is obviously going to answer immediately and that kicks the whole call into "active" (since you presumably want a call to become active when more than one person has answered) without considering that you've actually got A+A+B instead of A+B+C.
maybe - it's hard to tell, and I did bring up that other option as well. It's just that when a 1-1 call upgrades to a multiparty call, there should be a lot of new stuff going on at the signaling level to convert that 1-1 call to multiparty - and chances are it's a combination that leads to this bug: a call gets upgraded to multiparty AND a number being added is already part of the call...
> audio and video capture has to start going before call is actually established at signaling level, in order to minimize call establishment delay. Audio maybe going through Bluetooth, for example, and waking up Handsfree mode of BT may take 1-2 sec
As a user, while I can accept capture starting before I answer, I cannot accept sending. I understand how it helps the speed of establishing calls.
But it means the only thing needed to spy on me from that is a software change ON THE OTHER SIDE. No way to know from my side if I'm good or not.
But your comment is missing the point. The parent commenter said it needs to start recording audio and video earlier, and a combination of other circumstances causes the device to send the data. Nobody argues this is acceptable.
I am not saying whether sending happens in the establishment phase or not - it's just that once your AV capturing has started and encoding is going on, it's only a matter of dropping the actual AV packets, or sending them, at the network stack/jitter buffer level. By the way, if not for privacy/security, it's actually very useful to start sending the AV stream over the network, to pre-heat the network and test its throughput. For example, cellular data bitrate ramps up only when you actually send data, and it takes some delay for the radio to ramp up to the more power-hungry levels and for the network to even test what's possible. Also, estimating network bandwidth at the application layer requires measuring the round-trip time for a few seconds...
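To make the drop-or-send decision concrete - this is a hedged sketch with invented names, not any real stack - the gate could even keep the link warm with same-sized padding while withholding real media:

```python
def gate_av_packet(packet: bytes, call_accepted: bool, preheat: bool = True):
    """Decide what actually leaves the device for one encoded AV packet.

    The encoder always runs so the pipeline stays warm; before the call
    is accepted we either drop the packet or substitute same-sized
    padding, which lets the cellular link ramp up its bitrate and lets
    the sender measure round-trip time without leaking any real media.
    """
    if call_accepted:
        return packet                      # normal case: real media flows
    if preheat:
        return b"\x00" * len(packet)       # warms the network, carries nothing
    return None                            # strictest option: send nothing

assert gate_av_packet(b"voice", call_accepted=True) == b"voice"
assert gate_av_packet(b"voice", call_accepted=False) == b"\x00" * 5
assert gate_av_packet(b"voice", call_accepted=False, preheat=False) is None
```

The padding variant gets the network pre-heating benefit described above without the privacy exposure; whether any real product does exactly this is an assumption on my part.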
It reminds me of this MacOS bug from last year, where simply hitting the login box over and over with no password would eventually bypass the security entirely:
This bug only applied to grub authentication, which isn't a widely used feature. And you could achieve the same result with boot from disk/USB if that is enabled.
The vuln doesn't give you access to the actual accounts on the computer.
Yes - from what I recall there was not even a pretense of security. Everything was just unencrypted FAT (VFAT rather than FAT32), and if you logged in as one user, all other users' data was clearly visible - it was just a means to have your own user workspace and customisations applied. Windows 95 and everything up to (not including) XP was a toy OS for home users... If you wanted "grown up" features you had to go for NT.
I believe that was by design: the dialog was an opportunity to authenticate with the domain. If you just wanted local access you could hit cancel. Remember Win9x was not a secure OS itself.
Pretty sure I've seen a similar trick on XP or later as well. (I learned it from someone I didn't meet until long after I last saw a 95/98/2000 machine.)
I remember an article somewhere about these kinds of bugs. A lot of medical hardware/software combos are/can be compromised. And here comes the problem: do you disclose the vulnerabilities since it means potentially killing people? How long do you wait before manufacturers acknowledge and fix the problem (and they often don't)?
So yeah, these types of vulnerabilities are very very scary.
Honestly, I wouldn't put that in the same class of bugs as those that preceded it, because if the attacker removed the HDD he would have access to your contents anyway (unless the HDD is encrypted), and it's not a quick and convenient process either (unlike tapping backspace multiple times).
That said, I'm not saying this bug should go unfixed either.
This is a very naive estimation of what might happen when someone has access to all your running software. Since it is Linux, executables are loaded into memory and can keep running even after they are deleted, along with their runtime dependencies.
Anything that caches anything to anywhere other than disk will be accessible. Your memcache, your redis, some databases, keychains, your non-userdata browser sessions.
I'm aware of that but it's a moot point because you'd be popping the storage device back in anyway. Which is why I didn't address that in my previous comment.
In any case, half the examples you've provided there are server specific and you really shouldn't be allowing untrusted physical access to your servers (nor running Xorg to be honest).
It's actually the opposite situation. Slackware's policy is to stick to the original software as distributed by the author with only minimal patching where necessary, so if a particular program (say, KDE) has a big support community, Slackware benefits from all those eyes on the code. This leads to a very mature and stable system at the expense of running older versions of software (though there's always the -current branch for those wishing to live on the edge).
Beyond this, the small but dedicated Slackware team is working daily to find and patch bugs when they do appear. You can look at the changelogs for examples of that workflow[1].
There's no such thing as a software project without bugs, but Slackware is consistently one of the most stable and robust OSes out there.
I remember turning on my iPhone and briefly seeing a picture of myself from a time when I had not taken any photos of myself. In fact I had not been using the front facing camera at all.
I think user ‘sixothree’ was stating that a picture would show after unlocking the phone, not after a call. I myself have had it happen several times a couple of months ago and thought it was really creepy. It almost seemed as if someone was using the front camera without my knowledge. I thought it was a glitch or something at the time.
I remember some people discovered that you could kill the xscreensaver lock screen on Debian with Alt+SysRq+F some years back. Well, a decade back actually — 2009.
A few years ago, I discovered that if someone was running dual monitors and using XScreenlock, you could unplug one of the monitors and it would bypass the lock screen. I have no idea if this is still possible; I've not used XScreenlock since then.
On Windows 10, if you have dual screens and unplug one while the screen is locked, it will reconfigure the displays and give you a flash of what was under the lock screen. Hope you didn't leave anything sensitive on your screen!
I find those bugs less puzzling because the timeline makes sense. You're not logged in, there's a prompt, you're logged in. Obvious bug in the prompt, but the A happens before B happens before C order is there. Call comes in, audio is recorded, call is accepted is not the expected order. I could imagine a bug where declining the call still accepts the call, because that still obeys the proper ordering, but this bug does not.
I don’t use FaceTime often, but isn’t there a "preview" of the camera feed during the prompt? I guess so that the user can check if he’s looking decent before engaging the call.
The bug could then be that the feed is sent over the call too early instead of being used solely for this local feedback.
From the details, it sounds like "adding" the caller to the call before the call recipient accepts probably puts it in a weird state. Could be some kind of off-by-one error, where for some purposes Participant 2 is the caller and for some it's the recipient.
Yes, I experience this routinely. As do many other people at my company.
In particular it happens when attaching external monitors while the screen is locked. There's a flash of the unlocked desktop.
I am guessing this is because the screen lock is an application drawing over top of the monitor, like XScreensaver on Linux does. A more secure-by-default architecture would have screen locking built into the display server at some lower level: If the screen is not unlocked, it will not allow the data to be passed to the GPU. It's easy for me to arm-chair architect though.
There was one guy on reddit who had a scare when his computer flashed a picture of a dead person when shutting down, and he thought his computer was haunted. It turned out to be a frame from a youtube video he was watching earlier. It may be that macOS is not that good at clearing out GPU memory sometimes.
I have the exact opposite problem. When I reboot after system updates, half a dozen YouTube videos in Chrome tabs start playing over each other on the password screen. They get through about 20 seconds before I can stop the last of them.
Audio only on the password screen, but all audible.
I’ve long had a similar problem on iOS. Whenever I return to an app, I briefly see an image from a few minutes ago. Which can be bad if I’m showing the phone to someone and that old state was exposing sensitive data.
I used to see this regularly but haven't noticed it in a long while. I always assumed they were doing something like a double-buffering type trick and switching framebuffers before wiping stale content from the destination.
Possibly a product owner trying out the latest build, receiving a call, accepting it, and then waiting for the call initiator to receive the message that the call has been accepted, and then start sending data and asking:
"Why doesn't it take X seconds before I can start talking".
To which the engineers possibly explained the reasons and the product owner saying:
"But I want it instant, let's bypass all this extra stuff and get a proof of concept instant answer working"
To which the engineer said:
"But we'd technically be sending data before the call has even been accepted"
To which the product owner said:
"That's okay, the user can't actually see that data, let us just get this in for now, we can worry about the security/privacy side later".
To which the engineer said "but, but, but" saw the product owners eyes glaze over and just made the commit:
Commit 1279: Remove very important security/privacy feature of ensuring no data is transmitted until the call has been accepted. This is against my best judgement; do not come to me when this blows up, please speak to the product owner.
You hear this excuse all the time, don't FAANG employ the world's very best developers?
Maybe their code is a mess for orthogonal reasons - management, profit-motive?
Aside: I thought I'd heard devs have automated analysers that step through and find all possible code paths, allowing complex code to be audited for security issues and such? Presumably that's how these sorts of bugs should be found in testing.
Since they have an impeccable interview process that only selects the brightest... they may all be too busy implementing linked lists and inverting binary trees instead of actually delivering a working product.
> don't FAANG employ the world's very best developers?
People have to stop putting these types on a pedestal. Some of the least intelligent people I've known have worked for some very big names. You shouldn't trust someone based on who they work for or what name is attached.
And some of the world's very worst. There are not tens of thousands of world-class developers to hire in the first place, and those who exist would be focused on much higher-level details than implementing basic features and maintenance.
That gruntwork requires solid, reliable workers with experience, but the current screening processes do more harm than help in getting that talent.
I think this is very close to spot on, though the version I've heard from developers involved with mobile involves VP's using the app/feature once it's been deployed: "Why is my group call taking 20 seconds to connect, this is unacceptable!". Fire drill ensues.
It generally happens because you don’t follow a defined state machine. An example of how this might happen: when starting the call, the microphone isn’t opened. Then you add someone else to the Group FaceTime, and the event handler for that didn’t stop to consider whether the call is active (it just assumed it is) - now the code for that handler opens a new port to the microphone so that it can encrypt the audio stream differently for that recipient.
Super easy and not remotely malicious. It’s a failed state check.
The actual bug here might be different but that’s an easy example. But it may also effectively be the bug since all the examples mention adding yourself to the call.
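A toy version of that failed state check, with entirely hypothetical names, just to make the shape of the bug concrete:

```python
class GroupCall:
    """Minimal group-call model: adding a participant may open a mic stream."""
    def __init__(self, active=False):
        self.active = active
        self.open_streams = []   # participants we're sending audio to
        self.pending = []        # participants waiting for call setup

    def on_participant_added(self, participant, check_state=True):
        # check_state=False reproduces the buggy handler: it "just assumes"
        # the call is active and opens a mic stream unconditionally.
        if check_state and not self.active:
            self.pending.append(participant)   # defer until the call is up
            return
        self.open_streams.append(participant)  # mic audio starts flowing

ringing = GroupCall(active=False)
ringing.on_participant_added("alice")                    # fixed: deferred
assert ringing.open_streams == [] and ringing.pending == ["alice"]

ringing.on_participant_added("bob", check_state=False)   # buggy: audio leaks
assert ringing.open_streams == ["bob"]
```

One missing `if` in an event handler written by a different team is enough, which is why this reads as a plausible (if unverified) failure mode.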
I did a bit of WebRTC development for video chat, and the state machine for that is one of the most complicated I've ever dealt with. Even household name-brand commercial providers don't handle all of the edge cases. It took me about a week to get it right. Session negotiation gone wrong can easily cause audio to be heard before the call is established (and this will certainly happen with the naive implementation -- even Google's own reference implementation had issues).
If you're curious, check out this flowchart slide from a Google I/O WebRTC talk:
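For a flavor of why it's so fiddly, here's just the core of the spec's signalingState machine, as a simplified subset (the real machine also has pranswer states, rollback, and glare handling):

```python
# Simplified subset of the WebRTC signalingState transitions.
TRANSITIONS = {
    ("stable",            "setLocal(offer)"):   "have-local-offer",
    ("stable",            "setRemote(offer)"):  "have-remote-offer",
    ("have-local-offer",  "setRemote(answer)"): "stable",
    ("have-remote-offer", "setLocal(answer)"):  "stable",
}

def apply_op(state: str, op: str) -> str:
    """Advance the state machine, rejecting illegal transitions."""
    if (state, op) not in TRANSITIONS:
        raise ValueError(f"illegal: {op!r} in state {state!r}")
    return TRANSITIONS[(state, op)]

# Happy path on the answering side:
s = apply_op("stable", "setRemote(offer)")
s = apply_op(s, "setLocal(answer)")
assert s == "stable"

# Glare (both sides offering at once) immediately falls outside this
# simplified machine -- one of the edge cases providers get wrong:
try:
    apply_op("have-local-offer", "setRemote(offer)")
    assert False
except ValueError:
    pass
```

And this is only session negotiation; layering media gating, renegotiation, and multiparty upgrades on top is where the week of work goes.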
I always wondered why FaceTime Audio has almost zero call establishment time on iOS, as opposed to WhatsApp or FB Messenger where you need to wait half a second before the audio feed kicks in after answering. Guess this is the reason!
This sounds horrifying. If I use a hacked/modified FaceTime client, does this mean I'll be able to hear and see the other end before they pick up, even after the fix?
I wouldn't call it E2E encryption if only one party in the world has the key. E2E generally means both sides have the encryption key. But lolc points out the real problem with that scheme.
I think it is fair to say all applications must be written to assume the source code is opensource / can be modified / recompiled. A recompiled app should, in theory, only sacrifice the security of the person who did it, not the remote.
If FaceTime is implemented the way the parent comment mentions (which it very well may not be), I don't see why you couldn't make a client that performs a proper key exchange to set up an end-to-end call and simply grabs the video instead of hiding it behind the "dialing" screen.
"The initial FaceTime connection is made through Apple server infrastructure that relays data packets between the users’ registered devices. Using APNs notifications and Session Traversal Utilities for NAT (STUN) messages over the relayed connection, the devices verify their identity certificates and establish a shared secret for each session. The shared secret is used to derive session keys for media channels streamed via the Secure Real-time Transport Protocol (SRTP). SRTP packets are encrypted using AES-256 in Counter Mode and HMAC-SHA1. Subsequent to the initial connection and security setup, FaceTime uses STUN and Internet Connectivity Establishment (ICE) to establish a peer-to-peer connection between devices, if possible."
I'm not doubting that FaceTime is end-to-end encrypted; I'm questioning whether FaceTime sends data before the call is accepted, as speculated above.
Yeah, if you have a hacked iMessage client you probably could retire from bug bounties. At any rate the bug in question would pale in comparison to a rogue iMessage client.
Modified clients are definitely possible - see the recent Project Zero research into WebRTC vulnerabilities, which included a look at FaceTime and discovered several exploitable bugs
> Similar to how Nextels cellular "push to talk" fake walkie talkies worked.
Not really. The reason PTT connected virtually instantly has nothing to do with optimization tricks; it's down to the purpose-built design of the network tech it was using.
Nextel PTT didn't go over a normal cellular network; it went over something called iDEN[0]. iDEN provides a trunked radio service[1], which has similar features to a conventional two-way radio. Sprint acquired Nextel, and as iDEN wasn't as relevant anymore given the advances in cellular networks (despite those who actually used the PTT functionality), in 2013 Sprint shut down the network to use the spectrum for additional LTE bandwidth in the 800 MHz band[2].
No, PTT would send audio from the caller without the receiver accepting anything, but it wouldn't send anything from the receiver without action on the receiver end.
It was obnoxious at times, but very different from the kind of privacy invasion where the receiver's device sends data without active involvement.
If you consider what happens when you add a third person to a call:
You start sending them everyone's audio!
That's the desired behavior and exactly what happened here.
Except the client should have probably checked if the call had been accepted first. That's why I say it's a state machine bug: The "Send audio" function should have never been activated in state "waiting to accept".
Most likely 2 different teams worked on it. One worked on accepting calls, the other worked on transmitting audio, another probably worked on video. Then it got integrated and complete e2e testing wasn't done.
How would an e2e test catch this though? What would the test condition be? Wouldn’t I have to be checking for audio from the other party during all seemingly random events like adding a person to the call group?
Apologies in advance for my ignorance, I haven’t written code for a long time.
I doubt there would be a specific test (or maybe there would, real testers are better than me at thinking of this stuff), but there should be logs for events like "microphone turned on" and "user joined group chat" and the testers should be monitoring those logs.
Not sending any audio or video until a call gets accepted seems like a clearcut test case. They probably have a few tests like that, too, but forgot to add weird edge cases.
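Against a fake client, that clear-cut case is a one-screen test. Everything below is a stub I invented to show the shape of the assertion, not any real test harness:

```python
class FakeClient:
    """Minimal stand-in for a call client under test."""
    def __init__(self):
        self.accepted = False
        self.sent = []           # every payload that "left the device"

    def ring(self):
        pass                     # capture may start here; that's fine

    def transmit(self, payload):
        if self.accepted:        # the property under test
            self.sent.append(payload)

    def accept(self):
        self.accepted = True

def test_no_media_before_accept():
    client = FakeClient()
    client.ring()
    client.transmit(b"early-audio")
    assert client.sent == [], "media leaked before the call was accepted"
    client.accept()
    client.transmit(b"audio")
    assert client.sent == [b"audio"]

test_no_media_before_accept()
```

The hard part isn't this test; it's generating the weird event sequences (add a person mid-ring, add yourself, upgrade to a group call) that drive the real client into the states where the property breaks.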
> I'm not suggesting malice but I do wonder at the lack of defensive programming.
I've never worked at Apple, but neither have I in 10 years seen people truly appreciate any attempts I made at defensive programming. On the contrary, I usually hear - sometimes loud - complaints about it. Usually from fellow engineers when they see it, but on occasion also from non-technical people.
It's a joke that everybody is so sad about bad application security but at the same time virtually nobody cares about it at all when involved in an actual development project.
Classic case of NIMBY. FWIW, Apple's security is far above average compared to other companies. But I guess they cannot isolate themselves 100%.
I believe they have been working on FaceTime Group for 5+ years now.
In 2015, Modern Family had an episode where the entire plot focused around the family using FaceTime on their iPhones --- but they had the Group feature[1]! I remember being blown away when I saw that, figured it'd be coming in the next iOS update. It didn't. 4 years later, they finally released it..then recalled it immediately
I'm wondering if it's something really simple, like comparing the unique number of callers versus the total number who have accepted the call.
Imagine a method for group calling that decides whether the call should be considered active by checking whether the number of callers equals the number who have accepted the call. Now imagine the method that tallies the number of callers does a unique count by phone number, while the method that tallies how many people have accepted does not.
Since adding your own number is adding a caller who has already accepted a call, you end up with unique_callers (2, you and them) == total_accepted (2, you and you).
This could be tested by adding a third person to a group call twice (if iOS will let you do that) instead of adding yourself to the call.
Furthermore, if the interface on the recipient's phone only looks at whether or not they've accepted the call, that would explain why the call doesn't auto-accept on their end and go into the call before they've accepted or rejected the call.
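That mismatch fits in a few lines. This is hypothetical, but it reproduces the A+A+B arithmetic described above:

```python
def call_is_active(callers, accepted):
    unique_callers = len(set(callers))  # deduplicates by phone number
    total_accepted = len(accepted)      # no dedup here -- the mismatch
    return unique_callers == total_accepted

# A calls B, then adds their own number to the group call:
callers  = ["A", "B", "A"]   # A appears twice
accepted = ["A", "A"]        # A counts as accepting twice; B never answered
assert call_is_active(callers, accepted)        # call wrongly goes active

# Deduplicating both tallies keeps the call inactive, as it should be:
assert len(set(callers)) != len(set(accepted))
```

The third-person-added-twice experiment suggested above would distinguish this from other explanations: if iOS allows it and the call goes active, the bug is in the counting, not in anything specific to your own number.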
They can aim it at who they want. When it's invite-only they'll only get what they're looking for. I'm certain that just having open bug hunters looking would have at least brought this up. I remember noticing this bug myself but also remember it not being worth my while to report it.
I agree this should have been caught in QA/dev testing. However, I would bet a dollar this wasn't "Some Engineer" deciding to do it this way. This design was the culmination of some back-and-forth discussions, and it is this way for a sound (no pun intended) reason.
The code involved is a lot more complicated, involving a lot more pieces written by different teams working in concert with each other. There's not a single routine that takes the steps you describe that got a few lines out of order or something.
If the implementation is complicated enough, the code that opens the mic and the code that accepts the call are so far away from each other that it's impossible to see. That's why keeping things simple is the greatest art in programming.
Our company's phone system, or our provider, had a problem with outgoing calls once: you couldn't call users of a certain mobile provider, but only if they were currently using GSM. Anything newer would make it work. The symptom was that even when the cell user picked up, you'd still hear the sound of their phone ringing. That alone was already mind-boggling to me, but the crazy part was that with certain phone models (at our company), the cell user could hear you while you still thought their phone was ringing.
Agreed - looks like something very simple to add as a test-case. If not manual, an automated QA process could've easily caught this if it was even remotely robust enough.
Lack of formal methods in the design phase. It would be trivial to express a rule that you cannot hear audio from the called person before they have accepted.
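Even without a full formal spec, that rule is a one-line invariant you can check exhaustively over a toy state space - a miniature of what a model checker would do at design time:

```python
from itertools import product

def invariant(accepted: bool, transmitting: bool) -> bool:
    # "You cannot hear audio from the called person before they accept."
    return accepted or not transmitting

# Enumerate every (accepted, transmitting) pair and collect violations.
violations = [s for s in product([False, True], repeat=2)
              if not invariant(*s)]

# Exactly one bad state exists: transmitting while not yet accepted --
# the state this bug reached.
assert violations == [(False, True)]
```

A real spec would check the invariant against the transition relation (every reachable state, not every conceivable one), but the principle is the same: state the rule once, then let a tool search for a path that breaks it.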
Abstraction. I can tell you with 100% confidence that logic bugs like these are created by unnecessary abstraction.
Anybody working in security will tell you the same. Piles of abstractions make it impossible to find these bugs. You need a month of work to understand these codebases; often only the main developer has the architecture in their head.
Apple's application programming has fallen off a cliff in the last 4 years. I blame Swift. There's nothing wrong with the language itself, but its outward similarities to... other dynamic languages have, I feel, led to a rot in the core of what was once a rock-solid culture of embedded C/C++/Objective-C programmers building robust, well-architected applications. The average Apple app today is just as buggy and disjointed as anything else on the App Store, and that never used to be the case.
I actually liked some of the versions of iTunes for Windows back when I had to use it with an iPod Touch around 2008 to 2012 or so. For just playing music from my music library, the interface wasn't bad. You just had to turn off the option to automatically convert everything into Apple's format.
>Most of iOS is still written using Objective-C, including FaceTime. Swift has nothing to do with this.
Most of iOS is still great. The problem is all the cruft that has developed since around 7.0. It just seems like an endless march of features at all expense.
What? Objective C is a lot more dynamic than Swift - which was the whole point of making a new language, albeit Swift itself is not quite as well-developed, highly-performing or safe as, e.g. Rust.
I think it's programming practices themselves that have fallen off a cliff, at least in the non-FLOSS world, and nothing specific to Apple. You can even see this in the dumpster-fire that's popularly known as "Windows 10", as well as in the latest versions of Mac OS X which are a lot slower and less user-friendly than their predecessors.
>What? Objective C is a lot more dynamic than Swift
Sure, that could be argued. But with Objective-C, you still had exposure to the "metal". You had to understand what a pointer is, and why you would want to use it. You had to understand memory management, even though ARC does the job (mostly). It was firmly entrenched in the world of memory efficient embedded programming that mobile started out as, but now that developers coming from Javascript/Python land see a familiar syntax they've brought their patterns with them as well.
>So… your theory is that if people understood pointers this wouldn’t have happened?
Yes. But it has nothing to do with pointers specifically, just the mindset and training of the average developer who has had experience with them, vs. the average developer who has not.
There's an entire generation of developers now graduating from CS programs, hiring into Apple, and getting dumped on these application teams with zero real-world experience and their only language being Python or Swift. The result is you have tons of brilliant people who can quickly whip up a DFS algorithm, but don't understand that using 4MB of RAM for a JPEG is unacceptable, or that whatever dynamic thing they are asking the runtime to do might not always work as intended. That's why we get these massive feature lists nobody asked for with every iOS release, and zero emphasis on performance.
I think you over-simplify — engineers slot into different roles/disciplines within large companies.
There are going to be engineers that have to deal with driver-level code that know full well the limitations of memory constraints, thread overhead, etc.
No doubt you're describing the other half — the app engineers that use the API/SPI's. It might even be argued though that, given a well defined API, they should not have to worry about how much memory a JPEG requires ... the API decompressing the image only when rendering to the destination or what-have-you. Pointers, memory management should be managed by the low-level parts of the language or OS/kernel.
I happen to like the bit-banging, pointer walking, free-wheeling world of straight C but I don't begrudge higher level languages that are designed to tackle the more modern pressures of concurrency and "asynchronicity".
> The result is you have tons of brilliant people who can quickly whip up a DFS algorithm, but don't understand that using 4MB of RAM for a JPEG is unnacceptable, or that whatever dynamic thing they are asking the runtime to do might not always work as intended.
I’m not sure where you’re getting this anecdote from, because I have not found it to be at all true in practice.
I’m not sure about the situation at Apple, but this is 100% true in the web development world today; with the exception that many of the new programmers hired straight out of General Assembly and the like can’t implement a DFS algorithm either.
It’s a nightmare for security and performance - the number of obvious, blatant security issues I’ve spotted and fixed just through luck alone is horrifying.
I'm sorry, but I don't know what "General Assembly" is. I live in the US, so is this some part of a degree program in your country (if you live somewhere else) that I'm not aware of?
But coming back to your point, there have always been new engineers with weak skills, just like there have always been smart engineers as well. I don't think the choice of programming language changes this fact significantly, although certain languages may have a slightly higher proportion of inexperienced programmers than others.
General Assembly is one of the popular US 3-month coding bootcamps. There are others in the US, but I’m not familiar with their curriculums.
As programming gets easier to learn, people spend less time learning programming. This has a number of negative knock-on effects, eg less understanding & focus on correctness, performance, security, etc. Obviously there’s lots of wider benefits too - but I suspect that the average person writing objective-c today spent more time studying programming than the average person writing swift today.
It sounds like we generally agree on that - but my claim is that this effect size is big enough to dominate almost all other considerations. I suspect the average C program is more secure than the average JavaScript web app, despite the absurd difficulty of writing correct C, just because of the ratio of new to old programmers in both communities.
Then how do you explain that the last update improved performance a lot on older devices? Last time I checked, there were no hardware upgrades in the downloaded update...
Rumor mill: the FaceTime bug was submitted to Apple on 20 January 2019 by a concerned mother after her 14-year-old son discovered it.
>My teen found a major security flaw in Apple’s new iOS. He can listen in to your iPhone/iPad without your approval. I have video. Submitted bug report to @AppleSupport...waiting to hear back to provide details. Scary stuff! #apple #bugreport @foxnews
Interesting Twitter account. First tweet 1/1/19, few followers, mostly politics, then a major bug report (notable not only for the discovery but for knowing how to go through the reporting process). Not saying it’s fake at all - it looks 100% legitimate - but it adds some extra weirdness to this story. Quite the provenance, and a really bad bug. (edited for clarity)
The genuineness of the Twitter account is absolutely irrelevant in contrast to the validity of the bug itself.
A high-priority bug was reported to Apple at a specific time. Who reported it, what they look like, and what their Twitter profile looks like should have no impact on Apple's bug-fixing process or on how long they take to fix the bug.
Oh I’m not questioning the existence or importance of the bug. It’s important and a big screwup.
However, I am extra sensitive to the degree to which twitter is being manipulated for all sorts of ends. Sometimes things look more than a bit fishy. Usually major bug reports don’t come from 2019’s version of egg avatar + letters/numbers username + very recent activity consisting almost entirely of political posts + past tweets with interactions with obvious political manipulation bots. That is on the stranger end of things, you have to admit. To be clear I think it’s real, but also real weird.
Stock manipulation perhaps? Happens a lot with Tesla apparently, short sellers will pump up any negative story and try to get it into press. This person was making several attempts to get in contact with press after all, and a story about a teenager finding a big privacy bug in a company that publicly touts its privacy chops has ‘news at 11’ written all over it.
Personally I think a bug report story is not a particularly plausible strategy for such a thing - this person’s concern seems entirely genuine - but crazier things have been done for money. I’m relatively skeptical of complaints from companies about short sellers and bad press, but also recognize that stock manipulation happens a lot more often than most people are aware of.
Is it still called stock manipulation if the bug is critical and for real and the company deserves to lose shareholder value simply for the critical nature of the bug?
Imagine how many people are vulnerable out there - I'm already starting to read some complaints on the internet that some people were unknowingly sharing a video of them taking a shower, etc.
If you are in possession of information with the potential to impact the stock price when released, and you use it in your own favor to try to make a profit, then I believe it can be characterized as an attempt at stock manipulation.
Yes, as they would have lost less stock price if a fix had been prepared and released along with the announcement.
Yes, if options purchases were made before the market found out.
Yes, if their intentions were to change stock prices with the announcement in any respect whatsoever, whether up or down in price and/or in volatility.
No, if they were not intending to change the stock market with the announcement, regardless of the fact that they did. (Naïveté happens. So does unconcern. Still, No.)
Also, timing: Apple publishes its quarterly earnings results this afternoon, in an already peculiar context. It’s the first time they’ve missed guidance in a decade.
PS: As others noted, maybe the criticism is fair enough, given that support failed to acknowledge the bug quickly enough.
If the bug was held by a nation state, and their use of it was burned for whatever reason, then the nation state could release it in this manner to sow chaos in lots of fun ways:
1) The entire world of iPhone users
2) The financial markets (Apple suffers)
3) The financial markets (non-Apple benefits)
4) The political sphere (distraction from)
5) Deniability (they got their recording and leaked the bug to deny how)
While the account does seem a bit odd — especially in our current state — it still boils down to the fact that someone discovered a bug and they seem to have receipts.
Yeah, that poorly obscured email in the message is the strongest evidence that this is a real Twitter account & story; I believe it’s legitimate. Which makes this story really interesting: the kid found it, the parent knew how to file a bug report, the parent went to a Twitter account they had only recently started using to try to get the word out, and nothing happened. Goes to show the process needs improvement and that bug reports truly can come from anywhere.
> I am willing to give details and provide a home video that I took to show you the flaw, but would like to discuss this with someone prior to doing so... It is unclear whether Apple provides a reward program for non-... [cut off]
I'm not surprised Apple didn't respond to this message.
I too found it odd, and only looked into it because she chose to tag one (and only one) news org in the post, @foxnews. Without the bug report, her account looks like many that have been "taken over" or repurposed by political operatives.
If the mother did in fact submit a ticket a week ago, it's pretty shameful that the escalation / verification process took more than a week for a bug of this severity.
As someone who bought an iPhone specifically for privacy reasons, I'm not really upset about this. What I'm concerned about is passive, mass-scale corporate surveillance, not a one-off bug that allows an individual with mal-intent to listen through my microphone for a few seconds and also let me know about it.
Are you able to root an iPhone or use one without signing in with an Apple account (that's tied to a credit card, etc)? If not, then I believe the devices are still very much part of a mass-scale corporate surveillance network.
You can root, but it's security-suicide because the OS' security model isn't designed for that.
> or use one without signing in with an Apple account
Technically yes, although you couldn't download any apps so you really don't want to.
> If not, then I believe the devices are still very much part of a mass-scale corporate surveillance network.
What I always tell people is to look at the economic incentives. Apple makes the vast majority of its money from paying customers, not advertisers, so it has less use for big data. It has also invested huge amounts of money in marketing itself as a champion of privacy, a key differentiator from its competitors. If it tried to do some shady data-gathering, it would have relatively little to gain and everything to lose when that inevitably leaked. Apple usually can't even keep the next iPhone a secret; there's no way they could conceal a massive conspiracy to secretly collect data on customers. And if they did, and it became public knowledge, their sales would go down the tube. It's just bad business. I don't trust any corporation to do what's right out of the goodness of their hearts, but I certainly trust them to do what's best for business.
Well, that's a plain and simple lie, since you can't download apps from the App Store without an Apple account, nor do I believe you can install software updates.
I haven't tried in a while, but last I checked, updates worked fine.
It's not necessary to download apps from the App Store to use an iPhone.
I'm not sure what was the need for you to call me a liar. If anything I've said is inaccurate, I guess it has changed in the last few months, and I would love to know that.
Why would this be any more reputationally damaging than the numerous other bugs with iPhone behavior?
It’s not like iPhones have a reputation for not having bugs; it seems like every version has a passcode bypass or a DoS-via-iMessage. By some standards, this is worse (remotely triggerable, leaks audio/video), but in other cases it’s not as bad: the attacker’s Apple ID ends up in the call logs of the affected person.
Are there prior examples of any phone manufacturer being reputationally damaged by vulnerabilities like this? Heck, Samsung’s phones literally caught fire and they’re still selling phones just fine.
> Why would this be any more reputationally damaging than the numerous other bugs with iPhone behavior?
Oh I don't know, someone denied the call because they're possibly in the shower, or other inappropriate moments. Oh look now they're naked on a video call... Yikes!
I feel like you're answering a different question than the one I asked. I don't think the bug is low-severity.
I'm asking:
Is there any historical evidence that high-severity bugs in iPhones (or really any mobile phone) are reputationally damaging, sufficiently that Apple would worry about the impact of this bug?
I'm not aware of any instance in the past where a high-severity iPhone bug had a noticeable long-term impact. This is similar to other issues, like the Sony PSN hack, where despite the gravity of the issue, everything continued long-term as if nothing had happened.
> Is there any historical evidence that high-severity bugs in iPhones (or really any mobile phone) are reputationally damaging, sufficiently that Apple would worry about the impact of this bug?
That reads much clearer, you're right; I misunderstood what you meant. I think it depends: PSN isn't as personal as someone's potential unsolicited nudes being extracted by total strangers. If enough bad press came of it, it wouldn't be in the same ballpark. I certainly hope nothing horrible comes of it.
> Is there any historical evidence that high-severity bugs in iPhones (or really any mobile phone) are reputationally damaging, sufficiently that Apple would worry about the impact of this bug?
I'm sure Apple "worries" about any bug and its potential impact on its reputation, particularly in the area of privacy, where it has a leg up on Android at least in perception.
That said, what historical bug is up to this one for iOS? This is a big deal and I cannot recall anything similar.
edit 1: this hasn't exactly been a banner couple of months for iPhone. You'd expect that mitigating any negative news about the device would be paramount
edit 2: look to Facebook. I find it encouraging that people have reacted so negatively to a company acting so cavalier with their personal data and privacy. Yes, I think Apple cares, moreso than with other bugs.
The article slug is misleading, and suggests a fundamental misunderstanding of the scope of the bug. An RCE in Messages does not allow attackers to steal your passwords.
The ask from the comment I’m responding to was for comparable vulnerabilities to this one, since this comment thread is discussing reputational damage from high-sev vulnerabilities. This vuln gives RCE in iMessage, which is an app that has microphone/camera access, so I’d say it’s clearly comparable.
Well this is the top Twitter trend right now, for starters. It's a very visible, very easy to reproduce bug in a very popular service, and it's definitely going to hurt their reputation with consumers more than if it was something more technical yet equally or more dangerous.
Again, are there examples that show it will “definitely” hurt their reputation at all? I’ll broaden my example set: are there any examples of consumer devices where the company suffered clear damage to their brand as a result of a security issue?
As an information security professional I wish security breaches would permanently damage a company's reputation, and even put some companies out of business if the breach demonstrates they have no business handling sensitive personal information. But if that was the case, Target and Home Depot and Equifax wouldn't exist, and no one would dare touch an Android phone or a Windows computer.
The sad fact is consumers don't care one bit. The only people who care are people who have some interest in disliking the company who was breached, in which case they're likely not paying customers anyway.
Surely you understand that different bugs create vastly different problems?
Audio and video leaking without knowledge from such a personal device that people take with them everywhere is about as bad as it gets in terms of privacy breach.
For me, reputational damage would be having a bug that allowed a phone to be compromised via text message and then not sending any kind of fix for months, or years. Oh wait.
If Apple fixes this bug this week, I will consider it a significant bug with a good response, and move on. If I wanted more privacy, what would I switch to anyway? Android? Ha!
Apple has shipped versions of its Mail program that delete email without warning¹ and versions of the Finder or OS X that delete² files. And much more. Yet their reputation is intact: the masses still believe that they put out quality software. They are truly the Teflon corporation.
Neither of your links seems to work, and the second one in particular seems to have been dead for half a decade. Have you just kept these links around to whip out in support of this argument, without once checking whether they actually go somewhere?
Not to defend Apple but I'm not sure there are higher quality options really. I don't think Apple makes good software but it can be the best without being good.
Come on this is an overreaction. It’s a bug plain and simple. A bad one for sure but still Apple disabled the feature (allegedly) while they push a fix out. Shit happens.
Anyone who has filed a few rdars knows it is thankless work. The amount of work you have to invest for anyone to even look at your bug is high. In the instance of this particular bug, I wouldn’t be surprised if at least part of the reason it took a week and a half to handle since it was reported was that the initial reply to the reporter was “please send us the exact steps to reproduce” and then nothing was done until the bug reporter replied back. I wouldn’t be surprised to learn there were even a few iterations of this, since I personally experienced it.
Then your bug gets looked at. But you don’t know anything about its status, until anywhere from a few days to a few months later, it gets closed as a duplicate. Of course there was no way to know in advance that the bug had already been filed, and that you could have saved the hour you spent making a minimal reproducible version of your app.
Haven't seen this page before (although admittedly I only have a mac, I don't use any of Apple's other services/devices) but I like how it feels... "Apple".
First of all they convert the time automatically as someone already posted, which is a very welcome surprise.
The second (and most surprising to me) thing was that they only have two statuses: "Available" and "Issue". And while most status pages have trained us to expect that the antithesis of green is red, which immediately catches your eye, in this case it is a simple yellow diamond, which is much less alarming. I sense the PR department had its say in how to present service availability issues.
Very few communication services are truly P2P; it's way more likely to have syncing issues and is virtually impossible without at least a handshake. Even Signal, which does have syncing issues due to being mostly P2P, requires a server for the initial handshake.
It may be entirely unrelated, however this is the exact sort of behaviour you would expect to see associated with providing compatibility with Australia's newly introduced AABill, or to implement GCHQ's ghost participant proposal (https://www.lawfareblog.com/principles-more-informed-excepti...).
The bizarre thing is how little people seem to have paid attention to Snowden. Apple joined the NSA's PRISM program in 2012. [1] PRISM enables the government to access data from participating companies including audio, video, and live chat. I'm linking to Wiki there only as a catalogue of sources. The page itself is useless and has overtly fake (though at least mildly amusing) quotes from Google. There's no need for new bills for these sort of issues to be a concern.
I also tend to agree with you that this is not necessarily related to these programs. The reason I mention this at all is because I think there is a reasonable chance that this is related, that this overall issue is very important, and that the amount of cognitive dissonance on this is surprising and regressive. These programs are real. Companies facilitating access, including real time, to your data and "private" conversations is real. And it seems to only be getting worse. Yet people seem to convince themselves otherwise, including throughout this thread. Part of the reasons these programs are able to carry on mostly unchallenged is because people convince themselves that what is happening, is not.
That's a pretty huge flaw. Millions if not billions of people can suddenly remotely spy on almost any other iOS or Mac device anywhere in the world, just by knowing its owner's email address or phone number?
Perhaps Apple should simply pull the plug on the facetime servers for now.
Is it just me, or is such a phrase applicable to Apple far too many times in the past several years? I think their engineering is losing quality or is falling behind on what they have to cover.
Just off the top of my head: I think on three separate occasions, specifically crafted text messages have made the Messages app disappear from iOS, requiring a reboot. There was a comparable MacOS login bug not too long ago.
Unless Apple decides they face significant legal exposure over the bug somehow I don't see them doing that. It would attract so much more attention that it would almost certainly not be worth it economically.
I wonder if they (executives? engineers? the company itself?) could be charged with aiding and abetting wiretapping or something now that they know it's happening and are letting their servers keep doing it.
"We must keep fighting for the kind of world we want to live in. On this #DataPrivacyDay let us all insist on action and reform for vital privacy protections. The dangers are real and the consequences are too important." - Tim Cook today in what is now a bit ironic
I think it’s inappropriate to make (or imply) a call to action in a public forum to violate the privacy of a public figure (or anyone, but a public figure in this case). There’s low probability that Tim’s personal accounts or devices are easily accessible by the public, and I assume he has a team dedicated to his personal security, but let’s not encourage folks to start scheming...
I wouldn't be surprised if Apple has their own internal iCloud network. Just like Google has their Apps for Business, Apple could have one but just inwards-facing and not hosting any other business.
This means even if you have his internal FaceTime/iMessage ID, you wouldn't be able to contact him because his account and yours exist in 2 realms.
Then again I guess he'd need an external-facing one for public VVIP to FaceTime him. Maybe he just carries 2 phones?
Perhaps their NSA PRISM code got mixed up with their user facing code? I'm not just joking, we all know Apple only pays lip service to privacy and got caught red-handed during the Snowden leaks.
I just called my friend who was already on a call talking to his brother. He could hear me and his brother but his brother and I couldn't hear each other.
I was also able to call him, have him hang up and then see his video and hear his audio. He couldn't hear me, but I could hear and see everything.
The other day my friend Alice (not her real name) attempted a FaceTime call to Bob. To both our surprise, my phone rang with a FaceTime call from Alice (and as far as we know, Bob never received the call). Holding both our phones together, Alice's phone was showing a call to Bob while my phone was showing a call from Alice. A very strange fluke which makes me wonder how robust the FaceTime code is.
Not related to FaceTime, but once I sent a two-part SMS to my mother (over 140 characters and all that). Her reply made no sense; she was talking about stuff I had not said in my text at all. After much confused back and forth, I found out that she had received the first part of my SMS correctly, but the second part was taken from some completely random message that somebody else probably sent. I suppose that in some other part of the world there were another two confused people who got the texts mixed up the other way round.
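A plausible mechanism for this kind of mix-up: concatenated SMS segments carry a User Data Header with an 8-bit reference number, a part index, and a part count, and segments must be reassembled keyed by both the sender and the reference number. A toy reassembler (purely illustrative; real reassembly happens in the handset/network stack, and the actual cause here is unknown) shows how keying on the reference number alone could splice two senders' messages together:

```python
def reassemble(segments, key_includes_sender=True):
    """segments: list of (sender, ref, index, total, text) tuples.

    Returns a dict mapping the reassembly key to the joined text.
    With key_includes_sender=False, two unrelated messages that happen
    to share the same 8-bit reference number collide into one bucket.
    """
    buckets = {}
    for sender, ref, index, total, text in segments:
        key = (sender, ref) if key_includes_sender else ref
        buckets.setdefault(key, {})[index] = text
    return {k: "".join(parts[i] for i in sorted(parts))
            for k, parts in buckets.items()}

# Two senders happen to pick the same reference number (42):
segs = [("mom", 42, 1, 2, "Hello "),
        ("stranger", 42, 2, 2, "random tail")]
print(reassemble(segs, key_includes_sender=False))
# With the buggy key, "Hello " and "random tail" merge into one message.
```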
Something similar happened to me: I got a FaceTime call and both my phone and an iPhone nearby rang! Both contacts were known to the caller, but the AppleIDs were of course different.
Not sure if this was intentional, but in security, Alice and Bob have been the standard names for the two parties in hypotheticals since the RSA paper.
Fun story: I tested this bug, initiating a call from my phone and then joining it on my Mac. After I had ended the call, the Mac's camera LED stayed on, even though the FaceTime app was not showing a video preview (in fact it had no windows open). Was it transmitting? Who knows! State management seems to be a mess all over the place.
This means your camera was in use by some application. Whether that video left your device is harder to ascertain, but you can try quitting FaceTime to see if the light turns off.
I'll take a stab at that, although I don't think it's stupid per se. Everyone I talk to, I talk to via WhatsApp, LINE, or FB Messenger, and generally I only use that method to talk to them, whatever else they're on. All of those have easy one-press buttons to initiate a video call. Using some kind of Apple-only technology?
I mean I think most people I know are probably on an iPhone, but having to remember everyone's device to know if they can FaceTime? Ugh. I suspect I could work out how to FaceTime someone if you made me, but I'd have to work it out rather than just knowing it.
But isn't everyone being on WhatsApp or FB Messenger or whatever just elevating the device uncertainty to platform uncertainty? You still have to remember or check who is on what.
It seems like if they generally are all on iPhone, FaceTime is the one thing you could reliably call them on, and it will even tell you right in the messages app if you can FaceTime them.
I wonder if the Apple representative who said this would be "fixed later this week" realizes this allows any attacker to wiretap any iPhone user (that has the vuln)
The team might be able to fix it today, but the fix needs to go through testing and then be built alongside the rest of iOS (a multi-hour process). I wouldn’t be surprised if this takes two or three days to roll out.
You didn't even bring into consideration the app store approval process which can take 2-3 weeks and likely would reject this as a feature removal without user notification.
I'm not sure if they have a special process for emergency bug fixes, but for internal testing a new build is done about daily from source, yes. One of these ends up being polished a bit and sent out to users.
So my confusion is this: generally it's best practice to keep branches off of these releases and then just apply fixes to that branch. Even better is if you can keep the meta-build around, or use a distributed build system, so that you can do incremental builds for patches to that release, getting the time way down (unless of course you're changing core libraries and have to recompile or re-link everything). It would seem a security risk to me if there weren't a fast way to get patches out to prod, per app or for the OS, that don't require full-blown rebuilds.
I guess I may also not be accounting for the QA process.
iPhones already have a "mute" slider switch. When I first looked into iPhones (after years of Androids), my instant reaction upon seeing the slider switch was "Ah! FINALLY! I can feel at peace in the knowledge that software vulnerabilities are powerless to hack my camera or microphone!"
Of course, stupidly, the slider doesn't "mute" my camera or microphone, but only my speaker. For Apple to modify this slider so that it mutes my camera and microphones would require the daunting addition of two transistors. And the act of such a simple modification would put an end to the incredibly creepy, Orwellian possible reality that we all risk taking a part in every time we glance at the familiar tiny screens we have become so intimately glued to.
Frankly, I’m totally in favour of hardware interrupt switches for the camera/mic - but I think I understand why it’s not likely to happen. First, to a lot of people it will look like admitting your thing is hackable, which makes it seem vulnerable. Second, now every time I accept a call I have to check this switch - sounds like a switch most people will leave in the on position 100% of the time, and then when it accidentally gets flicked they’ll bring their phone to the Apple store because “it’s broken”.
Also, I don’t want my ringer to have to be on any time I take a call... but that’s just a debate about if it should be one switch or two.
Also the hardware switch doesn’t interrupt the speaker, it just turns the ringer to silent/vibrate mode - you can put someone on speakerphone or play music with it on.
> sounds like a switch most people will leave in the on-position 100% of the time and then when it accidentally gets flicked they’ll bring their phone to the Apple store because “it’s broken”.

Reminds me of laptops with hardware WiFi and Bluetooth switches. Took plenty of support calls about those.
I agree, I don't think it's going to happen with the iPhone.
I realize most people aren't paranoid enough to want this. I am. I only hope that some company starts manufacturing a phone with this feature at some point, so people like me who have read too many Philip K Dick novels can feel at peace.
Also, some smartasses regularly find ways to bypass this physical button in order to play their ads at full volume while the phone is muted, in the most inconvenient settings. Like, say, taking a dump at the office and all coworkers present hearing an ad for Monster Castle Crusher of the Hustle Dragon Birds Candy.
TOTALLY COUNTERPRODUCTIVE, GUYS. FOR GOD'S SAKE, STOP THAT SHIT!
Even crazier and also true: There are people who actually use their phone to make phone calls (Yes, kids, that still happens) while their phone is in silent mode.
I have my phone muted for most of the day, but I still want to be able to take pictures and use siri. It doesn’t even mute the speakers, only notifications. It sounds like you want to replace a switch that many people want to use, with one that few would.
I also have it muted most of the day; usually all day. Which effectively makes it useless as it is. Perhaps this puts me in the biased position of feeling it may be used for another purpose.
I encountered a bug a while back — when I would connect to a dial-out call to get onto my company’s conference service from its app, while using a pair of cheap Bluetooth headphones (I’ve upgraded since then!) the mute button didn’t work.
As in, the mute button would be clearly activated, but my audio still carried through to the call. As in, I discovered this in quite an embarrassing way.
I filed a radar but never got a real response about it. There really appears to be a strong need at Apple to ensure QA is checking for confirmations of user interaction. Since I haven’t experienced anything like this since, I’m still with them, but these types of problems are the very thing that would lead me to shake my deep investment in the Apple ecosystem.
It might trigger the bug to just invite any additional participant (say, a second phone the attacker possesses), in which case blocking only inviting oneself is not sufficient.
My theory is that the server routes messages to everyone who has been invited to the call, even if they have not accepted it. One message might be "participant left," in which case if you are the last one, the call ends.
Another would be "participant joined." The bug would center around the fact that the logic for handling a "participant joined" message does not check if the call has been accepted and makes an unexpected transition to a state that it should not be in.
The "participant joined" code likely handles the case that the new participant was already present on the call. Why? Apple wants to support seamlessly transitioning your call from one device to another. That's why blocking might not be so straightforward from the server side.
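The theory above can be sketched as a toy state machine (purely illustrative; the class and message names are invented, not Apple's actual code). The key point is a "participant joined" handler that transitions the call to active without ever checking whether the local user accepted:

```python
from enum import Enum, auto

class CallState(Enum):
    RINGING = auto()  # incoming call, not yet accepted by the user
    ACTIVE = auto()   # user accepted; audio/video may be transmitted

class Call:
    def __init__(self):
        self.state = CallState.RINGING
        self.participants = set()

    def accept(self):
        """The only transition that should ever activate media."""
        self.state = CallState.ACTIVE

    def on_participant_joined(self, participant_id, buggy=True):
        """Handle a 'participant joined' signaling message."""
        self.participants.add(participant_id)
        if buggy:
            # Theorized bug: a join promotes the call to ACTIVE without
            # checking that the transition came from a user's accept().
            self.state = CallState.ACTIVE
        # Defensive version: do nothing to the state; only accept()
        # may take the call from RINGING to ACTIVE.

# An attacker adding any extra participant to the group call flips the
# callee's state machine with no user action on the callee's side:
call = Call()
call.on_participant_joined("attacker-second-device")
print(call.state)  # CallState.ACTIVE, even though accept() never ran
```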
I would like to bring up something that happened to someone I know during a WhatsApp call. Person A was in the US and person B in India, and they were audio calling through WhatsApp about something work-related. Person A starts hearing someone else on his side; nothing unexpected on B's side. A started a new call and everything was fine. A said it reminded him of the crosstalk that was common in the landline days. Could it be a bug, or was someone listening and forgot to turn off their mic? No idea.
A quick stopgap for this possible privacy disaster would be for Apple to disable Group FaceTime calls at once, not wait until later this week for the bug fix.
I could imagine that quite a few people might already be trying to invade privacy using this.
Premature optimization, perhaps. Don’t optimize by sending audio too early (as others have suggested), and why should I be able to add my own number to the conversation if that number is the call’s initiator? Makes no sense.
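The kind of defensive check being suggested is tiny. A hedged sketch (invented function and names, not FaceTime's real API) of rejecting a self-invite or a duplicate before it ever reaches the signaling layer:

```python
def add_participant(participants, initiator, new_id):
    """Defensive guard: reject adding the call's own initiator or a
    participant who is already on the call."""
    if new_id == initiator:
        raise ValueError("cannot add the call's initiator to their own call")
    if new_id in participants:
        raise ValueError("already a participant")
    participants.add(new_id)
    return participants

group = {"alice@example.com"}
add_participant(group, "alice@example.com", "bob@example.com")  # fine
try:
    add_participant(group, "alice@example.com", "alice@example.com")
except ValueError as e:
    print("rejected:", e)
```

Even if the state-machine flaw remained, a check like this at the invite boundary would have blocked the published reproduction steps.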
A fun thought experiment for a bug like this: test lists of emails by validating the ability to place a FaceTime call to each one, determining whether it is in use as an iCloud email address...
Had this bug gone unknown longer, you'd then have a list of all the endpoints you could spy on.
What about updates for people who don't update often or don't even see that there's an update? This is way too critical a bug for people who don't follow the news to not be taken care of (their devices, I mean). Apple keeps pushing major new version iOS downloads forcefully down to devices without any way of opting out, yet what I've seen is that point updates are left as optional and may not even show up unless one checks for an update.
As much as forced new feature updates take freedom away from users, Apple needs to up its game on security fixes like this (that could be standalone) being pushed to all compatible devices ASAP (no waiting for the phone to be on the charger overnight, etc.).
The upgrades are not mandatory, but the downloads certainly are. I've had to delete the iOS downloads several times over the years (including the downloads for iOS 12) because iOS doesn't have an option to disable this, and it automatically downloads the upgrade in the background when on WiFi for a long enough time. It's a cumbersome process to delete the download. It's even worse when you consider that a downloaded update would be installed overnight if you leave the device on the charger and connected to WiFi and by mistake tap install when that prompt appears at some random time. A user interface for controlling and managing iOS upgrades does not exist.
> This is how you get phones running out of battery halfway through an update.
The OS could always put a limit saying the phone should be charged at least 30-40% for an update to start installing. Even now, the upgrades and updates are allowed to go through for iPhones without them having to be on the charger.
I must admit that when reading yet another privacy-invasion-headline I saw "Facebook", not FaceTime/Apple. Not sure whether it's more indicative of unjustified prejudice or actual precedent.
Oof. I am reasonably sure that very soon there will be a number of very polite but very pointed questions on Apple's desk from concerned lawmakers, data protection authorities and the like, not only from the USA but the European Union as well, about how this happened and what they are doing to make sure it doesn't happen again. I can very well see the European Data Protection Supervisor fining them some very interesting amount as well.
"Sagem GSM phone lets you hear the other party's audio before you pick up" - I last demonstrated this about 6 years ago, calling my phone from different networks: one fixed line and two other GSM networks. My conclusion was that it's either an implementation bug that's easy to trip on, or a provision in the standard that's easy to misread.
Now, if the audio channel is _right there_ for you to read, what are the chances that this is only a Sagem firmware bug?
Just thinking out loud here - why isn't there legislation that makes it mandatory for phone manufacturers to send out a notification to all devices affected by serious security flaws (like this one)? Not only will fixing and rolling out an update take a while, there is also no guarantee that the update will be installed. Meanwhile, hackers will have a field day.
Or maybe there is already one, and I'm blissfully ignorant!
Just to play devil's advocate (not a lawyer though):
What constitutes a "phone"? Any device with cellular capabilities? What about WiFi calls? What if it's an industrial device with no network (LTE/data) access? Is a laptop with a 3G modem covered under this?
I would suspect the problem is in defining what devices to target, and also the fact that forcing any company to modify the functionality could be perceived as a slippery slope (i.e. security notifications first, NSA backdoors later...)
In Apple's defense, it is pretty difficult to miss an update alert considering it comes through as (a) a push notification, (b) a mandatory alert, and (c) a persistent red badge on the Settings app.
I agree that it might be a good idea to differentiate between a normal update and a security critical one, though.
Valid points. How about restricting the scope to devices connected to a network and having some sort of push notification capability?
> In Apple's defense, it is pretty difficult to miss an update alert considering it comes through as (a) a push notification, (b) a mandatory alert, and (c) a persistent red badge on the Settings app.
> I agree that it might be a good idea to differentiate between a normal update and a security critical one, though.
But there is no mention of severity, as you pointed out, and that is crucial. And until such a patch is available, Apple should notify users to disable the offending apps/features if possible.
Not sure I'd define it that way, but why not? If my mobile device is capable of showing inane ads as push notifications, why can't I expect security advisories to be delivered the same way?
I think Apple benefits from a safe harbor law - but I can't cite the statute. In this case I think the legal liability falls on the bad actor, not the corporation that let this bug out into the wild.
I imagine at some point the government will regulate software and the liability may shift. It could be good, but it could also be bad.
> In a statement, an Apple spokesperson said the company is "aware of this issue and we have identified a fix that will be released in a software update later this week."
On the 1st of November a friend shared with me a screenshot from the Instagram of a concerned parent explaining a situation in which her child was being cyberbullied, and it was clear from the content of the messages that the bully could hear their private conversations.
I guess this bug has been known and abused for at least three months?
I'll admit I have limited knowledge on iPhone security... But would it be technically possible to do this with other apps? For example if one has Wechat, can there be a backdoor to be listened in on? Thinking about state actors...
I don't want to be harsh, but Apple really should walk the talk regarding privacy. It has its heart in the right place but bugs like this show how careless they are.
This is a security issue, where Apple is pretty crap, just like MS or Google or everybody else. Taking a look at the security patches of macOS, Android or Windows is pretty depressing.
Privacy is concerned with the company purposefully spying on its users, like Google, MS or Facebook do.
I never thought of Apple as caring much about privacy given the insane amount of third party tracking scripts loaded in apps on the AppStore. What I would give for LittleSnitch running on iOS...
Set up https://www.charlesproxy.com/ on the same network as your phone, then set it to be the phone's proxy server. Voila, you can now be horrified by the amount of tracking going on. You can even MITM SSL traffic by adding a trusted CA to your phone.
I think you're referring to the Duo feature which shows a video of the caller to the person being called prior to answering. Basically the opposite of drop in and of this FaceTime issue.
I feel like around three years ago Apple changed QA leadership or restructured the org in that department, and quality has been down since. Prior to iOS 9, I barely noticed bugs. But now I am in that "used-to" mode where I have learned to ignore such bugs and nuances.
Is it just me, or does it seem Apple can't secure their services as well as Google?
There are always 'concerns' about Google spying, but no proof; meanwhile, there have been major hacks involving Apple that are quickly forgotten because the CEO claims to be all about privacy.
The quantity of papers with major security flaws in Android versus iOS tells you all you need to know. There is an endless stream of Android ones but very few iOS ones.
The more pertinent question in this kind of situation is to the Lead/Architect. Namely, why have you created an environment in which quality can slip like this, and what are you going to change?
Imagine, all this time Five Eyes actually had a way of breaching privacy and they only started making noise about access enablement recently because they knew someone was going to report the vulnerability to Apple.
Seriously, I don't think this is so bad. Google advertised something similar to this as a feature. Don't get me wrong: it should be fixed. But the shitstorm is way too big.
This would actually be a fun interview question - how to emergency patch 1B+ globally distributed mobile devices. I would say at least several days for the obvious QA which needs to be done.
Why would you need to patch the binary? If you're pushing out an update to fix the issue, the update may as well push out a rebuilt app than a binary diff.
I had been involved in a really hairy service interruption that involved URLs hardcoded into an SDK (http://a.com/b.js), where a.com was remapped to a brand-new web service that no longer had the file.
As if that weren't enough, the server returned a 301 permanent redirect with a 60-day TTL to http://c.com. Our traffic dropped 40% because of that mistake (not 100%, because in the previous couple of releases I had changed that URL to something widely used with no risk of getting messed up, but old clients still had the old URLs embedded).
As if that weren't enough, the iOS library for some reason had implemented "hand-rolled" caching with a lot of hardcoding.
To fix the issue, I made the c.com homepage also serve b.js at the bottom.
There is no way in hell a proper fix would have been committed and tested within the QA timelines of either the a.com or the c.com setup.
It's rather complex to fully explain without giving more context, but when traffic drops 40% and you are generating $5M/day... you do whatever it takes to bring it back.
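For readers wondering why the cached 301 made this so sticky, here is a toy Python simulation of the failure mode, using the hypothetical a.com/c.com domains from the story. Once a client caches the permanent redirect, every later request for b.js lands on c.com regardless of what a.com does, which is why serving the payload from the c.com homepage was the only fix that reached old clients.

```python
# Toy simulation of the sticky-redirect failure mode described above.
# Domains (a.com, c.com) are the hypothetical ones from the story;
# real browsers and SDK HTTP stacks cache a 301 the same way for its TTL.
redirect_cache = {}

def fetch(url, server):
    # Honor any previously cached permanent redirect, as clients do.
    if url in redirect_cache:
        url = redirect_cache[url]
    status, location, body = server[url]
    if status == 301:
        redirect_cache[url] = location  # remembered for up to the 60-day TTL
        return fetch(location, server)
    return body

server = {
    # a.com no longer hosts b.js; it 301s the request to c.com's homepage.
    "http://a.com/b.js": (301, "http://c.com/", None),
    # The workaround: c.com's homepage serves the b.js payload at the bottom.
    "http://c.com/": (200, None, "<html>homepage</html>\n/* b.js payload */"),
}

assert "b.js payload" in fetch("http://a.com/b.js", server)
# The redirect is now cached: old clients are pinned to c.com from here on.
assert redirect_cache["http://a.com/b.js"] == "http://c.com/"
```

The point of the sketch: once the 301 is cached client-side, no change on a.com can reach those clients until the TTL expires, so the only lever left is whatever c.com serves.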
I might have misunderstood your previous comment. You were saying that for this specific incident with Apple they wouldn't need to do binary patching, right?
They would probably need to at least blacklist every affected iOS version from the FaceTime servers; there will be a long trail of devices that don't update any time soon.
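A minimal sketch of what such server-side version gating could look like, assuming the signaling server can read the client's reported build. The function names are hypothetical, not Apple's infrastructure; iOS 12.1.4 is the build that ultimately shipped the fix.

```python
# Hypothetical sketch: until a device is patched, the signaling server
# refuses the group-call upgrade for any client build known to carry
# the bug. Helper names are assumptions; 12.1.4 shipped Apple's fix.
FIRST_PATCHED_BUILD = (12, 1, 4)

def parse_build(version_string):
    # "12.1.3" -> (12, 1, 3); Python tuples compare element-wise.
    return tuple(int(part) for part in version_string.split("."))

def allow_group_upgrade(client_version):
    return parse_build(client_version) >= FIRST_PATCHED_BUILD

assert allow_group_upgrade("12.1.4")
assert not allow_group_upgrade("12.1.3")  # vulnerable build: refuse
```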
Even if there is a server side mitigation, it's a big black eye to have devices being remotely tappable with no user interaction.
The devices are obviously not trustworthy anymore with the current software, and you are at the mercy of Apple's servers. So a spying Apple could always undo the server-side mitigation (if this is even mitigable server-side).
It's also a wakeup call to see that it is even possible for devices to start sharing audio or video with no user interaction. Obvious in hindsight for a software engineer perhaps, but the public perception might be forever changed.
So I wouldn't get all tinfoil over this matter, though I would probably disable FaceTime if I were most people. Then it's likely to only be an issue of audio being streamed, which is much less horrible than inappropriate video streaming. I would hope they roll out a server-side fix first: "if the calling user adds themselves to the group call, hang up the group call," or some similarly simple logic. I would rather see that first, and then the client-side fix.
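That silly server-side logic could be sketched as a guard in the signaling path. This is a hypothetical Python illustration; the function name and call fields are my assumptions, not Apple's actual signaling code.

```python
# Hypothetical server-side guard for a group-add request.
# Field and function names are assumptions, not Apple's code.
def screen_group_add(call, joining_user_id):
    """Screen a request to add a participant to an in-progress call."""
    # The caller adding themselves before the callee has answered is
    # exactly the bug's trigger: hang up instead of auto-answering.
    if joining_user_id == call["caller_id"] and not call["answered"]:
        call["state"] = "terminated"
        return False
    # Never allow the same ID on a call twice.
    if joining_user_id in call["participants"]:
        return False
    call["participants"].append(joining_user_id)
    return True

call = {"caller_id": "alice", "answered": False,
        "state": "ringing", "participants": ["alice", "bob"]}

# Caller adds themselves before pickup: the call is torn down.
assert screen_group_add(call, "alice") is False
assert call["state"] == "terminated"
```

The duplicate-ID check also covers the other suspected trigger mentioned upthread: the same token appearing twice in one group call confusing the signaling state machine.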
My colleague and I experienced the same thing last week on his Samsung phone.
Another colleague was calling us to talk about a project, we were chatting away and in the few seconds before the phone was 'answered' our colleague heard what we were talking about.
This is due to the very poor QA efforts Apple has, coupled with junior developers who lack a security-aware mindset. This is, sadly, the case with most companies these days. Zero secure coding training, zero push for security reviews, zero push for security QA, zero accountability.
The sheer number of bugs like this (specifically lock-screen evasion bugs, but also the disk management bugs we've seen, etc.) speaks for itself. As I said, this is not specific to Apple. The whole industry lacks security awareness, and because people don't hold these companies accountable, there is little financial incentive to change that.
Your specific claim was that Apple has "zero secure coding training, zero push for security reviews, zero push for security QA, zero accountability", and this isn't true at all. Sure, Apple's software has had some serious bugs in it, but that does not mean they have no security practices in place.
I am not speaking about dedicated security teams. What security training is there for end-user app developers at Apple? From discussions with developers I know, it doesn't seem to exist, or at least it isn't widespread.
Maybe it's not enough for your satisfaction, but there are resources available for writing safe and secure code (I'm not sure if this is required, though), as well as regular audits by the security team.
I come from a security background (6 years in a security firm), and I have seen some pretty paranoid practices. I do not wish that to be prevalent. One thing which I really did appreciate in that firm, and find very valuable, was putting every developer and product person on a security awareness and secure coding course, where basics are taught, but also an attempt is made to push a security-first mindset.
I am now in a consumer-oriented company, and while I appreciate the much more relaxed environment, I am often shocked at how no attention or thought is paid to security. It baffles me that management, at the very least, has little care for this stuff.
I worry about Apple. Software quality has been dropping, but it's not Swift or some other specific technology that's the problem. It's talent. The company has had a hard time attracting very senior talent.
The difficulty is self-imposed and comes from leadership's strict policy of not paying market wages at the high end. Multiple hiring managers at Apple have complained to me that they've lost good people because the company just won't match offers from the Googles and Facebooks of the world. Apple refuses to acknowledge the reality of the market.
Given that Apple's most experienced people keep retiring and that Apple isn't replacing them with equals, I expect software quality to continue to drop until leadership decides to abandon its shortsighted comp restrictions.