- Audio and video capture has to start before the call is actually established at the signaling level, in order to minimize call-establishment delay. Audio may be going through Bluetooth, for example, and waking up the Handsfree profile of BT may take 1-2 seconds.
- Most of the group-calling functionality was developed by a separate team, and group-calling signaling may be only loosely integrated at the UI level: once the UI triggers a switch to a group call, a whole new library may kick in internally and have the current 1-1 call state transferred to it.
- When this "transfer" happens, the state of the original 1-1 call gets affected on either the local or the remote side (due to signaling). That leads either to the remote side thinking the call was answered (a lack of protection in the call-signaling state machine to ensure it was the user's UI action), or to the local side thinking it's fine that the remote user answered the call (in which case FT must have been streaming audio even during the 1-1 call-establishment phase).
- Lack of a check for your own phone number being added to a call. Having the same IDs/tokens twice in a group call may lead to an unexpected call-signaling state-machine transition.
- Lack of manual testing focused on edge cases (the described repro flow may not be the main way users start group calls on FT).
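The capture-early/send-late split described in the first bullet can be sketched as a tiny state machine. This is a hypothetical illustration with invented names, not Apple's actual code:

```python
# Hypothetical sketch: capture is warmed up early to hide device latency
# (e.g. Bluetooth Handsfree wake-up), while transmission stays gated on the
# signaling state actually reaching ESTABLISHED.
from enum import Enum, auto

class CallState(Enum):
    DIALING = auto()
    RINGING = auto()
    ESTABLISHED = auto()
    ENDED = auto()

class MediaPipeline:
    def __init__(self):
        self.state = CallState.DIALING
        self.capturing = False
        self.sending = False

    def start_call(self):
        # Start capture immediately so the first media frame is ready the
        # moment the call connects -- this alone is not a privacy problem.
        self.capturing = True

    def on_signaling(self, new_state: CallState):
        self.state = new_state
        # The critical invariant: frames leave the device only after the
        # remote side has actually answered.
        self.sending = (new_state == CallState.ESTABLISHED)

pipe = MediaPipeline()
pipe.start_call()
assert pipe.capturing and not pipe.sending   # warmed up, nothing transmitted
pipe.on_signaling(CallState.ESTABLISHED)
assert pipe.sending
```

The point is that starting capture early is purely a latency optimization; the privacy invariant lives entirely in the gate between capture and transmission, which is presumably what broke here.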
I never worked at Apple, but I built VoIP stuff for the past 20 years.
Ha! Facetime is now a non-causal filter.
They supposedly have these chips that make your phone much more secure and they can't get stupid stuff like this right? LOL ... GO APPLE. My trust level was already close to zero before this...
Maybe that's triggered by adding your own number? Since you're clearly on the call already, your own number is obviously going to answer immediately and that kicks the whole call into "active" (since you presumably want a call to become active when more than one person has answered) without considering that you've actually got A+A+B instead of A+B+C.
As a user, while I can accept capture starting before I answer, I cannot accept sending. I understand how it helps the speed of establishing calls.
But it means the only thing needed to spy on me from that is a software change ON THE OTHER SIDE. No way to know from my side if I'm good or not.
And this other macOS bug, also from last year, where the password hint would contain the plaintext encryption password.
All within a month of each other.
The vuln doesn't give you access to the actual accounts on the computer.
I think that if you hit Cancel there it would work just as well. You wouldn't get it logged into the domain, though.
So yeah, these types of vulnerabilities are very very scary.
So it's likely it won't ever be fixed.
That said, I also don’t agree that this bug should never get fixed either.
Anything that caches anything anywhere other than disk will be accessible: your memcache, your Redis, some databases, keychains, your non-userdata browser sessions.
In any case, half the examples you've provided are server-specific, and you really shouldn't be allowing untrusted physical access to your servers (nor running Xorg, to be honest).
Beyond this, the small but dedicated Slackware team is working daily to find and patch bugs when they do appear. You can look at the changelogs for examples of that workflow.
There's no such thing as a software project without bugs, but Slackware is consistently one of the most stable and robust OSes out there.
And I'll be honest, this is when I started losing faith in technology.
The bug could then be that the feed is sent over the call too early instead of being used solely for this local feedback.
In particular it happens when attaching external monitors while the screen is locked. There's a flash of the unlocked desktop.
I am guessing this is because the screen lock is an application drawing over top of the monitor, like XScreensaver on Linux does. A more secure-by-default architecture would have screen locking built into the display server at some lower level: If the screen is not unlocked, it will not allow the data to be passed to the GPU. It's easy for me to arm-chair architect though.
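The compositor-level lock idea can be sketched roughly like this. It is a purely hypothetical structure with invented names; real display servers are far more involved:

```python
# Sketch of lock-aware compositing: instead of an application drawing a lock
# screen over the desktop, the compositor itself refuses to submit any client
# surface while locked, so a race (e.g. a monitor hotplug) can never flash
# unlocked content, not even for a single frame.
class Compositor:
    def __init__(self):
        self.locked = True

    def frame(self, client_surfaces, lock_surface):
        # While locked, only the lock-screen surface is ever handed to the
        # GPU; client surfaces cannot leak regardless of timing.
        if self.locked:
            return [lock_surface]
        return client_surfaces

comp = Compositor()
assert comp.frame(["desktop", "mail"], "lock") == ["lock"]
comp.locked = False
assert comp.frame(["desktop", "mail"], "lock") == ["desktop", "mail"]
```

The design choice is that unlocking is the only way to make client pixels reachable, rather than relying on a lock window always staying on top.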
Audio only on the password screen, but all audible.
This is an outright lie.
However, I'm not sure a typical iOS user cares.
"Why does it take X seconds before I can start talking?"
To which the engineers possibly explained the reasons, and the product owner said:
"But I want it instant, let's bypass all this extra stuff and get a proof of concept instant answer working"
To which the engineer said:
"But we'd technically be sending data before the call has even been accepted"
To which the product owner said:
"That's okay, the user can't actually see that data, let us just get this in for now, we can worry about the security/privacy side later".
To which the engineer said "but, but, but", saw the product owner's eyes glaze over, and just made the commit:
Commit 1279: Remove very important security/privacy feature ensuring no data is transmitted until the call has been accepted. This is against my best judgement; do not come to me when this blows up, please speak to the product owner.
Then went to the pub in despair.
Maybe their code is a mess for orthogonal reasons - management, profit-motive?
Aside: I thought I'd heard devs have automated analyzers that step through and find all possible code paths, allowing complex code to be audited for security issues and such? Presumably that's how these sorts of bugs should be found in testing.
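For what it's worth, the kind of automated exploration being alluded to (model checking / exhaustive path enumeration) can be illustrated on a toy call state machine. Everything here, including the buggy "add_self" edge, is invented for the sketch:

```python
# Toy model checker: exhaustively walk a hypothetical call state machine and
# flag any reachable path where media is sent before the callee accepted.
from collections import deque

# transitions: state -> {event: next_state}; all names are invented
TRANSITIONS = {
    "ringing": {"accept": "active", "add_self": "active"},  # buggy edge
    "active":  {"hangup": "ended"},
    "ended":   {},
}
SENDS_MEDIA = {"active"}
ACCEPT_EVENTS = {"accept"}

def find_bad_paths(start="ringing"):
    """Breadth-first search for event sequences that reach a media-sending
    state without any accept event having occurred."""
    bad = []
    queue = deque([(start, [], False)])  # (state, path, callee_accepted)
    while queue:
        state, path, accepted = queue.popleft()
        if state in SENDS_MEDIA and not accepted:
            bad.append(path)
            continue
        for event, nxt in TRANSITIONS[state].items():
            queue.append((nxt, path + [event],
                          accepted or event in ACCEPT_EVENTS))
    return bad

print(find_bad_paths())  # the "add_self" edge reaches media-sending unaccepted
```

In practice this is hard to do on a real codebase, because the state machine is implicit and spread across many layers rather than written down as a table like this.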
People have to stop putting these types on a pedestal. Some of the least intelligent people I've known have worked for some very big names. You shouldn't trust someone based on who they work for or what name is attached.
And some of the world's very worst. There are not tens of thousands of world-class developers to hire in the first place, and they would be focused on much higher-level details than implementing basic features and maintenance.
That gruntwork requires solid, reliable workers with experience, but the current screening processes do more harm than help in finding that talent.
Also - maybe they inherited the code from a startup and never had the chance to refactor and so there was a mess from the start.
Either way, bugs happen. Of all kinds, even bad ones.
More concerning than the 'bug' itself is that it got through their tests. Their end-to-end testing should have picked this up.
This phrase is used several times on their current website:
Super easy and not remotely malicious. It’s a failed state check.
The actual bug here might be different but that’s an easy example. But it may also effectively be the bug since all the examples mention adding yourself to the call.
If you're curious, check out this flowchart slide from a Google I/O WebRTC talk:
Not really. The reason the PTT was virtually instant has nothing to do with optimization tricks; it is a purpose-built property of the network tech it was using.
Nextel PTT didn't go over a normal cellular network; it went over something called iDEN, which provides a trunked radio service with features similar to a conventional two-way radio. Sprint acquired Nextel, and as iDEN became less relevant with advances in cellular networks (despite those who actually used the PTT functionality), in 2013 Sprint shut the network down to reuse the spectrum for additional LTE bandwidth in the 800 MHz band.
It was obnoxious at times, but very different from the kind of privacy invasion represented by the receiver sending data without active involvement.
If you consider what happens when you add a third person to a call:
You start sending them everyone's audio!
That's the desired behavior and exactly what happened here.
Except the client should have probably checked if the call had been accepted first. That's why I say it's a state machine bug: The "Send audio" function should have never been activated in state "waiting to accept".
Not before they accept the call connection request you don't.
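That per-participant gate could look something like this. It's a minimal sketch with invented names, not FaceTime's actual logic:

```python
# Hypothetical sketch: when a participant is added to a group call, the mixer
# fans audio out only to participants whose state is ACCEPTED -- never to one
# still in INVITED.
from enum import Enum, auto

class Part(Enum):
    INVITED = auto()
    ACCEPTED = auto()

def recipients(participants):
    """Return the participant ids that may legally receive the mixed audio."""
    return [pid for pid, state in participants.items()
            if state is Part.ACCEPTED]

call = {"alice": Part.ACCEPTED, "bob": Part.ACCEPTED}
call["carol"] = Part.INVITED            # third person just added
assert "carol" not in recipients(call)  # no audio until she answers
call["carol"] = Part.ACCEPTED
assert "carol" in recipients(call)
```

The bug, as described, is equivalent to the newly added participant skipping straight past INVITED into ACCEPTED.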
Takes the engineering right out of software engineering if you ask me.
The Apple Watch walkie talkie feature allows you to talk with anyone who has approved you and has walkie talkie switched on.
But it appears to use FaceTime under the hood as when it was launched it failed if FaceTime was switched off on your phone.
So the “auto answer” mechanism is already in there, just getting triggered at the wrong time?
Apologies in advance for my ignorance, I haven’t written code for a long time.
I doubt there would be a specific test (or maybe there would, real testers are better than me at thinking of this stuff), but there should be logs for events like "microphone turned on" and "user joined group chat" and the testers should be monitoring those logs.
I've never worked at Apple, but neither have I in 10 years seen people truly appreciate any attempt I made at defensive programming. On the contrary, I usually hear complaints about it, sometimes loud ones: usually from fellow engineers when they see it, but on occasion also from non-technical people.
It's a joke that everybody is so sad about bad application security but at the same time virtually nobody cares about it at all when involved in an actual development project.
Classical case of NIMBY. FWIW, Apple's security is far above average compared to other companies. But I guess they can't isolate themselves 100%.
I believe they have been working on FaceTime Group for 5+ years now.
In 2015, Modern Family had an episode whose entire plot revolved around the family using FaceTime on their iPhones --- but they had the Group feature! I remember being blown away when I saw that, and figured it'd be coming in the next iOS update. It didn't. Four years later, they finally released it... then recalled it immediately.
edit: managed to find a short clip from said Modern Family episode: https://www.youtube.com/watch?v=vy3jUOBxQuI
Though in the full episode you see a lot more of the Group feature
Imagine a method for group calling that returns whether the call should be considered active by checking if the number of callers equals the number who have accepted the call. But now imagine the method that tallies the number of callers does a unique count by phone number, while the method that tallies how many people have accepted does not.
Since adding your own number is adding a caller who has already accepted a call, you end up with unique_callers (2, you and them) == total_accepted (2, you and you).
This could be tested by adding a third person to a group call twice (if iOS will let you do that) instead of adding yourself to the call.
Furthermore, if the interface on the recipient's phone only looks at whether or not they've accepted the call, that would explain why the call doesn't visibly auto-accept on their end and drop them into the call before they've accepted or rejected it.
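The counting mismatch hypothesized above can be sketched in a few lines. All names are invented; this is a guess at the shape of the bug, not the actual code:

```python
# Sketch of the hypothesized tally mismatch: callers are counted uniquely by
# number, but acceptances are not deduplicated, so adding yourself makes the
# two tallies match prematurely.
def unique_callers(callers):
    return len(set(callers))

def total_accepted(accepted):
    return len(accepted)          # bug: no dedup here

def call_active(callers, accepted):
    return unique_callers(callers) == total_accepted(accepted)

me, them = "+15550001", "+15550002"
# 1-1 call still ringing: 2 unique callers, only my own leg has "accepted"
assert not call_active([me, them], [me])
# Add my own number: still 2 unique callers, but now 2 acceptances (me, twice)
assert call_active([me, me, them], [me, me])  # flips active; they never answered
```

A `len(set(accepted))` in `total_accepted` would close exactly this hole, which is consistent with the repro steps all involving adding your own number.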
The steps to reproduce are trivial, so I'm shocked.
Anybody working in security will tell you the same. Piles of abstractions make it nearly impossible to find these bugs. You need a month of work to understand these codebases; often only the main developer has the architecture in their head.
That's a lot of confidence…
(people have complained about it approximately since it was released)
and I ain't gonna stop complaining until I find a suitable music player to replace it!
Desktop iTunes is a dumpster fire, but their iOS software is what I was specifically referring to.
Most of iOS is still great. The problem is all the cruft that has developed since around 7.0. It just seems like an endless march of features at all expense.
I think it's programming practices themselves that have fallen off a cliff, at least in the non-FLOSS world, and nothing specific to Apple. You can even see this in the dumpster-fire that's popularly known as "Windows 10", as well as in the latest versions of Mac OS X which are a lot slower and less user-friendly than their predecessors.
Yes. But it has nothing to do with pointers specifically, just the mindset and training of the average developer who has had experience with them, vs. the average developer who has not.
There's an entire generation of developers now graduating from CS programs, hiring into Apple, and getting dumped on these application teams with zero real-world experience and their only language being Python or Swift. The result is you have tons of brilliant people who can quickly whip up a DFS algorithm, but don't understand that using 4MB of RAM for a JPEG is unacceptable, or that whatever dynamic thing they are asking the runtime to do might not always work as intended. That's why we get these massive feature lists nobody asked for with every iOS release, and zero emphasis on performance.
There are going to be engineers that have to deal with driver-level code that know full well the limitations of memory constraints, thread overhead, etc.
No doubt you're describing the other half — the app engineers that use the API/SPI's. It might even be argued though that, given a well defined API, they should not have to worry about how much memory a JPEG requires ... the API decompressing the image only when rendering to the destination or what-have-you. Pointers, memory management should be managed by the low-level parts of the language or OS/kernel.
I happen to like the bit-banging, pointer walking, free-wheeling world of straight C but I don't begrudge higher level languages that are designed to tackle the more modern pressures of concurrency and "asynchronicity".
I’m not sure where you’re getting this anecdote from, because I have not found it to be at all true in practice.
It’s a nightmare for security and performance - the number of obvious, blatant security issues I’ve spotted and fixed just through luck alone is horrifying.
But coming back to your point, there have always been new engineers with weak skills, just like there have always been smart engineers as well. I don't think the choice of programming language changes this fact significantly, although certain languages may have a slightly higher proportion of inexperienced programmers than others.
As programming gets easier to learn, people spend less time learning programming. This has a number of negative knock-on effects, eg less understanding & focus on correctness, performance, security, etc. Obviously there’s lots of wider benefits too - but I suspect that the average person writing objective-c today spent more time studying programming than the average person writing swift today.
>My teen found a major security flaw in Apple’s new iOS. He can listen in to your iPhone/iPad without your approval. I have video. Submitted bug report to @AppleSupport...waiting to hear back to provide details. Scary stuff! #apple #bugreport @foxnews
Apple was reported a high-priority bug at a specific time. Who reported it, what they look like, and what their Twitter profile looks like should have no impact on Apple's bug-fixing process or how long they took to fix the bug.
However, I am extra sensitive to the degree to which twitter is being manipulated for all sorts of ends. Sometimes things look more than a bit fishy. Usually major bug reports don’t come from 2019’s version of egg avatar + letters/numbers username + very recent activity consisting almost entirely of political posts + past tweets with interactions with obvious political manipulation bots. That is on the stranger end of things, you have to admit. To be clear I think it’s real, but also real weird.
Personally I think a bug report story is not a particularly plausible strategy for such a thing - this person’s concern seems entirely genuine - but crazier things have been done for money. I’m relatively skeptical of complaints from companies about short sellers and bad press, but also recognize that stock manipulation happens a lot more than most ppl are aware of.
Imagine how many people are vulnerable out there - I'm already starting to read some complaints on the internet that some people were unknowingly sharing a video of them taking a shower, etc.
Yes, if options purchases were made before the market found out.
Yes, if their intentions were to change stock prices with the announcement in any respect whatsoever, whether up or down in price and/or in volatility.
No, if they were not intending to change the stock market with the announcement, regardless of the fact that they did. (Naïveté happens. So does unconcern. Still, No.)
Odd that this is so recent and the Twitter account is so fresh, with so many FaceTime sessions, especially with iOS being popular among those in infosec.
Ps: As others noted maybe it’s fair enough if support failed to acknowledge the bug quickly enough.
1) The entire world of iPhone users
2) The financial markets (Apple suffers)
3) The financial markets (non-Apple benefits)
4) The political sphere (distraction from)
5) Deniability (they got their recording and leaked the bug to deny how)
Linked tweet shows an email signature on a reply from “Deven” of “Apple Product Security”, and the sender's original message.
I am willing to give details and provide a home video that I took to show you the flaw, but would like to discuss this with someone prior to doing so... It is unclear whether Apple provides a reward program for non-... [cut off]
I'm curious how they can mitigate the reputational damage.
It gets worse:
If the recipient rejects the call by pressing the power button, it starts sending video.
You can root, but it's security-suicide because the OS' security model isn't designed for that.
> or use one without signing in with an Apple account
Technically yes, although you couldn't download any apps so you really don't want to.
> If not, then I believe the devices are still very much part of a mass-scale corporate surveillance network.
What I always tell people is to look at the economic incentives. Apple makes the vast majority of its money from paying customers, not advertisers, so it has less use for big data. It has also invested huge amounts of money in marketing itself as a champion of privacy, a key differentiator from its competitors. If it tried to do some shady data-gathering, it would have relatively little to gain and everything to lose when that inevitably leaked. Apple usually can't even keep the next iPhone a secret; there's no way they could conceal a massive conspiracy to secretly collect data on customers. And if they did, and it became public knowledge, their sales would go down the tube. It's just bad business. I don't trust any corporation to do what's right out of the goodness of their hearts, but I certainly trust them to do what's best for business.
It's not necessary to download apps from the App Store to use an iPhone.
I'm not sure what was the need for you to call me a liar. If anything I've said is inaccurate, I guess it has changed in the last few months, and I would love to know that.
So it's not correct to say "you can't use an iPhone without an Apple ID" in any sense. People should be SPECIFIC.
It’s not like iPhones have a reputation for not having bugs; it seems like every version has a passcode bypass or a DoS-via-iMessage. By some standards, this is worse (remotely triggerable, leaks audio/video), but in other cases it’s not as bad: the attacker’s Apple ID ends up in the call logs of the affected person.
Are there prior examples of any phone manufacturer being reputationally damaged by vulnerabilities like this? Heck, Samsung’s phones literally caught fire and they’re still selling phones just fine.
Oh I don't know, someone denied the call because they're possibly in the shower, or other inappropriate moments. Oh look now they're naked on a video call... Yikes!
Is there any historical evidence that high-severity bugs in iPhones (or really any mobile phone) are reputationally damaging, sufficiently that Apple would worry about the impact of this bug?
I'm not aware of any instance in the past where a high-sev iPhone bug had noticeable long-term impact. This is similar to other issues, like the Sony PSN hack, where despite the gravity of the issue, everything continued long-term as if nothing had happened.
That reads much clearer, you're right, I misunderstood what you meant. I think it depends, PSN isn't as personal as someone's potential unsolicited nudes being extracted by total strangers. If enough bad press came of it, it wouldn't be the same ballpark. I certainly hope nothing horrible comes of it.
I'm sure Apple "worries" about any bug and its potential impact on its reputation, particularly in the area of privacy, where it has a leg up on Android at least in perception.
That said, what historical bug is up to this one for iOS? This is a big deal and I cannot recall anything similar.
edit 1: this hasn't exactly been a banner couple of months for iPhone. You'd expect that mitigating any negative news about the device would be paramount
edit 2: look to Facebook. I find it encouraging that people have reacted so negatively to a company acting so cavalier with their personal data and privacy. Yes, I think Apple cares, moreso than with other bugs.
Where sending somebody a .tiff file via iMessage, web page, or email would give the attacker RCE on the device.
I also don't think it had the impact you're suggesting, nor would it be as immediately graspable to the layperson as a privacy issue.
The sad fact is consumers don't care one bit. The only people who care are people who have some interest in disliking the company who was breached, in which case they're likely not paying customers anyway.
Audio and video leaking without knowledge from such a personal device that people take with them everywhere is about as bad as it gets in terms of privacy breach.
If Apple fixes this bug this week, I will consider it a significant bug with a good response, and move on. If I wanted more privacy, what would I switch to anyway? Android? Ha!
Yes, I keep notes about things, some of which contain links. Obviously, these links used to work.
Anyone who has filed a few rdars knows it is thankless work. The amount of work you have to invest for anyone to even look at your bug is high. In the instance of this particular bug, I wouldn’t be surprised if at least part of the reason it took a week and a half to handle since it was reported was that the initial reply to the reporter was “please send us the exact steps to reproduce” and then nothing was done until the bug reporter replied back. I wouldn’t be surprised to learn there were even a few iterations of this, since I personally experienced it.
Then, your bug gets looked at. But you don't know anything about its status. Until anything from a few days to a few months later, it gets closed as a duplicate. Of course there is no way to know in advance that the bug was already opened, and that you could have saved an hour of work instead of making a minimal reproducible version of your app that demonstrates the bug.
At least, that’s been my unfortunate experience.
First of all, they convert the time automatically, as someone already posted, which is a very welcome surprise.
The second (and most surprising to me) thing was that they only have two statuses: "Available" and "Issue". And while most status pages have trained us to expect that the opposite of green is a red that immediately catches your eye, here it is a simple yellow diamond, which is much less alarming. I sense the PR department had its say in how to present service-availability issues.
I thought FaceTime was a P2P thing... but it appears group FT requires Apple's servers?
I also tend to agree with you that this is not necessarily related to these programs. The reason I mention this at all is because I think there is a reasonable chance that this is related, that this overall issue is very important, and that the amount of cognitive dissonance on this is surprising and regressive. These programs are real. Companies facilitating access, including real time, to your data and "private" conversations is real. And it seems to only be getting worse. Yet people seem to convince themselves otherwise, including throughout this thread. Part of the reasons these programs are able to carry on mostly unchallenged is because people convince themselves that what is happening, is not.
 - https://en.wikipedia.org/wiki/PRISM_(surveillance_program)#M...
Perhaps Apple should simply pull the plug on the FaceTime servers for now.
Is it just me, or is such a phrase applicable to Apple far too many times in the past several years? I think their engineering is losing quality or is falling behind on what they have to cover.
Couldn't the same statement be made about Facebook, Google, Yahoo, and other very large tech companies too?
When a company has a billion users, pretty much any huge flaw is going to have a very wide reaching impact.
I wonder if they (executives? engineers? the company itself?) could be charged with aiding and abetting wiretapping or something now that they know it's happening and are letting their servers keep doing it.
I didn't want to post his supposedly actual public email for fear of being accused of doxxing him.
This means even if you have his internal FaceTime/iMessage ID, you wouldn't be able to contact him because his account and yours exist in 2 realms.
Then again I guess he'd need an external-facing one for public VVIP to FaceTime him. Maybe he just carries 2 phones?