The iPhone X’s notch is basically a Kinect (theverge.com)
418 points by Tomte on Sept 17, 2017 | 224 comments



Fun story... In December 2012 I bought a PrimeSense Carmine, the Kinect for near-distance objects like a face ... mostly to play around with Faceshift (http://faceshift.com/studio/2015.2/introduction.html#introdu...) and (http://www.cs.ubc.ca/~chyma/publications/ur/2015_ur_paper.pd...)

Apple announced its purchase of PrimeSense shortly thereafter and the Carmine got yanked from public sale ... after that, Carmines were selling at a premium on eBay, I suppose for competitors to reverse engineer.

Been waiting for this to crop up in an Apple product...

Incidentally it is a shame an equivalent device is not available to hack on...


Same thing happened to me in about 2008, IIRC, with the AuthenTec fingerprint sensor.

We were working with AuthenTec's 72dpi parts and complaining that the resolution was poor. In an engineering meeting, they told us that they had a high-res large-format part entering production. ("Large format" meaning "as big as your fingertip"; the existing high-res parts were maybe 5x5mm.)

Amazing! we said. They agreed to send us some documentation and samples.

A month later we had no samples. They denied ever telling us about such a part.

A year or so later they were acquired by Apple and Touch ID was the result.


That is... cool. It's difficult for me to articulate further, except to say that's creepy in an awesome kind of way, if that makes any sense.


Bought by Apple for $360M in 2013. I've been wondering when they started thinking about this feature (or rather this kind of implementation of it). It seems that even when you have a lot of money and experience, there's still a pretty long way from an idea to its successful execution.


That matches pretty closely with the three year design lead period for the Neural Engine that Apple revealed.[1]

[1] http://mashable.com/2017/09/14/inside-apple-a11-bionic-and-s...


Money has little to do with it. You can only add so many engineers to a project. Therefore, some things just take time.


Aka "9 women won't give birth to a child in 1 month". An example often used to describe how adding more developers doesn't necessarily solve an issue quicker.


"What one programmer can do in one month, two programmers can do in two months." - Fred Brooks


Credit for this goes to Fred Brooks; it appears in his famous book "The Mythical Man-Month".


Yes it does. I could not find my copy to find the quote I wanted to use so I used my own (not quite as effective) words.


Didn't mean to imply you'd done something wrong, just added to your comment.


I wonder what they are eyeing up to buy at the moment and what will turn up in iPhones in 5 years time.

Some sort of AI accelerator chip. Or clear phone / heads up display.


> Some sort of AI accelerator chip

like the Neural Engine in the A11? :]

https://www.theverge.com/2017/9/13/16300464/apple-iphone-x-a...


Am I wrong, or does that article get many things wrong? These chips are not designed to do any learning, but to run pre-trained models (much like FPGAs a few years back, or Google's TPU).

The actual heavy lifting (model training) still happens in the """cloud""".


You are wrong. For one thing, the learning of your face, and learning over time as your face changes (glasses, hairstyle, beard) happens fully locally.


According to Wikipedia the acquisition happened on Nov 24, 2013, basically 2014


The iPhone X will be released in Nov 2017, basically 2018. So that doesn't change the 4 years


If you want something to hack on, check out the Structure Sensor (https://structure.io). While it's mostly iOS-focused, you can get a USB hacker cable and interface it with a computer. (Disclaimer: I work for Occipital, makers of the Structure Sensor)


I picked up an Intel RealSense video+depth camera a couple years ago to mess around with. Haven't done too much aside from some experimentation in TouchDesigner and playing with stuff like Z-Vector to use it in projection visuals.

But there are still a few depth cams out there other than the old Primesense ones or hooking up a Kinect to your PC.


There's probably a handful of acquisitions in this release cycle:

WiFiSLAM in 2013

PrimeSense in 2013

LinX in 2015

Metaio in 2015

Faceshift in 2015

Emotient in 2016

Flyby Media in 2016

RealFace in 2017

and probably more that aren't obvious to me.

It was reported by Bloomberg, funnily enough in an article framed as Apple struggling in M&A [1], that Metaio took a lowball offer:

>Apple often refuses to work with investment bankers appointed by the seller, preferring to deal directly with company management, according to people who have been involved in such negotiations. Apple also dictates terms and tells targets to take it or leave it, betting that the promise of product development support later and the chance of appearing in future iPhones are alluring enough, the people said.

>That was the case when Apple acquired Metaio GmbH in 2015. Bankers appointed by the augmented-reality firm to negotiate weren’t allowed in the room, and while Metaio executives felt the offer was low, Apple’s vision for the technology convinced them to sell, according to a person familiar with the discussions.

>Apple’s current M&A strategy works well for acquiring startups developing new technology that can be added to existing Apple products. It bought 15 to 20 companies per year over the last four years. But buying larger companies presents a different challenge, particularly if there are rival bids. Bankers often diffuse tension between bidders and targets, but Apple’s approach can make that process difficult.

>“There’s a swagger -- you may call it arrogance -- about the culture there,” said Risley of Architect Partners. “They’re used to being able to muscle their way in and get attractive economics.”

Which seems completely logical on Metaio's part. It's obvious a lot of these startups working on fundamental technologies are just going to toil in obscurity, and selling to Apple is a chance to have your technology deployed and used in the biggest way possible.

[1] https://www.bloomberg.com/news/articles/2017-02-15/apple-str...



>That was the case when Apple acquired Metaio GmbH in 2015. Bankers appointed by the augmented-reality firm to negotiate weren’t allowed in the room, and while Metaio executives felt the offer was low, Apple’s vision for the technology convinced them to sell, according to a person familiar with the discussions.

In other words, the offer was realistic.


It hasn't been confirmed but it's highly likely the X has a liquidmetal back. Apple's had an exclusive arrangement for quite a while and this would be the first significant use of it.


Isn't the back glass to allow for wireless charging?


Primarily yes but, to be clear, liquidmetal is a (metallic) glass.

So yes Apple's main reason is for wireless charging but they also want to differentiate where they can and liquidmetal gives them "the hardest glass in a smartphone ever." (Ive quote).

Analysis and background including a very relevant patent application made a year ago and published in March for "using Liquid Metal (Metallic Glass) for the Backside of an iPhone":

http://www.patentlyapple.com/patently-apple/2017/09/apples-l...


This is the reason that even the makeup, glasses, etc. that people have come up with to counter facial recognition are going to be completely ineffective in a few years.

Contemplate if you will a hallway immediately past Customs in an airport. Equip it with multiple sensors of this type to provide complete coverage and redundant imaging/sensing. While you're at it, set up gait recognition as well. Then correlate the received profiles with the passport/identity data of the people who just passed through customs.

Congratulations, you've just started building a database of everyone entering the country with biometric data that can be checked in the field with equipment costing less than $1000, and which can later be cross checked to find people traveling with false or multiple IDs ("this facial structure comes back as matching (person x) and (person y), and the gait is almost the same. Pick him up."), and it could all be done with technology that exists today.

Edit: autocorrect


I wish that, as technology which could be used for more Orwellian surveillance becomes available, someone would step back and think about ways it could be used for good rather than defending some absurd fear of terrorist attacks.

Like... I remember a few years back there was an article about using high framerate cameras to detect the heartbeat of people the camera viewed in hospitals. I wish surveillance could be used to watch people at risk for heart attacks, bring it to their notice if one is detected in the early stages, and direct them to their nearest hospital - in a way that involves a concert of technologies/devices.

Instead of CCTV cameras on every corner being used for criminals, use them to watch for health risks? Alert the person through their Apple Watch, send directions to the watch and phone, alert nearby first responders, notify family, etc... A route could even be established for an ambulance ahead-of-time, rather than as an ambulance rolls up to a signal.

I'm tired of worrying about terrorists. Maybe I want to be blind to the small chance a car bomb is possible. I know the most innovative stuff is often a result of defense spending/planning, but I hate thinking a driving force behind technology today is people afraid of other people.

</end-rant>


I also am tired of "Terrorism" as a life concern (we should be worried about more realistic concerns).

I will say however that the Apple Watch is now being (or soon will be?) used to detect heart problems and notify the wearer. So this kind of thing is happening.

Similarly crime watch apps do exist and could use the Apple Watch for notifications and safe navigation.

But if we had a crime detecting surveillance system there's no way to only use it to help good people stay safe. It would also be abused in ways that cause innocent people to suffer.

I'd just like to see a society that doesn't throw people into jail (ruining their lives) for trivial things. I mean, such societies exist - I'd like the USA to be one of them.


>I also am tired of "Terrorism"

Many people are concerned, but as a matter of fact, more people die every single day in car accidents. We as a society should not give up our freedoms (the right to be forgotten, the right to remain anonymous) for the illusion of safety. As the recent London, Barcelona, or Charlottesville attacks have shown, you only need one crazy person to carry it out for whatever reason. And even with a sophisticated tracking network you won't be able to stop this effectively.


think you're preaching to the choir here


That’s how you get 'em to sing.


OT: interesting you put Charlottesville in that list. Do we have any reason to believe James Fields planned to drive his car into a crowd of people that day? I think to call it a terrorist attack would require establishing it was planned / premeditated. From what I can tell the prosecution will be hard pressed to prove even a hate crime, let alone terrorism.


Planning has nothing to do with it according to the PATRIOT Act of 2001; however, Charlottesville doesn't seem to have been defined as terrorism. "The USA PATRIOT Act of 2001 defines domestic terrorism as "activities that (A) involve acts dangerous to human life that are a violation of the criminal laws of the U.S. or of any state; (B) appear to be intended (i) to intimidate or coerce a civilian population; (ii) to influence the policy of a government by intimidation or coercion; or (iii) to affect the conduct of a government by mass destruction, assassination, or kidnapping; and (C) occur primarily within the territorial jurisdiction of the U.S.""

Everyone's favourite source. https://en.m.wikipedia.org/wiki/Definitions_of_terrorism

However, far-right violence has a track record of not being labelled as terrorism quite as easily as attacks by Muslims. https://www.google.co.nz/amp/s/static.theintercept.com/amp/t...


That is not a meaningful distinction. A car attack planned 30 seconds in advance is as dangerous as one planned 3 weeks in advance.


I think it's a reasonable distinction--one basically can't be defended against, and the other can benefit from ubiquitous surveillance (theoretically) in advance.

The problem is that if random acts of violence are conflated with systematic terrorism, we grossly inflate the numbers in such a way as to be constantly crying out for further surveillance and erosion of rights. And when the attacks keep happening, because they're spur-of-the-moment lunatics, we will proceed to clamp down harder.

So, no, we have to draw the difference between the two.


The law certainly finds a meaningful distinction between spur of the moment versus premeditated attacks. Also a terrorist attack is carnage for a specific purpose, not just any carnage at all.

I'm not sure there's even any evidence it was planned at all (even 30 seconds in advance) versus a panic response to being under attack by a surrounding mob.

I'm not trying to defend any particular action, just that it does seem very distinct from, for example, a bucket of TATP and nails left in the subway. I personally am waiting for after the trial to draw any conclusion about Charlottesville.


> I'm not sure there's even any evidence it was planned at all (even 30 seconds in advance) versus a panic response to being under attack by a surrounding mob.

Oh, please. Take one minute to watch the (graphic, disturbing) videos from the scene and observe that, before the attack, the Charger was nowhere near the "mob" when he made the decision to gun the engine to plow through it. Fields started his attack run from well down the street, perpendicular to the marching counter-protestors.


> I personally am waiting for after the trial to draw any conclusion about Charlottesville.

Why? Unless you have some kind of power over the defendant, there's no reason to reserve judgement until after the trial. Are you unwilling to make a judgement about, say, Julian Assange until he's tried in a court of law? Did you reverse your opinion about Joe Arpaio when he was pardoned? I have a lot of faith in the justice system, but verdicts are wrong all the time, and should be one of many factors in how you view the world.


The purpose of a court of law, at least historically in the US, is to make sure that evidence is brought out and the situation explained and explored. We should not dismiss that lightly.


I'm not dismissing it, but unless you have power over someone involved, there's no reason to pretend you don't have an opinion until after the trial, or to unquestioningly accept the verdict. In this particular case, there's enough evidence already available to draw a nonbinding conclusion. Of course you should still pay attention to any new evidence or testimony in the trial, and don't dismiss the result if it disagrees with your early conclusions.

But it's ok to think OJ was guilty, it's ok to think Arpaio is guilty, it's ok to have an opinion about Julian Assange even though he hasn't been tried.


Terrorism is violent crime intended to further an ideological goal by intimidating opponents. The amount of planning involved is irrelevant.


Most attacks (more like "98%" of them) are prevented by intelligence before the crazy guy even leaves his room. At least, guys from local special forces share and confirm that in regular talks. If they lose all the tracks, they become completely blind. CCTVs and detectors are there only to calm people down and are somewhat useless in a statistical sense. If it has begun, you'll know it anyway.

These acts happen because intelligence made a single mistake in its day-to-day work, not because terrorists invented yet another way to do it. The key points are communication and identification. No one will crash into a crowd without talking about it with co-crazies. It's also naive to assume that they prepare actions only a few times a year. They really hate you, your lifestyle, your everything. If you turn your defenses off, car accidents will look like an insignificant loss within a few weeks.

Why I'm stressing this? Because intelligence methods are naturally contradicting your freedoms. Blissful ignorance should not drive a freedom train.


>Most attacks (more like "98%" of them) are prevented by intelligence before the crazy guy even leaves his room

[citation needed]


Citation of what, an officially classified terrorist do's-and-don'ts tutorial? Are you joking, or do you really think it works the [citation needed] way? No wonder the NSA etc. have fooled you for decades then.


So, did you just make up the number?


Most is not a number.

Upd: for more specific details:

https://thebreakthrough.org/generation_archive/effectiveness...

http://www.tandfonline.com/doi/full/10.1080/10242694.2011.65...

Just a few of dozens with proper citations and research, for those who need them.

This one is for the UK, known as very serious in police regard, afaik: http://www.bbc.com/news/uk-39183003

And finally, this one is world-wide. Scrolling through 2016 on mobile may be pretty enlightening or frightening, depending on your views: https://en.m.wikipedia.org/wiki/List_of_thwarted_Islamist_te...


> Most is not a number.

...You said "98%". It's right there.


Someone also said "free as in beer", but it wasn't beer. Your arguments on the point (not on misinterpreted words) are still welcome, though I'm pretty tired now that three different people have honked and got lost in the fog. Cool argumentation, dudes, keep it up.


So the answer is yes, you did make up the number. Thank you.


I assume you have no viable internet connection to follow the provided links, so quoting may be useful:

>Additionally, the CCTV has had only a 3-15% success rate in identifying suspects of crime, burglary, etc. Given the generous assumption of equating the cameras' ability to stop crime to stopping terrorism, there would still only be a 15% success rate at the most. Counter-terrorism methods must be 100% effective or people will be murdered. This seriously impacts the argument that CCTV surveillance (as it is being done currently) would be able to effectively stop terrorism.

Hope these non-made-up numbers clear up some details on the topic, my negative friend.


It could be abused in ways that cause innocent people to suffer, if we don't have the checks in place to prevent this. It could also be used to protect the innocent from being mistakenly charged.


Cue Richard Stallmanism: we need a stable open hardware effort to provide valuable technology to humans that actually need it.


Yes! Imagine some sort of AI camera surgery assistant that can help keep track of things like surgical sponges. Imagine a set of eyes watching your back at hour 6 of a long operation that never gets tired or frustrated.

It does create a different sort of "machines replacing humans" scenario. Instead of AI destroying us or replacing human workers because they are more efficient, machines replace us because they are just nicer people. They never get tired, hangry, frustrated. Superhuman levels of patience and empathy. I myself am a teacher, but I think I might prefer my children taught by TeachingUnit7000, who is just as excited about the lesson for the last class of the day as for the first.


Ah, you just reminded me of an idea I had when I was pursuing my bachelor's. Basically I wanted to use Google Glass to create surgeon accountability software. It would record surgeries and store the video, then use CV to look at shaking hands, technique, tools, and process. I figured it would allow us to correlate processes with recovery time and outcomes. Could you imagine the data we could have gotten? It would've been astounding. However, I went to the head of surgery in the med school at my university and was told in no uncertain terms that no surgeon would wear that or submit the recordings; in fact, it would turn into a malpractice liability. He just couldn't get past the harm that it would cause him.


I don't want to dismiss your idea, but it's also worth noting that the majority of surgical errors are decision-making errors, not technical errors. Most mistakes aren't anything you'll see on video; they occurred before the case started, when the surgeon decided to perform operation X when Y was more appropriate.


Yes, I remember watching Atul Gawande talking about reducing surgical deaths by some 30ish% after creating a checklist for surgeons to do x after y after z!


That sounds like a very general statement that could benefit from evidence such as the one that might have been provided by actual recordings of the procedures.


These technologies are only used to penalize surgeons and never to enable new things. Of course your idea would be rejected.


My argument would be: either use them now, while surgeons can still use the tech to their benefit, or be forced to by malpractice insurance later, when you can't reap as many benefits.


> help keep track of things like surgical spouges.

A simple checklist suffices for this. No need for AI camera surgery assistants.

cf http://content.time.com/time/health/article/0,8599,1871759,0...

> The study focused on six checklist items, all involving basic safety issues, [...] whether all the sponges used in surgery were accounted for after the procedure.


It might seem like an absurd threat to you, but 8 people were killed by terrorists near my office in London a few months ago, including one of my colleagues. I would also not like to see Orwell's book brought to life, but perhaps you can find a more sensitive way to express your view.


I will admit that terrorism does not affect me personally here - not in the direct way you have experienced. I am sorry if I seemed insensitive.

I feel that largely we don't innovate for the good of society anymore - only to gain greater control over unpredictable masses that can riot over a Tweet. (((But mostly to catch terrorists & pedophiles.)))


Can you help me understand what the purpose of the "echo"[1] in your second paragraph is?

1: https://www.adl.org/education/references/hate-symbols/echo


I was using it like an inside thought? - that's a weird reference to make. I chose an arbitrary number of balanced parentheses to signify an inside thought.


I don't feel it was that weird a reference to make, and the reason I asked was that I didn't want to jump to conclusions about its use.

Three parens surrounding words like "pedophile" is something that shows up a lot on Twitter, and is meant as a dogwhistle for antisemitism - just so you know and don't get caught off guard if someone assumes that's what you mean in the future.


> I wish as technology becomes available which could be used for more Orwellian surveillance someone stood back and thought about ways it could be used for good rather than defending some absurd fear of terrorist attacks.

The issue is the fact that a "binary test" has 4 outcomes.

The good ones: Test says "True X" and "X is true". Test says "False X" and "X is false".

The bad ones: Test says "True X" and "X is false". Test says "False X" and "X is true".

Some of these can be problematic depending upon what "X" is, what the response to "X" is, what the probability of "X" is in the population, and what the probability of the "false positive" of "X" is.

In medicine, this can pose a real dilemma.

At any individual test let's call the diagnosis rate 1%. If the false positive rate is about 1%, we are diagnosing one person who has "X" along with diagnosing one person who doesn't have it.
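Spelling out that arithmetic (a minimal sketch: the 1% prevalence and 1% false-positive rate are the figures above, while the 10,000-person population and perfect sensitivity are assumptions for illustration):

    # Base-rate sketch: 1% prevalence, 1% false-positive rate,
    # perfect sensitivity assumed for simplicity.
    population = 10_000
    prevalence = 0.01           # fraction of people who actually have X
    false_positive_rate = 0.01  # fraction of healthy people flagged anyway

    have_x = population * prevalence                                # 100 people
    true_positives = have_x                                         # all 100 get flagged
    false_positives = (population - have_x) * false_positive_rate   # 99 flagged wrongly

    ppv = true_positives / (true_positives + false_positives)
    print(f"{true_positives:.0f} true vs {false_positives:.0f} false positives; "
          f"only {ppv:.0%} of flagged people actually have X")

So roughly one false alarm for every real case, exactly as described.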

Not too bad, right? Ummmm ...

If "X" was heart disease, sure. Most of the interventions for heart disease are things like fix your diet, get some exercise, quit smoking, etc.

If "X" is breast cancer, not so much. Interventions for cancer start with something like a biopsy which can have lots of complications (hematomas, secondary infections, etc.). In addition, a cancer may not spread, or something else may kill the patient before if you left it alone.

So, the damage done in the response determines how low the false positive rate has to be.

In addition, there is a psychological aspect to a diagnosis that a lot of people overlook. A false positive for "heart issue" on your wristband can upset certain people so badly that it causes an issue.

And I haven't even started talking about the false negative rate problems. What will you do when people start saying: "Nah. I don't need to go to the doctor, my Apple watch isn't flagging anything."

This is why this kind of stuff gets deployed for "terrorism" first: there isn't any consequence to the manufacturers whether they're right or wrong, so they don't care about the relative probabilities. The fact that some random schmuck gets detained and searched isn't the manufacturer's problem.


Unfortunately the cynic in me says there's more money in keeping people fearful.


Criminals should be fearful, so that they think twice before committing a crime against an innocent.


I don't know that there's anyone out there twisting an evil moustache, counting the profits from keeping anyone fearful.

News outlets do fear-mongering for the same reason people write click-bait article titles - it works.

It works because humans are not that far removed from our ancestors - the primal buttons are the easiest to push and they work on the largest number of people.

Anyone whose job it is to compete for people's attention knows this very well. To pretend there are people far above who are 'bad' or 'good' is a coping mechanism to avoid seeing people and life for what they are :)


>I don't know that there's anyone out there twisting an evil moustache, counting the profits from keeping anyone fearful.

Doesn't have to be some actual singular person.

Businesses, media etc work as collective entities, alone and together, towards their goals, that is optimizing their environment for maximum profit -- similar to how our cells, which are living things in themselves, do.

If profit is the basic measure of success for an organization, it will reward those individual behaviors that help it grow -- to the point that a perfectly normal company like VW, made up of seemingly decent people, will create execs that lie about their pollution emissions, bribe politicians, etc. -- among tons of similar examples.

Similarly, it would try to expunge those in the organization that do moral or consumer friendly stuff that costs the organization money.

>Anyone whose job it is to compete for people's attention knows this very well. To pretend there are people far above who are 'bad' or 'good' is a coping mechanism to avoid seeing people and life for what they are

People that do "what works" because it brings money in, other considerations be damned, are bad, end of story.

Everything else is a coping mechanism to avoid seeing people and life for what they are (and for them to rationalize to themselves that they "just do their job").


You're perfectly describing capitalism's essence, with a cherry on top, labelling it 'bad'.

The point I was aiming for, is that people like what's easy, what's natural.

You're saying 'they're bad'. That's actually the Christian axiom from what I understand, that people are inherently sinful and bad. I'd like to suggest that feeling that way is a dead end.

In other words - I actually agree with everything you're saying, minus the 'bad' label. I label it 'I'd do it differently', which is subtle but very important. Labelling what most people around you are good with, bad, is simply going to make you a misanthrope [0] and that'll prevent you from leading the life you want :)

The point regarding coping - labelling people bad and good is an extreme simplification used for children. Once you're an adult, it's much more helpful to understand that what another does, you do in other ways, maybe much more subtle, and let go of harsh judgements. Judgements separate people into us vs them, which is what leads to most of the behaviour you're labelling bad in the first place.

It's a very subtle point, that most people endlessly miss.

[0] https://en.wikipedia.org/wiki/Misanthropy


>You're saying 'they're bad'. That's actually the Christian axiom from what I understand, that people are inherently sinful and bad.

Never said that "people are inherently sinful and bad". I said that (and I quote): "People that do what works because it brings money in, other considerations be damned, are bad, end of story". Which is a very different thing.

>Labelling what most people around you are good with, bad, is simply going to make you a misanthrope

Most people aren't profit seekers "other considerations be damned". And if they are, then they are bad -- doesn't even matter if they are the majority. You don't get a free pass on crapping on others because "most do the same".

>The point regarding coping - labelling people bad and good is an extreme simplification used for children.

Labelling a sad state of things as "natural" is an extreme perversion or an extreme manipulation. Human civilization is not based on what's natural. Some natural things and behaviors are good, others are bad. Civilization is all about making that distinction.

>Judgements separate people into us vs them, which is what leads to most of the behaviors you're labelling bad in the first place.

You gave me a $2 philosophy (and condescending at that, with the link to "misanthropy" to enlighten me of that "obscure" term), whereas I gave specific examples of behavior.

People promoting fear from their media outlets because "it sells". People who lied about emissions in VW's case. Etc.

Yes, we should absolutely separate people into "good" and "bad" for such offenses. I don't care if otherwise they are complex characters and "good people", e.g. great with their kids or tender to their adopted one-legged pet hamster, when their actions hurt society for their benefit (be it money, promotions, esteem from their colleagues, etc).


So you're saying fear-mongering "works." Specifically, in an attention-based market, this generates profits more successfully than other strategies. Or to put it another way, there's more money in keeping people fearful.


Nah, I don't think that's it. Fear is a feeling, which by default is not any more or less connected to "I need to buy this thing" than any other feeling. So why should fear encourage spending money more than any other feeling? How about the idea that a joyful person is much more inclined to spend money - but on different things. Just imagine a world without any fear, just with different kinds of markets and the same amount of money.


Joyful people don't spend much money because they're already satisfied. When people dish out for large homes, expensive cars and whatever else, they do it because they're insecure about something.

If you subtract the cost of our insecurities and fears from the cost of living, life is actually really cheap.


> Nah, I don't think that's it. Fear is a feeling, which by default is not any more or less connected to "I need to buy this thing" than any other feeling

It seems to me that a significant part of the press and TV news are selling just that - fear. It doesn't take long to find a word like "threat", example taken at random, on the first page of the NYT ("Shinzo Abe: Solidarity Against the North Korean Threat").


If you have fear, then you'd be more willing to give up certain freedoms to be rid of said fear. Freedoms such as privacy. Mark my words, there's definitely going to be massive facial recognition on the population, in the name of keeping you "safe".


>Nah, I don't think that's it. Fear is a feeling, which by default is not any more or less connected to "I need to buy this thing" than any other feeling. So why should fear encourage spending money more than any other feeling?

Because amassing stuff gives psychological relief from such anxiety ("shopping therapy").

The same reason people on midlife crises go on shopping sprees...

> Just imagine a world without any fear, just with different kinds of markets and the same amount of money.

Not all possible worlds have the same level of shopping activity.

It took heavy advertising and quite a lot of conditioning and pressure for early 20th century society to turn into the shopping culture we have today.


I've always believed that the price of freedom is that from time to time some of us are killed. Obsessing over making that number zero is essentially handing your freedom back, saying, "no deal..."


That is such a false-dichotomy. You can have freedom and safety at the same time, and need-not sacrifice one for the other.

The reason people say that one has to lose at the expense of the other is because of weird "freedom losses" such as privacy. Lack of privacy is not a loss of "freedom". Surveillance, again does not mean a loss of "freedom". That is, unless you're doing something wrong, in which case the system is behaving exactly as it should (identifying someone that is doing something wrong). Tech-people constantly talk about old laws having to "change" for the new technological landscape we live in. But we rarely talk about changing how crime and justice gets handled in the new landscape. And it is a discussion we need to have, because we're moving into territory where automated systems can very easily detect and identify the occurrence of crime. Magically moving into "surveillance reduces our freedom" because it catches all of the commiters of that crime, which never used to be the case until technology enabled it, is a disingenuous path to take. Just look at red-light or speed-trap cameras to see how it's already been playing out.

Sure, once we start going into territory such as mandatory curfews and what-not in order to combat crime then you can start saying that we're losing freedom. We're not even close to that, as much as that pains me because I think that a government that permits the existence of easy-to-stop violent crime in 2017 is a morally bankrupt one that is complicit to those crimes on some level.


Now why on earth would I want to help come up with better justifications for people who want to spy on me, just because their current justification - fear of terrorism - is irrational?

This seems like really screwy logic. How about we just don't spy on people? Or, if you think it could be used to help people, maybe start from a point of "how can we help people" rather than working backwards from "how can we come up with legitimate uses for mass surveillance".


I'm of the opinion surveillance will not decrease or stop, so we should find screwy ways of having it work for a societal positive - and not be a fixture in the landscape of the war on terror.


We need a sort of Geneva Conventions for privacy and unwanted intrusions on attention.

Creating something like a wide-scale database about individuals based on data collected without their consent or under duress could be treated, internationally, like the deployment of nuclear weapons in a first strike.

Likewise, we could charge people who deploy pop-up ads with crimes on the magnitude of a mugging. Botnet operators could be hunted with the zeal we currently reserve for drug smugglers and media pirates.


The US is already moving to overtly image the face of everyone visiting the country during passport control.

https://www.cbp.gov/travel/biometric-security-initiatives


This is already happening. People visiting the US and entering through airports have had their fingerprints and facial biometrics captured since 2004, and it's been progressively expanded since then. All visa applications have the same information captured at interview, too.

The current large expansion planned is capturing facial biometrics at exit, and using this to confirm departure (instead of relying on airline manifests).


But that's nothing compared to IDing people by their license plates, face, gait, etc. as they pull up to the airport, to catch them earlier.


People need to stop thinking gait is a useful metric. It's been disproved as an effective biometric, and is largely ignored by the (non-fraudulent) biometrics research community and companies offering products based on said research. There are simply too many trivial things that significantly alter an individual's gait to make it a meaningful metric.


Pretty sure PDX captures faces when exiting. They force you through a series of doors/lanes, and if you look up, there are two cameras (at least) per lane facing you. Since they're spaced apart I wonder if they're using them to make a 3D image.


They take iris scans I think - that's why you have to gaze into the scanner. Iris recognition is very accurate (they prefer better than 1 in 1000000 accuracy!), and hard to modify.


I fly into and out of PDX regularly, and there's no scanner that you have to gaze into. Parent is talking about cameras at a high angle in a hallway—no one is there to make you look up. Are you talking about the millimeter-wave scan in security? Your eyes can be closed while doing it—no one complains.


Yeah, you know what I'm talking about. They're fairly new (at least on the D/E side). They're angled in a way where they should be able to get your face as you're walking up unless you stare at the ground, but considering having your face scanned isn't a requirement for flying, they're probably fine with that - if they are in fact collecting this data.


Ahhh, not the same as the lanes we have, where you need to gaze into an obscured camera to pass inward through international border control (you must get a green light before the gate will let you through).


Global Entry kiosks require you to look directly into a camera a foot from your face. It's not a very good camera, but I wonder if that's what they are thinking of?


They already check fingerprints. I don't see how this would be significantly different.


As 'icebraining alludes to, fingerprinting requires consent or force. Face/gait recognition via camera does not.


This tech can be incorporated into public CCTVs.


Actually, this is literally 100% what Australia is looking to introduce in the next few years in its airports - gateless immigration.

https://www.theguardian.com/australia-news/2017/jan/22/facia...


Right now, if you hold a friendly passport, the only reason you stop at all for immigration is to answer "Have you been to Africa in the last three months?" and have your face scanned.

Customs will still delay you for not particularly good reasons.


You still need to go through the 'SmartGates' or whatever it is. They're aiming to completely remove all of that.


Is it because of some potential illness?


Remember reading how biometrics essentially ruined old-style spies: they can no longer get through airports. And once they get burned, they're out!

I am pretty sure that each country already runs a "same face but two different IDs" check. Ironically, the poorer countries probably have better tech since they started fresh; countries like the USA keep patching their decades-old systems.


Here's one foolproof way of countering facial recognition: a paper bag


Entering any place where there are armed security personnel with a paper bag on your face is a good way to get shot.


That and a passport under the name "Unknown Comic" should work just fine, right?

https://www.google.com/search?q=unknown+comic&hl=en-US&prmd=...


How about women wearing burka and niqab?


Does not matter, we already have high-resolution IR that images through clothing (and paper bags).


No face, no access.


Draw a face on the bag.


Women in societies that require face veils will be the only ones who can move around freely, which will be ironic.


No it won't. If you are not able to identify yourself you won't be able to enter[1] and will be deported.

[1] http://www.aljazeera.com/news/2017/09/brussels-airport-depor...


Not until they start putting QR codes on them.

Not that women in those societies can move freely; they must be accompanied by a man in most countries that do that to women.


But QR codes are dead? Isn't that what I've heard over the last 20 years?


Or those masks in mission impossible.


Worrying what governments will do with this is a little backwards. Worry what Google's and Facebook's ad networks will let their paying customers (including political campaigns) do with this.


As wary as I am of Google and Facebook, governments can arrest me.


Equifax has been showing us just how hellish private companies can make our lives, without any recourse.

The state may have a monopoly on violence, but private companies have more than enough power to ruin us even if they don't arrest us.


If a corporation chooses to, there are numerous ways they can ruin you, with little to nothing a human can do about it. The legal system is stacked to trust corporate entities far beyond the legal power of a single citizen or any family of private citizens that may try to defend themselves. Most criminals never actually attract the notice of a corporation enough for its interest to focus on them. They are very aware of organized crime, but there is little corporations can do about them. However, the economic combat taking place in some industries actually is the stuff of espionage spy novels.

This thread has a lot of sensor speculation about what-if scenarios. As a person working in FR and security, I can tell you we already have giant databases with millions of people in them. They are actively searched 24/7 with cameras located in public high-traffic areas. Private spaces are filled with them. Retail is filled with them. You already see the cameras everywhere and do not think about it. We have systems that can search 8 billion facial records on a single server in under 10 seconds; working with more realistically sized galleries, we have deployed and operating systems performing real-time searches of everyone in a sports stadium or retail mall, repeated 30 times a second for the duration of the public use of the space.


Facebook can affect elections to put candidates in office who will arrest you.


For what?


You're guilty of some criminal offense.


It will probably turn out just like the rest of the history of the internet - as a huge "public/private partnership" (/s)

FB/Google jump on the train and enable new possibilities you don't want to miss, meanwhile Government gets nothing done, so in the end the three letter agencies will just arrive at the doorstep snagging all data anyway (formerly known as PRISM).

I mean why go to the trouble of installing expensive hardware identifying anyone if they all incriminate themselves for free on Snapchat?

Welcome to the new world.


And I suspect as with all deep-learning based solutions to date, there will be subtle modifications to one's face that will serve as robust adversaries to all networks simultaneously. The phase space for inputs is amazingly huge, so huge that it is nearly impossible to cover all of it with a finite training set, doubly so if one has a gradient for locating such adversaries...

https://blog.openai.com/robust-adversarial-inputs/


I'm very worried about face recognition technology but don't think that FaceID will play any role in it. Apple is the only large company that appears to take privacy seriously. There won't be a big database with all FaceID information.

There are enough companies selling face recognition tech to states but Apple isn't one of them. So the iPhone X shows how much is possible but in itself won't change surveillance.


There are a few people doing studies on whether or not automatic passport gates in some countries are more likely to fail you if your mobile phone is switched off.


> Congratulations, you've just started building a database of everyone entering the country with biometric data that can be checked in the field with equipment costing less than $1000, and which can later be cross checked to find people traveling with false or multiple IDs ("this facial structure comes back as matching (person x) and (person y), and the gait is almost the same. Pick him up.")

I assume you believe this can be done accurately, or you wouldn't be suggesting this scenario. In that case—what exactly do you see as the problem here?


If it means I don't have to carry a driver's license or credit card or cash, sign me up.


Wow, is surrendering all privacy in public a fair price to pay for the inconvenience of carrying a wallet?


I'm perfectly fine with it. Including license-plate cameras on every street corner. 24/7 aerial surveillance and cell-phone tracking, too.

Why do I need privacy, exactly? Sure, privacy from being spied-on without reason by random low-level government employees, sure. But I don't care about privacy from automated systems, or systems that are used in criminal investigations after the fact, or during the fact. Also, I'm fine not having privacy from detectives and criminal investigators that are investigating crimes.

That distinction is an important one, though, and is usually lost in discussions of tech and privacy.


I was being facetious, sorry.


Ah sorry, misunderstood.


I would not want to be an in the field spy for any country right now or at any time in the future.


To some extent, technology is making "in the field" spies redundant. Instead of paying someone to break into a secure facility and steal paperwork, you can pay a hacker sitting in their home country to hack your opponent's data.


Don't think so. For many hacks you'll probably need physical access to a phone or computer.


Not to be alarmist, but how do we know this IR blasting won't be damaging to the eyes? A 3-second Google search leads to a 2011 study [1] which concludes:

> The protein of eye lens is very sensitive to IR radiation which is hazardous and may lead to cataract.

[1] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3116568/


In short, because an optical engineer at Apple checked that the emission was at a safe level, and if it wasn't they'd be shafted by lawsuits. Assuming it's a laser, it'll be Class 1. The emitter is short range, so the power doesn't need to be as high (though it will be interesting to see how well it works outdoors).

> Workers in hot environments, exposed to IR, developed lenticular opacities due to IR irradiance in the order of 80–400 mW/cm2 on a daily basis for 10–15 years

That is really high. The original Kinect has an output power (at the emitter) of around 60mW. The expanded beam is safe to look at beyond a centimeter or two (I think less, actually) due to energy conservation.

On top of that you're only being exposed for a second or two to grab the depth image, not 5 minutes.


The Kinect has a number of hardware interlocks as well, and will shut the laser down (without firmware intervention, because firmware might not be working) based on some hard-wired detections. The unexpanded raw laser in the Kinect, prior to hitting the hologram, wouldn't be great to look at.


Sure, in my post I assume it's going through the diffractive optical element (DOE). A 60mW, 940nm laser is absolutely not class 1!


The amount probably is low enough not to be an issue. If it needs to be somewhat high, they could use the output of the face detection from the visible-light image capture to avoid blasting anything at the eyes themselves.


I highly doubt the iPhone will be able to compete with the flux density of sunlight, at any wavelength, which we generally regard as safe.

(1400 watts / m^2) * 0.2 [ground albedo] * 0.5 [proportion in infrared] = 14 mW / cm^2 of infrared radiation looking at the ground on a sunny day.

(14 mW / cm^2) * 1 square foot [approx size of a face] = 13 watts

I'm going to guess there isn't a 13 watt infrared laser in the iPhone, or we'll be seeing some great hacks where people take the lens out and start setting things on fire from hundreds of feet away.
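A quick sanity check of that arithmetic (a minimal sketch; the 1400 W/m^2, 0.2 albedo, 0.5 IR fraction, and one-square-foot face are the figures assumed above):

    # Rough back-of-the-envelope check of the sunlight comparison, with explicit units.
    solar_irradiance_w_m2 = 1400.0  # roughly the solar constant
    ground_albedo = 0.2             # assumed fraction reflected off the ground
    ir_fraction = 0.5               # assumed fraction of sunlight that is infrared

    reflected_ir_w_m2 = solar_irradiance_w_m2 * ground_albedo * ir_fraction
    reflected_ir_mw_cm2 = reflected_ir_w_m2 * 1000 / 10_000   # W/m^2 -> mW/cm^2

    face_area_cm2 = 30.48 ** 2      # ~1 square foot in cm^2
    watts_on_face = reflected_ir_mw_cm2 * face_area_cm2 / 1000

    print(f"{reflected_ir_mw_cm2:.0f} mW/cm^2 reflected IR, "
          f"~{watts_on_face:.0f} W over a face-sized area")   # 14 mW/cm^2, ~13 W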


What difference does it make whether they know or not? AFAIK the TSA knows of at least 3 independent studies, done by different universities, of harmful radiation from backscatter machines; yet it didn't stop them from using those and making pat-downs more invasive so even fewer people opt out.

Unless some technology is capable of killing more than a few percent of the population, the government is the last to worry about statistics.


link to these studies?

Backscatter machines are considered completely safe, from what I understand. They use radio waves (not ionizing radiation), the same as phones/microwaves, and the waves don't even penetrate skin (or else they wouldn't be very effective, would they)?

You have nothing to worry about, medically, from them. I guarantee you're being bombarded right this moment with a similar amount of energy in a similar portion of the spectrum from Wifi, phone calls, bluetooth, FM/AM radio, GPS radio, etc.


I think GP is referring to these:

https://en.wikipedia.org/wiki/Backscatter_X-ray

whereas you're thinking of these:

https://en.wikipedia.org/wiki/Millimeter_wave_scanner.

They are competing technologies; the former employs ionizing radiation, the latter does not. I don't really know which ones are employed at US airports right now, but backscatter X-rays were employed at some point, which generated a bit of a media shitstorm a few years back.


Ah, yes, my mistake. I was aware the TSA used X-ray machines at some point, but I thought those were phased out long ago. I absolutely would refuse to get into one of those.


The linked article used 200 mW/cm^2 for 5 to 10 minutes. This is orders of magnitude more light exposure than an iPhone could create - its battery capacity wouldn't even allow it. The light emitted by the iPhone would be closer in power to what a normal TV remote emits.


Exposure of a few seconds is one thing, but when they demo'd Snapchat, it sure looked like they might be using a depth map to plaster a texture onto your face for as long as you're in the app.


The iris scanner on my S8 carried a warning not to let young children use it.

That was enough to put me off long-term use after trying it for a couple of weeks, until I can find something that explains the risks - or rather, the lack thereof - very definitively. I've attempted to search but have come up short each time.

The S8 iris scanner was super quick, very impressed by it.


I will not say whether it is safe for you to use or not. But the reason for that, and for many of the "don't let children near it" warnings, is that it's better to be safe and make it user error than to take a useless risk. In particular, I'm sure there are very few if any studies on the effects of that tech on children, for obvious reasons.


I was an early backer of the LiDAR Lite[0], and I'm really excited to see LiDAR products become more affordable. A recent project called Sweep[1] is the $350 super low-end disruptor to the $80,000 Velodyne models. I wonder how long before we have plug-and-play open-source projects for multi-sensor fusion[2] of cameras, LiDARs, and microphone arrays. A common digital trend appears to be subsidizing sensor quality with data volume and processing power. Are there projects for this now?

[0] https://www.sparkfun.com/products/14032

[1] http://scanse.io/

[2] https://en.wikipedia.org/wiki/Joint_Probabilistic_Data_Assoc...


The sensor in the iPhone X isn't LiDAR, it's structured light.

https://en.wikipedia.org/wiki/Structured-light_3D_scanner


Thanks, and I can't wait to see systems that will fuse cheap multi-pose structured light sensor data together too! These devices are all measuring proxies of the same thing: 3D structure.


Does Tesla still use LiDAR?


As I understand it they don't. I don't think they ever have.

https://cleantechnica.com/2016/07/29/tesla-google-disagree-l...


Have they started? I thought they insisted that cameras are good enough?


Cameras are definitely not good enough alone. Tesla's enhanced their camera systems with different models of automotive-range radar (most likely manufactured by Bosch; either the LRR4 or the MRR1).


Have you tried the Sweep? I cancelled my pre-order after seeing some bad reviews.


Isn't this tech more useful on the other side of the phone?

It is true that you may try to catch emotion from the user's face, but on the other side you can catch the world...


Used to work on the Kinect so I have some knowledge around it - not really. The lenses and power requirements to do more than a couple of feet away would be huge. Though I'm sure they'll eventually get it.


From cursory reading, laser illuminators can be ~50x more efficient than the IR LEDs in Kinect v2 (though v1, which they compare it to, used IR lasers too and didn't need active cooling like v2; it can also spread the illumination over a longer exposure than is required for v2's time-of-flight), and they illuminate a much narrower band, which allows you to use a more selective filter, so you have less background IR to overcome.

It still may be only good to a few feet out in direct sunlight, I don't know.


Having a phone-based ruler with a couple-foot range would be pretty handy. If you took a lot of samples as the phone bobbed around, how good a resolution do you think one could get?


The Google Tango devices have this. Unfortunately the tech has limited range and doesn't work well outdoors, since it relies on projecting an IR pattern into the world. The projected image is easily overwhelmed by ambient IR.


You're looking for Google Tango, then: https://get.google.com/tango/


On the other side, Apple gets depth maps with two cameras that have different lenses. Different distances to objects, different technologies.


One step at a time...the world will still be here tomorrow.


Probably, but a big problem with structured light is the inverse square law. You need a lot more power to recover depth of far objects -- this is why all the Project Tango phones are yuuuge and not selling too well.

There are ways around the inverse square law (such as projecting a single/few laser beams) but the hardware becomes slower and more complicated.
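To put rough numbers on that (a minimal sketch that only accounts for the illumination landing on the target; camera-side losses and the shrinking dot image make the real falloff even worse):

    # Inverse-square sketch: relative projector power needed to keep the same
    # dot irradiance on the target as the range grows.
    reference_range_m = 0.5   # roughly arm's length, as for a face scan
    for target_range_m in (0.5, 1.0, 2.0, 4.0):
        power_multiplier = (target_range_m / reference_range_m) ** 2
        print(f"{target_range_m:4.1f} m -> ~{power_multiplier:3.0f}x the emitter power")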


I think the new Kinect makes use of time-of-flight if I recall correctly (which they do indicate in the article). If I understand correctly, with time-of-flight you don't need to project a pattern as in structured light setups.


Yes, the current Kinect is a time-of-flight device. The original one was an "unstructured light" device, the generic term for those random-pattern-of-dots projectors. This is a takeoff on "structured light", where you project a known line pattern on a 3D surface and view it from another angle to get depth. Structured light systems are used industrially; the compute power required is low, as is the software complexity.[1]

[1] http://www.micro-epsilon.com/2D_3D/laser-scanner/


Yes, time of flight makes a direct distance measurement per-pixel. It's based on a phase shift principle; send out modulated light and compare the ingoing/outgoing phases. The phase difference is proportional to the distance the wave travelled.

The original Kinect is basically a one-camera stereo setup. The pattern is known and they perform correlation with a stored image of the pattern at a calibrated distance.
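A minimal sketch of that phase-shift relation (the 20 MHz modulation frequency is only an illustrative assumption, not the actual Kinect figure):

    import math

    C = 299_792_458.0  # speed of light, m/s

    def tof_distance(phase_shift_rad, mod_freq_hz):
        # The modulated light travels out and back (2 * d), and one full 2*pi
        # of phase corresponds to one modulation wavelength (c / f).
        return C * phase_shift_rad / (4 * math.pi * mod_freq_hz)

    # Example: a quarter-cycle phase shift at an assumed 20 MHz modulation.
    print(f"{tof_distance(math.pi / 2, 20e6):.2f} m")  # ~1.87 m

At 20 MHz the unambiguous range would be c / (2f), about 7.5 m; time-of-flight cameras typically combine several modulation frequencies to resolve that wrap-around.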


Wait - so I could just "blast" a suitably depth-adjusted copy of a face scan onto a white page and get the iPhone to unlock?

Basically use a surreptitious infrared camera, copy the face of a person, then project it onto a screen and I would mostly be good to go?

Alternatively, if I just plastered infrared dots on someone's face, face recognition would cease to work? So two iPhone Xs could be used to interfere with each other?


LOL... that sounds about 10x harder to do than lifting a fingerprint, which is the level of increased security Apple is claiming.

Actually on second thought I'm not sure it'll work. Your infrared ink dots will interfere with the projected infrared dots.


I assume the depth is measured using the two cameras and the dot projection. So you couldn't use a 2D copy that was depth adjusted (if that was what you were suggesting).

I also assume the dots are tracked to identify, so static dots wouldn't be considered, although it could cause problems.

I could see how using 2 phones could cause problems, perhaps they need to strobe them to avoid clashes.


"Just"


Fine. "Just" is hyperbole. But I suppose I could just interfere with logging in if I had two iPhone Xs and pointed both at a person's face.

I'm guessing the two sets of dots would overlap and confuse each other.


Actually probably not. At least not fatally. You can apparently use two Kinects simultaneously with no serious issues.


I've always thought that depth sensing tech could be used to help offload some of the nasty hacks floating around in the world of AR, computer vision, and spatial navigation. I just never realized how much it would speed up development until I decided to play around with an Orbbec Astra and PCL a few years ago... I'm not an Apple user, but I am glad that they implemented this technology in the iPhone X since it will spur other vendors to adopt this type of technology as well. Hopefully they will allow some form of direct access to the IR dot projector and camera along with DepthKit or whatever Apple decides to call it. Until that time, however, at least other projects like structure.io are attempting to bring depth sensing technology into the mainstream.


> Hopefully they will allow some form of direct access to the IR dot projector and camera along with DepthKit or whatever Apple decides to call it

And then the app developers upload the depth data to their servers and use it to track users, and then the servers are hacked and the depth data is taken by the hackers and then the hackers sell the depth data and then someone can use that data to unlock stolen iPhones. Sounds great /s


It's 1/100th the Kinect's size; that is the innovation, and it's a pretty big leap.


Size wasn't a requirement for the Kinect, so it's understandable that they didn't optimize for it.

I don't know if they could have, but for something that just sits somewhere in your living room... that's a totally different design mindset than something that needs to fit in your pocket and whose main real estate you want to spend on a screen.


Well, the Surface Pro 4 had the same thing 2 years ago.


Meanwhile everyone forgets Project Tango, which has a similarly sized sensor and structured-light projector...

The Apple "innovation" here is finding an application for it: basically Snapchat. (Because, and let's be really serious here, FaceID is not a good application.)

...and that alone just shows how off the mark this implementation is. When this comes to cheap Chinese Android phones in six months, it's going to cause another sales boom there, as parents will actually be willing to spend a few hundred dollars on a phone upgrade for a teenager, versus the thousand bucks for one of these Apple phones...


> FaceID is not a good application

Why not? More secure face recognition seems like a very good application that will be useful to millions.


Can you explain how?


The iPhone X doesn't have TouchID; if you want security and don't want to enter a password/PIN, you have no choice.


Using biometrics for security is hardly secure anyway.


Were there no patent problems? I'm glad about it, of course, but it just somewhat surprised me that so many companies can use the same basic principle and there didn't seem to be too much hassle with people suing each other. Then again, I might just not have heard of any high-profile cases.

The basic working principle is probably also simple enough to have lots of prior art, I guess.


If they bought the company, I would assume they bought the patents. I assume that MS had licensed the patents for Kinect.


In another thread I wondered if Apple had to license any tech from Microsoft, but got voted down.


Apple bought PrimeSense - the company whose tech MS licenses for Kinect. So no.


Wasn't the Kinect susceptible to infrared interference, such as from sunlight?

The question is, will there be issues with face unlock in bright sunlight?


I guess it's possible that Apple never considered the Sun when they were designing this.


Blasted edge cases!


Another one: if you see someone's phone lying around, just stare at it a few times and see if you disabled their FaceID without ever touching it :)


Apple hates Sun. That's why they built their OS on BSD....

/s


Wouldn't be the first time. Antennagate, Bendgate...


Apple in 2018: Ever since the beginning of time, man has yearned to destroy the sun. We shall do the next best thing: block it out.


They showed videos of people using it in bright sunlight as well as in the dark.

If they alternated between frequencies in some known sequence, it should be possible to differentiate signal from noise, but I have no idea what I'm talking about.


For what it's worth, I sometimes have trouble getting my Surface (one of the ones with Windows Hello) to recognize me when I'm in my apartment and the bright sun is shining through. I've never tried using it outdoors (not that the reflective glass would be encouraging of that), so I can't really speak on that.


I thought the MS Surface already had facial-recognition unlock, so how does it fare in sunlight?

My issue with facial recognition on the phone is: will it try to unlock if I am just reading notifications/texts? That's not something I want it to do.


I wonder if they licensed any of this from Microsoft. I know they share a cross-patent agreement on various things, and they bought PrimeSense, but that doesn't mean nothing was licensed.


Other way around. Microsoft screwed up Project Natal badly, spending >$1B to buy 2-3 companies in a row and still being unable to make the Kinect. In the end they were forced to license from PrimeSense, because the alternative was no Kinect at all.


Can you program it like a Kinect? Or is it essentially a Kinect running one program, which would make it a lot less like a Kinect? (Genuine question, I haven't been able to find any info.)


You can play with it using ARKit [0].

[0] https://developer.apple.com/documentation/arkit
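
For instance, here's a minimal sketch (not Apple sample code) of driving the sensor through ARKit's face tracking; ARFaceTrackingConfiguration, ARFaceAnchor and blendShapes are the real API names, while the class itself and the print are just illustrative:

    import ARKit

    final class FaceTrackingDemo: NSObject, ARSessionDelegate {
        let session = ARSession()

        func start() {
            // Face tracking needs the TrueDepth hardware, hence the capability check.
            guard ARFaceTrackingConfiguration.isSupported else { return }
            session.delegate = self
            session.run(ARFaceTrackingConfiguration())
        }

        func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
            for case let face as ARFaceAnchor in anchors {
                // face.geometry is a fitted 3D mesh of the user's face; blend shapes
                // are per-expression coefficients in 0...1, e.g. how open the jaw is.
                print(face.blendShapes[.jawOpen] ?? 0)
            }
        }
    }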


However, only a depth map is accessible to developers. The raw sensor data is not.
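
Concretely, what third-party code gets is something like the processed depth map via AVFoundation. A rough sketch (session configuration checks and error handling elided, so treat it as illustrative):

    import AVFoundation

    final class DepthGrabber: NSObject, AVCaptureDepthDataOutputDelegate {
        let session = AVCaptureSession()
        let depthOutput = AVCaptureDepthDataOutput()

        func start() throws {
            // The front TrueDepth camera exposes depth as a separate capture output.
            guard let device = AVCaptureDevice.default(.builtInTrueDepthCamera,
                                                       for: .video, position: .front)
            else { return }
            session.addInput(try AVCaptureDeviceInput(device: device))
            session.addOutput(depthOutput)
            depthOutput.setDelegate(self, callbackQueue: .main)
            session.startRunning()
        }

        func depthDataOutput(_ output: AVCaptureDepthDataOutput,
                             didOutput depthData: AVDepthData,
                             timestamp: CMTime,
                             connection: AVCaptureConnection) {
            // depthData.depthDataMap is a CVPixelBuffer of per-pixel depth/disparity --
            // that processed map, not the raw IR dot imagery, is what developers see.
            _ = depthData.depthDataMap
        }
    }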


When the Kinect first came out I immediately thought to myself "one day all the tech in this thing will be cheap and tiny, and that day will be a weird day". I know it's obvious, tech gets smaller and cheaper. No great insight there. I just think it's one of those small innovations that ends up having an outsized impact.

Think of it this way, what might be possible or normal once sensors like this are so cheap they come embedded in all OTS camera modules for even the cheapest of devices?

Pretty crazy right?


Interestingly enough, I made a similar comparison on IRC before reading this article. It felt similar to Windows Hello, which uses RGB, IR and 3D cameras for face detection, all of which are used for authentication. So I had already sketched out a quick bastardisation of the Kinect from this camera set in my head, and then applied that to FaceID as well.


As others have mentioned, the Kinect v2 uses time of flight to detect depth. I'd be more impressed if Apple had used that instead of replicating v1 of the Kinect.

Even though I'm an Android user, I think I'm more scared of Google catching up and incentivising high-res scanning of faces for some consumer application.


> Google's Tango technology ... which is also based on infrared depth detection

I thought Tango included hardware to process parallax and phone position so it could potentially work outside and in bright open areas.


I own a Tango phone and it sucks outdoors. The last time I tried was a few months ago, though.


Of course it is. Hardware comparisons aside, Apple has all the necessary components, sans gamepad, to create a Nintendo Switch-esque setup.


The depth sensor is cool, but I think a better feature would have been expanding the EM spectrum the phone can operate on. It would be nice to open garage doors, detect speed traps, or enable push-to-start on cars.


Steal car alarm codes with an app...


Eh, I was mostly sick of having to have a device to open the gate, a device to turn on my car, and a device to open my garage. What's wrong with 300 MHz? I'd rather not have to carry around all this crap, versus a depth sensor that I might use once as a gimmick to 3D-map my bathtub.


Radios and antennas. More weight and power for limited use cases. In Apple's and Google's minds you should replace your garage door opener with a connected one. You're talking about an SDR to allow multiple frequency ranges.


I personally would love to have an SDR integrated with my phone (and for operating in lower frequency ranges, just give me an antenna port next to the audio jack (oh, wait...)). But then again, a TX-enabled SDR in a popular consumer device? The FCC would not be happy...


If that car alarm system is stupid enough to have a static unlock code that can be replayed, it deserves to be hacked.


It's easy for them to say such things and devalue the engineering effort that goes into miniaturizing this tech and writing the software to match it.


Interesting. The way I read the article, I got the sense that it was emphasizing exactly that, by highlighting examples of how far the tech has progressed in less than 10 years.


Does anyone collate a list of startups (in newfangled tech) likely to be acquired by Apple? Would be good to work for a place where a nice earn-out is likely.


I'm sure M&A analysts at consulting companies are collating exactly this to set up their next pitch!


Wait, so THAT's what M&A do? I thought they simply facilitate transactions once management has decided who to acquire. Which firm does M&A for Apple? That would be a great job.


"We help clients ensure that their M&A strategy aligns with their broader corporate strategy. We identify and assess targets based on a client's strategic objectives, potential synergies, organizational and cultural fit, and the feasibility of a deal. To help the transaction proceed smoothly, we support clients in structuring the deal, communicating its rationale to stakeholders and markets, and planning for integration."

- McKinsey's website http://www.mckinsey.com/business-functions/strategy-and-corp...


Sounds like a pretty good job.


I wonder whether Apple should instead have spent their R&D time and energy on a front-facing camera sitting behind the OLED screen, so we could have a truly bezel-less screen.

An earpiece speaker behind the screen already works (a Chinese handset manufacturer has done this), and I don't need the other sensors and such for FaceID. I find a fingerprint sensor better anyway.


Apple has a patent on a camera integrated into the display. So it’ll happen at some point. There’s probably a team working on it in R&D as we speak.

How did you find TouchID to be better than FaceID? Did you get a pre-release version of iPhone X?


> Wondering if Apple should have rather spent their time and energy in R&D for …

I'm pretty sure Apple R&D is continuing apace :) They ship updates every year, so at some point during the cycle they have to decide what they'll ship in the next model. Figuring out which set of features to ship now that also keeps them on a path forward to a not-completely-decided future is quite a challenge, I'm sure. They've been pretty successful so far.



