To give you a real-world example: when I started BarSense (http://www.barsense.com) the core problem was tracking the path and velocity of a weightlifter's bar. I bought a PrimeSense camera because it can extract a lot more data, and with greater accuracy, out of an image than a regular camera. After some prototyping, I decided to use a 2D camera and deliver the software as an app because I thought wide distribution and ease of use were more important than the fidelity and correctness of the data - ie, the "worse is better" approach. When these cameras make their way into regular phones, "worse is better" will suddenly become "better".
As an aside, do you expect to monetize this in some way, or are you just doing it for the good of mankind?
I'm planning to monetize and I have a few ideas, hopefully one of them works out!
Sorry, today is clearly Pun Day.
But the trend I see is companies telling us to regularly buy stuff we really don't need (and obviously throw away our "old" solutions). The world has way more pressing issues than yet-another-gadget. And the planet's resources are not unlimited.
Or, if not doomed, it's not getting any better in areas that matter.
(People laugh at words like "doomed", assuming everything will be as it was when they were growing up. For some lucky ones that's true. For others the worst happens, like a financial collapse or a world war, and then they "knew all along it was going to happen").
True in the sense that if "enough people" believe the Internet's favourite scare-story of imminent-financial-collapse (growing in popularity ever since Y2K) then it will indeed by necessity finally happen ;)
People paid a trillion in the US alone, out of their pockets, to ameliorate it (plus close to another trillion they lent to Detroit). And tons of middle/working class jobs are not coming back in the foreseeable future.
And that's the US. For some European economies it is even worse -- they hit 30% unemployment and double the suicide rate in 3-4 years' time.
I don't see how ethical science relies on the advancement of technology, though. I'm not even sure what ethical _science_ is, to be honest.
That's 20% saving on global non-renewables. “Niche”?
Essentially this would make it much easier to represent the physical world digitally. But what use cases does a consumer or the average phone user have for digital representations of the physical space around them, particularly given that the user is already aware of the physical space around them? How can this sort of digital device extend our ability to interact with physical space?
Ever wanted a floor plan for your home? Just wave your phone around.
* Visual annotation
Add direction overlays, see the plan for a play your team is executing on the helmet HUD, highlight "dangerous" (weaving, too fast, whatnot) drivers on the car HUD.
* Integrate sensor data to extend human perception
Add an IR overlay. Sample sound across the room, do a volumetric display of noise levels. "see" the strength of your WiFi signal.
* Image post processing
You have a 3d map of an area, plus pictures of all textures - rearrange to your heart's content.
* Alternate Reality
Completely change the look of the world around you, just because you can. (Semi-useful application: Interior decoration. See that couch right in your living room before you buy it)
There are tons of applications there. It mixes the "reality" of physical space with the malleability of the digital space.
This is an unveiling of a technology but the "applications" they show in this video are probably a waste of time to most users. Google have not shown a killer app that uses this tech. Maybe they're working on something (they hint that they may be planning to integrate indoor mapping into Google Maps which might be interesting) but they're not showing it in this video.
That doesn't mean the tech is bad. But we haven't seen enough to judge it as useful to end users.
I liked this part.
But they won't! It is very difficult to get those people to post more than a low res picture of one room online when they can very easily walk through the house with a video camera, for example. Even large apartment complexes have 1-2 pictures on their website and call it a day.
The single most important thing, for me, is a nice and correct 2D floor plan. Add to that 5-10 well-chosen photos and you're done on the visual front as far as I'm concerned. If you haven't convinced me to at least go look at the house with that, no video or interactive 3D model is going to change my mind.
The big problem is that the information I consider important and the information the real estate agent is eager to share don't overlap all that much.
I have chatted with one of the agents about how they work, and no surprise - they handle everything by phone, because "uploading pictures to the website takes 24 hours."
I'm a Russian living in the UK and it still surprises me that what Russians consider essential information about a property (gross internal and net internal area, kitchen area and a floor plan) is so rarely present on the UK property sites. "Lovely 2-bedroom" is all you normally expect.
Sure, having this embedded in a smartphone would be more convenient, but I'm sure agents have some cameras when they go into houses.
In the meantime, I'm sure this will revolutionize something. It always does. I'm just in the same position as you ... a neo-luddite.
Also, I would much rather have a space elevator than a 3D mapping phone.
And then release it for VR applications/games (one can hope).
Is there something special about the house I'm pointing my phone at?
Who owns the daycare centre I'm pointing my phone at? Have there been any licensing / regulatory violations?
Paint a coloured path on the floor or wall to guide me to the dentist's office.
I'm old and can't see very well. Give me verbal directions to navigate (indoors) to get to the clinic.
I've got a wet spot on my basement ceiling. Highlight the outside of the house where water might be entering.
I've been toying with the idea of such software in a phone for a while but never bothered exploring it because I lack the technical chops. There's a piece of software I've previously used called PhotoModeller that allows you to calibrate a standard digital P&S camera then use a bunch of shots of the scene from various angles to build a 3D point cloud of it. Given that you can know the lens of an iPhone to a pretty close accuracy, I was thinking that you could build a similar application straight into the phone that then could upload 3d scans to dropbox. It'd be invaluable to field work.
This takes the above idea and loads it with steroids. I'm really excited!
It does just that and even lets you post the scenes.
You need situational awareness in a device to start using it to paint data onto the surroundings. Once you can do that, a whole world of applications unlocks.
But clearly PK Dick's self-aware advertising will not be possible until such ads can distinguish a human from a column of marble or a dog.
In short, the physical world can be treated as a user interface to computers.
There's a large technical distinction of course; in the case of the bat, getting the technology up and running to do this was sophisticated enough that the video I linked to spends a large portion explaining how it works. They used centralized computers and centralized sensors (echo-locators) to figure out where the device you are holding is within a known environment. Nowadays, we can put so much compute power and so many sensors in a device you are holding that it can figure out its own environment instead.
Imagine being in a complex refinery, factory, etc., where tons of infrastructure is hidden behind walls or a couple of rooms away. Just hold your device in front of you and peer through its virtual portal, through the walls, onto your hidden surroundings ...
Being able to know where everything is, along with an overlay of real-time status, etc. will be valuable.
For the user? Probably not much at the moment, if ever.
For a company whose mission is to collect, aggregate, and extract monetary value from every last piece of data in the world?
For a government that's interested in extending its awareness?
You can see a crowd moving on the streets, where each person is a box (or a 3d avatar)... then if other people have installed the same app as you, it will show an icon on top of the box representing the person (the phone sends a signal: IR, BT, whatever).. or they're geolocated by a central server that syncs everybody's position..
Then you can interact with those strangers on the street.. the 3d box you see representing the person may have more clues about that person..
So you can send a message to a girl/guy you liked and ask them to hang out with you.. for instance.. it's like a people radar..
It can work in traffic too.. so you can tag people in cars around you..
You can create a game, involve people you have tagged, and give each one a role.. like in an RPG
If you get feedback from the camera too, you can see people with 3d stuff on top.. like holding a 3d gun.. or a secret message on your virtual shirt.. it would work like a magic glass
I had this micro idea, a year ago.. this is the technology to make it work.. feel free to use it to create something
just call me for a beer later :)
This sort of system could provide audio cues in any environment. That's already a big step up from 'most environments'.
I think the end game of this technology can help people without a coach learn to lift. From what I've read about how people learn, immediate feedback is extremely important, as in within a couple of seconds. I'd love it if this took bar path and velocity information, and put it into a machine learning system. The app could watch you in real time and immediately tell you whether it was a good lift or what was wrong with it. Lift your butt up faster, or lift the bar faster, slower or whatever. I think you'd need to sit down with a couple good coaches and have them classify what's wrong or right with several hundred lifts to get your training dataset—but I would love to pay for this product.
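To make that feedback-loop idea concrete, here is a minimal sketch of the kind of classifier such an app might run. Everything below is invented for illustration: the two features (peak bar velocity and horizontal drift), the labels, and the coach-labeled numbers are all hypothetical, and a real system would use far richer data and a proper learning algorithm.

```python
# Hypothetical sketch: classify a lift from two made-up bar-path
# features (peak bar velocity in m/s, horizontal drift in cm) using a
# nearest-centroid rule over coach-labeled examples.

def classify_lift(velocity, drift, training):
    """Return the label whose training centroid is closest in feature space."""
    centroids = {}
    for label, examples in training.items():
        n = len(examples)
        centroids[label] = (sum(v for v, _ in examples) / n,
                            sum(d for _, d in examples) / n)
    # Squared Euclidean distance to each centroid; smallest wins.
    return min(centroids,
               key=lambda lbl: (velocity - centroids[lbl][0]) ** 2
                             + (drift - centroids[lbl][1]) ** 2)

# Made-up "coach-labeled" training set.
training = {
    "good lift":    [(1.8, 2.0), (1.9, 1.5), (2.0, 2.5)],
    "bar too slow": [(0.9, 2.0), (1.0, 3.0), (1.1, 2.5)],
    "bar drifting": [(1.8, 9.0), (1.7, 8.0), (1.9, 10.0)],
}

print(classify_lift(1.85, 2.1, training))  # → good lift
```

The point is just that once you have labeled examples, even a trivially simple model can give the within-a-couple-of-seconds feedback the parent comment describes; the hard part is collecting and labeling the bar-path data.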
You can focus on and isolate part of a rep to really understand how well YOU performed.
The problem I have is that I don't have a smartphone. Each cool app like this brings me closer to getting one though.
Can anyone suggest an alternative to this app that could work with video files? Or maybe something that I can stick to the barbell?
I was just having a discussion yesterday with a friend who works at Google about what data they store when you query their search engine. Every single keystroke, including backspaces, is stored. They don't just know what you ask. They know how well you can spell and know how well you type, not just in general but down to specific letter sequences. With this data, they can tell if you are regularly more impaired (fine motor control) at some times than at others, or if you're growing more impaired over time and match that against the content of your queries, etc.
"Phones that don't limit their boundaries to a touchscreen", meaning, we're not satisfied limiting our knowledge of you to just what we can extract from what you enter and how you enter it and when on a touchscreen. We want to know every step you take, when you sit, when you stand, how and where you walk.... SO much more data about you and your world that we can mine for treasure!
I'm not saying that Google is evil. My friends at Google certainly aren't. It's just that they are like kids in a candy store with unprecedented access to data and so many great, new algorithms for extracting information from it that they are just loving it, the way geeks would. But we're really going down a rabbit hole here.
Yes, google can know what your room looks like. Google could also know literally every place you go, every message you send, every transaction you make through a bank, every website you visit, every photo you take, every person you meet, when you use the bathroom, when you sleep, etc. etc. etc.
Who cares? If you don't like these services, don't use them. Nobody is forcing you to use this device.
Have you, or any of the other people on HN who repeat this ad nauseam, ever considered the possibility that the people running google are actually just hackers who got lucky enough to have the resources to pursue projects that they think are cool?
I mean... what would you do if you had google's resources? If I had google's resources I'd probably be doing exactly what they're doing. Things like trying to improve the broadband situation in the United States, protecting rhinos from African poachers, developing cool future-tech like self-driving cars, etc.
If you don't like modernity, stop using modernity. Move to a homestead in the pacific northwest, never use a telephone, never use grid power, grow your own vegetables, grow your own cattle, and keep away from the evil, scary spies at gooooogggllleeee who are trying to...uh...give you better targeted advertisements? Or provide you with better search results?
I think your view of the world is the lazier of the two--you're basically saying "let's not worry about the negative implications of the things we create as long as our intentions are good".
> Who cares? If you don't like these services, don't use them. Nobody is forcing you to use this device . . . If you don't like modernity, stop using modernity. Move to a homestead in the pacific northwest, never use a telephone, never use grid power, grow your own vegetables, grow your own cattle, and keep away from the evil, scary spies at gooooogggllleeee who are trying to...uh...give you better targeted advertisements? Or provide you with better search results?
Here's how your entire comment reads to me: "If you're interested in thinking about the future implications of the new things we're creating, please shut up and go live in the woods, because you obviously hate modernity. I'm really sick of people thinking about the future and how we might be creating things that will eventually hurt us. The market comes before foresight, and commentary isn't welcome; when you have potential concerns for the future, the only acceptable response is voting with your feet. Please do that and don't ruin our fun."
What you miss is that one can object to a single, potentially dangerous aspect of recent developments (personal information collection and exploitation) while embracing technology in general at the same time.
In creating new technology, it's essential that we examine the paradigms we're creating. If no one's doing that, we're almost sure to run ourselves into serious trouble sooner or later. There's a reason you look before you leap.
Which is lazy: choosing not to use awesome new technology because it has some potentially dangerous implications, or just using any new and seemingly convenient thing that people come up with and assuming it's got to be okay?
If you're worried about the information Google collects about /everybody/, and you're posting in /relevant topics/, I think you're accomplishing quite a lot. I think it's really important that when new technology is announced, interested parties discuss both "here are some exciting new possibilities" and "here are some areas for concern". In an impartial discussion forum, a product announcement shouldn't just be an excitement-fest.
> Do you think the people reading your comment don't already know what information Google has available?
This technology will likely result in Google and others collecting and having access to new sources of information about people. I think that many people reading the announcement might not immediately think about some of these implications, and I think it's a great time for review and examination of the topic.
> Do you think Google's going to change its practices because you complain?
I think some engineers and others working on this project and other projects at Google probably read HN, and I think comments here could have an effect on how they build this product and what data is eventually sent to the company. Reading these concerns might be a good reminder to reflect on these things from time to time as well.
> I think the grandparent's point is to focus on things that are actionable.
Action without reflection is a dangerous formula. Google is taking action by building this and announcing it. What we do at HN is learn, reflect, and discuss.
I also don't believe the answer is simply not using the services. That's like saying if you don't like the government monitoring your phone calls or location, you should just stop using phones.
I'll put down my vegetable hoe and ask you this. Given that these are my friends, neighbors, and former coworkers I'm talking about at Google, is there any information about you that you would be uncomfortable having your friends collect and store in a database for unknown parties to eventually analyze? If there's some limit beyond which you might say, "Well, that's going a bit too far," and that caused someone else to fulminate over your backwardness, what would you say to them?
Oh, well, gotta send this pigeon off and get back to my bunker before the drones catch me out in the open....
Even twenty years ago, if you called a mail order catalog company, and it was the first time you'd called them, the sales rep on the phone already had your number (from commercial caller ID), your name (either "Bob Smith" or "Sally Smith"), and your household's credit card numbers before you even opened your mouth. They would still ask you for them for security reasons, to see which of your cards you wanted to use, to double-check accuracy, and in order to NOT freak you out, but companies you had never heard of already knew your name before you ever called them.
But you think Google can't figure out my name?
I'm not saying that his comment was a sound rebuttal to yours or anything; just noting that he was talking in the context of "your friends, neighbors and former coworkers" accessing your data and you're talking about "Google" accessing your data, and those are potentially very different things.
Given that he said that, yes, he probably has considered it.
You're reacting reflexively here.
There's ABSOLUTELY nothing wrong with considering the potential problems along with the benefits. In fact, it's extremely shallow not to.
You just lazily suggested that these people stop using these products/services, so I suggest you stop reading comments "like this."
> Almost as lazy as "remember, if you're using a service for free, you're not the customer, YOU'RE THE PRODUCT!"
By the way, what do you think Facebook bought yesterday for $16 billion? Not a product, but users.
My theory as to why people act this way is that they do so to give their lives meaning. That, or there is something that makes their life meaningful they believe the "enemy" will take away. For instance "Google spying on my children" -> "Google is the enemy". They see Google, the NSA, Microsoft, etc, as a cohesive group that is "after them".
But in absence of this conflict, or the external meaning that is the cause of the (perceived) conflict, they wouldn't know what to do with their own existence. Religion, alcohol, and hedonism may end up filling this gap.
In contrast, those who have a self-defined objective see the world as a massive system of problems to solve (or not solve). There's no singular "enemy", just a huge number of people who all have their own motivations and ideals. "Google" is nothing more than a tightly-bound collection of humans.
> "Google" is nothing more than a tightly-bound collection of humans.
Yes, and collections of humans tend to exhibit emergent properties / behaviours which are not necessarily ideal in all respects.
Discussion of these properties / behaviours, and their desirability, is not a bad thing.
> They see the world in black and white. "Us" and "them".
This is humorous to see in a post expounding a false dichotomy between the "two" types of people in the world.
I cannot dictate to my friends. Should I disallow them from entering my house with their shiny toys? Should I nuke their wireless connections to make sure they cannot transmit live? Privacy is a societal issue that cannot be fixed by ignoring it.
Have you considered that making this impression on gullible people has been part of Google's schtick from the beginning?
I suppose your comment has, if nothing else, the virtue of being relatively novel compared with either the comments of those unaware of the privacy implications of much modern technology or those of people concerned by them. People who are largely aware and untroubled do seem to be in the minority.
> This view of the world is just lazy. Almost as lazy as "remember, if you're using a service for free, you're not the customer, YOU'RE THE PRODUCT!", which seems to get parroted every single time there is any discussion related to google, facebook, microsoft, instagram, twitter, or any other web company that uses ads as a revenue model.
It appears to be, if not 100% correct, a generally good heuristic, and a way of pointing out to people that the costs of participating in nominally free services are often hidden. Perhaps you could explain how this is lazy.
> If you don't like these services, don't use them / If you don't like modernity, stop using modernity.
Whew! Simple binary options. It's all clear now, and anybody who's thinking about this can safely choose one without worrying any further (or being accused of being... lazy, right?).
Or maybe some of us "lazy" people would like to explore the possibility of having a world where we can have the benefits of the technologies and minimize the problems.
> Have you, or any of the other people on HN who repeat this ad nauseum, ever considered the possibility that the people running google are actually just hackers who got lucky enough to have the resources to pursue projects that they think are cool?
I suspect many of them are. My acquaintances who work there seem to be.
But perhaps you are familiar with the aphorism "the road to hell is paved with good intentions" (though perhaps this is something else you consider "lazy"). It doesn't matter what motivated those who pursued a thorough knowledge and practice of various nuclear fission techniques, the fact is that they have various drawbacks (some pretty serious) that aren't a matter of individual opt-in / opt-out. This is true of technologies that extend legibility of personal spaces and activities.
> If you don't like modernity, stop using modernity. Move to a homestead in the pacific northwest, never use a telephone, never use grid power, grow your own vegetables, grow your own cattle, and keep away from the evil, scary spies at gooooogggllleeee
I think I've seen this suggestion before, but less in earnest:
I wouldn't even say laziness as much as simple-mindedness. Apparently there are a decent number of people (at least on HN) who actually read stuff like this, and the word Google/Facebook/Yahoo/what-have-you is the only thing their brain apparently pulls out of the content. It's staggering in its vacuousness, but I wouldn't go so far as to blame them for being lazy.
My problem with the whole thing is that you get sucked in and by the time you realize what's going on, it's too late to turn back.
You get tempted by the amazing gadgets and services, somehow your whole life ends up in Google's computers. You think it's just a bunch of geeks like you. Then Snowden blows the lid on the whole thing and now you're in some Philip K Dick story.
The reason it doesn't happen is because corporations don't like giving up control of data that they can keep to themselves and store forever.
Many of these apps could run clientside and access the cloud solely for information, but that doesn't fit in with the Google vision of organizing the world's information and making it useful for selling products and services to advertisers.
Data retention and collection practices that prevent widespread spying must be codified into law for this to ever work. Companies, Google being the shining example, (but Apple and Facebook not far behind) will simply not enforce these boundaries upon themselves, because the data is just too valuable in the context of future algorithmic analysis.
Also, it's not a "slight chance". All these major innovations HAVE been used for evil, from mobile phones to wide-area networking to the web. It's not just fearmongering.
This is precisely my point: these technologies have been used for evil, but would we be better off NOT having them in the first place? Everything has been used for evil, including pencils. If we lose sight of perspective and only play on the fears, then it's very much fearmongering.
I agree with you on your solution of decentralized collection being better. But I would argue that data analysis involving many data sources, including yours, is what makes a lot of the services being built USEFUL. Google Now being a good example of that. I would also argue, that targeted advertisement is much more useful to me, than non targeted advertisement.
The solution is to stop centralized data collection.
Similarly for "Mark as Spam", Priority Inbox, Recommended Videos on Youtube, Voice Recognition on Android, etc.
Note 1: Yes, you could also do a pretty good job by having a model of your problem, i.e. computing a weighted Levenshtein distance where the weights are the probabilities of making that error. However, I'd argue that this would still be better with centralized data; you can compute much better probability vectors. And regardless, the best solutions in the field will come from combining both.
Note 2: All of the above is speculation. While I help write some of the tools that these guys use, I have no knowledge of how they write their software. This is just how I'd do it.
Auto-correction for a user's contacts could probably be done on-device, although I'd guess that machine learning across all users will probably massively improve your success rate. Consider an ambiguous correction: you accidentally type "Gob", but have contacts named "Rob" and "Bob". I imagine that ranking the suggestions can be improved using a globally trained model.
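As a toy illustration of that ranking idea (not Google's implementation, just how one might sketch it): compute plain Levenshtein distance to each contact, then break ties with a hypothetical globally-learned frequency table. The frequency counts below are made up.

```python
# Illustrative sketch: rank correction candidates by edit distance,
# breaking ties with a made-up globally-aggregated frequency table.

def edit_distance(a, b):
    """Classic (unweighted) Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def best_correction(typed, candidates, global_freq):
    """Closest candidate; among ties, prefer the globally more common name."""
    return min(candidates,
               key=lambda c: (edit_distance(typed, c), -global_freq.get(c, 0)))

contacts = ["Rob", "Bob"]
freq = {"Bob": 900, "Rob": 300}  # hypothetical aggregate counts

print(best_correction("Gob", contacts, freq))  # → Bob
```

Here "Gob" is edit distance 1 from both contacts, so the on-device model alone can't decide; the globally trained signal is what breaks the tie, which is the parent comment's point. A weighted distance (Note 1 above) would replace the flat `+1` costs with per-error probabilities.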
Data is what lets us keep hospitals safe by establishing best practices, lets us cure diseases, lets us avoid unnecessary treatment, and just in general, separate fact from fiction. If we can't aggregate data, we can't do those things nearly as effectively as we could, and simplistic views of privacy have prevented this from happening.
There are plenty of concerns about data, and plenty of ways of mitigating those concerns, but a blanket condemnation of any centralized data would hold humanity back immeasurably (and in fact, already has.)
This line of reasoning seems similar to arguing that, by not creating a society in which 95% of people live in sanitized hospital rooms and the remaining 5% of people are doctors and nurses, we have "already caused many deaths".
Yes, the fact that people value things like "privacy", "being in large groups where others could be ill", and "moving from place to place" will "cause deaths". But that doesn't mean individuals should be forced to abandon these things for better health outcomes. While some benefits could certainly come from centralizing medical records, I tend to think that benefit cannot justify compelling people to keep their records in a centralized database, in the same way I don't think the health benefits of not traveling justify banning travel. You may not value privacy, but I think that in a truly inclusive society that values the individual, privacy should remain a real option for those who value it.
> There are plenty of concerns about data, and plenty of ways of mitigating those concerns, but a blanket condemnation of any centralized data would hold humanity back immeasurably (and in fact, already has.)
I agree that a blanket condemnation of centralized data is also not a good approach. Each individual should be able to make this choice in a meaningful way, with respect by default that any given person might care about it.
The thing I find most aggravating about it is that the standard for harm for data seems to be "What could a totalitarian government do with it?" and there are very few useful things that couldn't be used for very bad things in the hands of a totalitarian government (newspapers, for instance.) Meanwhile, companies can't reveal all the useful things that are consequences of their data because that makes them vulnerable to both competitors and spam.
So we're pretty much stuck with only uninformed opinions and worst-case scenario analysis, which isn't a rational way to approach anything. The only way I can think to improve the debate is for privacy advocates to focus on actual harm that has actually happened to someone to at least keep things grounded in reality.
Some people would gladly share medical information for the greater good if asked. You'd have to pry it from the cold, dead hands of others. The problem is that we currently don't separate those who would like to provide their information from those who would like to refuse because we're greedy for as much information as we can get, or we don't trust people to make the "right" decision.
A system that worked well and that accommodated both views would be one that really respected a user's choice--one that could handle data in a centralized or individual fashion, accordingly. Data that was willingly given could be used under the terms of the agreement without risk of angering people who don't want their data to be analyzed. It would ultimately provide the same benefits while respecting individuals and not creating controversy.
In the medical records situation, you'd have a group of people at one end that would be quite pleased to contribute their information, a group at the other end that would immediately decline, and quite a few people in the middle who would likely fall somewhat evenly to either side. Sure, some people would probably be fear-mongered out of sharing, but I imagine there'd be a lot less of that going on if people knew they could actually make a meaningful individual choice for privacy if they chose to do so. When opt-outs are buried, hidden, and it's not clear that they actually work, skepticism and fear grow. In a system that doesn't try to railroad people to sacrifice their privacy--a system that actually respects the individual's choice--fear would be reduced.
The only way to cast questions around this type of privacy as "a debate" involves changing the issue to compelled sharing of personal information. In a debate over forced participation, I think it's pretty easy to see why thoughts start to drift towards totalitarian concerns.
I think a system that offered real choice for each individual may drastically reduce the current problems from both perspectives, at the price of some engineering overhead.
There isn't any coercion going on with any of the things we're talking about. Every technology product has at least the choice to not use it.
It's quite difficult to ask meaningful questions about what users are comfortable with, get meaningful answers, and then figure out how the answer they've already given applies to a grey area situation where the cost of getting it wrong is a lawsuit. It becomes no longer sufficient to treat the data with respect and only use it for beneficial and privacy respecting purposes. You now have to constrain it by another set of rules whose relationship to what's actually happening can be unclear and arbitrary.
Some products work without storing any data. Most don't. For those that require user data to fulfill their basic function, the engineering cost of exempting certain data from certain systems can be much higher. One programmer screwing up becomes a lawsuit.
This is essentially the situation with HIPAA. Everyone is too worried about liability to do anything innovative, so that sector doesn't improve.
If someone is actually harmed by something a company does with user data, then it's entirely appropriate to stop patronizing that company, or to claim damages through all the normal routes. The presumption of not trusting anyone with data in advance of any actual harm is what I object to. Data can do real and permanent good in the world, and some companies are worth trusting (particularly since all of their incentives are to remain trustworthy if they want to continue to exist.)
But that's exactly the problem: many of the modern technologies that pose potential risks to privacy don't in practice provide an opt-out for the people whose privacy they might infringe.
Sometimes, you do have a choice. However, if you don't know about it or understand the implications, you can't make an informed decision.
Sometimes you don't get any meaningful choice, for example if governments decide it's OK to share sensitive healthcare information now. Strictly speaking you do have a choice, but that choice is never to visit a doctor or hospital. Try contrasting the dangers from a significant reduction in public trust in the integrity and ethics of the entire medical profession with the hypothetical future benefits of analysing aggregate healthcare data, and let me know which one really seems like the bigger risk.
Sometimes you don't get any choice in practice because your data is collected incidentally. When you were in the background of someone's personal holiday snap, that didn't really matter. When you're in the background of a CCTV image, which is centrally recorded and subject to future data mining operations, it matters more. When you're in the background of numerous CCTV images just because you left your home, which are subject to geotagging, facial recognition, gait analysis, covert audio recording, correlation with other databases such as mobile phone history, ANPR scans and purchase history, permanent archival and any additional data mining techniques that anyone who gets hold of the data might find later... Well, now you're in the plot of a sci-fi short story that doesn't end well.
Except that of course, it's not a story any more. Insurers already bump premiums based on profiling, but that profiling is notoriously inaccurate. Lenders already check credit records, which again are notoriously inaccurate. Employers already not only Google job applicants but in some cases also ask for personal log-in credentials to read through their social networking history. Governments already sell personal data held for legitimate public interest reasons to private parties, and even in seemingly simple cases like the government's vehicle licensing authority in the UK providing details of the registered owner of a car with given plates, this has been widely abused. Where these things have been curtailed -- which doesn't happen nearly as often as it should -- it has mostly been because primary legislation was passed or the rules for government's own departments were updated to cover specific cases, and only after so many people suffered from the intrusion that it became a politically significant issue.
To be clear, I don't object to the idea that there are potentially great benefits to be had from data mining, including in sensitive cases like public health data. But I think you are almost completely ignoring the accompanying risks, despite a seemingly endless stream of failures resulting in serious adverse consequences for individuals whose privacy wasn't adequately protected. We need the rest of how society works to catch up with the capabilities of modern technology before we can reap the benefits without paying too high a price.
Such as? Typically it seems that things become a scandal based on hypothetical harm rather than actual harm.
Of course, in reality it's often difficult to prove that a specific outcome was the result of a privacy invasion. It's not like insurers or employers are going to document that they discriminated unfairly, whether illegally or otherwise, in making their decisions. But we know all the things I mentioned can happen, partly because too many times there have been cases where real evidence was seen, and partly because in some cases incentives are aligned with poor behaviour and it's just plain naive to think it won't then happen if there's nothing to balance those incentives.
Privacy is important because it removes the ability to make those unfair decisions in the first place.
First off, let me clarify what I'm advocating for so we aren't talking past each other. I would like the public to be less skeptical of organizations collecting large amounts of data and storing it to analyze in aggregate for a variety of purposes, particularly if access to the data is controlled and it is only used in a sufficiently aggregated form. Society will reap tremendous benefits from enabling things like this.
I don't think of most of the cases you are describing as being related to this.
That's a very one-sided view. Just this week, the latest attempt to do this in the UK, the care.data programme, essentially became so politically toxic that it's dead.
This happened for a number of reasons. Some of them were just incompetence, like claiming everyone would receive a leaflet explaining the proposals and the right to opt out, and then finding that not only was your leaflet heavily criticised by medical and IT professionals for being woefully misleading, but when surveyed about 2/3 of adults reported not having seen it anyway. That was the credibility of the programme operators you saw falling down the sinkhole.
However, other reasons for objecting would have stood up even if everyone were fully informed. The data in question wasn't actually going to be available to clinicians like doctors and nurses who might find it useful when providing care. And contrary to the laudable-sounding goals that some medical professionals have suggested, much like those you have been advocating yourself in this discussion, it also wasn't going to be restricted to people like medical researchers.
In fact -- and it is now well-established, beyond-any-doubt, clear-as-day fact -- the data could never have been protected to the extent that was claimed (numerous qualified people have debunked the effectiveness of the claimed pseudonymisation), and the proposed rules and "safeguards" for who would have access to the data and for what purposes weren't even close to restricting it to legitimate medical research of the kind you describe. Those advocating the scheme at government level have once again demonstrated a fundamental lack of understanding of the implications of this kind of technology. There are a few other questions that seem to have been brushed under the carpet, too, like how opting out would supposedly mean your information never physically left your GP's systems, yet paradoxically there were circumstances discussed a couple of weeks ago where some organisations, like police and security services, would be able to access the data centrally via the new system anyway.
The trouble with many of these privacy issues we've been discussing recently -- whether it's Google-esque creepy mass surveillance, or the NHS plans to consolidate and share particularly sensitive data about individuals, or governments monitoring surveillance networks -- is that they are all cases of Pandora's box. Once you've compelled people to give up privacy and they've been entered into someone's database, that data is out there, and it's subject to redistribution and repurposing at any future time, with or without the blessing of the data subjects. Our privacy laws are dangerously underweight and already fail to balance the heavyweight capabilities of modern technologies. Until that gets fixed -- and I mean fixed in the sense that privacy laws are actually enforced and respected at government level -- the only reasonable conclusion is that we should err on the side of caution with giving up personal data, and challenge every attempt to push the boundaries to make sure it's justified.
Of course we should encourage the good and discourage the evil, but that's a moral pursuit, not a technological one.
The world would be a better place if there were no defined country names or zip codes. We should decentralize all our data to prevent evil.
Just once I would love to be proven wrong. But Google just seems like the neediest, clingiest, nosiest spouse one could ever have.
It's not like Google needs to map a complete picture of everything at once. If someone pulls out their phone this week and points it around your room, the location and data are sent to Google. Fast forward a couple of years, someone does it again, and now they have a little more data about your home. They'll have millions and millions of little cameras and sensors around every corner of the globe, slowly constructing a virtual 3D world over the coming decade.
So yes, even if you don't want to surrender personal information, everyone around you will do it for you, and companies encourage your friends to profile you. It's similar to people uploading and tagging photos of you on Facebook. You can lock yourself in a box, and Google is still going to know everything about you, even if you decide not to willingly share information. Most of the world uses Gmail. A friend recently sent out an e-mail for an event. I think I was the only person out of 20 that wasn't using Gmail. How do I not give up my information to Google? I stop e-mailing my friends? How do I contact them? Pick up the phone? Didn't Snowden inform the world of our phones being monitored? Hmm, no e-mail, and no phones. I can't even browse the internet, since the majority of sites run Google Analytics, or AdSense.
Good luck not surrendering information.
Anyway though, I don't think the intentions are evil, it's simply business. Rule number one, if you deal with users, you store as much data as possible. Delete nothing. You don't know where your business will be in the future, and this data might be priceless for improving your service, or expanding into other areas.
A few days ago a friend showed me how, after he called a restaurant (just a number) to make a reservation, his contact list later displayed the name of the restaurant and an image of it.
Cool, right? Joe and Foo love this feature, but don't know its implications: a query is sent to Google asking for the info on a number (one must presume, tied to your account ID). Google now knows which numbers you are calling. And even assuming they do this just for restaurants (how would they know? the query is sent anyway), they might also learn your food preferences.
(Note: I'm not actually that worried about this -- I think society will adjust -- but I think collecting anecdotes only shames the people using your data in ways that are trying to be helpful to you.)
In addition, the FCRA mandates the following cannot appear on background checks:
- Bankruptcies after 10 years
- Civil suits, civil judgments and records of arrest, from date of entry, after seven years
- Accounts placed for collection after seven years
- Any other negative information (except criminal convictions) after seven years
I'm not talking out of my ass: someone who has one of the most innovative companies around shared a real problem they had because of that. If the NSA were not completely overwhelmed by bullshit data, they would absolutely use supermarket data to target Salafists.
Most people do not care. I am glad that some people do care, and I read the news actively.
As much as it sends chills down the spines of wary folk, for the time being, the government and the huge corporations have my trust.
I think you're misreading this. There may someday be a Google product that reads in a full 3d scan of your surroundings, but it's not this product. It's a sensor and some APIs.
Until a device is actually released that requires you to upload all data, it seems premature to give up on it.
> Until a device is actually released that requires you to upload all data
No, no, it won't be required... it will just be on by default, and the opt-out will be buried in a submenu hidden under the "Advanced Developer Options" section.
The closest analog would be the Android camera, and it asks if you want to turn on autoupload or whatever. And it doesn't upload at all if the camera is being used by an app, even if you have that turned on, which would be the same case if people build apps to use this device.
I disagree. The closest analog is the Google location services, which continuously uploads data about your position and local wifi networks to Google, and is not very transparent about what it's doing: http://www.pcmag.com/article2/0,2817,2384751,00.asp
It's far more likely (if this catches on) that this will just be another sensor in your phone that apps can use, and if Google wants to improve maps or whatever, they'll just ask people to "contribute" scans, like they've gotten people to basically make street view for them.
I find it odd that stories like that don't get much exposure on HN unlike negative stories about Apple, Facebook, Microsoft etc.
They are. Selling ads is basically the only way Google makes money.
It'd be nice if they at least offered the option to pay $X/year to use Google services without tracking. But I think that would be unlikely, because it would highlight the value of the data many people give away freely.
Every advance humanity makes can be used for good or evil. On the whole, we tend to find more good than evil in most of them, and maybe global data capture will be the same.
Google probably isn't evil, but is getting naughty.
On Android, they make it very difficult to avoid data collection. One example, which frustrated me yesterday: to access the Play store on Android, you need to enable background data sync. The problem is, there is no indication of what that background data is, or any way to turn it on only for the Play store. The UI is designed so that you can only enable or disable sync for all of Android, and clicking on a given app's name starts syncing that app automatically (there is no confirmation and there is no toggle to turn it off for that particular app).
When you go to the doctor, you are not inspected by a stethoscope, you are not inspected by an X-ray CT, you are not inspected by a scale. You are inspected by a doctor who is using those things.
When I make decisions about consent, I am not primarily considering if I trust the tool. I am considering if I trust the person using the tool. My doctor wants to stick me in an X-ray CT? Okay, I'll consent to that. My employer or my insurance company wants to stick me in an X-ray CT? They can fuck right off.
You are being studied with algorithms.
Unplug. Stay away from cities and other people who have devices.
You mean like the blueprints that were provided to your local municipality when whoever built it applied for a permit?
Not that this does them much good when Google Now (or, as I affectionately refer to it, Google Later) grinds my phone to a screeching halt and plays mercilessly with touch input.
Johnny Lee was the guy with the awesome Wii Controller demos back in 2007 (can't believe it's been that long).
edit: here's the full set of demos: http://johnnylee.net/projects/wii/ (also, I'm assuming it's the same guy, but his site says he's at Google now)
Back in 07/08, when he demoed his hacks at TED, it really blew my mind.
I was really hoping someone would pick him up and let him loose on some projects.
I guess Google did just that.
I see that his focus moved from living room TV/game console to a mobile device...
It could have made an awesome additional experience to some games (even if it was optional), and they could have done it with no accessory required.
So while the objects move around as if you were moving in relation to them, your brain still tells you that the image is painted on the surface of the television.
The bad news is as you say: having mapping data of your private residence siphoned off by third parties, be it government or private industry.
I would say I'm as excited as I am worried about the potential for this technology. That said, I think the privacy implications are very similar to Kinect, which is already here. The only difference perhaps being mobility.
However, my recollection is that there was a reasonably vocal reaction to the privacy implications of Xbox One / Kinect 2 in gaming and tech media, although they mostly related to the timing of the first Snowden leaks. This article is a pretty good example of the media hype around it: http://www.theverge.com/2013/7/16/4526770/will-the-nsa-use-t... .
I think the "always-on" nature was more criticized than the point-cloud/time-of-flight abilities of the camera, but pushback against more and more ways to collect data attached to equipment made by big tech corporations isn't exclusive to Google.
Having point cloud data of your home in general is a complete non-issue, I'm sure most of the time the government already has floor plan data for your home.
The possible benefits of this tech are tremendous, and far outweigh any crazy conspiracy theory downsides.
What you label as "paranoia" and "baseless conspiratorial fear" is better described as "examining the potential consequences of the things we build". You are advocating that we examine only the positive potential uses of the technology we develop and that we ignore the potential consequences, because "we'll work it out and everything will be awesome!", or something.
Perhaps the potential consequences will never come to fruition--but it's still very important that we examine them. There's a real shortage of that going on, which is why some people post negative reactions to these developments rather than something more balanced. Stop trying to dismiss those concerns and that conversation--if you're not interested in hearing it or taking part in it, this is likely the wrong discussion forum for you to be participating in.
The problem is no longer that someone is "watching you". It's worse. The problem is that someone, one day, could watch your entire life, then proceed to pull apart events and use them, mostly out of context, against you.
I wouldn't call myself paranoid. I'm not doing anything about it. Honestly, if the data was only retained a month, unless needed in a current investigation, I would agree with you, this wouldn't be worth mentioning. However, retention of data is cheap and the data of your life will probably outlast you. As such, the parent posters in this convo have a point, it is something worth giving attention to.
PS. And to be quite honest, I don't even care that they can look back at my life and scrutinize me for an email or two. The real fear is that they can pull specific quotes or life choices out of context. You as an individual can't even discredit them, because you don't have access to the original files (unless the entity does you a favor and shares them with you), so you can't show how innocent the same email quote or life choice was in context. You are instead left with your memory, and hopefully powerful rhetoric, to convince a jury you are innocent.
That would be absolutely incredible... Firefighters die every year because they get disoriented and lost in zero-visibility conditions. Even if it didn't have a pre-generated map of the structure loaded... it could build a map as it went, and at least be able to provide a 'retrace my steps' view.
It could also be networked with other devices, such that all of the routes mapped could be composited and help in the identification of alternative routes should one become blocked.
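The "retrace my steps" idea above needs nothing more than a breadcrumb trail of pose estimates. A minimal sketch, assuming the device supplies position fixes as simple (x, y, z) tuples; the class, the minimum-step distance, and all names here are hypothetical illustrations, not any real device API:

```python
# Hypothetical breadcrumb-trail sketch: record pose estimates as the
# firefighter moves, then play them back in reverse to guide the way out.

class BreadcrumbTrail:
    def __init__(self, min_step=0.5):
        self.min_step = min_step  # metres between stored breadcrumbs
        self.points = []

    def record(self, pose):
        """Store a pose only if we've moved far enough from the last one,
        so the trail stays sparse enough to follow."""
        if not self.points or self._dist(pose, self.points[-1]) >= self.min_step:
            self.points.append(pose)

    def retrace(self):
        """Waypoints back toward the entry point, most recent first."""
        return list(reversed(self.points))

    @staticmethod
    def _dist(a, b):
        # Euclidean distance between two (x, y, z) tuples
        return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
```

The networking idea from the comment above would then amount to merging several trails into one shared graph of known-passable routes.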
Instead, consider that you won't see the actual items in the room. Polygon shapes will replace the "real-space" items (coffee table, sofa, bed, doors), and those shapes can be rendered as dungeon items (for D&D) or as fixtures in an evil corporate waiting room (a la Mission Impossible or Metal Gear).
Personally I cannot wait to be chased by Cloverfield like monsters/creatures along the skyline and weaving in between buildings downtown. Movie ads are going to be amazing.
3D Printing is a big data problem where the data is not being collected. Sensors in desktop 3D printers are usually restricted to simple limit switches on the axes.
We would use Project Tango for a real-time feedback system for 3D Printing. Initially, we would demonstrate a simple functionality: recognizing when a print is failing and instructing the machine to stop, rather than waste more material. Next, with the help of the open-source community, we would expand functionality to dynamically adjust machine instructions to compensate/fix problems observed during the print. Here are a few examples:
- adjust bed height for different layer heights via software rather than manual hardware tinkering
- dynamically change extrusion rate if underextrusion/overextrusion is observed
- detect if belts are slipping & correct extruder positioning
- pause the print if no filament is extruding
- intelligently resume print if stopped (e.g. power failure)
- inform slicing software if/where/why a print fails so the software can reslice and repeat properly
For users, no new hardware will be needed besides Project Tango - a computer will stream GCODE instructions via USB to a RepRap-like 3D printer (e.g. Makerbot, Ultimaker, etc.). Project Tango is precisely the breakthrough we have been waiting for to make 3D printing more user friendly.
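As a rough illustration of the first piece, stopping a failing print, the core check could be as simple as comparing sampled depth readings against the expected height of the current layer. All function names and thresholds below are made up for this sketch; only `M112` (the standard RepRap emergency-stop GCODE) is a real command:

```python
# Hypothetical stop-on-failure check for the feedback loop described above.
# Depth readings and expected layer heights are plain floats (mm) here;
# a real system would work with full point clouds from the sensor.

def print_is_failing(observed_heights, expected_height, tolerance=0.5):
    """Flag a failing print if too many sampled points deviate
    from the expected height of the current layer."""
    bad = sum(1 for h in observed_heights
              if abs(h - expected_height) > tolerance)
    return bad / len(observed_heights) > 0.2  # >20% of samples off

def monitor_step(observed_heights, expected_height, send_gcode):
    """One iteration of the loop: stop the machine rather than
    waste more material if the print looks wrong."""
    if print_is_failing(observed_heights, expected_height):
        send_gcode("M112")  # RepRap emergency stop
        return "stopped"
    return "ok"
```

The later bullets (dynamic extrusion adjustment, belt-slip correction) would hang additional corrective branches off the same loop instead of stopping outright.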
That's a bummer.
Also, the page's default background-color should be set to black (or something dark). Most of the text is white(ish) and with a slow connection the background images take a while to load, making it impossible to read while you wait. /rant
[this is not legal advice]
Based on the old mnemonic trick of taking a real physical location that you know well, and associating memories with objects in that space. The digital version of this would be having files and data stored in a "physical" place - although they're not solid, they'd be tied to a single location.
Harder to organise, but I know several people who have completely filled their computer's desktop with shortcuts, because they don't like futzing around with folders. The folder metaphor isn't the be-all and end-all, there are times when it's appropriate and times when it's not. The metaphor of icons that are dragged around the screen is limited by available screen space - Project Tango gives you a house-sized (or even just room-sized) 3D space to play with, more than enough for all the files you could need to be immediately visible.
The main risk is that my virtual room could end up as messy as my real room.
I don't know how anyone could make money from this, but it would be really damn cool.
Imagine taking a scan of your pantry, refrigerator, and/or laundry room. Then mark everything as what it is (e.g. "box of cheez-its", "milk", etc). Then come back a few days later, do the scan again, and it'll tell you what's missing. Once you return from shopping, scan again, noting what the new items are (even if they aren't what was there before). The software would probably need to recognize certain shapes so a slight rearrangement/movement doesn't throw it off. It'd be like history/bookmarks/favorites for perishables!
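Assuming the recognition step has already reduced each scan to a set of labels (by far the hard part), the "what's missing" comparison itself is just set arithmetic. A toy sketch, with all names invented for illustration:

```python
# Toy sketch of the pantry-scan diff: each scan is assumed to have been
# reduced to a set of item labels by some upstream recognition step.

def pantry_diff(previous_scan, current_scan):
    """Return (missing, added): items that disappeared since the last
    scan, and items that newly appeared."""
    missing = sorted(previous_scan - current_scan)
    added = sorted(current_scan - previous_scan)
    return missing, added
```

For example, diffing `{"box of cheez-its", "milk", "cereal"}` against `{"cereal", "eggs"}` would report the cheez-its and milk as missing and the eggs as added.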
* It takes a good amount of time to scan and catalog everything, and we are talking about a world where people don't find enough time to scribble these things down on a piece of paper.
* What if you pulled out cereal box A and cereal box B but switched places when putting them back?
* Since it's scanning the boundary of objects, it won't be able to tell you if your milk is empty, half empty or full.
The one thing I could think of where this is useful is in scanning rooms/objects that are for sale, like 3D-scanning the bike you are going to sell, or a realtor using it to provide a virtual 3D tour of a house. However, Microsoft already did this with their Photosynth program and regular photos you take, so I am not sure how this is going to fare if it turns out to be expensive.
Then it will just be a nice convenience, not a big deal.
But that describes huge swaths of technology, so go figure.
It's such a waste of time to get groceries, wait in line, unload them at the cashier, scan them all, pay for them, reload them again.
Couple of ideas...
1. Some type of RFID tags built into all items, so the checkout can read all the prices when you step through the scanner.
2. You order everything online at home. When you get to the store, you swipe your CC, and your order is automatically picked in the back warehouse and comes out within a minute. Downside, this wouldn't work for certain produce, where you want to pick the bananas that look the best, or the green onions only if they're up to your freshness standard. However, it could be combined, so you order 95% of your groceries ahead of time, then just pick up the few extras and add them to your order waiting at checkout. This would mean 95% less time picking out items, and faster checkout times since 95% of your items don't need to be scanned or bagged.
But image recognition on a 3D scan may be more accurate (and processor intensive). Possibly more error prone though.
output string: "$new_google_tech will let Google know more about you! I disapprove! Also, NSA."
Repeat. Occasionally sprinkle with insightful comment about the actual technology being introduced.
An exception might be if you have a very similar announcement. Doing so would get you included in the same articles as your competitor.
Wait, what? Why would Google view the WhatsApp news as competitive? Other than it setting an insane price and inflating bubbles more, how is the FB acquisition really useful news to anyone?
This tech will have far more impact on the world. Apart from the few people doing startups, FB buying WhatsApp will have almost zero impact in day-to-day lives.
I was only responding in the context of parent's PR comment, re: scale of press worthiness of the day. Tech sites have been covering WhatsApp/FB non-stop for the past 24 hrs.
As for impact on the world, this is just a call for developers. I'm encouraged by the ambition, but it's too hard to judge if this will be successful. FB/WhatsApp will almost immediately affect 400M users (+1M more per day).
Facebook got some news that Google is about to announce an ambitious, experimental project so they preempt it by buying WhatsApp for billions of dollars.
The rising tide lifts us all, and this technology can be ubiquitous within twenty years. Isn't that worth it?
They are the Internet, they have half of the cellphone market, they sell apps, books, CPU cycles, they sell Chromebooks, tablets, they make maps, they provide apps for chat, video conferencing, they've made an online office suite, they store your documents, they make the mobile OS that is sold on other companies' phones, they do mobile payment, they create programming languages, they employ Ken Thompson, Rob Pike, and probably the guy you envy the most, they have quantum computers and they do a sh*t load of other things that I can't be bothered to remember (Translate, Youtube, G+, Blogger, Groups, ...).
It is not possible (for me, at least) to attach to Google the childish excitement that two geeky weirdos with scruffy hair bring to their fundraiser. Google is now the hot kid in the school, who's tall, strong, handsome and athletic, who also plays guitar and is very successful at his exams, speaks three languages and represents his school in the drama festival, and also plays in the school band as whichever member gets the most attention.
Innovation is good for science, but it is harmful to collaborative efforts when it comes from companies whose markets should long since have been saturated. When Google dies (like anything else), it will fall on the crowd that lives in its shadow.
In one, data and data about access to data would be widely available, giving us an unprecedented level of citizen oversight of society. Think sunshine laws increased a thousandfold. In the other, access to data would be limited, and those who had the data would have incredible power. The kind of power that totalitarian states dreamed of but never achieved.
Every Googler I've met, which at this point is quite a number, is a fine person. I trust them as individuals. But for my tastes, Google as a company is already more than sufficiently powerful. Power corrupts. Power attracts the corrupt.
At this point, I think everything new they do should be treated with suspicion. Not hostility, but with the same sort of scrutiny that we should monitor every other dominant megacorp.
That was the first thing I thought about - actual augmented reality down to the inch would be quite something.
Household layout and furniture are mapped out, then re-textured to represent a castle, evil lair, enemy corporation, etc. Textures can update allowing the story to reuse each room as different places as you progress - the same way the holodeck area is actually small but uses optical illusions to give you a sense of greater mobility.
Not saying we have a holodeck. But it's a step towards one.
Also, imagine making a 3D "scanner" that you can scan objects with into a virtual world, or print out on a 3D printer.