I think it's a tenable but extreme position, because basically they are objecting to Google reserving the right to develop new features in an empirical/data-driven way.
I think most people don't think of, e.g., their privacy w/r/t tax data being compromised when their tax prep software company mines it to make data entry simpler, or to make it easier to understand the consequences of various filing choices through visualization, etc. Similarly, I don't think Google is invading my privacy when it takes my search queries and uses them not only to produce SERPs for me but also to notice that when people type cyombinator it is likely a typo for ycombinator.
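That typo-detection idea can be sketched simply: a query that sits within a small edit distance of a far more frequent query is probably a misspelling. This is a minimal illustration, not Google's actual method; the query counts and the frequency-ratio threshold are invented for the example.

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming (one-row table)."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(
                dp[j] + 1,           # deletion
                dp[j - 1] + 1,       # insertion
                prev + (ca != cb),   # substitution (free if chars match)
            )
    return dp[-1]

def likely_typo(query: str, counts: dict) -> str:
    """Suggest a far more common query within edit distance 2, if any."""
    best = None
    for candidate, n in counts.items():
        if candidate == query:
            continue
        # "100x more frequent" is an arbitrary threshold for this sketch
        if edit_distance(query, candidate) <= 2 and n > 100 * counts.get(query, 1):
            if best is None or n > counts[best]:
                best = candidate
    return best

# Invented aggregate query counts
query_counts = {"ycombinator": 50000, "cyombinator": 12}
print(likely_typo("cyombinator", query_counts))  # → ycombinator
```

The point of the sketch is that the signal comes from aggregate frequencies, not from anything about any individual user's query.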
EFF basically doesn't trust a cloud software company to have any discretion, but I think most people are willing to take an informed risk when they entrust their data to someone else's instructions or computational resources; otherwise they'd write the software themselves and run it on their own device and so forth.
The risks that you describe also sound reasonable for adults to assume. However, Google, given who and what they are (organizing the world's information), shouldn't be surprised when "watchdogs" ask more of them, particularly in this case. They deliberately entered the education space. It's not just about plain advertising (to me). Whatever profiles are built from children's use of Google's services (which they are required to use through school) should be carved out from Google's normal data harvesting and user-profile nurturing. It should be up to Google to develop a "win/win" model whereby they can protect students and still properly monitor app performance. That doesn't sound too challenging for a company like Google.
I think experiments should be done in controlled conditions with mock data and informed participants. When you then sell something to the public I expect it to be a finished and stable product. I can then build my workflow around your product and know that it won't be made obsolete overnight because you applied some enhancement in a patch.
When Google used to explicitly mark their apps as "beta" we'd joke about them being in eternal beta mode. It really looks like that's not a joke.
My problem with that reasoning is that you're assuming there is a best interface and that it is already known. A/B testing is usually done to determine which choice is best; how will they know without testing? Also, who decides what is best? Does "best" mean incredibly easy to use for 90% of tasks but terribly hard for the other 10% for some reason, or moderately easy to use for 100% of tasks?
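For context, the core of an A/B test is just a statistical comparison between two variants. A minimal sketch, assuming a two-proportion z-test on click-through counts (all numbers here are invented):

```python
from math import sqrt, erf

def two_proportion_z(clicks_a: int, n_a: int, clicks_b: int, n_b: int):
    """Return z statistic and two-sided p-value for the difference
    between two click-through rates (pooled two-proportion z-test)."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p = (clicks_a + clicks_b) / (n_a + n_b)        # pooled rate
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))   # standard error
    z = (p_a - p_b) / se
    # Two-sided p-value from the normal CDF, built from erf
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Variant A: 220 clicks out of 1000 users; variant B: 180 out of 1000
z, p = two_proportion_z(220, 1000, 180, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

Note that the test only tells you which variant users clicked more; it says nothing about *why*, which is exactly the "who decides what is best" problem.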
> I think experiments should be done in controlled conditions with mock data and informed participants.
The problem with this is that mock data won't get you applicable results in a lot of cases. Along with that, an informed participant introduces selection bias, further reducing how well the results reflect the real population of users. You've also likely agreed to this kind of testing in the TOS, so you might already be considered the informed participant you mention.
Nobody cares about Amazon using their own sales records and server logs to generate recommendations. The problem comes when technology companies decide that means they get to use any data.
If you're fine with google using personal data, are you also fine with FedEx opening up every package they deliver to you?
> take zero-sum advantage of you
Nobody said "all" or "zero sum", which doesn't apply here, but "take advantage of you" is pretty much a description of capitalism. On HN, this is usually called "monetizing".
If Amazon buys a shoe store, they get the sales records and any other related data. They do not get to know where you walk with their shoes.
If there is any confusion here, it is because of the recent trend toward Service as a Software Substitute, which makes the business's server necessary for normal use of their product. Some people seem to think this lets them open the packages they are conveying or storing.
They still don't know what's inside the package. I really don't see why this boundary is hard to understand. With snail-mail (fedex, usps, etc) there are even laws that protect the boundary between the envelope and the private contents. Why would you think software would be different?
They can develop new features with their data.