Everything Is Going Deep: ‘The Age of Surveillance Capitalism’ (nytimes.com)
133 points by mverwijs 17 days ago | 62 comments



I’m most worried about the rise of fintech apps enabled by APIs like Plaid. The media seems more worried about 10-year-old Facebook likes being sold than about a perpetual real-time feed of bank transaction data ending up in the wrong hands or in the hands of a nefarious developer.

For the record, I’m highly critical of Plaid and hope the tech media catches on soon. They do not require developers to communicate which permissions they are asking for when onboarding new customers (I don’t think that’s even an option if developers wanted it), and there’s no central UI for an end customer to review the permissions they’ve granted across developers and revoke them. I don’t think Plaid has any requirements to encrypt this data on the developer side, and I have no idea how they audit developers to make sure various endpoints aren’t being used in violation of the developer terms.
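To make the "perpetual feed" concrete, here is a rough sketch of what a developer can do once they hold an access_token from a user's one-time account-linking flow. The endpoint and field names are from memory of Plaid's docs and may not match the current API exactly; treat them as illustrative only:

    # Sketch only: once the user links an account a single time, the
    # developer holds an access_token and can poll forever. Endpoint
    # and field names are approximate, from memory of Plaid's docs.
    import time
    import requests

    PLAID_URL = "https://sandbox.plaid.com/transactions/get"

    def poll_transactions(client_id, secret, access_token):
        resp = requests.post(PLAID_URL, json={
            "client_id": client_id,
            "secret": secret,
            "access_token": access_token,  # granted once, lives until revoked
            "start_date": "2019-01-01",
            "end_date": "2019-02-01",
        })
        return resp.json().get("transactions", [])

    while True:
        txns = poll_transactions("my_client_id", "my_secret", "users_access_token")
        # ...ship txns to any backend; nothing enforces encryption at rest here
        time.sleep(3600)

The user is never re-prompted, and nothing in this loop is visible to them.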


I worked on streaming card transaction data (from Mastercard) 5 years ago. It's as shitty and invasive as a group of soulless bank BI people can make it. Their detachment from the human damage they were creating, and the way they basked in their own smartness, was scary and disgusting.


> a perpetual real time feed of bank transaction data.

Jeez, that does sound terrifying. I guess that data is already there in my credit card companies' databases, but at least (in the USA) I have some legal protections.


Like what? Dollars to doughnuts, Facebook already has your purchase data from your card company.


Great point. I had a service like this set up that had access to a couple of my accounts, and I'd forgotten about it. Just turned it off.


Yes, this is by far my greatest criticism of how bank APIs work today. I have no idea which third-party developers might still have access to one or more accounts that I set up years ago and forgot about.


Shoshana Zuboff is one of those people who make me upset when I discover them. Why didn't I hear about her and her books much earlier? Is it only because she is not marketing her books well enough?

The Age of the Smart Machine (1988) is truly visionary and well written.

edit:

I'm currently reading The Age of Surveillance Capitalism.

The book has well-developed concepts like 'behavioral surplus' and 'instrumentarianism'. There are also clever terms like 'radical indifference', 'observation without witness', and 'equivalence without equality'. They are just plain insightful. I can instantly recognize them as something I could not conceptualize before.


Books like this go through different channels than the ones engineers follow. Engineers pick up this stuff when it ends up in the mainstream, but they don't really participate that much in the tech-critique world.


Somewhat unfortunate vicious cycle, it seems: engineers either don't consider the ramifications of what they build or are willing to handwave them away, until it all starts to come back around.



I've also just started reading 'The Age of Surveillance Capitalism'.

Some of my colleagues at work use the term 'digital native' to refer to (young) people who have grown up with ubiquitous computing. Next time someone says that, I should now perhaps say "oh, you mean, the wage slaves of the surveillance capitalists".


> oh, you mean, the wage slaves of the surveillance capitalists

That phrase is just a series of boo words concatenated together, as in, "wage (booooo) slave (booooo) of the surveillance (I'll give you that one) capitalists (boooo)." It is too woven in with the "capitalists (boooo)" movement to be effective as a rallying point for people that don't want to overturn all of society.


You think "wage" and "capitalist" are boo words? Really?


I'm not booing them myself, I'm talking about the (non-universal!) cultural subtext of those words. "Wage" has a negative connotation for being lower status than "salary," and appears in terms like "wagecuck." "Capitalist" is absolutely a boo word around leftists and was actually coined by Marx specifically to name his enemies.


> "Capitalist," ... was actually coined by Marx specifically to name his enemies.

Wikipedia [0] disagrees: The Hollandische Mercurius uses "capitalists" in 1633 and 1654 to refer to owners of capital. In French, Étienne Clavier referred to capitalistes in 1788, six years before its first recorded English usage by Arthur Young in his work Travels in France (1792). In his Principles of Political Economy and Taxation (1817), David Ricardo referred to "the capitalist" many times. Samuel Taylor Coleridge, an English poet, used "capitalist" in his work Table Talk (1823). Pierre-Joseph Proudhon used the term "capitalist" in his first work, What is Property? (1840), to refer to the owners of capital. Benjamin Disraeli used the term "capitalist" in his 1845 work Sybil.

[0] https://en.wikipedia.org/wiki/Capitalism


Just ordered The Age of Surveillance Capitalism based on your appraisal. Thank you.


I manage a deep learning team but I have some reservations about the technology. IMO deep learning is best for optimizing back end systems and not good for systems that ‘touch’ people: deciding to loan money, automated sentencing of criminals, targeted marketing from personal information, etc.

For me, the problems are lack of explainability and possible bias.

There are many great applications for deep learning and AI in general but some guard rails must be in place for public good.


Being able to shrug and say "the algorithm did it" is almost certainly seen as a killer feature by any authoritarian, soulless megacorp and others. Instead of having to take responsibility for decisions, they just point at the box they programmed to act a certain way and blame it. Unless a lot more people understand GIGO, this is going to stick around.


Exactly, this is easily one of the scariest things to see happening. Examples like Google's recommendation algorithm pushing people towards more and more extreme content (or even bot created content) signals to me that we're headed down a very dark road.


Right now the big guys can put the blame on their underlings but blaming it on machines will be even better.


These systems will be the perfect "faceless bureaucracy". Nobody knows exactly why they do what they do, nobody can be held responsible, but the people who deployed them will get the profits.


Until people figure out how to hack them for profit. An AI with permissions to spend money on the company's behalf is going to fuck up eventually (and eventually in a major way).

Then it's back to human bureaucracy.


The funny thing is that this has already happened with automated traders, and they still keep at it. They have been fooled before by misparsed Twitter feeds, and they keep at it. They have gotten some kid-glove reversals, though.

The thing is that it just needs to mess up less than humans to be worth sticking with, or have the political inertia to be 'preferable' to human bureaucracies making the decisions, even if it is sub-optimal. Zero tolerance in schools is a godawful policy, but it is unfortunately sticking around because it lets administrators cover their asses: in trying to avoid frivolous lawsuits (which they would have won), they sometimes get sued and lose over actual wrongdoing, and the policy survives anyway.

The whole reason bureaucracies proved useful over plain fiefdoms is that constraining decisions to rules worked better than leaving everything to discretion. Even the infamous 'flower poetry' Chinese exams were a leap forward, because they meant anyone who could prove sufficient literacy could get a government job, not just the well-connected, and that offered a floor. Not a great one, mind you, but literacy is a pretty good baseline for 'capable of handling paperwork and worth a decent-paying indoor job'.


Probably some will benefit but most people will have difficulty rectifying issues that aren't in their favor. Look at how difficult it can be to reopen a wrongfully closed PayPal account or just to get an explanation.


A 'Medusa' browser that will constantly spawn chaff: numerous instances of false data and metadata. Patent it, Google hates it, buys it.


..predictably, angry Abbot RMS/openSource commandeers 'Medusa' into 'gnudusa' github forks to 'nudeusa', 'goregon', 'gone', vpn networks p2p, 'blockchain' likely tossed in somewhere, conflates concurrent user data/meta-data into noise, mass outrage wide adoption undermines, saps, destroys walled gardens.


You make it sound like bureaucracies with permission to spend money don't fuck up all the time.


I think you have captured the essence of it. What I wonder though: have systems before been without bias? Is the DL/ML bias worse than the one we had before?


For ML systems, it's an engineering mistake to deploy a complex model when you don't have a simpler baseline (e.g. does this outperform a basic n-gram model?). Similarly, it's a strategic mistake to deploy a deep learning model without assessing the baseline of human performance (including bias).

I see the problem of inexplicability as less salient than (1) responsible, informed deployments of models, and (2) ongoing measurement (especially against a human baseline).

You can deploy explainable models without (1) and (2) and end up with a much, much worse result.
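To make (1) and (2) concrete, here's a minimal sketch of the baseline-first discipline, using scikit-learn on synthetic data. The 'group' column standing in for a protected attribute is invented for illustration, as is the choice of models:

    # Sketch: never ship the complex model without knowing what a trivial
    # baseline scores, overall and per subgroup. Data here is synthetic.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.dummy import DummyClassifier
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
    group = np.random.RandomState(0).randint(0, 2, size=len(y))  # illustrative subgroup label

    X_tr, X_te, y_tr, y_te, _, g_te = train_test_split(X, y, group, random_state=0)

    models = {
        "majority-class": DummyClassifier(strategy="most_frequent"),
        "logistic":       LogisticRegression(max_iter=1000),
        "complex-model":  GradientBoostingClassifier(),
    }
    for name, m in models.items():
        m.fit(X_tr, y_tr)
        pred = m.predict(X_te)
        acc = accuracy_score(y_te, pred)
        # Crude bias check: accuracy gap between the two subgroups.
        gap = abs(accuracy_score(y_te[g_te == 0], pred[g_te == 0]) -
                  accuracy_score(y_te[g_te == 1], pred[g_te == 1]))
        print(f"{name:15s} acc={acc:.3f} subgroup-gap={gap:.3f}")

If the complex model can't clearly beat the dummy and logistic rows, or the subgroup gap grows, deploying it is exactly the engineering mistake in question. A human-performance row would be measured the same way.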


The important phrase there is engineering mistake.

Intelligibility and ongoing responsible measurement create a performance metric, and a line of responsibility.

To many, especially those who receive large pay but are incompetent, and/or who face legal risks if found liable, avoiding that metric and that line of responsibility is a significant benefit.

/depressing, I know...


But if the logic before (for, say, whether to loan people money) was some sort of flowchart or checklist or whatever, it may be bad, but it's inspectable, and so could be examined, evaluated, and changed. The DL/ML creates effectively uninspectable black boxes.
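A toy contrast (the thresholds are invented for illustration):

    # The flowchart/checklist version: every branch can be read,
    # audited, and changed. Thresholds here are made up.
    def approve_loan(income, debt, credit_score):
        if credit_score < 600:
            return False, "credit score below 600"
        if debt / income > 0.4:
            return False, "debt-to-income ratio above 40%"
        return True, "meets all criteria"

    # The black-box version: same interface, no legible reasons.
    # approved = model.predict([[income, debt, credit_score]])  # why? shrug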


IMHO, “deep” is just the new “smart”. People are doing the same thing they were doing, just bigger and better, but when you are building a new company you need to use the adjective of the times.

We have had “smart”-everything, and it already sounds tired; hence “deep”-everything. Let's see how long it lasts...


I've recently realized how trivial it is to detect suspicious activities in real-time video feeds just by tracking human poses, and how this is now an almost completely solved problem (basically, all that's left is incremental improvement in the accuracy of the models used inside). I doubt this will in any way "democratize AI"; instead it might end up as a powerful weapon of oppression. No wonder most of the papers on this topic originate from China.
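To show how little is left once pose estimation is commoditized, here's a toy sketch. It assumes an off-the-shelf estimator (OpenPose or similar) already emits per-frame (x, y) joint coordinates for each tracked person; the joint indices and the "suspicious" rule are invented placeholders, not any real system's logic:

    # Given pose keypoints from an off-the-shelf estimator, flagging
    # becomes a few lines of arithmetic. Joint indices and the speed
    # threshold below are illustrative placeholders.
    import numpy as np

    WRIST_R, WRIST_L = 4, 7  # made-up indices into the keypoint array

    def wrist_speed(prev_pose, cur_pose, fps=30.0):
        # pose: (num_joints, 2) array of pixel coordinates for one person
        d = np.linalg.norm(cur_pose[[WRIST_R, WRIST_L]] -
                           prev_pose[[WRIST_R, WRIST_L]], axis=1)
        return d.max() * fps  # fastest wrist, in pixels per second

    def flag_suspicious(pose_track, threshold=800.0):
        # pose_track: list of per-frame poses for one tracked person
        return any(wrist_speed(a, b) > threshold
                   for a, b in zip(pose_track, pose_track[1:]))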


Walk without rhythm and it won't attract the surveillance state.


> "That’s why the adjective that so many people are affixing to all of these new capabilities to convey their awesome power is “deep.”"

One of the best pieces of academic marketing was calling this set of techniques "deep" learning. The word is so rich with connotations, it immediately brings to mind all the synonyms: profound, complex, arcane, etc. It makes people ascribe far more complexity to the system than it actually has.

In reality, it's just a "massively multi-layered and multi-stage" network. But that doesn't sound nearly as profound, and doesn't allow journalists to spin wild tales.


People will nearly always opt for the language that conveys the most meaning, even if doing so outstrips the underlying phenomenon being named, since the point of language is to convey meaning.


That doesn't mean the meaning has to be precise or truthful. Poetry, metaphors, marketing, implication, white lies - they all rely on meaning being multifaceted, and bringing things to mind without actually saying them explicitly.


If you don't live in China, you can start by not carrying a smartphone around all the time and disabling javascript. Just saying...


On my mobile, I keep Safari's JS turned off and always browse in incognito mode. When I need JS (like right now, to comment), for sites that require it (like LinkedIn), or when I want to log in, I use the DuckDuckGo app, which requires a fingerprint to open.


At least use the NoScript plugin and opt-in to only the JS you need.


I ditched NoScript when I realised that it wasn't blocking inline JS. I landed on uBlock with Advanced Mode, which allows quick toggling of inline, 1st-3rd party and a few other things. uMatrix deserves an honourable mention also; it's been illuminating to see the sheer scale of trackers that are loaded on certain pages.
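For anyone wanting a starting point: the usual "medium mode" rules in uBlock Origin's "My rules" pane look roughly like this (quoting from memory of the wiki, so double-check the syntax there):

    * * 3p-script block
    * * 3p-frame block
    example.com * 3p-script noop

The first two block all third-party scripts and frames everywhere; the last line is the shape of the per-site exception you add when a site breaks (example.com is a placeholder).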


Deep learning could in fact enable the next Deep State. Not that I love the term-repurposing madness of today.


What does “the next Deep State” even mean? What do the words “deep state” even mean in this context?


It means a state within a state. The origin of the term is in 1990s Turkey, where the military, police, and justice system were upholding national interests and acting independently of the government.

A deep state is a form of clandestine government made up of hidden or covert networks of power operating independently of a nation's political leadership, in pursuit of their own agenda and goals.


> upholding national interests

'national interest' seems like a highly moveable feast.


Not particularly. As they were in control of the state, their goals were considered the national interest. It may not have reflected the will of the population or government, but that's often the case with national interests.


That seems like a really bad definition.


The concept of "national interests" has been in use for about five hundred years now. What issue do you have with it?


I posed the following to @Dang a few days ago, with respect to what one might think is, at a minimum, a responsibility of YC (and the greater VC/SV population) to acknowledge, though I don't see this happening any time soon:

----

[How can we] Find a way to have a serious, objective talk with the greater community about the extraordinarily global-reaching issues of the impact of Silicon Valley on society, community, and culture as a whole.

Look at what has emerged in just the last 1.5 decades from "unicorns" in Silicon Valley:

* US policy seemingly being set/disrupted via Twitter

* Mental health studies coming out on the negative impact of Facebook

* Election manipulation through ad-powered platforms such as Google and FB

* Massive cultural dialogue and political revolutions being fueled through Twitter

* Assassinations being corroborated through an Apple Watch

* Global spying and surveillance conducted through all our connected technology

Just to name a few of the globally impactful issues of our day which directly stem from the efforts of Silicon Valley in specific and the tech industry in general.

As the preeminent VC firm in the minds of any young entrepreneur who wants to build the Next Big Thing, YC, I would pose, actually has a social responsibility to, at a minimum, foster a conversation on these issues in a meaningful, serious, and deep manner.

What are the consequences of MASSIVE success of a company?

----


I wonder if it’s time for software engineers to form our own union or guild to combat the misuse of our profession in corrupt and immoral ways. We would have immense power as a group, but on our own we’re all beholden to our employers, which makes us complicit in doing work without thought to the long-term societal damage we do.


The article makes me wonder if the author is aware of the technical meaning of "deep" in the context of the term "deep learning." Not that I disagree with the article; these things tend to take on a life of their own, and that's just how it goes with language and culture. But at least in the case of machine learning, "deep" is not just arbitrary terminology to sound fancy: it refers to a series of breakthroughs allowing incredible training performance on multi-layered neural networks, where "deep" specifically contrasts these results with the prior state of the art of 3-layered networks. Presumably this use of the term is the source of several of the other "hyped" uses, perhaps with the exception of "deep state," so it's frustrating to see it thrown into the same basket.
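For the record, a minimal sketch of the contrast the terminology encodes (PyTorch used for brevity; the layer sizes are arbitrary):

    # "Shallow": the classic 3-layer net of the prior state of the art,
    # i.e. input -> one hidden layer -> output.
    import torch.nn as nn

    shallow = nn.Sequential(
        nn.Linear(784, 256), nn.ReLU(),
        nn.Linear(256, 10),
    )

    # "Deep": many hidden stages stacked, trainable end to end thanks to
    # those breakthroughs (better initialization, ReLU, GPUs, etc.).
    layers = [nn.Linear(784, 256), nn.ReLU()]
    for _ in range(8):
        layers += [nn.Linear(256, 256), nn.ReLU()]
    layers += [nn.Linear(256, 10)]
    deep = nn.Sequential(*layers)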


The word "deep" came way before deep learning, and can mean different things in different contexts


But one of the contexts provided in the article is "deep learning", that is what I am responding to. My point being that he lumps "deep" in as a hyped up term, trying to make connections across "different things in different contexts" as you say, ignoring the fact that it has a technical meaning.


The term 'surveillance capitalism' has become rather misleading, especially since Snowden pretty much showed the whole thing isn't really about either terrorism or capitalism, but control. It forgets the relationship between big tech and the state, which today sometimes amount to the same thing.


Capitalism IS control: at the most basic level, control of private property ensured by the state.


You can start by not carrying your smartphone around with you and turning off javascript.


That's just swapping one set of disadvantages for another.


The Intercept has published an interview with the author, and I found it to be compelling enough to immediately start reading the book.

> You’re not technically the product, she explains over the course of several hundred tense pages, because you’re something even more degrading: an input for the real product, predictions about your future sold to the highest bidder so that this future can be altered.

> it’s clear that surveillance capitalists have discovered that the most predictive sources of data are when they come in and intervene in our lives, in our real-time actions, to shape our action in a certain direction that aligns with the kind of outcomes they want to guarantee to their customers.

https://theintercept.com/2019/02/02/shoshana-zuboff-age-of-s...


If surveillance capitalism were so successful, you would expect overall ad spending to have spiked recently, since people claim to have found the holy grail that turns ads directly into profits. But it hasn't.


And they can sell information to your insurance company, etc. It doesn't have to be only about targeting you with ads; information about you can be valuable in other ways.


And it doesn't even have to be useful to the companies, although that certainly helps. They just need to /think/ it is useful for it to sell. Theranos raked in lots of money before it was revealed to be a lie.

Capitalism and values are fundamentally a means of resource allocation; if everyone is wrong about what really matters, it gets valued even if its true utility is nil or negative.


Now I want to eat a deep dish pizza to not think about this.



