Also, it's evil (disintermediating people and preventing human contact, gathering more data, closing off options, preventing plurality of opportunity), but never mind, ehh?
Second point: they aren't "self-driving cars" because they are not driving; driving is a human activity that requires agency, which no AI has at the moment. Autonomous vehicles are environmentally responsive guided robots, or something like that. This confusion is important because many people see something labelled "self-driving" and impute characteristics and capabilities to "it" that "it" just doesn't have; this colours their expectations about what it can and can't do, and about what autonomy and responsibilities to cede and delegate to it.
Most perniciously, both Duplex and autonomous vehicles are parts of systems. They are not "your assistant"; they are an interface to a collection of machines and people whose function is not to help you, but to enrich someone else and to constrain your ability to choose and act as you would if you didn't interact with them.
Stalin would have loved this: the citizens go where they are taken, and they can't even think of an alternative. They choose the food we say they can. The pictures they take are manipulated and managed by us.
At any time, your photos of the friend who's no longer in favour might morph overnight into landscapes only, or into pictures containing just the friends who remained loyal to the party/government. You might wonder what happened, but how would you know?
It's always possible to imagine bad and worse scenarios. Fear is always available, and can be applied to any situation.
Believing that technology is the tool of an "evil" system, or that it facilitates "evil" is not healthy for the believer or anyone else.
Naturally, we are all aware of the power of tools and especially modern "smart" systems. How can we trust anything? We can imagine how things could be as bad as we can imagine!
I've decided, even at the risk of being naive, that I must not fear the machine, the network, or anything else.
We living beings all basically want the same outcome: happiness.
Having convenient tools to help us actualize that outcome is a good thing. Just because we can imagine some deep, dark conspiracy of "evil" behind the scenes, deceiving and manipulating us, doesn't mean that any such thing exists.
Nor, of course, that there isn't. Since we can't know, let's just enjoy all of our cool gadgets while being happy...
Although, I'm also beginning to feel as though the best decision will be exiting the information superhighway :)
If y'all haven't read it, The Joy Makers is a pretty fun sci-fi exploration of technological routes to happiness, and possible downsides to "plugging in".
I don't think things are quite as dystopian as envisioned above, but I would love to see the counter-argument.
But once you want to extend a problem into an unfamiliar domain, or debug why an algorithm isn't working, I don't see how you can accomplish that without domain-specific and machine learning knowledge. Or have I been missing out on some amazing auto-debugging AI?
Or because your own livelihood is dependent on machine learning?
Whatever the case may be, my response is that I don't care. I firmly believe that this is a GOOD thing.
As developers we should be striving a) to open source machine learning so that everyone can contribute to the good it can do, and b) to democratise the results of those efforts so that everyone can benefit from them in their own projects.
As time goes on, these efforts will translate into an NPM of machine learning. What would have taken a small team a year to do will just take installing a plugin and having an API at the developer's disposal. This is definitely a future that I cannot wait to see, and it's within reach.
Of the three (ANI, Artificial Narrow Intelligence; AGI, Artificial General Intelligence; and ASI, Artificial Super Intelligence), Artificial Narrow Intelligence is going to grow in prevalence in the next 10 years, and the effects will be very good. I don't think it will kill jobs; it will merely make people much more productive.
Why? Oh, I know. You are a wild-eyed optimist who would have cheered unfettered globalization in the late '90s because "the whole can benefit from it".
You know what: as developers, we should care about maintaining the scarcity of developers around the world so that we can continue being in demand and continue having great salaries. I would rather be part of the AMA (a cartel which limits the supply of doctors in the USA and erects legal/regulatory barriers for competitors, thereby ensuring a high standard of living for doctors) than be a manufacturing engineer whose job was lost (or threatened) first by globalization and now by automation.
You know who benefits from "democratization of the results of our efforts", or when "everyone can contribute to the good machine learning can do" (or other software engineering in general)? VCs and mega-rich folks, whose capital no longer has to share productivity gains with labor, because labor is abundantly available.
Yes, it still takes that expertise to run models well. But those without the expertise sound more confident, and then ruin the market with the expectations they raise.
I use heaps of tools I probably couldn't create myself, or at least don't have the gumption to try.