"I can't think of a job or a career where the understanding of ethics is more important than engineering," Dr. Kearns continues. "Who designed the artificial aortic heart valve? An engineer did that. Who designed the gas chambers at Auschwitz? An engineer did that, too. One man was responsible for helping save tens of thousands of lives. Another man helped kill millions."
"Now, I don't know what any of you are going to end up doing in your lives," Dr. Kearns says, "but I can guarantee you that there will come a day when you have a decision to make. And it won't be as easy as deciding between a heart valve and a gas chamber."
To me this rings incredibly true for Silicon Valley engineers these days.
Sometimes the distinction is even more insidious. I did work on perpetual flight for drones, and Facebook had a perpetual flight project that had the goal of bringing internet access to remote locations in Africa. Sounds humanitarian, but I also didn't want to be responsible for subjecting poor Africans to what I consider Facebook's panopticon. Or maybe it would have been a net boon for the region? It's really hard to tell a priori, so the best I have managed for myself is just to try and stay in theory, where developments are further removed from direct consequence.
Separately, RE: bringing internet to new places, there's reasonable evidence to show that introducing broadband lifts GDP, so whatever you may think of Facebook, giving people high-speed connectivity to the whole Internet is probably a net positive. Disclaimer: I helped start Facebook's Connectivity Lab but no longer work there.
From a Wikipedia section on Free Basics: https://en.wikipedia.org/wiki/Free_Basics#User_experience_re...
> In 2015, researchers evaluating how Facebook Zero shapes information and communication technologies (ICT) usage in the developing world found that 11% of Indonesians who said they used Facebook also said they did not use the Internet. 65% of Nigerians, and 61% of Indonesians agree with the statement that "Facebook is the Internet" compared with only 5% in the US
No wonder I have such an alcohol problem...
To me this argument that "anything can be used in evil ways" is a poor excuse used to distract from the issue.
When gunpowder was invented, its purpose was medicinal; three centuries later we had cannons.
Many things in our world seem positive at the outset, but give it some time, start looking from different angles, and you're bound to find some potentially detrimental use.
Granted some inventions like nuclear energy and nitroglycerin, both intended for constructive purposes, have a more obvious destructive potential. An example that better connects with your question is a company such as Monsanto.
It's for golf courses - there are amazing golf courses in Dubai that required genetically modified Bermuda grass in a sandy, synthetically fertilized substrate. They take huge leaf blowers and cover football fields' worth of sand with it, water the HELL out of it, and pretty soon, beautiful grass in the desert.
Grass in the desert is great for everyone that wants to golf or to make money off golfers.
Getting grass where it isn't sustainable is not benevolent.
Actually I was talking about this:
Regarding the original statement I was criticizing: Of course one can construct an argument that technology itself has no ethical bias and that everything can be used as a weapon. Sure, beat someone to death with flowers, go on!
The point I'm trying to make is that I consider this tactic a distraction. Nothing more than "Look! A three-headed monkey behind you!"
And then perhaps there's a question of who owns this now greened and more usable desert?...
And the prince said "by aiding them you killed my men" and slew the man.
And then the people in the next town all got together and said, "Let us all work together and see that this prince is never able to defeat our army."
"Give me six ideas from the most ethical of engineers, and I'll find something in them to weaponize."
Any primary research can be used for any range of good or evil... the task is to curtail the creators of evil uses.
Instead, he made war and death cheaper. He made it more likely. In his lifetime, machine guns were deployed asymmetrically, making it cheaper for colonial empires to kill large numbers of the colonized.
"Perhaps my factories will put an end to war sooner than your congresses: on the day that two army corps can mutually annihilate each other in a second, all civilised nations will surely recoil with horror and disband their troops."
Some genius in the near future will probably make the same inaccurate prediction about autonomous killerbots, or, more realistically, cyber-warfare (I hate that term): "Shutting down hospitals, power plants and other critical infrastructure will make wars shorter."
Thinking you can is naive at best.
In the case of Fritz Haber, it's the same man who saved billions from hunger and killed millions by prolonging World War I. Excellent article on the same - https://medium.com/the-mission/the-tragedy-of-fritz-haber-th...
Sometimes the decision isn't as clear as black and white, especially when you mix nationalism and patriotism into it.
Mass intentional killing is wrong, pretty much everyone agrees with that.
Bomber Command in WWII and the use of nuclear weapons in WWII are both examples where you will find nuanced, intelligent debate on both sides of whether "mass, intentional killing" is wrong. What you describe as "hate / pride for his country" ... but isn't that the reason most soldiers kill? That you even bother to call this out shows that you know the reasons for killing people need to be taken into account.
I'm not making a statement on whether or not mass killing is wrong here, simply pointing out what seems obvious: it's far from black and white.
I'm sorry, but this sentence already contains the problem.
In a shooting war, both sides have already ceded that killing people is morally justified.
And now you're in the unenviable position of arguing about how much death is too much, and what the proper ways are to kill people in a war setting.
And then you get asked moral dilemmas such as whether killing 100,000 people is okay if it saves 1,000,000 more from dying. This is the Hiroshima/Nagasaki question in a nutshell.
I don't know that answer, but then I didn't fight a brutal war for four years, watching my friends, family, and countrymen die at the hands of a brutal regime.
Power, corruption and lies.
Those people often stand to personally (financially) benefit from unethical business decisions. Engineers have to think through all the dirty consequences, and still draw pretty much the same salary.
So, there are two aspects of that: first, do you trust the people you're designing a weapon for to use it for justifiable purposes, and second, how does the weapon itself influence whether it can be used for just or unjust reasons? Gas chambers, for example, can only be used to kill someone whom you are already able to coerce into entering it; its only use is to murder a helpless prisoner. A Spitfire or Hurricane can shoot down bombers but not necessarily do that much damage on the ground. Orwell had an interesting perspective on this question: "Thus, for example, tanks, battleships and bombing planes are inherently tyrannical weapons, while rifles, muskets, long-bows, and hand-grenades are inherently democratic weapons. A complex weapon makes the strong stronger, while a simple weapon — so long as there is no answer to it — gives claws to the weak."
I really don't have answers. The stance of, "weapons equals bad, therefore it's wrong to ever help design weapons" is a pacifist notion, and maybe that appeals to you, but if you're not a pacifist, it becomes a complicated question with certain political ramifications.
If one could engineer a solution to save lives, but instead uses his/her talents to build the nth iteration of, say, a food delivery app, does he/she bear any ethical culpability?
Because it seems the bigger challenge for engineers is not avoiding overt evil, but in the opportunity cost of spending their time on work of significantly lesser consequence than their talents might allow.
Yes, yes, we have a responsibility to not defraud people, and to not work on obviously evil projects, and even to speak out if your employer is, say, building a bridge that is not up to the clearly defined, established standards. (which is super different from when you think a bridge will collapse, but can't explain why in a way that other people can understand. This parenthetical part is particularly important, I think.)
But we are free humans, and if we want to waste our lives on trivial things, so long as those trivial things don't hurt others, that's our choice. We are humans, and humans need entertainment, we need play.
Well, that's a narrow view of ethical guidelines in conducting one's work. We all generally agree that people shouldn't be evil.
My question sits above that. Is it enough for a moral society to simply decline to do evil? Or do we have a responsibility beyond that? And if it's the latter, then whose responsibility is it?
My question was whether it's just as immoral a choice to not use one's potential to help people as it is to use one's talent to overtly harm people.
Is the answer to that not obvious? It's the difference between negligence and pre-meditation/mens rea.
Because, yes, it would seem obvious.
>It's the difference between negligence and pre-meditation/mens rea.
a) you're using legal definitions when the question is about morality and b) it's not negligence to actively choose one path over another. It's simply making a decision, and I was asking if that decision can be considered a moral one.
Started off as a philosophical question, but seems to have gone a bit literal. Oh well, it happens.
You don't think the legal differences between those two things were based on perceived immorality?
Here they say that the design of the internet was very much determined by the military priorities of the project. (Telcos had different priorities, so they came up with OSI, where the link layer is supposed to be reliable, as opposed to ARPANET/IP, where the link layer/IP routing is dumb and the endpoint is smart; this allowed the link layer to be pluggable and IP routers to be cheap.)
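That "dumb network, smart endpoint" idea is the end-to-end principle in miniature: the network is allowed to drop packets, and reliability is the endpoint's job. A minimal toy sketch of that division of labor (the function names and the simulated lossy channel are my own illustration, not from any real protocol stack):

```python
import random

def unreliable_send(packet, drop_rate, rng):
    """Simulate a dumb link layer: packets may silently vanish in transit."""
    return rng.random() >= drop_rate  # True means the packet got through

def reliable_transfer(packets, drop_rate=0.3, seed=42, max_tries=100):
    """End-to-end reliability: the *endpoint* retransmits until delivery,
    so the routers in between can stay simple, cheap, and lossy."""
    rng = random.Random(seed)
    delivered = []
    for pkt in packets:
        for _ in range(max_tries):
            if unreliable_send(pkt, drop_rate, rng):
                delivered.append(pkt)  # receiver would ack here; move on
                break
        else:
            raise RuntimeError("gave up after max_tries retransmissions")
    return delivered

print(reliable_transfer(["a", "b", "c"]))  # → ['a', 'b', 'c']
```

All three packets arrive despite a 30% per-attempt drop rate, because retry logic lives at the edge rather than in the network, which is essentially why IP routers could be cheap.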
That said, the approach of some of the commenters here is that the only acceptable approach is to refuse to work on anything that could ever be "weaponized". It's possible that the distinction you've so clearly and correctly pointed to could stand to be more compatible with this stance.
Autonomous murderbots are highly likely to depend on network protocols, among other things.
Please excuse my Manichaeism.
But then came the nuke. Nuclear weapons make traditional war between developed and nuclear capable nations basically impossible. Think about what the Cold War really was, or even what the geopolitical 'disagreements' of today are. Those are World Wars 3 and 4, averted because these wars would be unwinnable by any side. Like Einstein said, 'I'm not sure what World War 3 will be fought with, but the 4th will be fought with sticks and stones!'
Did the people developing nukes understand that they would finally create a weapon so immensely powerful that it would deter open, unrestrained war for decades to come? It's possible, but I think their motivation was something more straightforward - gain military superiority in the present, probably mixed with a bit of scientific curiosity about the challenge of creating a nuclear bomb.
It's difficult to predict the future. The most awful and awe-inspiring weapon created in the history of mankind ended up creating the most unprecedented period of peace for mankind as well. This is also why I vehemently oppose nuclear disarmament. That's how you get WW3 if people actually disarm, though in reality it would more likely result in nations obfuscating their nuclear weapon programs and facilities. It's just a molten salt thorium reactor guys, come on - breeders are awesome!
The loss of life when developed nations fight each other is nearly inconceivable in today's times, which is why I went out of my way to try to give measurements that can help you grasp it. The entirety of the world's conflicts today is absolutely negligible compared to WW2. And I'm focusing on WW2; the major point here is the WW3 that we have avoided. The reason for that avoidance is almost entirely nuclear weapons.
Finally, it's not just peace for the west. There is unprecedented peace throughout the vast majority of the world. The entirety of Africa and the Mideast account for less than 20% of the world's population.
The school my friend went to as a child no longer exists because it was bombed (Syria)
Wars are still happening today, just not with nukes. Every nation with nukes knows that it would be suicide to launch one.
I used to find myself confused most of the time while using technologies whose CEOs or missions I do not agree with. It felt as if I was contradicting myself. But then I realized the dialectical nature of this relationship: if you want to create something better, to improve something, it is only natural that you will do so using the existing tools available in your world.
A great example is Facebook. There are things you might not agree with, you might hate it, but if you look closer you will see everything from volunteer groups to political campaigns to marches and protests which are organized only via Facebook. One such political activity could one day change Facebook, or the world we live in, drastically.
You can apply this to other cases. You might have an anti-capitalist philosophy podcast hosted on Amazon or Spotify or Apple which in essence contradicts the very existence of such companies. But in the end you are using them to deliver your message, cause change etc.
I am curious about what others think about this and how they feel about looking at it from a dialectical point of view as I am continuously thinking about this and still forming my thoughts.
But there is space for ethical contemplation during an arms race.
No one hates war more than soldiers.
If you didn't do great in school and don't have rich parents to pay for college, the military in the US will look great with its education perks and chance to "see the world".
Or they're broke, have their back against the wall, and can get good training and education through the military.
How should the military fill its ranks? Would you rather have a lottery?
That's the truth.
Did the engineers of the atomic bomb save millions of lives and usher in an era of unprecedented peace? Or did they slaughter millions of innocents and sow the seeds of humanity's destruction?
It's been 70 years; we've had plenty of time to make up our minds.
Many inventions are not so easily classified as innately good or evil.
>Kearns was a member of the Office of Strategic Services, the forerunner of the U.S. CIA, during World War II.
Somehow I don't think that Kearns would have had an issue with writing software to analyze video footage for the military.
Merely offering a force projecting capability is not evil.
If you look at world history, the only concrete definition of who has most political power is this: it's the party with the most capability for projecting violence in a given area.
Small close knit groups that live outside of large populations can live independently and do without military force. Once you have a large population with more or less fixed hearth and home the party with most violence projection capability owns them politically.
This dynamic played out on an almost ridiculously minuscule scale in the Peloponnesian wars of the Greek city-states of antiquity.
The pathological manifestation of this principle can be viewed, say, in the rise of ISIS and the Somali warlords.
While a modern state is seldom a benevolent actor, at least in the western countries it's the best of known alternatives.
In this framework violence is a key tool of the state, just like good governance, tax collection, etc.
Given that violence is a necessity, in my opinion designing military instruments in itself is not evil. It's not as good as designing new vaccines, so there is some ethical scale in the matter, but I'm damned if I can put it in concrete terms.
Are you sure it's a good idea to become less influential on the world stage?
What gets me now, is how much the US economy and focus is on death and violence. Offensive death and violence. We're bombing people all over the Middle East. And have been for years. Tangentially, I find 'March for our Lives' ironic given the amount of, to me, unjustified, violence and death the US country exports to children and families around the world.
For self-defense only? In an ideal world, yea, invent the meanest things possible. That philosophy still is compatible with a strong military. But then again you have Gulf of Tonkin, etc., so who knows!?
Lately I've been telling myself this whole wargame is a game I'll never be privy to what's actually happening.
- Slaughterbots: https://www.youtube.com/watch?v=HipTO_7mUOw
- Last Day of War: https://www.youtube.com/watch?v=IjJmTeBSEzU
I think people often have that twisted. A lot of the U.S.'s expansion is cultural, not military. That is to say: people liked what they were doing, and became a fan. Think: music, movies, and to some extent, literature, and now you might add software and consumer electronics to that list.
And even when looking at Rome, can you imagine how much harder those repeated annexations would have been if the Roman culture wasn't respected?
I'd say that being capable at least of defending oneself, independently, from most individual powers and believable coalitions, creates a certain amount of stability.
I'm not claiming it is, nor am I arguing that culture isn't also a factor. I'm saying that military presence shouldn't be discounted, as the parent seemed to be doing.
Realpolitik is meaningful; and everyone on the left seems to have forgotten about it (I say this as an unabashed progressive liberal).
The US’ prime advantage in the 20th century was the fact that basically the rest of the industrialized world was completely destroyed twice — and then they paid the United States to help them rebuild. It wasn’t until the 70s that the US’ reliance on oil became heavy enough to exploit strategically, but by then we had built an entire post-industrial economy a decade ahead of Europe and two decades ahead of the rest of the world. We helped Japan do the same because they were basically a puppet state and gave us a foothold in Asia to serve as a buffer with Russia.
Does that sound like cultural expansion driving the military, or the military goals driving the cultural expansion?
The list goes on and on.
Defense spending is a small parasite on an enormous private market for cultural and consumer products, of which the U.S. is and was a major exporter.
Defense agencies drive innovation because they have external constraints which force them to try to procure things which haven't yet been invented. It's similar to the way that automobile races are used to drive auto innovation. Necessity is the mother of invention, and necessity is not unique to defense, but defense necessities tend to produce extreme engineering efforts.
But, there is power in organizing, or unionizing, for collective bargaining.
Well, given how unimaginative a leap the national socialists took from the national fumigation program to the gas chambers, I doubt there was much more to it than "take bug poison room tech, use it on humans".
There's no such thing as an "engineer" in the real world, and there's almost always somebody more unscrupulous, devout, or patriotic than you who knows or is willing to learn what it takes to finish the job, and school is not going to get in the way.
Added: Engineering is a practice, not an immutable identity. Even if you publicly shame all the "engineers" away from contributing to an effort you disagree with, replacing them is a matter of learning enough of the practice to complete the task. Even if we were just talking about what is in paper books today, it's hard to say that anyone could have control over who is or isn't capable of engineering efforts.
If you believe that doing something is wrong, the idea that someone more unscrupulous will do it if you do not must not be used as an excuse for doing that thing yourself. Giving in to this notion proves that you are the unscrupulous one.
By involving yourself, you gain the opportunity to do the right thing (given the circumstances). If quitting accomplishes the same thing as sabotage or internal lobbying, then by all means.
1. If you're on an active battlefield, both sides have already agreed that killing people is morally justified.
2. If the enemy is shooting at you, he is not interested in the ethics of killing someone on a battlefield.
3. If ethics helped wars end, there would be a US Army 27th Ethics Brigade to parachute in, but there's not and probably never will be.
If on a battlefield, given a choice between holding scrupulous ideals and living, most people would choose living.
If someone wasn't willing to kill someone else on your country's behalf, most likely you wouldn't have the freedom to ask that question.
Edited to add:
The metaphysical question you posted might be better phrased to ask:
What's the point of life if someone else is willing to extinguish it?
Just recently in the AI space:
- Siri was spun out of a Pentagon project -- look up SRI International and CALO. Its purpose: a "soldier’s servant".
- Autonomous driving is a direct evolution of Pentagon-funded efforts -- see DARPA Grand Challenge.
And it's not just funding the early research, it's procurement like this too. Military procurement has also supported the development of technologies when the commercial market couldn't.
Again, I support the employees and hate the fact that in order to develop medical lasers we first have to figure out how to shoot down missiles with them. It's hugely inefficient, could spell our doom, and if you think about it, fundamentally undemocratic. (Gives elites more power to direct taxpayer dollars under the rubric of defense.) But this should be a basic part of any story on how Silicon Valley works.
And to think that SV has a large population of supposedly "small government" Libertarian Capitalists... oh, the mental gymnastics in that.
This is a national budget problem. If you're in the tech world long enough, it becomes pretty clear that the only way to get Big funding is through military affiliation.
This puts a huge selection bias on what kind of technology projects actually get big funding, and further it prevents the benefits of those projects from reaching the community for years, because the military overlords demand secrecy and sole use of the technology until it gets superseded.
We need to cut a huge chunk out of the military budget and give it directly to the tech sector, so that big innovative projects are actually possible without having to be military.
Look up "The Secret History of Silicon Valley". There was no tech world in the Bay Area; massive funding for radar research post-WW2 bootstrapped what is now Silicon Valley.
It's not a national budget problem, it's just the history of why things happened in SV.
That’s why it’s wrong to just compare the absolute amounts invested — it’s when it’s invested.
MIT's Lincoln Laboratory accounted for 27% of MIT's revenue (roughly a billion dollars) in 2016. That's one research center.
I believe the silicon research/production that started Silicon Valley was for military needs. Military paid for it.
The first computers of the US and Britain? They were funded and created during the war for military purposes.
And what about China and Russia? Will THEY stop AI research just because Google employees demand it?
Innovation doesn't need defense support, innovation will come anyway. We just choose to allocate much of our resources to the military so that's where it gets spent. Later we can claim it as a win for defense. A happy surprise benefit of trying to kill each other.
All you need is people, time and money. Profit.
Many years ago I chose not to work on a local defense project. I found out that the project had ties to Chinese defense. I'm fine with my choice because I find their leadership somewhat oppressive.
Another time I saw Iranians on the campus. Another awesome opportunity for interesting work that some might not want to get involved in.
I know someone who did work for Mugabe. Hey, cool project. It's easy when you justify it.
Yes, you need time money and profit. Agreed. But if those things go against the military's goals, or you're unable to convince them otherwise, you're in trouble. At a certain scale you need to have the nod of those in power. It's a fact of reality on this earth. I'm not saying it's good or bad.
And I don't know how millions being lifted out of poverty, 10000 miles of high-speed rail in ten years, and so on, in China is necessarily oppressive. Or how the sheer virtue of someone being Iranian makes the project bad. I'm getting a lot of black and white vibes.
Well yeah but good luck proving PARC was a secret defense contractor. We can cherry pick examples of tech or benefits all day. A few good defense ones doesn't mean defense is good.
You can have an oppressive government and still have a lot of good come out of it. China is amazing. Their people still aren't free. In fact you can have a murderous military and still have benefits to humanity emerge. The military is still there to kill people and I'd rather not be part of that.
Projects were chosen as examples of countries that are American rivals, for effect and the benefit of US readers. Fun well-funded Chinese research projects might one day kill you. I would be as unhappy working on a defense project for any country, including my own. the point was "someone on the other side might have the same justifications".
We'd all be better off if the world sent more defense money to other parts of our lives. It's not the defense aspect that makes the projects good or possible, it's that defense has our money. Send the money directly to tech or research, same benefits would emerge.
We can't change our countries' budgets but we can stop idolising the military as funder. Sure DARPA but they hooked up a few universities. Pics of cats and porn (probably not military but good luck proving otherwise) took us the rest of the way.
> Black and white vibes
Play the ball not the man
I've called SV the military's generalized R&D department.
However, if you think an ideology has logical errors and requires mental gymnastics, you likely don't understand it well enough to intelligently criticize it.
That's true, but not really in the general way you argue. Military spending on bay area R&D has been more or less insignificant for the entirety of the internet era. It exists, sure, but it's not driving meaningful revenue for any of the big players. Silicon Valley since the mid 90's has been a consumer thing only.
And even in the genesis of the valley, it was just one serendipitous application (missile guidance systems, which were willing to pay huge sums for early transistors which were literally 100x lighter than vacuum tubes).
This is ridiculously false. SV receives billions of dollars from the Pentagon, the CIA, and a variety of other government agencies annually. Here is just one example:
Here is a list of 219 tech companies owned in whole or part by the CIA hedge fund (just the ones they let us know about):
Here is their logo, which speaks for itself:
The CIA hedge fund and billions in R&D money are in addition to an unknown portion of the "black budget" controlled by various spy agencies, which is over $80 billion this year.
The truth is that the tech industry is brimming with money from the military and spy agencies - and this is just what we know about.
Except I have found this thread to be very novel, thought provoking, and seemingly well informed.
Not only that, but when YouTube and Google block content, censor people, or otherwise prevent free speech, the common rejoinder is "they are private companies, they don't have to abide by the 1st amendment". Any company that has received taxpayer money, whether through direct subsidies, grants, partnership, or any other avenue, ceases to be a "private company". I believe strongly that if you are truly private, and you operate entirely with your own private funds and wholly-owned, privately purchased infrastructure, you are free to say and do what you want - silence any voice or opinion that you don't like. But once you receive public funds, in any context, with that comes responsibility to the public. Despite being "the way it works" currently, this system is incredibly corrupt and logically inconsistent. In a world where the Constitution was respected, enforced and held inviolate, a company that received public funds (like Google) would not be allowed to declare itself a private company with total autonomy on one hand, and grab countless millions in taxpayer money with the other.
The military application in question is legal and is approved by a duly elected government that supports it politically. In earlier days, employees generally would see this as just doing their jobs in developing technology that their employers wanted developed, and would not concern themselves with ultimate uses and applications. In other words, doing your job is personal and, as long as you do it honestly and work hard, you should not be faulted for doing it as requested by your employer. That was always the standard.

What, then, is the new element from which this sort of employee-driven demand arises? Is it morality? In other words, if I help develop A.I. that can be used for all sorts of things, one of which happens to be military-related, is the effort "evil" if the employer for whom I develop it agrees contractually to provide it to the government for a wartime/military use that can kill people? Do I really make a difference for the good if I convince my employer not to do this, if all this means is that the company down the street gets the contract and the military gets the same results, albeit from a different vendor? If so, then you as an employee can make no practical difference in making the world better by insisting that your employer forgo this particular contracting opportunity. If you succeed, your employer misses an opportunity, but the evil you see being released into the world still gets released. It just means that you do not personally contribute to the development effort by which it is made possible.
Of course, it might theoretically be possible to persuade all persons working in the field of A.I. to ban further work that directly helps the military. But that would seem a practical impossibility. Many people in all countries believe that military technology of all kinds is proper, legal, and politically supportable for purposes of self-defense or for some other overriding purpose they deem proper. And certainly, there are bad people throughout the world who are eager to use any technology that comes their way for overtly evil purposes such as misuse of an atomic bomb. Unless and until human nature is fundamentally transformed, that will never change.
So, what is the answer in a country such as the United States where people and companies have the freedom to develop A.I. for any lawful purpose and where some inevitably will do so for a military purpose of which you disapprove?
You are then left only with a political solution: use political means to gain control of the government and the military and apply the force of law to ban the military use of which you disapprove.
So this is either a personal act of futility by the Google employees or it is a case by which they cannot separate the personal from the political and thereby insist that their employer sacrifice particular economic opportunities to ensure that your personal actions do not support a political outcome of which you disapprove.
Even then, does this mean that your employer should cease working on A.I. altogether? For, just as cash is fungible, so too is technology. Every improvement you make in A.I. might have an immediate use of x for your employer but, as humans collectively do this for all sorts of improvements, the results are there for the taking in the future for military applications of all kinds. In other words, you cannot put your improvement in a box or control it so as to limit its future uses (at least not in a free society).

The computing technology of recent decades undoubtedly has bettered many aspects of life but it has also greatly magnified the lethality and utility of military applications so as to make the world far less safe. And this was inevitable unless a supervening agency were to have used forcible and totalitarian means to suppress such technological development from inception. Since no such supervening agency existed or even can exist in a free society, does this mean that all engineers and technical developers have blood on their hands because, ultimately, things they have done were used for applications of which they disapproved? Of course it does not. Nor would people today working on A.I. be held morally or legally responsible for ultimate downstream uses made of their work of which they would not morally approve today.
But this brings us back full circle. In the long run, you cannot stop such uses (or misuses) made from your technical development work. Nor can you be held responsible for them even though you contributed to them in some remote degree through your work efforts. Why then should it make a difference if your direct work efforts for a company like Google are applied to a military application of which you do not approve but which is legal, politically approved by the governing authorities, and will happen anyway regardless of whether Google is involved?
The puritans of old tried controlling the morality of others by shunning and shaming and doing it to an extreme degree. They failed miserably in their efforts because humanity is what it is and followed its own course without regard to external religious constraints.
This sort of effort by Google employees is obviously different in that it is not religiously driven, but does it amount to anything more than a shunning-and-shaming method for trying to impose one's sense of morality on others by signaling that this way lies righteousness and everywhere else lies evil?
If this is what "don't be evil" now means, then Google will need lots of help going forward because every cause under the sun can be used in the same way to shun and shame. We then have management by a corporate board as may be swayed to and fro by any organized protest of the moment.
Whatever this is, and however it might be defensible in "sending a message" or whatever, it is a sure way to put a company at a competitive disadvantage while accomplishing nothing practically. It may further political goals but, if those are the goals, better just to try to advance them directly and not by attempting to shun and shame your employer (and your co-workers who may disagree with you) into submission. The personal need not be political. If it does become that way, a new form of puritanism will hold full sway to the detriment of all.
"It's hard to define things, so why bother at all?" ... "Attempting to circumscribe the effects of your work is difficult, so why have a moral stance at all?" ... dancing through loaded assumptions like "duly elected" and "democracy", finally concluding with a tired crescendo of capitalist "competitive disadvantage".
I would merely counter: if we are the future, then we can't all be lazy sods, especially those of us empowering the greatest systems, information and power structures on the planet. Give a damn, it's your moral duty. Intelligent people recognize this.
This is a false history. In the 1960s the U.S. was full of young people questioning whether they should just do as they were told and be a good, dutiful employee.
Since then there has been a massive campaign to roll back what was called “Vietnam syndrome” — the idea that you should consider the morality of your actions and contributions to society, not mere legality. Hence all the passionate Hollywood WWII dramatizations, the Greatest Generation, etc., portraying war as a tough but noble effort in which we must all unquestioningly sacrifice for the greater good — de-emphasizing the horrendous atrocities perpetrated by the U.S. military in Vietnam and Iraq, its support for murderous dictatorships in Central America and Indonesia, and so on.
Noam Chomsky has written extensively on this. One essay in particular is called The Backroom Boys — a reference to the chemical engineers at Dow who developed napalm within the relative peace and equanimity of a laboratory.
> Whatever this is, and however it might be defensible in "sending a message" or whatever, it is a sure way to put a company at a competitive disadvantage while accomplishing nothing practically… If it does become that way, a new form of puritanism will hold full sway to the detriment of all.
Whatever your opinion of the anti-war activists of the 1960s, puritans they were not.
Is that what is typically meant by "Vietnam syndrome"?
Also, isn't it a stretch to say Hollywood movies about WWII propagandize the idea one should only consider the legality of one's actions rather than the morality? If anything I would think most films attempt (in a sappy and trite way) to defend the rightness of the Allied cause.
I think it's probably fair to say that Vietnam was an eye-opener that shook a lot of people's trust in the wisdom of our society's leadership in general. And that a segment of society nevertheless responded like Kissinger by doubling down and shaming the doubters.
Google has users in almost all countries, and even our friends in other liberal democracies do not see the U.S. military the same way we do. Perhaps some Google users' family members have even been killed by the U.S. military. This presents a perfectly reasonable business reason (one that has nothing to do with "the personal, the moral, the legal, and the political") for Google to turn down AI drone contracts.
Complying with laws in another country is one thing. Working with a military, which necessarily has implications beyond that country's borders, is another. And of course even here there are different degrees. You can build a general-purpose secure email client and sell it to a country's military, or you can design their bombs. Where the line is I'm not sure, but at some point your activity is inherently violent, inherently adversarial to some fraction of people in the world.
If a technology is not under that kind of scrutiny, chances are its military applications are... far-fetched.
We are starting to see the shape of the AI-driven future, and it is not pretty: autonomous drones, other robots, and patrolling surveillance systems establishing strict, inescapable authoritarian control.
The elite employees at the global AI leader are best suited to see the coming dangers and are sounding the alarm bells. The outlook is bleak but moral engineers are going to be one line of defense in this fight. And I hope they keep doing it.
There's a famous quote for this:
> It should be noted that no ethically-trained software engineer would ever consent to write a DestroyBaghdad procedure. Basic professional ethics would instead require him to write a DestroyCity procedure, to which Baghdad could be given as a parameter.

-- Nathaniel Borenstein
I believe the meaning is: yes, it's evil. It's the handing-a-toddler-a-loaded-gun sort of evil. If you write DestroyBaghdad, you've limited the harm your program can do to what is specifically required by the situation. DestroyCity is easily misused in the wrong hands and should be carefully considered by ethical programmers.
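To make the parameterization point concrete, here's a minimal sketch. The function names come from the quote; the returned "order" strings are, of course, hypothetical:

```python
def destroy_city(city):
    # General-purpose procedure: any city can be passed as a parameter,
    # which is exactly what makes it dangerous in the wrong hands.
    return f"order issued: destroy {city}"

def destroy_baghdad():
    # Special-cased procedure: the harm it can do is bounded to the one
    # target the situation specifically required. (Per the joke, the
    # "ethically-trained" engineer writes it as a wrapper anyway.)
    return destroy_city("Baghdad")
```

The joke, of course, is that the "ethical" engineer's generalization makes the code strictly more dangerous.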
Doctors solve this by barring unethical members of their profession from legally practicing. Programmers should consider becoming an ethical profession, because depending on others in the field to do the right thing and police themselves hasn't been working out.
Doctors are healers. The Hippocratic oath - "do no harm" - is the logical conclusion of the practice of medicine. Medicine heals, which is the opposite of causing harm. Avoiding harm is the only consistent metric of success, which explains the oath's persistence for millennia.
Can you think of a consistent, concrete set of ethics that would draw unanimous support among programmers?
What currently sets programmers apart is the lack of liability. Programmers write their own get out of jail free cards. We call them EULAs.
If a doctor screws up and leaves a clamp inside you after surgery, he is sued. If a programmer screws up and leaves a debugging backdoor in a shipped product, nothing.
>Can you think of a consistent, concrete set of ethics that would draw unanimous support among programmers?
I think if programmers can't come to a consensus on that answer, then legislators will do it for them.
If you look around, we're actually witnessing this happening right now. Populist anger has erupted after Equifax, Cambridge Analytica, and Uber. NYT opinion pieces call for changes in liability law around programming.
And it's not just talk. Changes have already started. Section 230 was recently modified to make small changes in the liability of web hosts. In response, Craigslist went full nuclear option in protest and dropped their Personals section. Almost nobody noticed, which means in the next round, lawmakers will be much bolder in applying more liability to the businesses of programmers.
Google's "Don't be evil" was the closest thing I think we've witnessed to a Hippocratic oath for programmers. That's long gone now. Now it's all jerk tech: exploit your users for content and then demonetize them with no recourse or redress.
I don't think the west can get any wilder, so the pendulum is going to go against us from here on out. Programmers should be getting ahead of this, but like all dumb humans, we will sit stupidly. We will only react to immediately obvious consequences instead of preparing for the storm on the horizon.
Do you not believe that society is only the sum of its parts? Do you not believe that the mathematics of society can be changed, the more parts of the equation object to letting their talents be used for unscrupulous goals?
I would point you towards any cultural shift in modern society, and how it began—usually, as the imbalance of classes further divides, until one class can't tolerate it any further, and uses what power they have to reset the scales. That is what is happening today, and it isn't a fluke of the short attention span of the beast of humanity. It is a conscious, concerted effort of people in this country who are tired of existing in a system devoid of morals. And to frame it as something like embracing the status quo, or becoming a puritanical society, is simply a false dichotomy.
You might want to read some history. E.g. history of the development of nuclear weapons.
This sounds remarkably similar to the "superior orders" defense given at Nuremberg.
(Those who gave it were hanged by the neck until dead.)
But I also wonder, if your employer asked you to provide some legal cover for something you found unconscionable -- like maybe draw up incorporation documents for organizing sex tourism to a place where it's both legal and likely to involve slavery -- would you subscribe to the same argument about how you should just do your job and not ask questions, how someone else will do it if you don't, and dutifully provide the legal services?
And if you think that hypothetical scenario is meaningfully different from this one, could you describe how? (I don't mean to try to back you into a rhetorical corner -- I'm genuinely interested in your response.)
This is basically the "just following orders" defense. Your argument rationalizes doing nothing.
As a moral person I think you basically have two choices: either don't work on things you think can be used for evil or if you do work on those things step up and make sure they are used responsibly.
You never know what will be the initial trigger for change, who will be inspired or whatever. Take this guy: http://www.ecns.cn/2018/01-02/286632.shtml Why should some basketball celebrity and his campaign have so much effect?
Google supporting American military may have negative effects on Google the corporation, especially considering that the rest of the world is a bigger economy than the US.
Google (and its parent, Alphabet) is a US corporation, predominantly controlled by a pair of American individuals, with overseas operations and subsidiaries.
It's “multinational” in much the same way that the CIA is.
Something I would add is that a lot of people don't understand how fundamental military R&D is to the collective progression of knowledge and technology. Take almost any common technology that we use today (computers, gps, rockets, airplanes, cell phones, radios, the internet, etc) and you will find it came from military R&D and use in war.
Since AI and all its related parts are the new technological hotness, to put it mildly, it only makes sense that Google, one of the companies on the forefront of this technology, would work with the government/military to do research and find ways to apply it with their scope.
There are definitely useful dual-use technologies. But there's also a lot of military research that is almost strictly for the military. Nuclear weapons research goes beyond the stuff needed for power generation, for example. Money that could have gone into research on more stable, safer power generation instead went into how to make nukes small enough to be used by infantry on the battlefield.
The military helps a lot by being a huge customer for a lot of this tech, but we can also cut out the middle man and just spend on r&d directly in some cases!
By discussing the morality of weapons research in a world where we already have civilization-ending technology, we can maybe reorient ourselves to spending directly on progress in more cases, without needing a military application to justify it.
This happened in the past with the neutron bomb; perhaps it can happen again with tech that could be used to help solidify a police state.
You made a point of talking about cryptography. The US government also classified crypto as munitions in order to control its usage and export: https://en.wikipedia.org/wiki/Export_of_cryptography_from_th...
This had very negative effects on cryptography in general (see the FREAK exploit: https://en.wikipedia.org/wiki/FREAK )
Also, why wouldn't this be employed to better monitor Stingray (https://en.wikipedia.org/wiki/Stingray_phone_tracker) systems?
I don't think the government (and specifically, the military) should be viewed as a non-partisan actor when it comes to technology.
> IANAL but I’m pretty sure the second amendment only applies to guns; you can’t walk around with an RPG on your shoulder.
Surprisingly perhaps you can buy a military grade flamethrower complete with a napalm package: https://throwflame.com/ it's not even classified as a firearm, and is illegal in only two states I think.
I'm a gun owner myself, and I've (long ago) taken the stance that the 2A isn't really compatible with modern life, as the people who originally wrote it could not have anticipated the weapons militaries deploy today. It makes sense in my mind that it is Congress' responsibility to regulate what it reasonable weapons for civilized society.
There's a reason you can't just walk into a Wal-Mart and walk out with a machine gun, or a rocket launcher.
That said, I don't much care for the fact that the requirement to have legal access to machine guns is "be rich enough to buy your way around the law".
Personally, I do think we need a complete re-work of the way we treat weapons to reflect the realities of the modern world.
(1) We have four hundred million weapons in the United States. It is horribly negligent that safe weapon handling is not taught in schools, at multiple levels. Also a good touchpoint to get police interacting positively with the community.
(2) The current background check system should be replaced with a Swiss-style one; e.g., you get a code, valid for say a week or two, that any seller can use to verify that you have passed a background check.
This doesn't create a de-facto registry, so no problem getting gun owners (or the NRA) to go for this one.
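A sketch of how such a code scheme could avoid creating a registry. All names and the two-week validity window here are illustrative assumptions, not a description of the actual Swiss system:

```python
import secrets
import time

class BackgroundCheckService:
    """Hypothetical sketch: the state issues a short-lived approval code
    after a buyer passes a background check; any seller can verify the
    code, but the service learns nothing about what (if anything) was
    sold, so no de-facto registry of purchases accumulates."""

    def __init__(self, validity_seconds=14 * 24 * 3600):
        self.validity = validity_seconds
        self._codes = {}  # code -> expiry timestamp

    def issue_code(self, now=None):
        """Called once the buyer has passed a background check."""
        now = time.time() if now is None else now
        code = secrets.token_hex(8)
        self._codes[code] = now + self.validity
        return code

    def verify(self, code, now=None):
        """A seller checks only that the code is valid and unexpired.
        Nothing about the transaction itself is recorded."""
        now = time.time() if now is None else now
        expiry = self._codes.get(code)
        return expiry is not None and now <= expiry
```

The key property is what `verify` does not do: the service can see that a code was checked, but it records nothing about the buyer-seller transaction, which is why the scheme doesn't build a registry.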
(4) A red-flag law that allows immediate family members and healthcare professionals to temporarily bar an individual from purchasing weapons, with safeguards in-place to keep bad actors from abusing the system (e.g., a psychiatrist that just reports all of his patients to the system because he opposes gun ownership).
(5) Removing gun rights only in the case of violent crime. Right now, in most states, misdemeanor assault won't cost you your gun rights, but a felony for tax evasion will. That's ass-backwards.
(6) Machine guns, grenades, rocket launchers, and the like should be available with a license. You really need to be able to show that you can handle these things safely. Right now, it's just "be rich".
(7) Treat short-barreled rifles and shotguns as normal firearms, and silencers like any other accessory.
(8) Concealed-carry should be the same for police as it is for normal citizens. Same licensing requirements: a one-day course, including a test on safe weapon handling. This also ends gun-free zones: if you have a license, you can carry wherever you want.
(9) Open carry needs to work differently for urban and rural areas. The above permit will allow you to open-carry a pistol in an urban area. Rifles, no. Put it in a case. In the countryside, open carry just makes sense.
(10) Actually prosecute straw sellers. Buy a gun for somebody that can't legally buy one on their own, you permanently lose your gun rights, plus whatever punishment makes sense.
(11) Secure storage requirements. If one of your weapons is stolen and used for a crime, you are legally an accessory if you cannot demonstrate that you took reasonable measures to store the weapon securely (e.g., a gun safe or lockable case). This doesn't require police inspections, but provides a strong incentive for personal responsibility.
All of this would enjoy massive support from gun owners, and address a lot of existing problems.
I may have missed my calling . . . .
That is where the saying comes from.
It can be more effective at clearing bunkers and closed rooms? The military was using these in WWII for trenches, bunkers, and clearing brush. My friend's grandpa in the US operated one in the Pacific during WWII.
I just brought it up as an example of how something potentially more deadly than guns or RPGs is not much regulated, and how laws are sometimes not very rational.
I recently purchased the Federalist Papers so I could have a better understanding of "how" to read it, what the context was, and what types of discussions to anticipate having with those who'd like to look at it differently.
Also, why do they need to be purchased? Isn’t that information in the public domain? Surely enough time has passed and Alexander Hamilton’s family don’t need to continue to profit from it?
I'd rather have a new amendment than have judges be able to decide what the definition of arms is. Slippery slope. They really could one day decide that 'arms' means only muskets, or even just knives.
U.S. v. Miller (1939) loosely established the test (later solidified in 2008's District of Columbia v. Heller) that weapons commonly used in militia service are inherently deserving of Second Amendment protections.
Miller, a known mobster who was caught with a sawed-off shotgun, lost his case because:
a) the military testified that they had never used sawed-off shotguns, so they were not useful to a military, ergo a militia (which was a lie -- they had used them, and found them useful for trench-clearing)
b) Miller's attorney was not very good, and didn't even challenge that testimony, much less disprove it outright -- I offer a little sympathy here, as they didn't have Google at the time
Also noteworthy, Miller was actually dead when the decision came down, as he'd been murdered, and because /shrug, the trial kept going, but the defense (for obvious reasons) quit trying. He was sentenced in absentia.
Which I find just insane. People say no one wants to take Americans guns rights away, but we were literally one vote from effectively doing so.
This was true only between 2008 and 2010. In 2010, the Supreme Court clarified that it was incorporated against the states by the 14th Amendment.
And it was indeed the case well before 2008-2010. See United States v. Cruikshank (1875), Presser v. Illinois (1886), Miller v. Texas (1894).
So says you. It's not clear to me at all that militia members don't support collective action.
Even the NRA removed references to that first clause from their material.
We do have definitions of what militias are, as informed by the writings of the founders, the Federalist Papers, previous drafts of the Second Amendment and, failing that, as codified by law in 10 US Code § 311. The definition of militia there ("consists of all able-bodied males at least 17 years of age and, except as provided in section 313 of title 32, under 45 years of age who are, or who have made a declaration of intention to become, citizens of the United States and of female citizens of the United States who are members of the National Guard") has likely been expanded by our recent advances in military equality.
Gun rights antagonists tend to downplay that bit, as it implies an individual perspective that they find conceptually incompatible with tightly-controlled gun ownership permissions.
Strawmen are so unhelpful in reasonable debate.
Not all gun control advocates are either anti-second amendment, nor in favor of eliminating all guns. To say otherwise, creating a false black/white dichotomy in the gun debate, does a massive disservice to both gun advocates and gun control proponents.
Ironically, by making the choice "all guns" or "no guns", gun advocates themselves are forcing the "no guns" option to the center of the debate. As a wave of frustrated anti-gun youth become voters and reasonable political moderates look at options to "protect the children", I really think it's in gun proponents' best interests to provide a better alternative than "do nothing" on one side and "civil war" on the other.
At the end of the day, these stances, even the "moderate" ones you mention, are irreconcilable all the way down to first principles. If you are for gun control, you are necessarily for measures that will restrict the right to bear arms as it is recognized today, some more, some less, but restrictions on the right all the same.
There's no real evading that.
But that does not mean that everyone holds extreme positions, which is what you claimed.
Arguing that the extremes are the only options is a problem.
How productive would health debates be if the only two options presented were veganism and paleo? If the only sex ed options are abstinence or polyamory?
There must be room for compromise, or there is no debate, only argumentation.
That's not compromise, that's capitulation.
What concessions would gun rights advocates accept in order to allow some restrictions? What's there left to give on this issue that wouldn't undermine any controls?
Say, for example, I wanted gun owners/users to be required to be as responsible as car users, i.e. pass a test, maintain a license and registration for weapons and weapon users, and hold liability insurance to cover damage either intentional or accidental (that would obviously scale with the likelihood and amount of damage the gun can do). What can gun control advocates give that will get that done?
I think part of the reason the gun debate can be one sided from the "control" side is that the US already is quite far to the "rights" side of the spectrum, relative to the rest of the developed world. It can be difficult to see where we could plausibly move further in that direction without causing more of the problems we're (hopefully) all trying to solve: unnecessary bloodshed.
Another example I floated in previous threads is surfacing psychological issues in NICS checks (stuff like certain diseases or involuntary holds) and granting access to that system to everyday people rather than just retailers.
I think for a weapon to be 'compliant' with the US's Second Amendment, a couple of criteria must be met:
a) must be in current wide use in military
b) must be capable of being carried by a single person (which is why tanks and fighter jets will not qualify)
This follows from the 'individual' focus of the Bill of Rights.
c) must be capable of being aimed at a single person (which is why explosives or RPGs would not qualify).
This follows from the notion that the Bill of Rights, in general, does not condone collateral damage or collateral effects.
As this is an individual's right, it therefore presumes an individual's responsibility.
I also find this linguistic analysis of what 'bear arms' meant in 1791 interesting/educational:
 - http://ia800500.us.archive.org/20/items/gov.uscourts.mad.135...
I believe GP was referring to times s/he's entered the US on a visitor visa of some sort.
It doesn't much matter. Even if somebody made the case that "crypto is a form of 'Arms', subject to 2A protection," the 2A doesn't grant an unregulated right to own whatever "arms" one chooses. All the rights granted by the Constitution are subject to reasonable regulation.
So, Congress could simply legislate a restriction on crypto, much as they have done with machine guns, grenades, and nukes.
The guns that a person is allowed to bear are nowhere near sufficient to take on the US military, and modern arms such as encryption and freedom from surveillance are not even guaranteed by the 2nd amendment.
You're getting into spherical cow territory if you think the US citizenry couldn't defeat the US military.
The US military will not be fighting the US citizenry in a hypothetical clean room, it'll be fighting in America. Who are you going to bomb, who are you going to strafe, when your opposition can quietly fade into the populace that provides your material?
I'm tired of reminding HNers about Afghanistan.
Who has the better chance at ambushing and getting away with the enemy's weapons: the one with a gun, or the one without?
Look at Turkey. The primary tool was encrypted communications and information dissemination. That’s modern power that citizens should have. Right now power comes from information and the preservation of speech and private communications, not a few handguns and rifles (or IEDs).
When the 2nd amendment was written a gun was relatively way more powerful than it is in today’s world. If you had a group of men with guns you were basically on par with the government military.
However, I'm not saying I'd want them to have a civil war, just that it failed despite doing everything "right", it seems.
Unless the US were to nuke itself, given the geography, a motivated citizenry could easily take on the US military. Not a scenario I want to imagine, but the US military has troubles in Afghanistan and Iraq — imagine that kind of conflict in the US. Just look at rebels in Syria to see this on a smaller scale.
WRT other rights... Explicitly? Not always in the Constitution. But, if you think you can cry "Fire!" in a theater, you would be mistaken.
SCOTUS has upheld reasonable restrictions on rights over and over. It's a settled matter.
(edited - cleaned up my thoughts a bit)
It ain't Dred Scott dumb, but it's up there in stupidity and retarded law.
Well regulated doesn't mean the feds get to control the details of what the militia can use either. The militia is every able-bodied 18-47 year old. The regulated part implies a command structure which the National Guard provides.
That's completely wrong. Each of the 1st ten Amendments - The Bill of Rights - enumerates rights of the individual against government power. Its absolutely absurd to argue that the 2nd amendment somehow refers to a government commanded and organized military body and not to the right of citizens to remained armed to prevent the government from getting out of line.
Chaplinsky v. NH was in 1942 and had nothing to do with draft dodging.
Since then, the courts have ruled inconsistently on the matter. The only common thread is that the courts mostly agree some restriction on speech is permissible - where that line lies is very much up for debate.
WRT the militia - the government disagrees with your assertion. A random citizen cannot go into the gun store and buy a machine gun, or grenades. Special (and expensive) permitting is required. If your contention is that these restrictions are unconstitutional, I'm not really sure we should bother debating the point.
"A well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed."
More information: https://en.wikipedia.org/wiki/United_States_Munitions_List
One of the founding principles of American government was the freedom from state surveillance and intrusion into private homes. Nowadays, the federal government of the USA can legally use any technology that is patented, so why should they be allowed to restrict inventors from disclosing or selling IP that inhibits surveillance?
I’m sure people have been saying that for decades if not centuries. Terrible things just get declassified slowly so we only learn about them later.
This is my perspective, because I read too much scifi: I can conceive of a couple possibilities in 300 years.
1. Humanity is gone or stone-aged. Either because it failed to colonize before being wiped out by disease, because it nuked itself, or because it implemented AI in a way that got itself killed.
2. Humanity has turned peaceful, formed a global community (if not wholly, then at least nearly so regarding scientific resources), and colonized.
I draw such a sharp line because I don't see how the resources to colonize can be mustered without justifying it with an arms race unless peaceful cooperation is established. Without this, I think somebody is going to put a nuke in orbit around Mars and call themselves King/Queen of the inner solar system.
Therefore, I think we should be working towards the cultural changes that I believe necessary immediately. This is why I'm a staunch proponent of universal education, universal healthcare, gun control, etc. I think there's no better time than the present to "be in the race together" globally. Given climate change, the death of bees, superbugs, and narcissists with penis-size complexes holding fingers over red buttons, I see a ticking clock.
Avoid (1) by tackling culture, is my theory, and I think that's what these Google employees are doing here. (I post this very much looking forward to being challenged on all points.)
If a kind of disaster is probable enough to blow up Earth, it is likely that the same kind of disaster will blow up the handful of colonies we can hope to build. At astronomical scale, the speed of light is quite slow.
It's not nearly as easy as that but that's the basic summation of my philosophy around the idea.
Once you get to the top of an organization, only a few people need to listen to you. As long as you keep their paychecks and power, they will listen to you.
Dictators don't care if war is unpopular as long as it's beneficial.
I think nukes may be obsolete because war is no longer beneficial, but leadership of a company may be 100% set on finding AI by any means necessary.
History should be a guide here.
Presuming the penis-size conversation is anything more than an intentional dig by a chauvinist powermonger probably isn't helpful here.
Globalism == lack of agency, the end of voting and elections.
Universal education == total indoctrination and ideological consistency of the collectivistic variety.
Universal healthcare == the end of individual choice, individual responsibility, all new extensive taxing regimes, the destruction of the individual on many levels.
Gun Control == People control. Then total domination by the state and a monopoly on the use of force. Which, by the way, has never worked out well for any citizen anywhere.
>Globalism == lack of agency, end of voting and elections
If federalism didn't lead to this, why would globalism? What even is globalism? I am skeptical of this because "globalism" seems to be used as a catch-all alt-right bogeyman lately.
>universal education == total indoctrination, ideological consistency
Universal access to education doesn't have to mean a homogeneous curriculum. It doesn't have to mean a lack of educational choices. It doesn't even have to mean a requirement to receive education.
Another thing I find curious: the very people I find rejecting education "collectivism" are often people who wouldn't blink before forcing their own "indoctrination" upon others. We fight this fight in Texas far too often. Should schools teach facts (evolution), or give "fair credence" to falsehoods (intelligent design) because of religion? Whoops, out go the falsehoods as soon as the Satanists get involved and also flex their constitutional rights; never mind, we'll take evolution!
>universal healthcare == the end of individual choice
I don't see why this has to be true; it isn't in any of the current implementations of universal healthcare.
>end of individual responsibility
Are you presently responsible for your water being clean? Are you presently responsible for the quality of your roadways? Are you presently responsible for the guarantee that your medication contains advertised active ingredients that do as it says on the label?
>Extensive taxing regimes
Shift military funding, close ultra-rich tax loopholes.
>destruction of the individual
Fail to see how this is true. Unsupported by argument.
>gun control == people control
>then, total domination by the state and a monopoly on the use of force
Currently working just fine in countries with gun control laws, which, by the way, completely nullifies the false "has never worked out well" absolute you ended with. Furthermore, the USA has right now a monopoly on the use of force. The US and its laws have sovereign control over whether you are allowed to use force or not. You have almost no say in the matter. Furthermore, this isn't 1776. The gap in armament between civilians and a martial state is so large as to be hilarious. Were the need to arise, US combined armed forces could drone strike, Tomahawk, artillery, AC130, or just door to door ground and pound any organized militia to dust. Or, you know, nuke it.
Are you, private US citizen, allowed to build a nuke?
I see this kind of response a lot to "the globalist threat."
What's the end game, for people that believe these things? Endless culture war? A galaxy-spanning human civilization that still has their "I'm an American and you're Chinese" safe little lines? Is there a fear that the "wrong" culture will persist into the future?
> What's the end game, for people that believe these things? Endless culture war? A galaxy-spanning human civilization that still has their "I'm an American and you're Chinese" safe little lines?
If we didn't have several superpowers on this planet, but only one unified government instead, what would whistleblowers do? Right now Snowden can fly to Russia, and Chinese dissidents can voice their opinions in the West. In a truly globalized world, there'd be nowhere to run.
We try to maximize competition and avoid monopolies and cartels in our economy. Why should we aim straight for the opposite when it comes to our governments?
Or as Obama said in the 2016 State of the Union:
"We spend more on defense than the next 8 nations COMBINED."
Keep in mind, most of that 8 are allies.
I struggle to see why we should be praising a self-fulfilling prophecy.
I disagree that "every country" does this. Many countries in Europe and Asia do not use their defense forces as a tool for attacking other countries and getting involved in regional conflicts that were only affecting people in those countries. And countries that do it are usually just "helping" US-lead invasions of countries (an example would be Australia -- the single reason we were ever in the Vietnam War was because of the US).
(While I may be biased, given that America bombed my home country and engaged in a "peacekeeping mission" when I was a child, for a civil war that had been going on for many years and had nothing at all to do with the US, I have never understood how Americans can see invasions of other countries as being anything other than that.)
I meant that they call their armies/air forces/navies 'defense'.
> Many countries in Europe and Asia do not use their defense forces as a tool for attacking other countries and getting involved in regional conflicts that were only affecting people in those countries.
I wish that were true, but almost all deployments of EU troops over the last 4 decades fall into that category. And there have been a lot of them.
> And countries that do it are usually just "helping" US-lead invasions of countries (an example would be Australia -- the single reason we were ever in the Vietnam War was because of the US).
Ditto Iraq, Afghanistan.
> While I may be a biased given that America bombed my home country and engaged in a "peacekeeping mission" when I was a child, for a civil war that had been going on for many years and had nothing at all to do with the US, I never understood how Americans can see invasion of other countries as being anything other than that.
Agreed, they are invasions, and the worst ones are the ones under some kind of pretext.
Where are you from originally?
Or if the US hadn't spent over a trillion dollars across decades defending South Korea from the China/USSR/North Korea axis. South Korea would not exist as we know it today; it'd more likely look like North Korea.
There's an exceptionally strong argument for the US working with regional military allies in Europe and Asia on defense and having a very powerful military to match its $20 trillion economy. Should the spending be more like $450 billion or $730 billion - that's the primary debate.
Wars between nuclear countries don't happen any more.
The reason it's impossible to win a war against the current US military is because the US has spent an obscene amount of money building it for decades.
If the US military were not capable of defending Eastern Europe, Ukraine, Poland and the Baltics would be under Russian rule today.
Nukes prevent any war from happening between any powerful nation.
If we spent 1 tenth the amount on our military, we'd still have the power to blow up the entire world X times over.
I just think that if you have the ability to destroy the entire world, then that's a powerful enough military and you don't need anything more for defense.
The only thing preventing Eastern Europe from being invaded is the conventional military. The US will never risk its own existence for another country. The Russians know this.
So no, I don't believe that the US would do that and I am happy about that.
The US would only use Nukes if there was an attack on the US.
I want the US to stay out of all wars that aren't directly related to attacks on US soil and US citizens.
If the EU is worried about invasion, then they should have their own military to defend themselves.
Cyber-warfare is going to be where it's at. You spend years infiltrating adversary networks, backdooring critical systems/infrastructure and exfiltrating intelligence so when the time comes for conflict, you already know what cards they're going to play, where all the pieces are on the map and you can shut it all down with a few commands. Then you start your blitzkrieg unopposed.
It's telling that Russia, China, and North Korea so heavily encourage cyber-espionage and malware development. Meanwhile in the US, we've got idiots passing laws criminalizing mere infosec research. We're handicapping ourselves against a threat our adversaries are all too eager to employ.
For the purposes of being better at waging war. I am not disagreeing with your statement, but I think leaving out the reason why is treating the military like it's "NASA with a few guns laying around" and implying that one can work on weapons systems without having any moral culpability for how they are used.
Possible reasons, using Wernher von Braun's rocket development as an example:
- He, or somebody else, would have invented rockets anyway at some point
- That 'some point' would have been much earlier if the money spent on military (research & some of the rest) would instead have been put into rockets in the first place
- Even if the two mechanisms above were not true: I'd probably be willing to forgo the benefits of the space age if it meant undoing WW2 and the Holocaust
- In most ethical frameworks except utilitarianism, it is strongly discouraged to accept some evil for some perceived greater good. Meaning: even if rockets have saved far more lives than WW2 vanquished, participating in the project is morally dubious at best
Specifically to Google's project:
- This seems to actually invert the usual idea of the military funding basic research that later gets adapted for civilian use. They are building on the fundamental research into AI already done by Google and others, and using it very specifically to kill people. Any advances useful for other purposes would seem to be even more coincidental than usual.
- While "minimising collateral damage" (i.e., civilian casualties) may seem to be a worthy cause, it cannot be ignored that such advances are likely to result in greater use of the technology. Drones are actually the best example of this effect. Just look at the Obama administration's willingness to expand the use of drone strikes, which was a direct result of the technology appearing to be the lesser evil compared to manned missions on the ground.
> DoD/Pentagon involvement in artificial intelligence research shouldn't be viewed as a necessarily bad thing.
Let's be specific: they are using current AI algorithms to track everyone's movements from airborne imagery. It will be used to kill people in drone strikes and for population control.
I'd rather have neither, but I'm realistic.
Another thing is to specifically choose to dedicate resources to one of those areas instead of another. Why can't Google bid on AI projects for better detection of geological landmarks on other planets? (Which, I'm guessing, is a similar endeavor to what the article mentions, and one that already has ongoing research.)
That's a direct application of tech to killing people, not some abstract network or compiler tech. There's a big difference.
This also has the benefit of getting war averse liberals to say nice things about the large standing army and bloody foreign interventions of the US empire, most of which have failed to produce anything other than blowback and civil war across the globe.
- The Global Assassination Grid: The Infrastructure and People behind Drone Killings: https://media.ccc.de/v/33c3-8425-the_global_assassination_gr...
- The Drone Papers: https://theintercept.com/drone-papers/
- Graphs, Drones & Phones: The role of social-graphs for Drones in the War on Terror: https://media.ccc.de/v/32c3-7259-graphs_drones_phones
Why would US agree to that if Russia or China couldn't care less about it? That'd be just handicapping the military.
Also, many people, including myself, consider drone strikes to be state-sponsored terrorism. Just because some part of the military may be justifiable, does not mean all of it is.
The Google workers urging their CEO to pull out of Pentagon projects are the ones who do not have such short memories as to be distracted from the blatant deceptions and disasters carried out in recent history, with no sign of relenting.
As an evolutionary selective pressure, war and military necessity are unparalleled.
Re-reading Samuel B. Griffith's translation of The Art of War, I noticed that he mentions, in 1963, Joseph Needham's Science and Civilisation in China, which had only begun delivering volumes (the work remains in progress). That would include not only metallurgy, but entire volumes devoted to both martial and marine technologies.
I don’t think people of ex-Yugoslavia would stop killing each other without being forced to.
That is an absolutely inane stance to take. How about instead of the obviously unattainable and unrealistic goal of "world peace", we just go for "more peace", or "less violent death" or similar goals. It is very hard to argue that American hegemony has not significantly reduced conflicts around the world. Not all of them, not equitably, and generally in pursuit of economic rather than ideological interests, but there is basically nothing to be gained by opening up the US to actually being threatened by competing powers.
What the google employees need to do is push for a civilian government research and science funding effort, like ARPA was back in the day. Unfortunately that sort of funding is politically dead, while military funding is always in style.
Reminder: if you work in tech, you are collecting blood money, because everything we use was initially funded by the military for military applications. If you are going to attack Google for this and don't leave the industry, you are a hypocrite.
And what do we have to show for it? Wikipedia is great and Youtube is entertaining, but the rest of the internet is horrible. I say that as a software developer.
Due to the wonderful internet, our children now have attention spans far shorter than the time a typical class lesson takes. The smartest brains on Earth are not building spaceships or curing disease, but rather implementing systems to get you to click on ads. Our bodies and habits are being tracked, our opinions manipulated, and our privacy handed over willingly for a chat and photo-sharing platform.
Military tech _often_ has implications far beyond what the original scientists and engineers intended. Sometimes for the best. But as you liken AI to the internet, don't forget how much damage the internet is doing to us even as we enjoy it and profit from it.
I think the DoD/Pentagon put up the funding for what later became Google Maps. Not to mention the GPS constellation we all enjoy using.
It's a shame progress has to be made through these channels, but it has proved very effective in the past.
The obvious reply is: great, then we're not needed, let someone else do it!
That's an excuse people without any morals often use. It's up to each one of us to do the right thing.
After all: “The only thing necessary for the triumph of evil is that good men should do nothing” - or I would add that evil also rises when "good men" actually do the evil thing "because otherwise the bad men would do it anyway."
Google likely has the most advanced AI tech in the world right now. That means that if they allow the US government to use it, they will be directly responsible for accelerating the progress of autonomous killer robots, or making them real in the first place. After all, I don't see too many other companies with AI that can learn the game of Go within days and beat world champions.
We need weapons to fight an enemy like this, with help from US technology companies.
Also, it is not surprising that these things are invented by the DoD when you're spending ~$600-700 billion a year on warfare and intelligence operations.
That last sentence of yours rubs me kinda the wrong way; that is a dangerous premise that could set us back ~30 years, to when the Cold War and the arms race were alive and well, but this time we'd have robots and AI.