I bring this up because when people propose a code of ethics, they assume it will help, but I'm not convinced we will ever reach a point where codes of ethics are widespread and taken seriously enough to do anything but harm the altruists among us.
If you know, a priori, that refusing to work on something you consider unethical will have /zero/ effect on the overall project but will negatively affect you, is it then ethical to do the work, or are you required to fall on your sword?
1) Listen to all competing arguments. Make sure you are making a sound decision and your opposition is genuine and not based on a personal bias.
2) Make it explicit that you are saying no in your role as a professional, and that this is not just a casual "no". Engineering ethics require you to make decisions only in areas of competence, so make it clear that you are competent to make this decision, and that your decision derives directly from that competence.
3) Write it down. The document should cover 2) above and set down the defensible reasons why assent is being refused, addressing counter-arguments. Consider that the arguments you write down could well be tested in court, and that you will be cross-examined on them. The document/report should be of the same standard as if you were a consultant reporting to your employer.
4) Sign and submit the document (keeping a copy for yourself). You need to be open about your opposition. Keep it at a professional level. Hopefully you will get some respect, as your document should make it clear that you have legitimate concerns and are not just being difficult.
5) Be prepared for the consequences. Your opposition is now in writing and impossible to ignore. No sane boss is going to ignore it, as the legal consequences of doing so are too great if you turn out to be right. You will have heat applied to you, as the alternatives are to either address your concerns or to get you to retract, and the latter option is cheaper.
6) If the final decision is against you, either live with the decision or resign. If the issue is of sufficient gravity, your only option may be to resign, whether for ethical reasons or for your own legal protection. (Yes, your Honour, my work killed 100 people, and I was fully aware of it, as shown by this document that I wrote.)
If you end up resigning, then you're probably making the right decision anyway, as the events leading up to your resignation will have shown your employer to be a dud.
Granted, reality won't be as cut and dried as above, and will involve a lot of anguish.
I think a lot of people flippantly say they would refuse the order, but it's easy to say that when it's not your neck on the line.
Truly a deep moral dilemma.
It's the worst kind of dilemma: the kind that people outside the situation will claim was easy.
Personally, I think the zero-effect case should not be cast aside lightly. Pragmatism is a legitimate ethical model.
And thinking in this schema is not only a mistake; it produces a theoretical reversal in an almost literal sense. People experience their environment, and the interests embedded in it, as a conflict, as an injury. And they hold this experience not against that environment but against themselves: their own bad behavior.
Meh. My application is an internal app for a large company. It's basically scheduling software for a part of our business process. To even start to hack it you'd first have to break into the corporate network, and in the end you'd have data you didn't care about. Hell, I'm not even sure the people who use it care.
Worst case, a subtle bug (and it would have to be subtle for my users to miss it) might cost my employer a few thousand bucks.
Again, meh. There are a whole lot of internal applications that fall into this category.
Never have I been prouder to be a plain old programmer. Those software engineers, architects, systems analysts, and data scientists can keep the capital punishment for themselves.
Quality is systemic, and has its roots across the organization.
This story is derived from Law #229 of the Code of Hammurabi (of Babylon). Babylon was not part of Greece, though it was very briefly part of the Roman Empire.
229 If a builder build a house for some one, and does not construct it properly, and the house which he built fall in and kill its owner, then that builder shall be put to death.
Not that it has anything to do with the Greeks putting engineers under their arch.
That said, my employer has standards for security and privacy that go well beyond industry norms, so if I was working elsewhere maybe I would feel the need for better standards across the industry.
In my experience, software engineers tend to be conscientious. Caring about the big picture is a big part of open source, hacker and nerd culture. But knowledge is hard to come by. I learnt from the experts, but I doubt most engineers would be able to build a simple CRUD app from scratch without major security holes.
It would be nice to see some best practices around security and privacy emerge without forcing everyone to write Ada or Coq or completely change their approach to writing software.
The reason why we don't has been explained many times: It's too expensive, by many orders of magnitude.
And beyond that, I don't believe for a moment that--to use the example from the article--it would have increased Grindr's costs by 1000x if they had taken a little while to brainstorm possible negative consequences of an app that lets gay people be anonymously tracked and located worldwide. "Too expensive" is code for "we've decided that the safety of our users is less important than maximizing our profit margin."
How many engineers should this company add to act 'ethically'? According to you, they would need to cancel the product (I think, though only under an ungenerously rigid reading of your comment).
Risk is not a binary 'it harms people or it doesn't'. Knowing whether your product harms people is more difficult still, and the question of who is responsible, more difficult again. (And that's not just between the supplier and the user - other actors are involved too. To take the Grindr example, was it an engineering problem, a product design problem, a user education problem or a governance problem? I could argue for all of them. Who should act to mitigate the risk? I don't know, and it's not as clear-cut (in the abstract) as some here are letting on.)
Escalation steps inevitably involve a 'nuclear option' whose size and expense relegate it to only the most certain and most aggrieved wronged parties. These days that option is a class-action lawsuit. It seems to work best when deterrence is the primary goal; plaintiffs rarely get anything of value out of it.
It works as you would expect, but one really wants a pathway to punish negligence without having to get lawyers involved. I believe this is usually done with market solutions that make 'buyer beware' less awful for the ignorant.
But the sheer scale of the stated problem, forcing all web service developers everywhere to consider possible lines of attack on their software under pain of loss of livelihood, seems too general and too unenforceable to be useful.
The reason is that, beyond a certain organizational complexity, we fail at creating the systems necessary to create the positive feedback loops necessary for quality, low cost, and fast execution to flourish naturally. Instead, we live in dysfunctional organizations where the tradeoffs prevail, and we choose speed and cost over quality in nearly all cases. Most of the time, we don't understand how to even begin to think about quality at an organizational level.
It's fully possible. Quality pioneer W. Edwards Deming explained the full system for creating an organization that is capable of quality in the first place. It mainly involves systems thinking, knowledge of variation, understanding psychology deeply, and creating systems of effective learning—and management and leadership versed in those new leadership competencies.
Instead our instincts are to blame, to punish, to hold accountable, effectively to leave each engineer as an island in charge of his or her own level of quality—when an organization and its production is so far removed from individual control it's not even funny.
We absolutely can write software like this—and we can do it quicker, and for lower cost. We need to work on the systems that produce and influence that quality/speed/cost relationship—not each individual part in isolation. There is a positive cycle to be achieved.
When looking at increasing quality (essentially, scope) in isolation of systems and the environment in which that quality is created, then the only way to increase quality is to increase cost. That much is true, but it's also incomplete.
When looking at and eventually optimizing the whole system—from the customer and their needs, to the leadership of the company, to the clarity of purpose, to values and culture, to the intrinsic motivation of your employees, to methods of management, to your learning and education processes, to your processes of production, to your communication links, to your output and how it's measured, and how each is controlled and understood in great detail—you can and will improve quality much more than you could by simply increasing cost in isolation—and at the same time, you'll decrease your overall costs in the long term, you'll speed up production, and you'll increase sales and customer satisfaction. The end result is a better product, for lower cost, with more predictability, proud happy employees, and faster to boot. That's what Quality really means.
Looking at quality in terms of cost alone is the naive mistake. Look at the whole system.
And for some evidence, look at Japan circa 1948. These exact methods of systemic quality, "total quality control" and statistical quality control, taught to Japan by W. Edwards Deming, turned it from the ruins of World War II into the second largest economy in the world, to the point of such success that Japanese companies were putting American ones to shame by the '70s. They did this by following exactly the model I've described above, decreasing total costs, increasing quality, and improving time to market, all by focusing on their total systems across the organization. The proof is in the success of an entire country's economy; I don't know how it can get much clearer than that.
It is not my fault that everyone in our industry has forgotten these lessons on running companies, but the prevailing methods remain incorrect, as does your original statement.
I'm betting it's you.
Please do prove us all wrong by starting up a software company that produces bug free software faster and cheaper than every other software company on the planet. There's a big pot of gold at the end of that journey. Go and claim it. If everyone else is as incompetent as you claim, it's going to be a walk in the park.
And I'm not saying it's easy. Not at all. It might be the hardest problem in all of business—combining the true systems of work into a cohesive philosophy: systems, statistics, people, and knowledge. I'm just saying (and I'm just repeating the words of some great thinkers in the quality space, mind you) that if we change the way we think about quality and companies in general, we'll get a lot of gigantic unrealized gains in productivity.
This happens naturally in many companies in small pockets, but human nature and psychology tends to break it down eventually.
If I'm wrong, then a great many other people are also wrong, primarily W. Edwards Deming, who, as I noted, single-handedly turned around the entire economy of Japan using these exact methods and ideas.
Here's a great place to start: http://www.amazon.com/Leaders-Handbook-Making-Things-Getting...
1. Write a letter outlining the above to a VC. Explain how this works in somewhat more detail; 10-15 pages should get it across. Sure it's work. But you seem resolved, and the reward is great. If you pull this off, not only are you gonna get a Turing award, they're going to mint a whole new award and name it after you. Anyway, give it a decent treatment and mail it off.
2. Tell us how that went over.
Hundreds of thousands, heck, millions of pretty smart people have been thinking of ways to solve "the software problem" for over 60 years. Many of those people are smarter than I am, and probably smarter than most of the readership on HN. The actual improvements have been incremental.
I'm not saying there hasn't been improvement. For instance, we tend to write more secure systems these days, but security remains really, really hard. Saying "you're doing it all wrong" to the incredibly smart and driven people I know who are experts in security is pretty bold.
So, I urge you to make a proposal and report back.
The main issue is that once you exceed a certain number of developers, the problems become more human than software. It's not the software problem we need to solve, and our solution to the human problem thus far is highly individual and anti-system, riddled with ineffective processes, "communication problems," and "culture issues" that we believe are someone else's problem that we can do very little to fix even as leaders. These are the root cause of quality issues, and without looking at the whole system—and indeed, creating a whole organization capable of looking at the whole system—the quality issues will continue dependably.
The final issue is that when it comes to people problems, everyone has their own beliefs. Ideas are easy—ideas can be explained, processes implemented, software designed—but beliefs are hard. If I were to write to a VC outlining these ideas (which are W. Edwards Deming's ideas circa 1950, Joseph Juran's ideas circa 1980, and Peter Scholtes's ideas continuing into the 90's, and the foundation of the current Lean movement in software, education, manufacturing, and healthcare), what would really change? I can make a convincing argument, but I would be in for a 1-3 year conversation about beliefs about management and people, depending on the person and their adaptability and openness to new ideas. It's much easier to simply do it, and I'm trying to do as much as I can in the context of a growing company at present.
Plus, there are companies who are already into this stuff. Pluralsight integrates Deming's philosophy into their management and onboarding. Check them out.
Someday I will start my own company, and these concepts will be at the foundation. For sure. Until then, I'll try my best to shift some perspectives.
I've seen soooo many snake oil proposals, and wishy-washy garbage that amounted to "just do a better job", and I've kicked out a couple of consultants who wanted money (and one wanted a process patent!) for a scheme that amounted to "track team member progress on a whiteboard in your common area". Too many people with not enough background (or bad motives) think they have the answer.
So, I'm jaded. And allergic to rhetoric, because so much of it over the past 30-40 years has been vacuous bullshit.
This stuff is hard, no fooling. And rhetoric won't solve it. I'm happy you're actually doing something about it, and realize that it's difficult, and I honestly hope that you do well.
There are a lot of great ideas out there. What matters is what we do. Good luck, I wish you the best as well.
For what it's worth, this stuff is really great, and a good mental model for the reality of organizations. I highly recommend Peter Scholtes' book, The Leader's Handbook.
There are books precisely documenting the correlation between quality, speed of execution and lower costs.
If you ask a bunch of people who are analytically trained to ignore all the evidence in front of them you should expect this reaction. Especially if you tell them, to paraphrase, all they need to do is change their culture and all processes and everything will be rosy.
Pointing to books written by quality consultants isn't going to help your case much either I'm afraid.
Which is a shame. If you've ever been in an organisation that has this deep-seated view of quality it is quite an eye-opener. Of those that I am familiar with, only GE really were able to take it beyond manufacturing and definitely the only one where I saw it applied to software. It felt a little "religious cult"y at first but was pretty impressive nonetheless.
Your focus on Edwards Deming is a little unfortunate too, as I would expect that most people with an engineering bent will look at his work on SPC and, rightly, consider it inapplicable to software. It requires a high level of repetition to be valuable and software development doesn't have that.
His work on organisations could be applicable but it would require most companies to fundamentally change the way they approach almost everything. On the basis that it's not necessary to do this, I can only see it occurring if it became necessary. A special set of circumstances brought it to prevalence in manufacturing and by no means universally. If it happens to software I think it would need an equivalent set.
What Deming's organizational view promises, instead, is joy in work. Very simply, that's what I'm really after.
I have been doing professional software development since 1997, and from then until two years ago, I have been a developer on waterfall and scrummerfall projects. There have been deathmarches aplenty and lots of failures small and large with some heroic successes here and there. Overall, I'd say the performance of the teams I had been on has been fair to poor, and the code quality has been generally poor because it was always rushed.
Then two years ago, my manager got on a quality kick and the team was largely with him because we were frustrated with some boneheaded defects that had escaped to production. First, we started to require some unit testing here and there, and code quality started to improve, at least to those of us on the team who noticed such things.
Next, we all decided on a new greenfield project that we would enforce a test-first/TDD rule. Our code coverage was suddenly obscene! Months later, we had no escape defects when we delivered to production on-time. No escape defects was unheard of in our division, and a point of pride for the manager.
The TDD we were doing was largely just a gentlemen's agreement, and there was little pairing. Mainly we all still worked in our little heads-down solo developer modes. So, the manager decided to amp up the agile more. We got TDD training (the "Obey the Testing Goat" video series) and we became much more open to the idea of pairing. The QA team was fired; we ended up with one floating QE engineer. He occasionally pairs with us, developing for real stories. An agile coach came to visit us and gave some tips. Got scrummaster training, bleh ...
On the next project, also a greenfield, we were much more confident. Pairing happened very dependably whenever we did any new development. We were less intent on metrics such as coverage, and more intent on writing our unit tests like they were a software spec. Spock Framework for Java is a really nice system for doing this; our tests read almost like English paragraphs. Our defect rate is as low as it was on the previous project, but now we have a much better comprehension of the code. I feel like the whole team is communicating better and can understand the subtle design direction changes that naturally happen throughout the course of a project. We just completed our first story mapping exercise for this project; it's really neat to see our plans all laid out on the board like that, and it has been very helpful for our product owner.
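The "tests as a spec" idea isn't tied to Spock. As a rough sketch of it in Python (the `next_business_day` function and its dates are invented for illustration, not from the project described above), behaviour-named tests double as requirement sentences:

```python
from datetime import date, timedelta

def next_business_day(d):
    """Hypothetical domain function: the next weekday after d."""
    d += timedelta(days=1)
    while d.weekday() >= 5:  # weekday() is Mon=0 .. Sun=6, so 5 and 6 are the weekend
        d += timedelta(days=1)
    return d

# Each test name is a specification sentence; the body is its evidence.
def test_a_friday_rolls_over_the_weekend_to_monday():
    assert next_business_day(date(2015, 5, 1)) == date(2015, 5, 4)

def test_a_midweek_day_advances_by_exactly_one_day():
    assert next_business_day(date(2015, 5, 5)) == date(2015, 5, 6)

test_a_friday_rolls_over_the_weekend_to_monday()
test_a_midweek_day_advances_by_exactly_one_day()
```

Run under a test runner, a failure names the violated requirement directly, which is much of what makes a suite like this readable as a spec.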
There's a lot wrong with our situation, but we're impatient to fix things since we don't have defects to mentally burden us constantly. A one-click build would be really nice, and our overall system design handed down to us from on-high is heavy-handed. We're so agile now we can usually devise a long-term plan to address our issues that everyone is comfortable with. Our product owner is struggling to get better definition around aspects of the project, but we make it possible for him to shift directions - again because the codebase is well-factored and largely free from bugs.
My main observation looking back on my 18 years of coding is that software takes a long time to write. It's really excruciating. Trying to turbo boost a project by overcommitting resources and doing death marches usually doesn't shorten the actual time to delivery all that much in my estimation, but it DOES drive up the risk because people are going to fail on you left and right.
This excruciating slowness to develop software has only become more evident to me now that I'm on a team that is genuinely trying to be agile. Every little feature and every task on the board is SOOOO drawn out and painstakingly developed by pairs of developers. DevOps is a big deal that saps tons of the team's time, but it is important for us to own this.
I just never thought I would see proper resource allocation to software projects, but here it is. My company is spending the money it needs to, AND the management, product owner, and developers on the team are all willing to work this way. We took no shortcuts and focus on code quality. Effort put into documentation is fairly light, unless it is intended for the user.
I can count on one hand the number of days I was asked to work overtime in the last 2 years.
The question I have is: if high quality software development is possible, is it a lot more expensive than the old death march/waterfall/lie-to-ourselves-constantly style of development? My team no longer has show-stopper production environment defects, nor a QA department anymore. Surely that has to cut our costs a lot! We do have a dependable cadence, and have recently been seeing our velocity accelerate. While we do meet our product owner's goals, I'd say our progress feels achingly slow at times. Maybe I'm just impatient or restless. I'll need to ask my manager if he can gauge whether we're more or less expensive to run than we were in the past.
Keep on fighting for quality, it's worth battling over and teaching others about.
When you say "true systemic quality" are you speaking far beyond the scope of what any one company is responsible for?
Otherwise, I suspect those replying to you are correct - if bug-free software can be written, quicker and cheaper than software that winds up retaining some bugs, based on work well enough known for 20 years, we'd be seeing someone out there winning with it.
What do you think is the scope any one company is responsible for? Is there something about the quality of their output that they are not responsible for?
I'm not saying bug-free software is the goal, and maybe I'm too far off on a tangent to still be relevant to this conversation—just trying to get across that bugs come from somewhere, you should think about what all that "somewhere" is by looking at the complex system surrounding it, and you should not punish your developers or put them under the metaphorical bridge to hold them accountable for not producing bugs. That is ridiculous and ignorant of the system, and the system between and around people—not the individuals themselves—is where bugs come from.
They are not responsible for the output of other companies. My question was whether you were speaking to what one company could do, or what was prevalent in the development ecosystem. It seems that you were speaking to the former.
I certainly don't hold a position that there are no gains to be had focusing on the system. I do think it is most reasonable to say we don't know how to attain them in a way that achieves quality (or specifically lack of defects) at the level envisioned up-thread. The ways we do already know to attain that level of quality do impose substantial cost.
(And for the record, I certainly agree that individual punishment of developers is unlikely to get us to any better place.)
Take an anonymous currency system, for example. Handwave away whether or not it's actually possible to build one -- let's say you have a way to create electronic, untraceable money, and let's go one step further and say that there's no personal risk to doing so (e.g., it creates no criminal or civil liability, and costs nothing).
Do you build that, or not?
"What about child porn?" We have that already; not having anonymous cash won't stop it. "What about tax fraud?" Pressuring governments to re-think corrupt, unbalanced tax systems might be a very great good. And so on. I have no question that it would be ultimately better to have this than not, although governments (forever forgetting who is really in charge) don't like it.
Obviously you do.
I don't understand why that would be a question.
I personally believe that, even given those potential ramifications, the benefits outweigh the costs -- but my personal balance of those is not everyone's.
Proposals for codes of ethics are about this exact issue - you don't get to build something that provides capability, then put up your hands and say you "didn't know what it was for" when it's used for something bad.
I think we all can agree that programmers almost never begin a project with the intention of causing harm. All the examples cited involved situations where immense pressure applied by higher-ups impels a programmer to knuckle under and agree to such approaches.
The solution that is offered seems very specific to now - we take the fall guy who caved in to one sort of incentive and we put an opposite incentive on him to force a different behavior (btw without removing the first incentive). How fucked-up is that?
Obviously, the better, saner solution is removing the existing perverse incentives, giving software engineers more leverage in decisions, punishing high-ups if they don't give software engineers autonomy to make decisions. And heck, execute the CEOs when the bridges collapse. The bucks should stop with them, right?
There's a diffusion of responsibility that makes it hard to point to any given person after the fact. How can we arm people - not just software engineers, but product designers, project managers, QA engineers, CEOs, and the welders on the assembly line - so they can say, "Hold on, stop, this isn't right!"?
It's their responsibility. If it's not possible for them to comprehend the design and execution thoroughly enough, they'd better make sure they had someone check it whom they can trust to understand it. Like ancient builders, who didn't lay every stone themselves, but were still responsible for the outcome.
Perhaps our standards for responsibility of leaders are a little low these days.
Those of you who don't, would you consider pledging yourself to a code of ethics if they met your requirements? What would those requirements be?
I've been a member of ACM primarily because I wanted easier access to ACM library papers, but not any more. I'm uninterested in turning software engineering into a gated discipline with a professional organisation acting as a monopoly keyholder.
Software as a discipline affects too many areas of life for anything much more than a vague platitude as a code. Components may be used for good or evil in ways we can't control. Even something as potentially evil as Metasploit is also almost exactly the same thing you want to use to test for vulnerabilities (personally I would not write a plugin for it).
I don't like the idea of extensive central regulation, but it might well be better for programmers' lives.
Your argument that other professions do it is invalid. In fact, the government should be more proactive in ensuring that professions only impose valid conditions on employment, and don't add spurious requirements in order to exclude people from the profession - in order to, as you say, "protect [their] careers and incomes".
Unilateral disarmament has already led to a situation where 80,000 unregulated immigrants every year are competing with us while those regulated professions see few or none. We're not going to see them de-regulate in any of our lifetimes, because the theorems of economists don't persuade anyone -- except possibly computer programmers and we don't have any power or influence because we're not united to get any. So the best option is obviously to at least organize to get a fair shake. But so far there are few signs of that.
There have been projects of the sort you propose (the space shuttle software comes to mind), but they are few and far between, and it is simply not possible to quickly change the industry in general without completely replacing it. Putting aside the technical challenges, it simply is not economically feasible - we don't have the resources.
The situation was very different with the Quebec bridge. At that time, the practice of quantitative engineering had advanced to the point where large, safe bridges could be reliably built, and all that was needed to make it happen was to deal with some human problems leading to sloppiness. The desired outcome was within grasp, and the public demanded it. Until the practice of software development reaches those points, a code of ethics is not going to make it happen. The department store collapse shows that engineering ethics need to be supported by society to be effective (and by support, I mean more than just lip service, and the support has to come from the people who set the agenda.)
Edit: Let me explain further what I meant by 'quantitative engineering'. Bridge designs are analytically verified to ensure that they meet a formal set of safety requirements before they are built. In contrast, almost all software is built and then tested (and fixed, and tested...) against an informal specification (even in iterative and agile methods, the building of parts precedes their testing, as it must.) Analysis is relatively much less important, and is generally informal.
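As a toy contrast (the numbers, names, and 1.5 safety factor here are all invented for illustration), the bridge-style workflow accepts or rejects a design analytically, against an explicit requirement, before anything is built:

```python
def meets_design_requirement(member_capacity_kn, design_load_kn,
                             required_safety_factor=1.5):
    """Analytical acceptance check, performed on paper before construction:
    computed capacity must exceed the design load by the safety factor."""
    return member_capacity_kn >= design_load_kn * required_safety_factor

# A 500 kN design load demands at least 750 kN of capacity at factor 1.5.
print(meets_design_requirement(900.0, 500.0))  # True: 900 >= 750
print(meets_design_requirement(700.0, 500.0))  # False: 700 < 750
```

Most software flips this around - build first, then test against an informal spec - which is the gap described above.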
I wonder if the real difference between software engineering and other engineering disciplines is that they have a culture of responsibility--of saying "hey, public safety is more important than money"--and we emphatically don't.
To address the thread topic, I'm a P.Eng, ACM member, and IEEE member.
And even in the case of bridges and the like, they do have a tendency to collapse in earthquakes, wars, and so on. Given enough time, most everything has bugs.
I think you are right, up to the point of security. The problem comes when the handling of customers' credit cards is treated as if it were just a dating app.
Was there a time when other engineering disciplines were not formally schooled? Did they develop into having formal education or were they born that way? Is computer programming moving in the same direction via organizations like the ACM?
Can we learn from other fields who have in fact experienced similar issues on some level? Do we need to reinvent the wheel of creating an effective culture that rewards all the values humans desire? What are those? Success, ability to express oneself, ... ?
I'm a boiler-maker / welder by trade (metal fabrication). My trade certificate actually says "Engineering Tradesperson". I operate a laser cutter, and have written scripts to automate some of my computer related tasks, and have a rudimentary understanding of Python. I've also worked at an ISP in NetOps doing physical security and infrastructure.
I live in Australia so I can only speak for the system we have here: trade school is formal education, and the on-campus component of trade education is delivered throughout the nation by TAFE campuses. We are formally educated to build things that don't kill people, and can certainly be held responsible if our actions or omission of action leads to injury. We are expected to escalate anything we see on workshop drawings that could be problematic. Each of the tradesmen in our workshops is required to perform a weld test every six months, which is examined using ultrasound techniques, to ensure our work meets or exceeds AS1554.
While us tradespeople aren't the ones engineering the bridges, we are the ones building them.
Comparatively, the software development discipline is still in its infancy, whereas the construction trades have been around for millennia. Structural engineers and tradespeople build physical structures to withstand Category 5 cyclones, but my bank (in the top 10 globally by market cap) can hardly have a month go by without some software system failing.
The advent of bootcamps is probably a side effect of 'coding' becoming relatively easy compared to its more esoteric history, thanks to high-level languages like Python and Ruby. Or maybe that stuff about the average IQ increasing over time is correct, and more people have the capacity to understand writing code. Probably both.
Another comment here spoke about TDD (Test-Driven Development); that's probably the closest software gets to the physical engineering disciplines' destructive testing of random batches of building materials.
With the current state of bootcamps in the US, I feel it's a stretch to call them formal education. They are at best short training programs. There's no accreditation, no accrediting body, and no real quality control beyond the press. That the field has few widely accepted standards does not help matters.
Personally, I suspect that the advent of bootcamps in the US is a fad driven by the explosion of the tech sector and the amount of time it takes to train a computer scientist. I suspect that in ten years, they will be much less popular as people who only know Ruby or PHP drop out of the field. It's hard to have a multi-decade long career in computing if you don't actually understand computers.
The problem with comparing software engineers to civil engineers lies in the ability of the latter to constrain their users, e.g. by stating the weight limit for a bridge. Try to do that with software and enforce it.
We can't achieve this consistently with privately owned software. Software must be public and transparent to have consistent accountability.
In the case of Grindr, for example, I assume we don't know exactly who wrote the original code, and the company would probably protect that information.
I believe those who contribute to open source software can and do subscribe to the author's desired level of accountability specifically because they know everything they write is available for public scrutiny for eternity.
Calls to transfer the idea to today's lowly programmer drone would be akin to an ancient bridge builder absconding with substantial extra profit after convincing his Royal that it really should be the bricklaying crew who line up under the bridge.