The moral responsibility of software engineers

(thoughts collected from others, mainly software engineers)

Kate Heddleston

Since I started programming, discussions about ethics and responsibility have been rare and sporadic. Programmers have the ability to build software that can touch thousands, millions, and even potentially billions of lives. That power should come with a strong sense of ethical obligation to the users whose increasingly digital lives are affected by the software and communities that we build.

[…]

Programmers and software companies can build much faster than governments can legislate. Even when legislation does catch up, enforcing laws on the internet is difficult due to the sheer volume of interactions. In this world, programmers have a lot of power and relatively little oversight. Engineers are often allowed to be demigods of the systems they build and maintain, and programmers are the ones with the power to create and change the laws that dictate how users interact on their sites.

https://kateheddleston.com/blog/a-modern-day-take-on-the-ethics-of-being-a-programmer

2015

Ben Adida

Here’s one story that blew my mind a few months ago. Facebook (and I don’t mean to pick on Facebook, they just happen to have a lot of data) introduced a feature that shows you photos from your past you haven’t seen in a while. Except, that turned out to include a lot of photos of ex-boyfriends and ex-girlfriends, and people complained. But here’s the thing: Facebook photos often contain tags of people present in the photo. And you’ve told Facebook about your relationships over time (though it’s likely that, even if you didn’t, they can probably guess from your joint social network activity.) So what did Facebook do? They computed the graph of ex-relationships, and they ensured that you are no longer proactively shown photos of your exes. They did this in a matter of days. Think about that one again: in a matter of days, they figured out all the romantic relationships that ever occurred between their 600M+ users. The power of that knowledge is staggering, and if what I hear about Facebook is correct, that power is in just about every Facebook engineer’s hands.

[…]

There’s this continued and surprisingly widespread delusion that technology is somehow neutral, that moral decisions are for other people to make. But that’s just not true. Lessig taught me (and a generation of other technologists) that Code is Law, or as I prefer to think about it, that Code defines the Laws of Physics on the Internet. Laws of Physics are only free of moral value if they are truly natural. When they are artificial, they become deeply intertwined with morals, because the technologists choose which artificial worlds to create, which defaults to set, which way gravity pulls you. Too often, artificial gravity tends to pull users in the direction that makes the providing company the most money.

https://benlog.com/2011/06/12/with-great-power/

2011
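What makes Adida's example so striking is how little code such a filter actually takes. Here is a minimal sketch of the kind of computation he describes – the data model and names are my own invention for illustration, not Facebook's actual system:

```python
# Hypothetical sketch: hide "memories" photos that are tagged with an ex.
# The relationship records and photo format are invented for illustration.

def build_ex_partners(relationships):
    """Map each user to the set of their former partners, given
    (user_a, user_b, has_ended) records of relationship history."""
    exes = {}
    for user_a, user_b, has_ended in relationships:
        if has_ended:
            exes.setdefault(user_a, set()).add(user_b)
            exes.setdefault(user_b, set()).add(user_a)
    return exes

def memories_for(user, photos, exes):
    """Return only the photos that are not tagged with any ex of `user`."""
    blocked = exes.get(user, set())
    return [photo for photo in photos if not blocked & set(photo["tags"])]

# Alice's ended relationship with Bob hides photo 1 from her memories.
exes = build_ex_partners([("alice", "bob", True), ("alice", "carol", False)])
photos = [{"id": 1, "tags": ["bob"]}, {"id": 2, "tags": ["carol"]}]
assert [p["id"] for p in memories_for("alice", photos, exes)] == [2]
```

The code is trivial; the power lies in the data it runs over. Run a few lines like these over the histories of 600M+ users and, in a matter of days, you have computed something staggeringly intimate about all of them.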

Arvind Narayanan

We’re at a unique time in history in terms of technologists having so much direct power. There’s just something about the picture of an engineer in Silicon Valley pushing a feature live at the end of a week, and then heading out for some beer, while people halfway around the world wake up and start using the feature and trusting their lives to it. It gives you pause.

[…]

For the first time in history, the impact of technology is being felt worldwide and at Internet speed. The magic of automation and ‘scale’ dramatically magnifies effort and thus bestows great power upon developers, but it also comes with the burden of social responsibility. Technologists have always been able to rely on someone else to make the moral decisions. But not anymore—there is no ‘chain of command,’ and the law is far too slow to have anything to say most of the time. Inevitably, engineers have to learn to incorporate social costs and benefits into the decision-making process.

[…]

I often hear a willful disdain for moral issues. Anything that’s technically feasible is seen as fair game and those who raise objections are seen as incompetent outsiders trying to rain on the parade of techno-utopia.

https://33bits.wordpress.com/2011/06/11/in-silicon-valley-great-power-but-no-responsibility/

2011

Damon Horowitz

We need a “moral operating system”

TED talk: https://www.ted.com/talks/damon_horowitz

2011

Why did the US drop the atomic bomb on Japan?

Why did the US drop the atomic bombs on the Japanese? On people. On tens of thousands of people. When the war was in practice already won. When the US faced no existential threat.

How did they overcome the moral obstacle, the principle of not hurting other people, the force of conscience that should have prevented that?

I was looking for answers to this question while reading Richard Rhodes's book, The Making of the Atomic Bomb. And here are my answers…

Moral fatigue

The war had been going on for years. People were mentally exhausted. They felt they would give anything just to put an end to their own suffering.

Dehumanization

Japanese culture seemed very distant from that of Western civilization. The Japanese were strange, seemingly irrational, and they looked different. People from the US did not understand them. Somewhere deep down, they did not think the Japanese were really human… of equal value.

Hunger for power

The US was aware that the atomic bomb was not just a means to win the war. It was a tool to skew the balance of power in the postwar world – and to skew it strongly in their own favour. And what is the one thing that is more deeply wired in humans than caring about other humans? The will to power.

No more words.

The Making of the Atomic Bomb – by Richard Rhodes

I've read this book about the story of the atomic bomb. I wanted to know more about the subject because I think the making of the atomic bomb was the greatest ethical test humanity has faced so far. And it is a complex one. What happens if we create a weapon of unseen destructive power? What happens if we don't? Can technological progress be stopped – and should it be? Can it be controlled – is humanity mature enough for that? Is science superior to national interest? The list goes on…

Before the atomic bomb, the field of physics was dominated by hope: the ways that science could be used to make the world a fundamentally better place. Afterwards, everyone became painfully, continuously aware of how their work could be turned against everything they had ever dreamed of. It was a sobering moment of awakening for the professions of physics and engineering.

We are approaching a similar sobering point of awakening in the profession of software engineering. And I'd rather learn from history than from my own mistakes.

So, I've read this book, and it was time well spent. I made notes about all the ethical dilemmas those people went through. I have great admiration for those physicists – Bohr, Szilárd, Teller, Oppenheimer and others. They were pioneers of technology. And whether they liked it or not, they were pioneers of ethics too.

I am still arranging my notes and thoughts, but I promise to share them one by one on this blog.

 

Thoughts about responsibility

This year’s Google I/O event started with Sundar Pichai stating:

We recognize the fact that the tech industry must always be responsible about the tools and services that it creates.

I couldn’t agree more. In fact, this is why I started blogging again. Because I feel responsible for the technology we are creating.

But what does this really mean? Sundar did not elaborate during the keynote… So here are my own thoughts.

First, it means that we must apply our talents with good intentions. Creating an algorithm that diagnoses eye diseases is a good thing; making it accessible to all people is even better. Automating away routine everyday tasks to give people more time for meaningful things is great too. Working towards self-driving cars is, in essence, also cool.

But secondly, being responsible also means being mindful of the consequences – some of them unintended – of the disruption our innovations cause. Technology is fundamentally transforming society. Automation transforms the job market, social networks transform the media, autonomous weapons change the balance of power between nations. We cannot say that this is somebody else's problem. We cannot say that we are just tech people, and that how our tech is used is for regulators to control, sociologists to analyze and philosophers to contemplate. No. We created it, we understand it best, and it is our task to deal with the consequences. And in doing so, we must keep the interests of the whole of humanity in mind. Technological advancement is good in principle, but only if we do it right: if we make life better for everyone. And some groups are vulnerable to being left out: old people, non-tech-savvy people, people who don't have access to technology…

It's part of the job to take responsibility for the technology we create. And it's damn hard to do right, but hey, that's what makes our job really cool!

I am not alone in these thoughts… Let me close by quoting a very wise man:

It is not enough that you should understand about applied science in order that your work may increase man’s blessings. Concern for the man himself and his fate must always form the chief interest of all technical endeavors; concern for the great unsolved problems of the organization of labor and the distribution of goods in order that the creations of our mind shall be a blessing and not a curse to mankind. Never forget this in the midst of your diagrams and equations.

– Albert Einstein, speech at the California Institute of Technology, Pasadena, California, February 16, 1931, as reported in The New York Times, February 17, 1931, p. 6.

Value systems: Kant

The famous categorical imperative!

Kant was the most famous philosopher of the Enlightenment, and he made some truly remarkable contributions to human civilization. One of his most notable achievements is the creation of an ethical framework based purely on rational thinking. Previous ethical frameworks, at least in Western civilization, were largely based on god.

The categorical imperative is the one fundamental rule that defines what it is to be a good person. It has several different formulations, all meaning the same thing. Here is one:

Act only according to that maxim whereby you can at the same time will that it should become a universal law.

Kant chose precision over clarity in his formulation, so it takes some time to digest what he is saying.

Let me try to simplify his words (and by doing so, choose clarity over precision):

Do unto others as you would have them do unto you.

My personal value system is very much in line with Kant's… I'd say it's 99% the same. Why not 100%, you ask? Only because of this: Kant's value system is based on rules, not on the consequences of one's actions. I firmly believe there are situations in life where blindly following rules – no matter how smartly defined those rules are – is not the right thing to do. One always has to be mindful of the potential consequences, and apply good judgement before following any rule.

Rules or consequences – this is a well-known debate among philosophers: it's called the deontological vs. the consequentialist ethical system. Kant was a representative of the deontological school of thought. Nietzsche – at least on the reading of him I return to below – leaned towards the consequentialist side.

Value systems: Thomas Aquinas

In the Middle Ages, Thomas Aquinas constructed the Natural Law Theory of ethics… In his view, human beings come pre-loaded with certain instincts and drives that make them good.

The natural laws are:

  • Preserve life
  • Make more life
  • Educate one’s offspring
  • Seek god
  • Live in society
  • Avoid offence
  • Shun ignorance

I personally agree with about 80% of this value system. Let me explain why…

In my value system, the point "Seek god" has no place. In my view, ethics and divinity are completely separate things. What is good and what is bad does not come from some higher consciousness… but rather from a deep understanding of who we humans really are.

There is one other point where my value system differs from that of Thomas Aquinas, and that is "Make more life". This is indeed a strongly wired instinct in all animals, and consequently in all humans. But I would not go so far as to give it a prominent place in my value system, because doing so would imply that those who do not have children – whether for health or social reasons, or simply because they choose not to – are somehow less valuable people.

Value systems: Plato

In Book IV of The Republic, Plato wrote down what the ancient Greeks called the four cardinal virtues.

These virtues are:

  • prudence
  • fortitude
  • temperance
  • justice

I took the liberty of using different words, more accessible to the 21st-century thinker, to describe these virtues:

  • wisdom
  • courage
  • self control
  • fairness

This list is very useful… because it is concrete! In all my readings about ethics, philosophers go into great detail setting up frameworks and defining concepts, but they rarely have the guts to come up with practical, useful guidance. This is a rare exception.

I personally agree with these values 100%.

The will to power

Schopenhauer said in the early 19th century that the innermost drive in humans is the will to live. A generation later, Nietzsche challenged this statement and said that the innermost drive in humans is the will to power.

That seems to be true… but it got me thinking.

My understanding of human motivation is firmly based on Maslow's hierarchy of needs.

But then, if Maslow's theory of human drives is fundamentally true, why is power not mentioned in it at all?

[If anyone well educated in psychology and philosophy can give me some pointers, please do.]

My first thought was that power is a tool to achieve the goals set on the levels of the pyramid. That's a comforting thought… It means that power is not a fundamental drive, just a tool – and that paints a better picture of humanity than the other way around. Following this thought, I asked myself: is power equally useful on all levels of the pyramid? I somehow question the idea that power is useful for gaining love. I doubt even more that power is useful for self-actualization. So I came to this hypothesis: power is most useful on the lower levels, and less and less useful as you go up. I need to think more about this…

My other thought was that power is actually present in the pyramid, just under a different name. In my interpretation, by power Nietzsche meant the level of esteem. Achievement, ambition, and striving to reach the highest possible position in life – these are all manifestations of the will to power. This is interesting too… if we believe that the will to power is truly fundamental, then we need to rearrange the levels of the pyramid: the level of esteem goes down to the bottom. That would mean esteem is more fundamental than food or love. When homeless people refuse to accept food stamps, we see supporting evidence for this hypothesis.

The ethics of power

Here is the question (again): what's more important in evaluating one's actions (whether they are good or bad): the intentions or the consequences? This is best illustrated by an example…

Let's imagine that you're driving along in a car. You're slightly over the speed limit, but you're on a straight stretch of road, without any houses around. It's also early in the morning, and there are no other cars nearby. You are in no way driving recklessly. You've driven the same route many times before, and you've never run into trouble. But this morning you don't spot a small pothole in the road. Your front wheel hits it, and you lose control of the car. The car skids around and around, and you watch with horror as a bus stop veers into view. You crash into it, and in doing so, hit two school children waiting for their ride to school. One is seriously injured, the other killed outright.

There is no correct answer to this question. That's why it's called an ethical dilemma. And the general problem behind it is equally puzzling: what's more important in evaluating one's actions (whether they are good or bad): the intentions or the consequences?

I say this: it depends on how much power the moral agent (the decision maker) has. The more power you have, the more I will evaluate your actions on their consequences, and the less on your intentions.
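One way to make this precise – my own back-of-the-envelope formalization, not an established formula – is to score an action as a power-weighted blend of its consequences and its intentions:

$$ E(a) \;=\; w(p)\,C(a) \;+\; (1 - w(p))\,I(a), \qquad w'(p) > 0 $$

where C(a) scores the consequences of action a, I(a) scores the intentions behind it, p is the moral agent's power, and w(p) is a weight between 0 and 1 that grows with power. A child (w near 0) is judged almost entirely on intentions; a head of state or a platform owner (w near 1) almost entirely on outcomes.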

With great power comes great responsibility.

Sharpening the ethical mind

I am diving into ethics.

I am a software engineer. Recently, the software engineering community found itself in a sudden, heated discussion of ethical dilemmas. Facebook was called out for influencing the very fundamentals of democratic society in the Cambridge Analytica controversy. Meanwhile, the United States Department of Defense is ramping up its technology for algorithmic warfare, marching towards the next "atomic bomb" with the involvement of industry leaders like Google. These are heavy topics… and they caught most technologists unprepared. For decades, software developers focused only on creating technology, without giving much thought to the moral questions around its usage.

So, earlier this year I decided to ramp up my ethics knowledge and enter the debate. I did my homework over the last few weeks: Plato, Aristotle, Thomas Aquinas, Kant, Rousseau, Nietzsche…

Here is a test question: what’s more important in evaluating one’s actions (whether they are good or bad): the intentions or the consequences? This is best illustrated by an example…

Let's imagine that you're driving along in a car. You're slightly over the speed limit, but you're on a straight stretch of road, without any houses around. It's also early in the morning, and there are no other cars nearby. You are in no way driving recklessly. You've driven the same route many times before, and you've never run into trouble. But this morning you don't spot a small pothole in the road. Your front wheel hits it, and you lose control of the car. The car skids around and around, and you watch with horror as a bus stop veers into view. You crash into it, and in doing so, hit two school children waiting for their ride to school. One is seriously injured, the other killed outright.

How do we judge this action?

There is no correct answer to this question. That's why it's called an ethical dilemma. And the general problem that lies beneath it is truly puzzling: what's more important in evaluating one's actions (whether they are good or bad): the intentions or the consequences?

Nietzsche offered some guidance on this matter. He distinguished two kinds of morality: master morality and slave morality.

Nietzsche says that in master morality the values are things like pride, strength, and nobility, and actions are evaluated by their good or bad consequences. In slave morality, quite the contrary, the values are things like kindness, humility, and sympathy, and actions are evaluated by their good or bad intentions.

Nietzsche was quite clear about which kind of morality he preferred – take a hint from how he named them.

I am not so convinced, however… I do think that kindness is a great value.