AI Now Institute

The AI Now Institute is a research institute dedicated to understanding the social implications of AI technologies. They are doing our society a great service: addressing the ethical concerns around the rapidly approaching Fourth Industrial Revolution.

Recently I read their annual report of 2018. Here are my highlights…

Who is responsible when AI systems harm us?

The AI accountability gap is widening. The technology scandals of 2018 have shown that the gap between those who develop and profit from AI—and those most likely to suffer the consequences of its negative effects—is growing larger, not smaller.

Civil rights concerns

Earlier this year, it was revealed that Pearson, a major AI-education vendor, inserted “social-psychological interventions” into one of its commercial learning software programs to test how 9,000 students would respond. They did this without the consent or knowledge of students, parents, or teachers. This psychological testing on unknowing populations, especially young people in the education system, raises significant ethical and privacy concerns.

Power asymmetries between companies and the people

There are huge power asymmetries between companies and the people they serve, which currently favor those who develop and profit from AI systems at the expense of the populations most likely to be harmed.

Moreover, there is a stark cultural divide between the engineering cohort responsible for technical research, and the vastly diverse populations where AI systems are deployed. These gaps are producing growing concern about bias, discrimination, due process, liability, and overall responsibility for harm.

Their full report: https://ainowinstitute.org/AI_Now_2018_Report.pdf

John Stuart Mill’s Dilemma

John Stuart Mill wrote a famous book, entitled “Utilitarianism”.

Utilitarianism was first formulated by Jeremy Bentham, and though it was a sound moral theory, it received some serious criticism. The objection was that utilitarianism does not take the individual into account, and that some moral principles need to be followed even when they benefit only a single individual. John Stuart Mill went to great lengths to defend his standpoint. He argued that when you optimize for maximum utility for the greatest number of people in the long term – emphasis on the long term – utilitarianism is actually not the cold-hearted, calculating moral theory it appears to be.

But there was one element he could not reconcile with utilitarianism. He had the intuition that human beings are entitled to an intrinsic respect, no matter what. Even when you optimize for the whole of humanity in the long term. I like that!

Justice with Michael Sandel – Lecture 5

Libertarianism – people are entitled to intrinsic respect; the fundamental right is the right to liberty.

Free to Choose

With humorous references to Bill Gates and Michael Jordan, Sandel introduces the libertarian notion that redistributive taxation—taxing the rich to give to the poor—is akin to forced labor.

Lecture: http://justiceharvard.org/lecture-5-free-to-choose/

Libertarian principles:

No paternalist legislation – laws that protect people from themselves

No morals legislation – laws that promote virtuous living

No progressive taxation – redistributing income or wealth from the rich to the poor

Am I a libertarian? What do I think of these principles?

Paternalistic and moral laws – a certain minimum amount is required to sustain the stability of society. As little as possible.

Progressive taxation – required, for the same reason. Large inequalities make society unstable. Aristotle: the poorer 90% only accept being poor because they are convinced that they have a reasonable chance of becoming rich. Rousseau: society does you a favour: it sustains a favourable environment for you. If you live in a society, the social contract applies.

The moral goal is to provide a prosperous society, where everyone has an equal chance and everyone enjoys a large amount of guaranteed freedom.

Am I a utilitarian?

Utilitarianism is an ethical and philosophical theory that states that the best action is the one that maximizes utility, which is usually defined as that which produces the greatest well-being of the greatest number of people.
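To make the definition concrete (this is just my own shorthand, not a formula taken from any of the sources): write u_i(a) for the well-being that person i derives from action a. The utilitarian rule is then

a^{*} = \arg\max_{a} \sum_{i=1}^{N} u_i(a)

that is, pick the action with the greatest total well-being, with every person’s well-being counted equally.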

Is utilitarianism the best moral system we could possibly have? Many people around me think so.

This is a very compelling idea. But it can be challenged. Here is the challenge…

Would you want to live in a world… where everyone lives a long and prosperous life and is very happy… but! there is one single person, a child, who is constantly suffering and lives a miserable life… in exchange for the happiness of the others. Would you be OK with this, being one of the millions of happy people, while knowing that there is one who pays for your happiness?

I would not.

Justice with Michael Sandel – Lecture 1

The Moral Side of Murder

If you had to choose between (1) killing one person to save the lives of five others and (2) doing nothing, even though you knew that five people would die right before your eyes if you did nothing—what would you do? What would be the right thing to do? That’s the hypothetical scenario Professor Michael Sandel uses to launch his course on moral reasoning.

Lecture: http://justiceharvard.org/themoralsideofmurder/

My thoughts about this lecture…

The first questions were easy to answer. As we went deeper into the lecture, they became harder. Often I had to go back to the previous questions and revise my answers… not the outcome of the moral dilemma (which option is the morally correct one?) but the reasoning that led to that outcome.

Very often we know why we believe that something is right, but when pushed to dig deeper, we realize that we actually don’t know. We feel that something is right, and our rational brain comes up with a logical reasoning – it rationalizes our choice. But that rationalizing can be challenged and it sometimes breaks down.

The moral responsibility of software engineers

(thoughts collected from others, mainly software engineers)

Kate Heddleston

Since I started programming, discussions about ethics and responsibility have been rare and sporadic. Programmers have the ability to build software that can touch thousands, millions, and even potentially billions of lives. That power should come with a strong sense of ethical obligation to the users whose increasingly digital lives are affected by the software and communities that we build.

[…]

Programmers and software companies can build much faster than governments can legislate. Even when legislation does catch up, enforcing laws on the internet is difficult due to the sheer volume of interactions. In this world, programmers have a lot of power and relatively little oversight. Engineers are often allowed to be demigods of the systems they build and maintain, and programmers are the ones with the power to create and change the laws that dictate how users interact on their sites.

https://kateheddleston.com/blog/a-modern-day-take-on-the-ethics-of-being-a-programmer

2015

Ben Adida

Here’s one story that blew my mind a few months ago. Facebook (and I don’t mean to pick on Facebook, they just happen to have a lot of data) introduced a feature that shows you photos from your past you haven’t seen in a while. Except, that turned out to include a lot of photos of ex-boyfriends and ex-girlfriends, and people complained. But here’s the thing: Facebook photos often contain tags of people present in the photo. And you’ve told Facebook about your relationships over time (though it’s likely that, even if you didn’t, they can probably guess from your joint social network activity.) So what did Facebook do? They computed the graph of ex-relationships, and they ensured that you are no longer proactively shown photos of your exes. They did this in a matter of days. Think about that one again: in a matter of days, they figured out all the romantic relationships that ever occurred between their 600M+ users. The power of that knowledge is staggering, and if what I hear about Facebook is correct, that power is in just about every Facebook engineer’s hands.

[…]

There’s this continued and surprisingly widespread delusion that technology is somehow neutral, that moral decisions are for other people to make. But that’s just not true. Lessig taught me (and a generation of other technologists) that Code is Law, or as I prefer to think about it, that Code defines the Laws of Physics on the Internet. Laws of Physics are only free of moral value if they are truly natural. When they are artificial, they become deeply intertwined with morals, because the technologists choose which artificial worlds to create, which defaults to set, which way gravity pulls you. Too often, artificial gravity tends to pull users in the direction that makes the providing company the most money.

https://benlog.com/2011/06/12/with-great-power/

2011
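Just to make the ex-photo filtering Ben describes above concrete for myself – this is purely my own toy sketch, with made-up data structures, and not Facebook’s actual code or API – the core of it boils down to a set intersection between a photo’s tags and a user’s past relationships:

# Toy sketch (my own illustration, not Facebook's implementation).
# Given a user's relationship history, hide photos tagged with an ex.

def ex_partners(relationship_history, user_id):
    """Collect everyone the user has ever been in a relationship with."""
    exes = set()
    for a, b in relationship_history:  # each entry is a pair of user ids
        if a == user_id:
            exes.add(b)
        elif b == user_id:
            exes.add(a)
    return exes

def photos_to_show(photos, relationship_history, user_id):
    """Keep only the photos that tag none of the user's exes."""
    exes = ex_partners(relationship_history, user_id)
    return [p for p in photos if not set(p["tagged_ids"]) & exes]

# Hypothetical example: photo 2 tags an ex (user 42), so it is filtered out.
history = [("me", 42), (7, 8)]
photos = [
    {"id": 1, "tagged_ids": [7, 8]},
    {"id": 2, "tagged_ids": [42, 13]},
]
print(photos_to_show(photos, history, "me"))  # only photo 1 remains

The unnerving part, of course, is not the dozen lines of code but the fact that the relationship graph behind them exists at all, and is this easy to query.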

Arvind Narayanan

We’re at a unique time in history in terms of technologists having so much direct power. There’s just something about the picture of an engineer in Silicon Valley pushing a feature live at the end of a week, and then heading out for some beer, while people halfway around the world wake up and start using the feature and trusting their lives to it. It gives you pause.

[…]

For the first time in history, the impact of technology is being felt worldwide and at Internet speed. The magic of automation and ‘scale’ dramatically magnifies effort and thus bestows great power upon developers, but it also comes with the burden of social responsibility. Technologists have always been able to rely on someone else to make the moral decisions. But not anymore—there is no ‘chain of command,’ and the law is far too slow to have anything to say most of the time. Inevitably, engineers have to learn to incorporate social costs and benefits into the decision-making process.

[…]

I often hear a willful disdain for moral issues. Anything that’s technically feasible is seen as fair game and those who raise objections are seen as incompetent outsiders trying to rain on the parade of techno-utopia.

https://33bits.wordpress.com/2011/06/11/in-silicon-valley-great-power-but-no-responsibility/

2011

Damon Horowitz

We need a “moral operating system”

TED talk: https://www.ted.com/talks/damon_horowitz

2011

Why did the US drop the atomic bomb on Japan?

Why did the US drop the atomic bombs on the Japanese? On people. On tens of thousands of people. When the war was in practice already won. When the US was nowhere near any existential threat.

How did they overcome the moral obstacle, the principle of not hurting other people, the force of conscience that should have prevented that?

I was looking for answers to this question while reading Richard Rhodes’ book. And here are my answers…

Moral fatigue

The war had been going on for ages. People were mentally tired. They felt they would give anything just to put an end to their own suffering.

Dehumanization

Japanese culture seemed very distant from that of Western civilization. The Japanese were strange, seemingly irrational, and looked different. People from the US did not understand them. Somewhere deep down, they did not think the Japanese were really human… of equal value.

Hunger for power

The US was aware that the atomic bomb was not just a means to win the war. It was a tool to skew the balance of power in the world after the war – and to skew it strongly in their own favour. And what is the one thing that is more deeply wired in humans than caring about other humans? The will to power.

No more words.

The Making of the Atomic Bomb – by Richard Rhodes

I’ve read this book – the story of the atomic bomb. I wanted to know more about the subject because I think the making of the atomic bomb was the greatest ethical test for humanity so far in history. And it is a complex one. What happens if we create a weapon of unseen destructive power? What happens if we don’t? Can technological progress be stopped, or should it be? Can it be controlled – is humanity mature enough for that? Is science superior to national interest? The list goes on…

Before the atomic bomb, the field of physics was dominated by hope: the ways that science could be used to make the world a fundamentally better place. Afterwards, everyone became painfully, continuously aware of how their work could be turned against everything they ever dreamed of. It was a sobering moment of awakening for the professions of physics and engineering.

We are approaching a similar sobering moment of awakening in the profession of software engineering. And I’d rather learn from history than from my own mistakes.

So, I’ve read this book, and it was time well spent. I made notes… about all the ethical dilemmas people went through. I have great admiration for those physicists – Bohr, Szilárd, Teller, Oppenheimer and others. They were pioneers of technology. And whether they liked it or not, they were pioneers of ethics too.

I am still arranging my notes and thoughts, but I promise to share them one by one on this blog.

 

Thoughts about responsibility

This year’s Google I/O event started with Sundar Pichai stating:

We recognize the fact that the tech industry must always be responsible about the tools and services that it creates.

I couldn’t agree more. In fact, this is why I started blogging again. Because I feel responsible for the technology we are creating.

But what does this really mean? Sundar did not elaborate during the keynote… So here are my own thoughts.

First, it means that we must apply our talents with good intentions. Creating an algorithm that diagnoses eye diseases is a good thing; making it accessible to all people is even better. Automating away routine everyday tasks and giving people more time for meaningful things is great too. Working towards self-driving cars is, in essence, also cool.

But secondly, being responsible also means being mindful of the consequences – some of them unintended – of the disruption we are causing with our innovations. Technology is fundamentally transforming society. Automation transforms the job market, social networks transform the media, autonomous weapons change the power balance between nations. We cannot say that this is somebody else’s problem. We cannot say that we are just tech people… that how our tech is used is up to regulators to control, sociologists to analyze and philosophers to contemplate. No. We created it, we understand it best, and it’s our task to deal with the consequences. And in doing so, we must have the interests of the whole of humanity in mind. Technological advancement is good in principle, but only if we do it right: if we make life better for everyone. And some groups are vulnerable to being left out: old people, non-tech-savvy people, people who don’t have access to technology…

It’s part of the job to take responsibility for the technology we create. And it’s a damn hard thing to do right, but hey, that’s what makes our job really cool!

I am not alone with these thoughts… Let me close by quoting a very wise man:

It is not enough that you should understand about applied science in order that your work may increase man’s blessings. Concern for the man himself and his fate must always form the chief interest of all technical endeavors; concern for the great unsolved problems of the organization of labor and the distribution of goods in order that the creations of our mind shall be a blessing and not a curse to mankind. Never forget this in the midst of your diagrams and equations.

– Albert Einstein, speech at the California Institute of Technology, Pasadena, California, February 16, 1931, as reported in The New York Times, February 17, 1931, p. 6.

Value systems: Kant

The famous categorical imperative!

Kant was the most famous philosopher of the Enlightenment, and he made some truly remarkable contributions to human civilization. One of his most notable achievements is the creation of an ethical framework based purely on rational thinking. Previous ethical frameworks, at least in Western civilization, were all based on God.

The categorical imperative is the one fundamental rule that defines what it is to be a good person. It has several different formulations, all meaning the same thing. Here is one formulation:

Act only according to that maxim whereby you can at the same time will that it should become a universal law.

Kant chose precision over clarity in his formulation, so it takes some time to digest what he is saying.

Let me try to simplify his words (and by doing so, choose clarity over precision):

 Do unto others as you would have them do unto you.

My personal value system is very much in line with Kant’s… I’d say it’s 99% the same. Why not 100%, you ask? Only because of this: Kant’s value system is based on rules and not on the consequences of one’s actions. I firmly believe that there are situations in life where blindly following rules – no matter how smartly defined those rules are – is not the right thing to do. One always has to be mindful of the potential consequences, and apply good judgement before following any rule.

Rules or consequences – this is a well-known debate among philosophers: it’s the debate between the deontological and the consequentialist ethical systems. Kant was a representative of the deontological school of thought. Bentham and Mill, the utilitarians, were representatives of the consequentialist school.