Quoting Bodor Ádám

This statement is in Hungarian, so many of you may find it hard to understand; my rough English translation follows it. But believe me, it is as strong as a morning espresso.

Egymás érdekeinek, értékeinek megbecsülése, kölcsönös tisztelete csakis harmonikus egzisztenciális és morális viszonyok között lehetséges, olyan fokú stabilitásban, amelyben az ember saját sorsával már annyira elégedett, hogy fogékonnyá válik szomszédja mássága, sajátos intézményei iránt, sőt közelségének mint tulajdon gazdagságának örülni is tud. Jelenleg minden jel arra mutat, ettől mérhetetlen távolságra vagyunk.

In English, roughly: Appreciating and mutually respecting each other’s interests and values is possible only amid harmonious existential and moral conditions, at a level of stability where a person is already so content with his own lot that he becomes receptive to his neighbour’s otherness and distinctive institutions, and can even rejoice in his neighbour’s closeness as a richness of his own. At present, every sign suggests that we are immeasurably far from this.

Values in order – by Stephen Fry

I suppose the thing I most would have liked to have known or been reassured about is that in the world, what counts more than talent, what counts more than energy or concentration or commitment, or anything else – is kindness. And the more in the world that you encounter kindness and cheerfulness – which is its kind of amiable uncle or aunt – the better the world always is. And all the big words: virtue, justice, truth – are dwarfed by the greatness of kindness.

This resonates a lot.

AI Now Institute

The AI Now Institute is a research institute dedicated to understanding the social implications of AI technologies. They are doing our society a great favor: addressing the ethical concerns around the rapidly approaching Fourth Industrial Revolution.

Recently I read their 2018 annual report. Here are my highlights…

Who is responsible when AI systems harm us?

The AI accountability gap is growing. The technology scandals of 2018 have shown that the gap between those who develop and profit from AI—and those most likely to suffer the consequences of its negative effects—is growing larger, not smaller.

Civil rights concerns

Earlier this year, it was revealed that Pearson, a major AI-education vendor, inserted “social-psychological interventions” into one of its commercial learning software programs to test how 9,000 students would respond. They did this without the consent or knowledge of students, parents, or teachers. This psychological testing on unknowing populations, especially young people in the education system, raises significant ethical and privacy concerns.

Power asymmetries between companies and the people

There are huge power asymmetries between companies and the people they serve, which currently favors those who develop and profit from AI systems at the expense of the populations most likely to be harmed.

Moreover, there is a stark cultural divide between the engineering cohort responsible for technical research, and the vastly diverse populations where AI systems are deployed. These gaps are producing growing concern about bias, discrimination, due process, liability, and overall responsibility for harm.

Their full report: https://ainowinstitute.org/AI_Now_2018_Report.pdf

John Stuart Mill’s Dilemma

John Stuart Mill wrote a famous book, entitled “Utilitarianism”.

Utilitarianism was first formulated by Jeremy Bentham, and though it was a sound moral theory, it received serious criticism. The objection was that utilitarianism does not take the individual into account, and that some moral principles need to be followed even for the benefit of a single individual. John Stuart Mill went to great lengths to defend his standpoint. He stated that when you optimize for maximum utility for the greatest number of people in the long term – emphasis on the long term – utilitarianism is actually not the cold-hearted, calculating moral theory it appears to be.

But there was one element he could not reconcile with utilitarianism. He had the intuition that human beings are entitled to an intrinsic respect, no matter what. Even when you optimize for the whole of humanity in the long term. I like that!

Justice with Michael Sandel – Lecture 5

Libertarianism – people are entitled to intrinsic respect; the fundamental right is the right to liberty.

Free to Choose

With humorous references to Bill Gates and Michael Jordan, Sandel introduces the libertarian notion that redistributive taxation—taxing the rich to give to the poor—is akin to forced labor.

Lecture: http://justiceharvard.org/lecture-5-free-to-choose/

Libertarian principles:

No paternalist legislation – laws that protect people from themselves

No morals legislation – laws that promote virtuous living

No progressive taxation – redistributing income or wealth from the rich to the poor

Am I a libertarian? What do I think of these principles?

Paternalistic and moral laws – a certain minimum is required to sustain the stability of society. As little as possible.

Progressive taxation – required, for the same reason. Large inequalities make society unstable. Aristotle: the poorer 90% only accept being poor because they are convinced that they have a reasonable chance of becoming rich. Rousseau: society does you a favour by sustaining a favourable environment; if you live in a society, the social contract applies.

The moral goal is to provide a prosperous society, where everyone has an equal chance and everyone enjoys a large amount of guaranteed freedom.

Am I a utilitarian?

Utilitarianism is an ethical and philosophical theory that states that the best action is the one that maximizes utility, which is usually defined as that which produces the greatest well-being of the greatest number of people.
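This definition can be put into symbols. Here is my own sketch of the textbook formalization – nothing from Bentham or Mill directly: the best action is the one that maximizes the sum of everyone’s well-being,

\[
a^{*} \;=\; \operatorname*{arg\,max}_{a \in A} \; \sum_{i=1}^{n} u_i(a)
\]

where A is the set of available actions, n is the number of people affected, and u_i(a) is the well-being that person i derives from action a. Note that nothing in this sum protects any individual term: one person’s enormous suffering can be outweighed by many small gains for others. That is exactly what the challenge below exploits.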

Is utilitarianism the best moral system we could possibly have? Many people around me think so.

This is a very compelling idea. But it can be challenged. Here is the challenge…

Would you want to live in a world… where everyone lives a long and prosperous life and everyone is very happy… but! there is one single person, a child, who is constantly suffering and has a miserable life… in exchange for the happiness of the others. Would you be OK with this, being one of the millions of happy people, while knowing that there is one who pays for your happiness?

I would not.

Justice with Michael Sandel – Lecture 1

The Moral Side of Murder

If you had to choose between (1) killing one person to save the lives of five others and (2) doing nothing, even though you knew that five people would die right before your eyes if you did nothing—what would you do? What would be the right thing to do? That’s the hypothetical scenario Professor Michael Sandel uses to launch his course on moral reasoning.

Lecture: http://justiceharvard.org/themoralsideofmurder/

My thoughts about this lecture…

The first questions were easy to answer. As we went deeper into the lecture, they became harder. Often I had to go back to the previous questions and revise my answers… not the outcome of the moral dilemma (which option is the morally correct one?) but the reasoning that led to that outcome.

Very often we know why we believe that something is right, but when pushed to dig deeper, we realize that we actually don’t know. We feel that something is right, and our rational brain comes up with a logical-sounding justification – it rationalizes our choice. But that rationalization can be challenged, and it sometimes breaks down.

The moral responsibility of software engineers

(thoughts collected from others, mainly software engineers)

Kate Heddleston

Since I started programming, discussions about ethics and responsibility have been rare and sporadic. Programmers have the ability to build software that can touch thousands, millions, and even potentially billions of lives. That power should come with a strong sense of ethical obligation to the users whose increasingly digital lives are affected by the software and communities that we build.

[…]

Programmers and software companies can build much faster than governments can legislate. Even when legislation does catch up, enforcing laws on the internet is difficult due to the sheer volume of interactions. In this world, programmers have a lot of power and relatively little oversight. Engineers are often allowed to be demigods of the systems they build and maintain, and programmers are the ones with the power to create and change the laws that dictate how users interact on their sites.

https://kateheddleston.com/blog/a-modern-day-take-on-the-ethics-of-being-a-programmer

2015

Ben Adida

Here’s one story that blew my mind a few months ago. Facebook (and I don’t mean to pick on Facebook, they just happen to have a lot of data) introduced a feature that shows you photos from your past you haven’t seen in a while. Except, that turned out to include a lot of photos of ex-boyfriends and ex-girlfriends, and people complained. But here’s the thing: Facebook photos often contain tags of people present in the photo. And you’ve told Facebook about your relationships over time (though it’s likely that, even if you didn’t, they can probably guess from your joint social network activity.) So what did Facebook do? They computed the graph of ex-relationships, and they ensured that you are no longer proactively shown photos of your exes. They did this in a matter of days. Think about that one again: in a matter of days, they figured out all the romantic relationships that ever occurred between their 600M+ users. The power of that knowledge is staggering, and if what I hear about Facebook is correct, that power is in just about every Facebook engineer’s hands.

[…]

There’s this continued and surprisingly widespread delusion that technology is somehow neutral, that moral decisions are for other people to make. But that’s just not true. Lessig taught me (and a generation of other technologists) that Code is Law, or as I prefer to think about it, that Code defines the Laws of Physics on the Internet. Laws of Physics are only free of moral value if they are truly natural. When they are artificial, they become deeply intertwined with morals, because the technologists choose which artificial worlds to create, which defaults to set, which way gravity pulls you. Too often, artificial gravity tends to pull users in the direction that makes the providing company the most money.

https://benlog.com/2011/06/12/with-great-power/

2011
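Reading Adida’s Facebook story, what strikes me is how little code such power actually takes. Here is a minimal sketch in Python of the kind of filtering he describes – given a relationship history and photo tags, suppress photos that show an ex. Every name and data structure here is invented for illustration; it is not Facebook’s actual code.

```python
# Hypothetical sketch of filtering "memories" photos that show an ex-partner.
# All names and data structures are invented for illustration.

def exes_of(user_id, relationship_history):
    """Collect every past partner of a user from
    (user_a, user_b, still_together) relationship records."""
    exes = set()
    for a, b, still_together in relationship_history:
        if still_together:
            continue
        if a == user_id:
            exes.add(b)
        elif b == user_id:
            exes.add(a)
    return exes

def filter_memories(user_id, candidate_photos, relationship_history):
    """Drop any candidate photo tagged with one of the user's exes."""
    exes = exes_of(user_id, relationship_history)
    return [photo for photo in candidate_photos
            if not exes & set(photo["tagged_user_ids"])]

# Made-up example: alice used to date bob and currently dates carol.
history = [("alice", "bob", False), ("alice", "carol", True)]
photos = [
    {"id": 1, "tagged_user_ids": ["alice", "bob"]},    # shows the ex: dropped
    {"id": 2, "tagged_user_ids": ["alice", "carol"]},  # kept
]
print(filter_memories("alice", photos, history))
```

A few dozen lines. As Adida says, the staggering part is not the code – it is the data it runs over, and the fact that an engineer can ship something like this in days.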

Arvind Narayanan

We’re at a unique time in history in terms of technologists having so much direct power. There’s just something about the picture of an engineer in Silicon Valley pushing a feature live at the end of a week, and then heading out for some beer, while people halfway around the world wake up and start using the feature and trusting their lives to it. It gives you pause.

[…]

For the first time in history, the impact of technology is being felt worldwide and at Internet speed. The magic of automation and ‘scale’ dramatically magnifies effort and thus bestows great power upon developers, but it also comes with the burden of social responsibility. Technologists have always been able to rely on someone else to make the moral decisions. But not anymore—there is no ‘chain of command,’ and the law is far too slow to have anything to say most of the time. Inevitably, engineers have to learn to incorporate social costs and benefits into the decision-making process.

[…]

I often hear a willful disdain for moral issues. Anything that’s technically feasible is seen as fair game and those who raise objections are seen as incompetent outsiders trying to rain on the parade of techno-utopia.

https://33bits.wordpress.com/2011/06/11/in-silicon-valley-great-power-but-no-responsibility/

2011

Damon Horowitz

We need a “moral operating system”

TED talk: https://www.ted.com/talks/damon_horowitz

2011

Why did the US drop the atomic bomb on Japan?

Why did the US drop the atomic bombs on the Japanese? On people. On hundreds of thousands of people. When the war was in practice already won. When the US was nowhere near any existential threat.

How did they overcome the moral obstacle, the principle of not hurting other people, the force of conscience that should have prevented that?

I was looking for answers to this question while reading Richard Rhodes’ book. Here are my answers…

Moral fatigue

The war had been dragging on for years. People were mentally exhausted. They felt they would give anything just to put an end to their own suffering.

Dehumanization

Japanese culture seemed very distant from Western civilization. The Japanese seemed strange and irrational, and they looked different. People in the US did not understand them. Somewhere deep down, they did not think the Japanese were really human… of equal value.

Hunger for power

The US was aware that the atomic bomb was not just a means to win the war. It was a tool to skew the post-war balance of power in the world – and to skew it strongly in its own favour. And what is the one thing wired even more deeply in humans than caring about other humans? The will to power.

No more words.