The Making of the Atomic Bomb – by Richard Rhodes

I’ve read this book about the story of the atomic bomb. I wanted to know more about the subject because I think the making of the atomic bomb was the greatest ethical test humanity has faced so far in history. And it is a complex one. What happens if we create a weapon of unprecedented destructive power? What happens if we don’t? Can technological progress be stopped, and should it be? Can it be controlled – is humanity mature enough for that? Is science superior to national interest? The list goes on…

Before the atomic bomb, the field of physics was dominated by hope: all the ways that science could be used to make the world a fundamentally better place. Afterwards, everyone became painfully, continuously aware of how their work could be turned against everything they had ever dreamed of. It was a sobering awakening for the professions of physics and engineering.

We are approaching a similar sobering awakening in the profession of software engineering. And I’d rather learn from history than from my own mistakes.

So I’ve read this book, and it was time well spent. I made notes about all the ethical dilemmas those people went through. I have great admiration for those physicists – Bohr, Szilárd, Teller, Oppenheimer and others. They were pioneers of technology. And whether they liked it or not, they were pioneers of ethics too.

I am still arranging my notes and thoughts, but I promise to share them one by one on this blog.


Thoughts about responsibility

This year’s Google I/O event started with Sundar Pichai stating:

We recognize the fact that the tech industry must always be responsible about the tools and services that it creates.

I couldn’t agree more. In fact, this is why I started blogging again: I feel responsible for the technology we are creating.

But what does this really mean? Sundar did not elaborate during the keynote… So here are my own thoughts.

First, it means that we must apply our talents with good intentions. Creating an algorithm that diagnoses eye diseases is a good thing; making it accessible to all people is even better. Automating away routine everyday tasks and giving people more time for meaningful things is great too. Working towards self-driving cars is, in essence, also cool.

But second, being responsible also means being mindful of the consequences – some of them unintended – of the disruption we are causing with our innovations. Technology is fundamentally transforming society. Automation transforms the job market, social networks transform the media, autonomous weapons change the balance of power between nations. We cannot say that this is somebody else’s problem. We cannot say that we are just tech people, and that how our tech is used is for regulators to control, sociologists to analyze and philosophers to contemplate. No. We created it, we understand it best, and it’s our task to deal with the consequences. And in doing so, we must keep the interests of all humanity in mind. Technological advancement is good in principle, but only if we do it right: if we make life better for everyone. And some groups are vulnerable to being left out: the elderly, the non-tech-savvy, people who don’t have access to technology…

It’s part of the job to take responsibility for the technology we create. It’s a damn hard thing to do right, but hey, that’s what makes our job really cool!

I am not alone in these thoughts… Let me close by quoting a very wise man:

It is not enough that you should understand about applied science in order that your work may increase man’s blessings. Concern for the man himself and his fate must always form the chief interest of all technical endeavors; concern for the great unsolved problems of the organization of labor and the distribution of goods in order that the creations of our mind shall be a blessing and not a curse to mankind. Never forget this in the midst of your diagrams and equations.

– Albert Einstein, speech at the California Institute of Technology, Pasadena, California, February 16, 1931, as reported in The New York Times, February 17, 1931, p. 6.

Value systems: Kant

The famous categorical imperative!

Kant was the most famous philosopher of the Enlightenment, and he made some truly remarkable contributions to human civilization. One of his most notable achievements is the creation of an ethical framework based purely on rational thinking. Previous ethical frameworks, at least in Western civilization, were largely based on God.

The categorical imperative is the one fundamental rule that defines what it means to be a good person. It has several different formulations, all of which Kant held to express the same thing. Here is one formulation:

Act only according to that maxim whereby you can at the same time will that it should become a universal law.

Kant chose precision over clarity in his formulation, so it takes some time to digest what he is saying.

Let me try to simplify his words (and by doing so, choose clarity over precision):

Do unto others as you would have them do unto you.

My personal value system is very much in line with Kant’s… I’d say it’s 99% the same. Why not 100%, you ask? Only because of this: Kant’s value system is based on rules, not on the consequences of one’s actions. I firmly believe that there are situations in life where blindly following rules – no matter how smartly those rules are defined – is not the right thing to do. One always has to be mindful of the potential consequences, and apply good judgement before following any rule.

Rules or consequences – this is a well-known debate among philosophers: deontological versus consequentialist ethics. Kant was a representative of the deontological school of thought. Utilitarians like John Stuart Mill represented the consequentialist school.