Coding Ethics

30 Nov 2017

Ethics?

We’ve all heard the term “ethics.” It’s a broad word people use to describe how we decide what we think is “right.” Ethics and ethical decision making are taught starting in grade school, training people to think about what is morally right and wrong. While this is generally useful for getting people to consider ethical dilemmas early in their lives, it also poses a problem: there is no definitive right or wrong answer to what is ethically correct. There is only what society or individuals believe to be the right practice.

Therefore, thinking through ethical decisions can be overwhelming if it is applied to everything. If one person weighed the ethical issues of eating a McDonald’s burger, flushing the toilet, and turning on the lights every single day, it would drain a lot of energy and time from their life. However, some situations require that attention. A great example is the military. The United States military has an oath of enlistment that states, “…I will obey the orders of the President of the United States and the orders of the officers appointed over me, according to regulations and the Uniform Code of Military Justice.” While this may sound like a service member should always follow orders, the UCMJ also states that those in service have an obligation to disobey unlawful orders. I believe a similar idea applies in the computer science and coding field. As technology evolves, programming will play an even bigger part in our lives. Advancements such as self-driving cars will take on a critical role in our safety, and one small coding error may cost hundreds of lives. When lives are at stake, the ethical dilemmas come up more often.

Programming Dilemmas

The MIT Technology Review article “Why Self-Driving Cars Must Be Programmed to Kill” discusses an ethical dilemma in which a self-driving car must choose between killing its owner/driver and killing numerous other people. What a dilemma. When I asked my friends, all of them humbly stated that if they were the programmer, they would make sure the car killed the driver. But how many people would actually buy a car knowing it is designed to kill them? Not many. The dilemma now becomes one of marketing versus safety. Should a company program the car to hit the pedestrians so that it can market the car better? If so, is that car really safe? And if self-driving cars were programmed to kill the driver and no one bought them, would the roads be any safer? Either way, it seems that people’s lives will be in danger.
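To make the dilemma concrete, here is a deliberately toy sketch of what such a choice could look like in code. Everything in it (the function name, the policy flag, the occupant and pedestrian counts) is hypothetical and invented for this post; it does not reflect how any real vehicle’s software works. The point is only that the “ethical” decision eventually becomes an ordinary branch that some programmer has to write, one way or the other.

```python
from enum import Enum


class CollisionChoice(Enum):
    PROTECT_OCCUPANT = "swerve into the pedestrians"
    PROTECT_PEDESTRIANS = "sacrifice the occupant"


def choose_collision_outcome(occupants: int, pedestrians: int,
                             protect_occupant_first: bool) -> CollisionChoice:
    """Hypothetical policy for an unavoidable crash.

    'protect_occupant_first' stands in for the marketing-versus-safety
    decision discussed above: whoever sets this flag has already made
    the ethical call on behalf of everyone on the road.
    """
    if protect_occupant_first:
        return CollisionChoice.PROTECT_OCCUPANT
    # A purely utilitarian policy compares the number of lives instead.
    if pedestrians > occupants:
        return CollisionChoice.PROTECT_PEDESTRIANS
    return CollisionChoice.PROTECT_OCCUPANT


if __name__ == "__main__":
    # One driver, five pedestrians, utilitarian setting: the car sacrifices its owner.
    print(choose_collision_outcome(occupants=1, pedestrians=5,
                                   protect_occupant_first=False))
```

However you feel about either branch, notice that the whole debate collapses into a single boolean somebody has to commit to a repository.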

I would personally choose to kill the pedestrians. Not because of the marketing value, but because I believe the technology and items I buy should be designed to protect me. The moment we start building programs with the option to kill the user is the day we put ourselves at the mercy of our own technology. But that discussion goes far beyond the dilemma at hand. The point is that there is no right or wrong answer. As I said at the beginning of this post, it all comes down to personal opinion, and whether you believe in saving yourself or saving others, just make sure you do your research before you buy your next self-driving car.