CSS Student Colloquium: Dilara Yesilova
Driverless Vehicles and Machine Ethics
About the event
Autonomous machines are developing rapidly, and machine intelligence is beginning to replace human activity in many sectors. The great autonomy given to these machines often requires them to make decisions on their own. How should machines decide ethical dilemmas over which even humans hesitate? Scientists around the world are searching for an answer to that question, particularly in the ethics of driverless cars, since such vehicles are expected to come into use in the near future. In general, this ethical research can be reduced to two approaches. The first approach I consider is the ‘Moral Machine’ project, which collects humanity’s aggregate moral choices on the understanding that morality is programmable. I claim that this method corresponds to ‘explicit ethical agents’ in the classification of ethical machines. The second approach aims to teach machines right behavior, on the belief that machine intelligence will be able to make its own decisions. I place this approach in the class of full ethical agents. The first approach will be insufficient for today’s complex ethical problems because it is impossible to simulate all possible scenarios, impossible to build a global consensus, and impossible to constrain an artificial intelligence tightly enough to predict its behavioral pattern. Although it is not yet known whether an artificial intelligence can one day be a full ethical agent, I believe that adopting the second approach will yield more accurate and intended results.