A lot of our presentations have brought up ethical or moral issues, but I think the technological singularity presentation raised a really interesting point: how could we program artificially intelligent systems to behave morally or ethically?
Of course there are a lot of technical things at work behind the scenes, but I think the most important question here is: who decides how to program the systems? Would there be an organization founded to develop these codes? And who decides that those are the most qualified people?
One issue that comes to mind is culture - how would cultural norms be incorporated into these moral and ethical codes? There are obviously huge differences between societies and cultures, so this would be as essential to the program as which language (spoken, not programming) the system used. And this isn't even necessarily an international issue - think of the differences between the Southeast and the Northeast in our own country.
Since we talked about outsourcing recently, consider this: what if the company was founded in America, the actual program was written by people in India, production was outsourced to China, and the robots/machines/whatever were shipped out to 10 different countries?
What other problems can you see with programming moral codes?
In response to the question of culture, I believe that the more technology-driven countries such as the US and Japan will be the first to pioneer and implement AI devices, since they're usually at the forefront of those kinds of discoveries. However, I think that the US will be more susceptible to the dangers of AI, since Western cultures tend to be more independent than Eastern cultures. Americans are usually more individually driven, so they may neglect their role as members of a collective society and abandon the norm by adopting more singularity-like devices. I believe the technological singularity won't become a global trend, since many cultures prefer more traditional ways of life and do not embrace technology as much as other countries do.
The only way I see of programming a machine to have a moral code is for it to be capable of making its own decisions. We can give it instructions about what is currently acceptable in society and feed in what the laws are now, but ethics and morality have to do with decision making on the part of the individual inside that society. These machines would end up with the moral biases of their programmers, so it would be difficult to decide who to put in charge of this. The organization of coders you proposed would need people from varying backgrounds so that the issue of cultural divides wouldn't be so strong.
While many cultures currently prefer more traditional ways of life, there is no way to predict how these cultures will adapt in future generations. What if countries like the United States benefit tremendously from the advances of AI? Will these cultures change their views to improve their standard of living? There are currently so many unanswerable questions regarding this topic.
Here is another problem I see with AI. Perhaps society can be successful in creating AI that is subservient to the human race. When future generations come along, their views may shift toward sympathy for AI. Could this eventually become an AI rights issue? I know this is a radical question to pose, but there are many uncertainties surrounding this topic, and this is one uncertainty that cannot be answered today.
I think before people start worrying about this, the technology needs to come out first. Moral codes will become an issue once we find out what the first artificial intelligence robot will do. Language wouldn’t be a problem. If the programmers are smart enough to create a computer that thinks, they would easily be able to program different languages in it. I agree with S Patel in saying that these questions will eventually be answered, just not today.
I also agree that the technology should be invented and utilized before such codes or questions can truly be posed and discussed. However, I think it is important to realize that while these computers may be programmed to 'think', they cannot be compared to actual human processing, and thus would redefine typical ethical codes and moral laws. Regarding cultural issues, the program would have to be tailored precisely to each region. While this might be difficult, it is necessary to get the greatest benefit, since so many actions, thoughts, and ideas that are accepted in one culture may be incredibly offensive to a neighboring region. This issue inherently defines the differences between AI and human processing: humans have the ability to read reactions, assess situations, and decide on future actions. These are all issues that will not emerge for many years; however, they may be the questions facing future generations, much like the current generation's discussions over intellectual property.
It is important when discussing artificial intelligence ethics to assume that the technology exists. It is certainly a stretch to do this, but it makes the topic more credible and allows for better discussion. Aman brought up a great point in class about how AI should be kept limited enough that it is not comparable to humans, while still being really smart. One of the things that scared me about the technological singularity is how these systems develop automatic ways to improve themselves. What if these systems get so advanced that they surpass the human race and no longer need us to run them? There is a great ethical dilemma posed by AI, and the question of who deals with it is an even greater concern.
I'm not really sure that AI is even possible, but I'm sure a lot of people never dreamed that something like the internet or the computer was possible either, so it may be my own shortsightedness. It just seems that scientists still do not fully understand exactly how certain aspects of the brain work, and we are still finding new chemicals and processes that are responsible for simple movements and commands we perform every day. To me, if AI were ever developed, it would be hundreds of years from now, simply because there is a lot of research that still needs to be done to understand our own human bodies. How would we develop a machine using the human mind as a model if we still do not fully understand the model? However, if AI ever existed and had all the capabilities of a human brain, which I still think would be impossible, then it would be more a question of whether ethics and morality are programmed into our brains through evolution, or whether they are something we are taught. And if robots had our brains, does that not mean they are capable of learning, or is their intelligence fixed? I feel that we cannot really talk about the ethics and morality of something we know nothing about and will not understand until the questions actually arise hundreds of years from now.
I think that it would be unethical and morally wrong to program artificially intelligent systems. I think the most interesting and important question to focus on is the question of "who?" Giving any person or organization the authority to develop these codes and make these powerful decisions would give them too much power, and it is scary to think of that person or organization taking advantage of it. If the company were like the example in Ashley's post, I think this could jeopardize the relationships that different parts of the country have with one another, along with the relationships the US has with other countries. I agree with Aivi that the US and Western culture develop more independently and usually faster than the rest of the world, so it makes sense that AI would happen here first - which would also cause the US to experience the negative consequences before anyone else.
The problem with programming computers with moral codes is that they would not be able to make exceptions. In almost any case of a law or moral code, there are grey areas where exceptions can be made. Right now, we do not have the technology to teach computers how to deal with grey areas. Therefore, I do not think that we should entrust a computer with the responsibility of enforcing a moral code.
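To make the grey-area worry concrete, here is a minimal sketch of the kind of hard-coded, rule-based "moral code" the comment above describes. Everything in it (the rule table, the function name, the scenarios) is a hypothetical illustration, not any real system:

```python
# Hypothetical illustration: a rigid rule-based moral check.
# All names and rules here are invented for the sake of the example.

FORBIDDEN_ACTIONS = {"deceive", "harm", "steal"}

def is_permitted(action: str) -> bool:
    """Return True if the action is allowed under the fixed rule table.

    The limitation: the lookup carries no context, so it cannot
    represent exceptions (e.g. deceiving someone to protect them).
    """
    return action not in FORBIDDEN_ACTIONS

# A grey area the table cannot express: "harm" is always forbidden,
# even in cases (like self-defense) where a human might judge otherwise.
print(is_permitted("harm"))     # False, regardless of circumstances
print(is_permitted("comfort"))  # True
```

Any exception would have to be hard-coded as yet another rule, which is exactly the regress the comment points at: the rule table can only grow, never judge.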
This topic gets a lot more pressing if you accept the (plausible) assumption that at some point, someone is going to create artificial intelligence that can develop its own capabilities. The question stops being whether we "should" program computers to think morally; it becomes something we must do.
ReplyDeleteIf it is technically possible for a programmer to create AI (or becomes possible with hardware advancement), someone will do it, regardless of whether society as a whole thinks that it's a good idea. We should accept that fact and start thinking proactively about what conditions should be in place when technology like this arises.
AI forces the convergence of morality and technology. That puts the developers in a new and interesting situation. Instead of technology being a tool governed by its user, it would become something between a tool and an individual. I think that creating a truly independent form of artificial intelligence would revolutionize what it means to be human. Consider Descartes's famous line, "I think, therefore I am": it would no longer apply, or at least its meaning would change dramatically.