How can one evaluate the fairness of decisions made by AI in a democracy? Which decisions can be fully automated? Who is liable for decisions made by AI? Corporate social responsibility and the responsibility of AI engineers will grow, along with demands set by society itself.
Discussion about the political regulation of AI is growing in the United States. As AI applications rapidly evolve, questions have surged about whether technology companies should be able to decide independently on the ethical, moral, and societal norms embedded in AI systems.
In recent years, tentative sets of norms regarding the development of AI and the liability of algorithms have been established in the state of California and in New York City. On the legislative side, an AI Caucus exists in the House of Representatives. Its main purpose is to deepen Congress members' understanding of AI technology development and its requirements. Additionally, a separate AI Committee operates within the White House.
Two issues of confidence will be integral in the future: on the one hand, an AI's confidence in its own decision-making and, on the other, people's confidence in the ability of AI systems to make fair decisions. The task of good AI policy is to keep these two from coming into conflict. Indeed, a recurring idea in current American thinking is that certain areas of life (particularly those involving the loss of one's freedom or one's life) should remain under human control. At the very least, there should be a monitoring mechanism with access to an AI's internal decision-making, allowing its reasoning to be traced and subsequently evaluated in legal terms.
A hybrid of human and AI decision-making is likely to remain the prevalent scheme long into the future. Full automation may be applied in areas that require it, for instance, on the battlefield. In legal processes, sentencing will foreseeably remain in the hands of human judges assisted by AI.
The significance of regulation in AI development is now recognized more clearly in the United States, but at the same time corporate social responsibility and the responsibility of engineers for their products are further emphasized. Society sets additional demands, the most pressing of which is the aforementioned issue of explainability. If the "thought process" of an AI has to be explained after the fact, some of its efficiency will inevitably be lost. The big question is: Which decisions made by AI should be open to explanation? Where does one draw the legal line?
Text by: Antti Niemelä
Follow Antti for more trade and tech-related content on Twitter @TopsyTurvyWorld
The officials at the Embassy of Finland in D.C. write reports that are periodically published in Finnish on the Embassy’s official site, on the website of the Finnish Ministry of Foreign Affairs, as well as on Team Finland’s site Market Opportunities.