
Guest Post: The Role Of Academia In Bridging Global AI Divides – Cultural Representation In AI

Updated: Aug 23, 2023

As students return to school, September inspires learning and collaboration that strengthen opportunities for work and personal interests. Many of us enjoy the benefits technology brings to our lives, and artificial intelligence is embedded in many of the applications we are coming to depend on at work and at home. Unfortunately, we cannot ignore the ethical, privacy, and human rights issues that AI presents to individuals, locally and globally.


In conjunction with the Morality & Knowledge in Artificial Intelligence (MKAI) Forum, PrivCom is pleased to share this guest post by Animesh Jain, Vemir Ambartsoumean & Richard Foster-Fletcher (Chair of MKAI). The authors explore the importance of academia's role in mutual trust-building and cross-cultural development, promoting the collaboration and cooperation needed to establish AI governance that serves and protects humanity.

 

Culture influences reasoning. People’s choices, including conflict resolution and decision-making, are rooted in their environment and lived experiences. Global AI governance systems must therefore take culturally influenced reasoning into account if they are to deliver effective and sustainable AI for citizens and policy-makers and to build fertile ground for AI governance.

Influences of Culture on AI Ethics and Governance

The societal implications of AI have helped raise governments' awareness of the need to collaborate on an international framework for AI governance. More than 160 documents from around the world aim to contribute to the development of ethical principles and guidelines for AI, yet AI regulation remains fragmented, reflecting moral pluralism and cultural diversity. Creating globally compatible AI governance therefore requires substantial and coordinated effort.

Major issues include mistrust among cultures and the practical challenges of coordinating across locations. Looking at moral decision-making gives a better understanding of how cultural differences play out. Psychological studies show that individuals are not strictly utilitarian in their moral reasoning, because personal values and cultural standards intervene; answers to the trolley problem are one example. Cultural products such as stories, religious texts, and folktales, formed over generations, provide a reliable source of data for cultural modelling: they supply historical memory, a moral compass, and a backbone for decision-making. Considering cultures can therefore help policy-makers understand how different groups might react to new regulations and help negotiators find common ground.

The need for global AI governance and frameworks is growing. Cultural differences among nations and regions present a unique set of challenges, especially when aligning core ethical principles. Examining the historical influences that inform current societal thinking may contribute to a way forward, provided this is done while respecting diverse cultural perspectives and priorities.

Case Study: Europe & China

One example of historical-cultural differences in AI lies with China and Europe as regional blocs. Both regularly discuss privacy, safety, fairness, robustness, and transparency; however, their viewpoints differ markedly.


European roots in the Enlightenment and in various revolutions have grown into a rights-based mentality focused on protecting individuals from harm. In Europe, under the General Data Protection Regulation (GDPR), privacy involves protecting an individual's personal data from both private-commercial entities and the state. Chinese data privacy guidelines, historically influenced by the Confucian value system, have developed uniquely into a hierarchical structure of shared social responsibility with a community-based, state-run focus. Under the Chinese guidelines, data is protected from private companies and other malicious agents, but the state retains absolute control over citizens' personal data, something that would be very difficult to incorporate in a European state.

While Europe emphasizes fairness and diversity by factoring in gender, ethnicity, disability, and other characteristics, and insists upon protecting vulnerable individuals, China focuses on society as a whole, working to reduce regional disparity and regulating individuals' behaviour by encouraging inclusive development among its citizens. When discussing the same concepts, such as privacy and safety, Europe and China therefore essentially mean different things. While the Chinese see AI as a means of continuous improvement, some Europeans view it as a potential loss of control.


For Europeans, AI development should be fair and its processes transparent. For the Chinese, AI development is meant to elevate society, even at the expense of citizen privacy.



Overall, AI is perceived largely as a force for good in Asian cultures, whereas Western cultures harbour a deep-seated fear of a dystopian technological future. In Chinese society, future technology and robots are envisioned as pets and companions, while in the Western psyche they are envisioned as tools and potentially deadly machines (as portrayed in Terminator, The Matrix, and Black Mirror). This reveals blind spots and represents a gap in the cultural representation of AI.

Importance of Cooperation for Sustainable Global AI Development


Conclusion

International cooperation that respects diversity across cultures and countries is required for sustainable global AI development. As the Chinese-European differences and similarities illustrate, modern overlap and historical divergences require delicate consideration. Hence, more avenues are needed to discuss these challenges and to increase cross-cultural AI cooperation among nations.


Mutual trust-building exercises and cross-cultural development will create a rich atmosphere for collaboration and cooperation, helping to build governance that serves all of humanity.

Interested in learning more about how Artificial Intelligence (AI) can impact information privacy, ethics, and human rights?

Have you seen an interesting story about AI in the media and have questions about what it means for information privacy?


 

To reach out to the Office of the Privacy Commissioner, please visit our Contact Us page.


For more information about PrivCom, go to Press Background | PrivComBermuda (privacy.bm)


