The moment I stopped taking notes for the panel on ‘Rebuilding Trust in the Digital Age’, I realised that it was impossible to sum everything up in one piece. Most, if not all, attendees raised their hands to ask the panellists a question after a discussion on building ethical artificial intelligence (AI), self-regulation in Silicon Valley, and creating inclusive online spaces. In this piece, I focus on a thread that runs through all these discussions: the importance of human values and culture in building ‘ethical’ AI and regulating technology.

After discussing the growing distrust in platforms due to data breaches and a lack of accountability, John Dupree (Mississippi & Christ Church 1976) asked the panellists a rather important question: “Is ethical AI a reasonable or real thing?” Lian, Helen, and Atindriyo seemed to agree on one thing – that ethics are contextual and subjective. While organisations like the United Nations are trying to define ethical AI at a multilateral level, a one-size-fits-all model is not achievable in practice. Helen pointed out that an ethical AI in China would be focused on national security and rooted in surveillance, while in the United States it would be more concerned with addressing algorithmic harm. ‘Human values of fairness, transparency, and justice must be at the core of building ethical AI’, remarked Lian, arguing that certain core human values should underpin ethical AI irrespective of context.
This raised an important point about technology governance: it is not an isolated technical category, but a product of the culture in which it operates. Platforms therefore need policies and guidelines informed by local cultures and practices rather than a blanket model applied uniformly across the world. This is one of the first steps in building ethical AI. However, it comes with a less-discussed caveat: discourses on ‘culture’ are often hijacked by authoritarian regimes to censor dissent, and therefore culture must be understood and located in people at the margins as much as in the elites at the forefront.
This contextual understanding plays a key role in moderating online content as well. ‘Harm is universal, but the way it is manifested is local’, remarked Lian while discussing TikTok’s content moderation policies. She explained how online content depicting the use of firearms would be harmful in one society but not in societies where firearms are part of a non-threatening cultural practice. Community guidelines must therefore be reviewed quickly and evolve at pace. She stressed the need for these guidelines to be context-specific, address the nuances of different cultures, and have human values at their base. Answering a question on who gets to decide what is ‘harmful’, Atindriyo remarked that the definition of harm is dynamic and actively shaped by society. This societal flux also causes drift in how AI systems define harm when detecting hate speech. The dynamism of what constitutes ‘harm’ shows that algorithmic decision-making and content moderation are not value-neutral, but rather processes rooted in political decision-making. Given the immense power such platforms hold, it is important for them to be ‘fair, safe, and representative’, as Atindriyo put it while stressing the need for guardrails in AI systems.
When asked what their respective companies are doing to implement such guardrails, the panellists mentioned initiatives like establishing an AI security fund at Google, developing inclusive and context-sensitive guidelines at TikTok, and building safe and secure AI at Galileo.
The panel ended with recommendations on how to navigate digital spaces without compromising safety and privacy. Helen encouraged the audience to explore open-source AI models and to look at AI beyond U.S.-based models. Arijit, noting that AI models ‘are not infallible’, remarked that users hold more power than they think they do. He argued that people can walk away and opt out of a technology if their privacy is being harmed. However, as one of the questions pointed out, information capitalism operates in a way that makes it nearly impossible to opt out of the internet, or even of platforms like WhatsApp, which hold immense market share in online communication services. Lian’s concluding advice was to practise critical thinking and fact-check the information we receive online.