It's the greatest paradox of the fourth industrial revolution: a tool so powerful it can reimagine how we construct narratives and understand the world, while simultaneously risking the distortion of our collective vision.
I am talking, of course, about artificial intelligence and its profound impact on our sense-making. Let me explain.
AI serves as a lens that helps us look beyond our individual perspectives. By analyzing vast datasets, it can reveal hidden social dynamics, emerging cultural trends, and marginalized experiences that might otherwise remain unseen. These systems can synthesize thousands of personal stories, identifying nuanced patterns of human experience that transcend individual accounts.
However, the same capability carries significant risks. AI systems are essentially mirrors that reflect the biases embedded in their training data. And when these datasets represent dominant cultural narratives, the technology inadvertently reinforces existing power structures, potentially rendering invisible the very voices it claims to amplify.
To understand this evolving landscape, I partnered with ComplexChaos, a platform specializing in AI-mediated cooperation and collective intelligence. Our research, which surveyed industry leaders and practitioners, revealed a fundamental tension in how AI shapes storytelling.
While most respondents highlighted AI's potential to enhance narratives through personalization and data analysis, just as many raised concerns about inclusivity and representation. The message is clear: building better AI means taking responsibility for its social impact.
Most critically, experts warned that over-reliance on AI could hinder the creativity and critical thinking that make storytelling powerful. The solution lies in creating diverse datasets that recognize intersectionality while ensuring human insight guides AI's capabilities. Without this balance, AI systems risk producing what one respondent called "shovelware": content that adds volume but lacks meaningful impact. Here's what industry leaders revealed about AI's impact on storytelling, in their own words.
Thomas D. Zweifel is a former CEO, a board member of multiple companies, a Columbia leadership professor emeritus, and the award-winning author of 11 strategy and leadership books.
Zweifel sees AI as a potential catalyst for democratic storytelling. "AI can be a vehicle for enhancing democracy by giving a voice to people without a voice," he said. "By understanding public sentiment and the nuances of various perspectives, creators can develop stories that foster empathy and drive conversations about important societal issues."
For instance, projects like The New York Times' "The Daily" use AI to curate content and suggest personalized storylines based on user interests, ensuring that audiences engage with topics that matter to them. Another example is the application of AI in developing virtual reality experiences that immerse users in the lives of others, like "Clouds Over Sidra," which transports viewers to a Syrian refugee camp.
“By democratizing storytelling, AI can amplify important narratives that challenge societal norms and inspire collective action, ultimately contributing to a more informed and empathetic society,” he said.
However, Zweifel warns about the perpetuation of biases: AI models trained on datasets reflecting existing biases can lead to storytelling that's xenophobic, racist, ageist, or sexist.
Making AI truly representative means building it with, not just for, diverse communities. This involves collecting varied data, involving different voices in design, and empowering creators from all backgrounds to use AI in telling their stories.
“Platforms can leverage AI to assist creators from underrepresented communities in crafting their stories, helping to ensure that the narratives produced are authentic and resonate with their intended audiences.”
Lucia Terrenghi, Senior Director of UX, YouTube Creator
Terrenghi highlighted AI's unique ability to expand narrative possibilities through natural language processing and multi-modal data analysis. "AI offers the opportunity to converse and build ideas using natural language and hence augment people's perspectives," she said. Large Language Models (LLMs) can help tell stories from various viewpoints, considering different genders, ages, and ethnicities.
Yet we need to go beyond readily available datasets, actively seeking data from marginalized communities and diverse cultural backgrounds. This could involve partnerships with community organizations, academia, and cultural experts from different backgrounds.
“It's important to increase the representation of underrepresented groups and reduce the dominance of overrepresented groups, so that we don't inherit societal biases and instead build models upon balanced datasets.”
Arlette Danielle Román Almánzar, Postdoctoral Researcher, Department of Management, Technology, and Economics, Zurich
“The problem of ‘trash-in, trash-out’ in AI is widely known. There has been a fixation on mitigating biases by providing statistical fixes or enhancing the training data. However, some academics argue that even if we collect the most representative data of the world as it is, it will still hold society's current prejudices and biases, because it is our mirror.”
“This means that even with the most representative data, we must be active in bias mitigation and ask ourselves: do we want to represent the world as it is or as it was, or do we want to shape a new world?”
“Furthermore, AI poses the risk of epistemic violence, when dominant knowledge structures are imposed on others, and even epistemicide, when knowledge systems are silenced, devalued, or annihilated. Such practices exclude marginalized perspectives and narratives. Local slang and dialects are lost in the digital world, or accents go unrecognized by the technology because only the dominant language is recognized.”
“Interdisciplinary collaboration between technology developers, social scientists, and marginalized stakeholders is crucial to grasp the social and ethical implications of AI. In order to minimize the risk of epistemic violence and blindspots, a diverse background of technology developers and stakeholders is needed in all stages of development.”
“Bias mitigation efforts must extend beyond data collection to include regular audits and evaluations of AI models after deployment. Furthermore, there needs to be more discussion about the role of the technology developer as a source of bias, as research has shown that programmers can transfer their cultural bias into their algorithms. It is essential to recognize that technical fixes alone are insufficient to mitigate bias; they must be complemented by a deep understanding of the ethical and social contexts.”
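The post-deployment audits Román Almánzar describes can start with very simple fairness metrics. The sketch below is illustrative only (the function name, toy predictions, and group labels are invented, not drawn from any specific audit tool): it computes the demographic parity gap, the largest difference in a model's positive-prediction rate between demographic groups, which is one common first check in a bias audit.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between any two groups.

    predictions: iterable of 0/1 model outputs
    groups: iterable of group labels, aligned with predictions
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    # Per-group rate of positive predictions, then the max-min spread.
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Toy audit data: group "a" receives a positive outcome 75% of the time,
# group "b" only 25% of the time.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_gap(preds, groups)
print(f"demographic parity gap: {gap:.2f}")  # → 0.50
```

A gap near zero does not prove a model is fair, and this is only one metric among many; as the quote stresses, such technical checks complement, rather than replace, engagement with the social context of deployment.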
Barak Kassar, Co-Founder and Partner, BKW Partners
“Gen AI is being used to create imagery to illustrate stories as well as the stories themselves. At this stage of AI's development, and of our development as AI users, the impact on society is neutral to negative. Human creativity is thus far the true driver of the story: plot, character, pacing, specificity, feeling. AI is being used a lot more in shovelware storytelling: corporate storytelling, political storytelling, run-of-the-mill social media, and regurgitative news. This just adds volume but does not have a huge societal impact. As AI is used to scale political storytelling, it can begin to have a negative or potentially positive impact, depending on the intent of the practitioners behind it.”
“Great stories are actually few and far between. AI might help create some of these in time, but as a society we can really only ingest so much content. Use of AI feels inefficient and likely ineffective here.”