Artificial Intelligence (AI) could reshape journalism’s essential function in the coming years, a professor from one of the Qatar Foundation partner universities has noted, while highlighting some of the substantial risks of integrating AI into journalism.
“As AI becomes more involved in content creation, it could begin to reshape journalism’s essential function in helping the public understand the world and make informed decisions. This is particularly critical in regions like the Global South, where AI systems may not recognise or value local perspectives,” Prof Eddy Borges-Rey, associate professor in residence at Northwestern University in Qatar, told Gulf Times.
Prof Borges-Rey said that journalists are currently experimenting cautiously with AI for specific tasks such as transcription, translation, summarisation, and initial drafts.
“If current trends continue, we may see a newsroom workflow where human editors manage AI-generated drafts, optimising content for algorithms rather than audiences. The most radical shift will not be technical; it will be conceptual,” he explained.
However, the professor cautioned that these experiments are occurring amid growing concerns and involve substantial risks.
“Tools like ChatGPT and Gemini can produce plausible-sounding but inaccurate content, and generative outputs often lack context and editorial judgment. The BBC recently conducted a study asking major AI platforms to summarise 100 news stories; 51% of the results contained significant issues. These findings reflect a broader trend: while journalists explore automation, the risks remain substantial,” he said.
The academic said that AI is, rather, a transformative force that challenges people’s assumptions about knowledge, authority, and responsibility. He stated: “If journalism is to remain a meaningful institution, we must be proactive in shaping how AI is adopted. That means investing in education, building internal safeguards, and centering voices from the Global South in global debates. We should be cautiously optimistic, but more importantly, we must move away from the rhetoric of ‘falling behind’.”
Prof Borges-Rey noted the media faces several profound challenges in integrating AI. “First, there is the problem of bias — most large language models are trained on English-language data from the Global North, which can marginalise other cultures and viewpoints,” he said. “Second, the lack of transparency in how AI makes decisions raises questions about accountability.
“Third, editorial authority is being blurred: when a machine suggests a headline or summary, who is ultimately responsible for its accuracy or ethical framing? And finally, there is mounting pressure to adopt AI quickly, often without adequate safeguards in place.”
Prof Borges-Rey also weighed the advantages of integrating AI into journalism against its risk factors.
“It can reduce the burden of repetitive tasks, improve accessibility through automated translations and voiceovers, and help identify patterns in large datasets for investigative reporting. It can also support local journalism by generating summaries or story alerts for underserved communities. But these benefits must be weighed against the ethical and professional risks. Used wisely, AI can support journalism’s mission. Used blindly, it can undermine the very trust it is meant to foster,” he added.
