Researchers from the College of Law at Hamad Bin Khalifa University (HBKU) have proposed a new governance model, billed the ‘True Lifecycle Approach’ (TLA), for the use of Artificial Intelligence (AI) in healthcare.

“TLA is premised on the idea that AI should be governed across all stages of its research, design, implementation, and oversight,” noted Dr Barry Solaiman, assistant professor of law and associate dean for academic affairs at the College of Law, HBKU, in a recent article in Diplomatic Courier, a global affairs media network.

“At its heart is a simple concept: patients must come first. Governance should embed medical law and ethics throughout, and not ignore the patient,” he said.

The article states that as these technologies continue to evolve, the frameworks that govern their use remain fragmented. “Law, as usual, is slow to catch up to technological innovation. This raises a serious question: how do we ensure that AI in healthcare is safe, ethical, and worthy of patients’ trust?” Hence the proposal for the new model.

Though several regulatory bodies have already established governance frameworks for AI, these approaches focus narrowly on approving AI medical devices for market, the academic pointed out.

“Devices that pose greater risks to the public must undergo more rigorous checks before securing approval. The problem is that these frameworks have not been designed with the full complexity of healthcare AI in mind. They overlook important issues like informed consent, malpractice liability, and other patient rights,” he argues.

The College of Law faculty member also notes that these frameworks regard AI as a technical tool, not as something that deeply impacts human lives.

“TLA is grounded in healthcare law and ethics, emphasising the importance of the standard of care in medicine, patient confidentiality, matters of consent, and respect for cultural and religious differences. These values are particularly relevant among GCC countries, which are home to diverse expatriate populations.”

According to the writer, TLA has three core phases of governance, starting with research and development. This phase sets the foundation for legal and ethical AI in healthcare from the very beginning of its conception. In Qatar, HBKU worked in partnership with the Ministry of Public Health to create the “Research Guidelines for Healthcare AI Development.”

“These encourage developers to follow detailed processes and document the purpose, scope, and intended use of AI systems. Importantly, researchers should consider ethics and law from the outset, such as compliance with data protection law as it pertains to medical data,” he continues.

The second phase considers systems approval. While not all healthcare AI tools require regulatory approval, regulators should nevertheless have broader powers to ensure that healthcare AI meets robust safety standards.

TLA’s final phase focuses on AI once it is used in practice. Rules should govern not only researchers and developers, but also healthcare providers, insurers, and any other entity using AI downstream.

Dr Solaiman highlights that GCC countries are prioritising AI investment, including healthcare AI governance, and that developers and deployers of AI in healthcare settings must remember that AI is not just a technological project but a human one.

“While policymakers and lawmakers decide how to regulate the technology, TLA provides a principled roadmap that encourages discussions on how best to govern the technology, putting patients at the centre,” he adds.