Are medical degrees keeping pace with artificial intelligence?

TL;DR

There’s no doubt that medical students receive a thorough education, priming them for careers in precise, people-centred clinical practice. But in the age of artificial intelligence, a question is worth asking: are MDs being taught enough to capitalise on this emerging technological potential?

Background

My own undergraduate experience in computer science and mathematical modelling represents a relatively uncommon pathway into medicine. Most aspiring doctors, quite appropriately, enter medical school with backgrounds in biology or the life sciences. However, developing a solid understanding of tech-oriented problem-solving need not be confined to early undergraduate learning; these skills can still be meaningfully introduced at later stages of a doctor’s education.

Increasingly, hospitals are relying on advanced technologies: imaging devices for diagnosis, robotic systems for surgery, and software platforms to automate administrative tasks. Artificial intelligence promises to accelerate this trend further, but identifying which clinical tasks are appropriate for technological delegation, and then safely executing that translation, is far from trivial. This responsibility sits firmly within the remit of clinicians.

Diversification of MD backgrounds

In recent years, Australian institutions such as the University of Melbourne and the University of Sydney have relaxed prerequisite requirements for entry into their MD programs. This shift has opened the door for applicants from more technically oriented undergraduate backgrounds to enter the clinical environment. Emphasis is increasingly placed on harnessing this diversity, with the aim of introducing new ways of thinking about clinical care and healthcare systems more broadly.

At the same time, both undergraduate-entry and graduate-entry MD programs are introducing a greater number of elective subjects to complement the core medical curriculum. Internationally, similar trends are evident. Medical schools in the United States and Europe have begun embedding formal pathways in digital health, biomedical informatics, and clinical data science. The rationale is pragmatic: graduates with exposure to technical disciplines are better equipped to critically evaluate emerging tools alongside engineers and subsequently translate this innovation into clinically meaningful outcomes.

But these movements are insufficient on their own. The majority of the current medical workforce trained before artificial intelligence and digital health became embedded in routine care. For these clinicians, technological literacy must be developed through structured Continuing Professional Development (CPD). Technical literacy allows clinicians to see the forest for the trees when confronted with AI hype, distinguishing genuine clinical value from superficial optimisation.

This year’s Australasian Workshop on Neuroengineering and Computational Neuroscience (NeuroEng Australia 2025) saw neural engineers (working both in academia and for Asia-Pacific companies like Araya) and clinicians (like neurologist A/Prof Chris French) meet and collaborate. I noticed a lack of advertising for the event and, consequently, a shortage of doctors and medical students in attendance - my own attendance as a medical student came with out-of-pocket costs and was cut short by clashes with mandatory classes.

Development of technology relies on clinical comprehension

Biomedical engineers and technologists, while essential to innovation, cannot fully grasp the complexities and nuances of clinical care in isolation. Medicine is shaped by uncertainty, competing priorities, ethical obligations, and deeply human interactions—factors that are difficult to capture without lived clinical experience.

A thorough understanding of the clinical problems that new technologies aim to solve requires both a doctor’s perspective and meaningful access to patients’ experiences. Clinicians play a central role in articulating use cases, identifying unintended consequences, and advocating for patient-centred design. No matter how technically impressive a system may be, it remains useless if it solves the wrong problems.

Of course, navigating the sometimes decade-long regulatory pathways that precede the commercialisation of a medical instrument inevitably hinges on doctors and their ability to put clinical trials in motion. Earning FDA or TGA approval remains a major bottleneck in the emergence of new medical technologies.

Tech literacy for patient communication

Medical school entails many classes filled with meaningful discussions of ethical responsibilities in hypothetical scenarios. The importance of communication has been repeatedly emphasised to us - adequately informing patients about their treatment is a pillar of professional practice. But ethical responsibility is no longer merely theoretical.

Regulatory bodies and health authorities are increasingly explicit in placing accountability for AI-enabled care on healthcare providers and institutions. Emerging frameworks from international regulators (Therapeutic Goods Administration 2024) emphasise transparency, clinical oversight, and the ability to explain and justify algorithm-influenced decisions at the point of care. In practice, this means that doctors are expected not only to use AI-enabled systems, but to understand their scope. Regulatory expectations are therefore converging with ethical ones: clinicians cannot responsibly (and legally) govern and explain what they do not comprehend.

A typical day in clinical practice involves the acquisition and interpretation of vast quantities of information. Doctors continuously compare signs and symptoms against memorised models of physiological and pathological processes. Increasingly, technological systems (algorithms, decision-support tools, and automated analyses) form part of this interpretive landscape. These processes deserve the same level of scrutiny and understanding as biological mechanisms if clinicians are to explain them responsibly (and again, in a legally sound manner) to patients.
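To make this concrete, the sketch below implements a deliberately simple, transparent decision-support rule: the widely published CHA2DS2-VASc stroke-risk score used in atrial fibrillation. This is my own illustration rather than anything drawn from the curriculum or a particular hospital system, and the `Patient` fields and function name are assumptions; the point is that even a modest algorithm has inputs, scope, and failure modes a clinician should be able to trace before relaying its output to a patient.

```python
# Illustrative sketch only: a transparent, rule-based decision-support tool
# (the published CHA2DS2-VASc stroke-risk criteria for atrial fibrillation).
# Field and function names are hypothetical, not from any real hospital system.
from dataclasses import dataclass


@dataclass
class Patient:
    age: int
    female: bool
    heart_failure: bool
    hypertension: bool
    diabetes: bool
    prior_stroke_or_tia: bool
    vascular_disease: bool


def cha2ds2_vasc(p: Patient) -> int:
    """Return the CHA2DS2-VASc score (0-9); higher totals indicate higher stroke risk."""
    score = 0
    score += 1 if p.heart_failure else 0                       # C: congestive heart failure
    score += 1 if p.hypertension else 0                        # H: hypertension
    score += 2 if p.age >= 75 else (1 if p.age >= 65 else 0)   # A2 / A: age bands
    score += 1 if p.diabetes else 0                            # D: diabetes mellitus
    score += 2 if p.prior_stroke_or_tia else 0                 # S2: prior stroke/TIA/thromboembolism
    score += 1 if p.vascular_disease else 0                    # V: vascular disease
    score += 1 if p.female else 0                              # Sc: sex category (female)
    return score


# Every point in the total can be traced back to a documented clinical finding -
# exactly the kind of explanation a patient is entitled to ask for.
example = Patient(age=78, female=True, heart_failure=False, hypertension=True,
                  diabetes=False, prior_stroke_or_tia=False, vascular_disease=False)
print(cha2ds2_vasc(example))  # 4 (age >= 75, hypertension, female)
```

Most commercial AI-enabled tools are far less transparent than this, which is precisely why the interpretive burden on the clinician grows rather than shrinks.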

Data privacy has also become central to the delivery of just and confidential care, another pillar of a doctor’s professional practice. As digital systems proliferate, doctors cannot realistically guarantee confidentiality without a working understanding of the data infrastructure their care depends on and the risks inherent in data security.

Doctors are central to the ethics of AI

Much of the contemporary discourse around AI ethics implicitly shifts responsibility onto those who apply technology, rather than those who build it. As discussed in The Ethical Agency of AI Developers (Griffin et al. 2023), ethical accountability does not vanish once a system leaves the hands of its developers; it is exercised most acutely at the point of deployment.

Doctors therefore occupy a central position in the ethical application of healthcare technology and, in my opinion, bear more responsibility than the developers themselves. From the first day of medical school, students are taught to practise beneficence and to mitigate harm. Fulfilling these obligations in a technologically mediated environment is only possible if clinicians possess a working understanding of the tools they are using.

Decisions such as employing AI for clinical documentation, permitting patient data to train inference models, or choosing to partner with private technology companies over public research institutions all carry significant ethical implications. These choices shape trust, equity, and accountability in healthcare worldwide.

Griffin, Tricia A., Brian Patrick Green, and Jos V. M. Welie. 2023. “The Ethical Agency of AI Developers.” AI and Ethics 4: 179–88. https://doi.org/10.1007/s43681-022-00256-3.
NeuroEng Australia. 2025. 14th Australasian Workshop on Neuroengineering and Computational Neuroscience (NeuroEng 2025). https://neuroeng.asn.au/NeuroEng2025.html.
Therapeutic Goods Administration. 2024. Safe and Responsible Artificial Intelligence in Health Care: Legislation and Regulation Review. Australian Government Department of Health and Aged Care. https://www.tga.gov.au/resources/publication/publications/safe-and-responsible-artificial-intelligence-health-care.