How Far Away is AI from Taking Your Job?

June 2024


Thank you to all those who attended our recent talk!

We are excited to share the highlights from our launch event, which focused on the promise and pitfalls of AI in healthcare. Our discussions covered the potential of AI to revolutionise patient care, as well as the ethical considerations needed to ensure it benefits everyone equally.


The Promise of AI in Healthcare

AI has the potential to:

  • Improve patient care and deliver faster diagnoses
  • Enhance diagnostic accuracy
  • Optimise resource allocation
  • Personalise treatment plans

Currently, there is a significant backlog in routine scan reporting, often exceeding three months. AI algorithms offer the potential to enhance diagnostic capabilities by analysing data with precision and speed, reducing errors, and facilitating tailored treatment plans.


AI Pitfalls and Mitigating Bias

Our presentation underscored the importance of addressing biases in AI, drawing on the paper "AI Pitfalls and what not to do: mitigating bias in AI" (British Journal of Radiology, October 2023).

The integration of AI in radiology is growing rapidly, and it is crucial to be aware of potential biases that can impact diagnostic accuracy and patient outcomes. Bias can be introduced at every stage of the model lifecycle:

  1. Problem Definition: Bias can be introduced if not properly accounted for at this stage.
  2. Data Set Selection: Diverse data sets are essential to prevent biased outcomes.
  3. Model Training: Vigilance is needed to address biases during this stage.
  4. Deployment: Real-world application can reveal new biases.
  5. Monitoring: Ongoing evaluation is necessary to address any new or persisting biases (a minimal subgroup-audit sketch follows below).
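As an illustrative aside (not part of the talk or the BJR paper), the monitoring stage can start with something as simple as comparing sensitivity and specificity across patient subgroups. The sketch below uses synthetic data and hypothetical column names:

    # Illustrative only: a minimal subgroup audit, assuming you already have
    # model predictions, true labels, and a demographic attribute per patient.
    # Column names ("sex", "label", "prediction") are hypothetical.
    import numpy as np
    import pandas as pd

    def subgroup_report(df, group_col, label_col="label", pred_col="prediction"):
        """Report sensitivity and specificity for each subgroup in group_col."""
        rows = []
        for group, sub in df.groupby(group_col):
            tp = ((sub[pred_col] == 1) & (sub[label_col] == 1)).sum()
            fn = ((sub[pred_col] == 0) & (sub[label_col] == 1)).sum()
            tn = ((sub[pred_col] == 0) & (sub[label_col] == 0)).sum()
            fp = ((sub[pred_col] == 1) & (sub[label_col] == 0)).sum()
            rows.append({
                group_col: group,
                "n": len(sub),
                "sensitivity": tp / (tp + fn) if (tp + fn) else np.nan,
                "specificity": tn / (tn + fp) if (tn + fp) else np.nan,
            })
        return pd.DataFrame(rows)

    # Synthetic example: a model whose errors are more frequent in one subgroup.
    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "sex": rng.choice(["F", "M"], size=1000),
        "label": rng.integers(0, 2, size=1000),
    })
    error_rate = np.where(df["sex"] == "F", 0.3, 0.1)   # worse performance for one group
    flip = rng.random(1000) < error_rate
    df["prediction"] = np.where(flip, 1 - df["label"], df["label"])
    print(subgroup_report(df, "sex"))

In practice, the subgroups, metrics, and acceptable thresholds should come from the clinical question and the local patient population, not from a toy example like this.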

Case Study: The Epic Sepsis Model

A critical review of the Epic Sepsis Model (ESM), implemented across numerous US hospitals, highlighted significant issues:

  • Low Diagnostic Accuracy: The model failed to identify many patients with sepsis.
  • Data Leakage: The use of antibiotic orders as an input undermined its predictive value, since those orders often reflect clinicians already suspecting sepsis (see the sketch below).
  • Lack of Calibration: The model was not adjusted for different populations and hospital practices.
  • Alert Fatigue: The sheer volume of alerts undermined their effectiveness.

The model identified only 7% of patients whose sepsis was missed by clinicians and failed to flag 67% of sepsis cases, generating alerts for 18% of hospitalised patients.
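To see why the antibiotic-order input is such a problem, here is a deliberately simplified, synthetic sketch (not the actual ESM, its features, or its data): a feature that is a consequence of the outcome inflates apparent performance while adding no early-warning value.

    # Illustrative only: synthetic demonstration of data leakage, where a
    # label-derived input (an antibiotic order placed once sepsis is already
    # suspected) boosts measured AUC without helping early detection.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 5000
    vitals = rng.normal(size=(n, 5))                                # genuinely predictive signals
    sepsis = (vitals[:, 0] + rng.normal(size=n) > 1.5).astype(int)

    # The "leaked" feature is downstream of the outcome: antibiotics tend to be
    # ordered because clinicians already suspect sepsis.
    antibiotic_order = (sepsis & (rng.random(n) < 0.9)).astype(int)

    for name, X in [("vitals only", vitals),
                    ("vitals + antibiotic order", np.column_stack([vitals, antibiotic_order]))]:
        X_tr, X_te, y_tr, y_te = train_test_split(X, sepsis, random_state=0)
        model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
        print(f"{name}: AUC = {auc:.2f}")
    # The leaky model scores higher, but only because it "predicts" sepsis
    # after clinicians have already acted on it.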


Areas for Improvement

To improve AI in healthcare, we must:

  • Collect inclusive data to avoid excluding diverse populations.
  • Ensure transparency and reproducibility, especially for commercial algorithms.
  • Address class imbalance and mitigate harmful hidden signals in datasets (one common mitigation is sketched after this list).
  • Use diverse teams to develop and monitor AI tools.
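On the class-imbalance point, one common (though by no means the only) mitigation is to reweight the rare class during training. The snippet below is a generic scikit-learn sketch on synthetic data, not a method endorsed in the talk:

    # Illustrative only: class weighting for a rare outcome (~5% prevalence).
    # Data, features, and thresholds are synthetic placeholders.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import recall_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 20000
    X = rng.normal(size=(n, 8))
    y = ((X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)) > 2.5).astype(int)   # ~5% positives

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    unweighted = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    weighted = LogisticRegression(max_iter=1000, class_weight="balanced").fit(X_tr, y_tr)

    for name, model in [("unweighted", unweighted), ("class_weight='balanced'", weighted)]:
        sensitivity = recall_score(y_te, model.predict(X_te))   # recall on the rare class
        print(f"{name}: sensitivity = {sensitivity:.2f}")

Weighting trades some specificity for sensitivity, so whether it is appropriate depends on the clinical cost of false alarms, which is exactly the alert-fatigue issue described above.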

How Far Away is AI from Taking Your Job?

AI integration in healthcare faces challenges such as:

  • Navigating complex hospital IT systems.
  • Engaging clinicians and patients.
  • Maintaining continuous model monitoring to detect failures.
  • Human-machine collaboration to improve model performance and prevent automation bias. In one study, a machine learning algorithm for differentiating liver cancer types did not improve pathologists' accuracy when its predictions were correct, but worsened their performance when they were incorrect.

Join Us

We welcome speakers and participants at all levels, from medical students to consultants. If you're interested in speaking or joining the discussion, please contact us at official.brjc@gmail.com or reach out through Facebook and Instagram.


Keep your eyes peeled for our next talk; we hope to see you soon!

For more information, follow us on Instagram.