AI Patient Practice Built for the AMC Clinical Exam
AI-powered clinical practice tools are becoming more common. You may have seen platforms that let you chat with an AI patient, answer questions about a case, or work through OSCE-style scenarios on screen. Most of these tools are built for medical students or designed around exam formats from other countries: USMLE Step 2 CS, PLAB 2, MRCP PACES.
If you are preparing for the AMC Clinical Exam specifically, you need a tool built for that exam. The scoring criteria, the station format, the clinical content, and the consultation style expected by AMC examiners are distinct from what other exams require.
What makes AMC-specific AI practice different
Generic AI OSCE tools typically score your performance against frameworks like Calgary-Cambridge or a universal communication skills checklist. These are good frameworks, but they are not what AMC examiners use. The AMC Clinical Exam has its own assessment domains covering clinical knowledge, procedural skills, communication, diagnostic reasoning, patient safety, and professional behaviour.
BlitzBuddy scores your practice sessions against these AMC-specific domains. When you finish a station, your assessment tells you how you performed on the criteria that actually matter for this exam, not a generic OSCE rubric adapted from another system.
The station content is also AMC-specific. Each station is mapped to the AMC Clinical Exam blueprint, covering medicine, surgery, women's health, paediatrics, and mental health across community and hospital settings. The scenarios, created and validated by a doctor who sat the exam herself, reflect the kinds of presentations that appear in the AMC exam.
How the AI patient works
BlitzBuddy uses voice, not text. You speak to the AI patient and it responds out loud, creating a realistic consultation experience.
The AI patient follows strict rules designed to simulate a real patient encounter. It only reveals information that is explicitly in the patient profile. If the profile says the patient has a family history of diabetes, the AI will mention it when you ask about family history. But it will not volunteer that information unprompted, and it will not invent details that are not in the brief.
Responses are short and natural: one to two sentences, the way a real patient speaks. The AI does not deliver medical monologues or use clinical terminology unless the patient character would realistically do so.
This means your practice sessions reward the same skills the exam rewards: asking the right questions, in the right order, and responding appropriately to what the patient tells you.
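The information-gating rule described above can be sketched in a few lines. This is an illustrative model only, not BlitzBuddy's actual implementation: the profile structure, topic keys, and function names are assumptions made for the example.

```python
# Hypothetical sketch of the information-gating rule: facts are keyed
# by history-taking topic, and the patient reveals a fact only when the
# candidate asks about that topic. Nothing is volunteered or invented.

PATIENT_PROFILE = {
    "family_history": "My mother has type 2 diabetes.",
    "smoking": "I quit smoking about five years ago.",
}

def patient_reply(topic: str, profile: dict) -> str:
    """Reveal a fact only if the brief contains one for the asked topic."""
    if topic in profile:
        return profile[topic]        # reveal only what is in the brief
    return "No, nothing like that."  # never invent details

print(patient_reply("family_history", PATIENT_PROFILE))  # in the brief
print(patient_reply("allergies", PATIENT_PROFILE))       # not in the brief
```

The key property is the second branch: a topic absent from the brief produces a plain denial rather than a fabricated detail, which is what keeps the practice honest.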
The voice pipeline
The technical setup behind BlitzBuddy is purpose-built for low-latency voice conversation. Your speech is transcribed in real time, processed by the AI patient, and the response is delivered back as natural speech. The total round-trip is fast enough that the conversation feels natural, not like you are waiting for a chatbot to think.
This matters because consultation fluency depends on rhythm. If there is a five-second pause after every question, you cannot build the kind of conversational flow that the exam demands. The voice pipeline is optimised to keep that rhythm intact.
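The three-stage loop can be pictured as follows. The stage functions here are stubs standing in for real speech-to-text, language-model, and text-to-speech services; the function names and the measurement approach are assumptions for illustration, not the actual pipeline.

```python
import time

# Illustrative sketch of one conversational turn: transcribe the
# candidate's speech, generate the AI patient's reply, synthesize it
# back to audio, and measure the total round-trip latency.

def transcribe(audio: bytes) -> str:          # STT stage (stub)
    return "Do you have any allergies?"

def patient_model(question: str) -> str:      # AI patient stage (stub)
    return "No, no allergies that I know of."

def synthesize(text: str) -> bytes:           # TTS stage (stub)
    return text.encode("utf-8")

def round_trip(audio: bytes) -> tuple[bytes, float]:
    """Run one turn through all three stages and time it."""
    start = time.perf_counter()
    reply_audio = synthesize(patient_model(transcribe(audio)))
    return reply_audio, time.perf_counter() - start

reply, latency = round_trip(b"...candidate speech...")
```

With real services each stage adds network and compute delay, so a production pipeline streams audio through the stages rather than running them strictly in sequence; keeping the measured `latency` well under a second per turn is what preserves conversational rhythm.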
Post-session coaching
After each practice session, you get two things: a scored assessment and a gold-standard coaching demo.
The assessment breaks down your performance across AMC domains, showing you exactly where you scored well and where you need improvement. Over multiple sessions, you can track which domains are consistently strong and which need focused attention.
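Tracking domains across sessions amounts to aggregating per-domain scores over time. A minimal sketch, assuming a 0-100 scale and dictionary-per-session records (both illustrative, not BlitzBuddy's actual data model):

```python
from collections import defaultdict
from statistics import mean

# Hypothetical sketch: average each assessment domain's score across
# all recorded practice sessions to spot persistent weak areas.

def domain_averages(sessions: list[dict]) -> dict:
    """Return the mean score per domain over all sessions."""
    totals = defaultdict(list)
    for session in sessions:
        for domain, score in session.items():
            totals[domain].append(score)
    return {domain: mean(scores) for domain, scores in totals.items()}

sessions = [
    {"communication": 70, "diagnostic_reasoning": 55, "patient_safety": 80},
    {"communication": 75, "diagnostic_reasoning": 65, "patient_safety": 78},
]
averages = domain_averages(sessions)
# A domain whose average stays low across sessions is the one that
# needs focused attention before exam day.
```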
The coaching demo shows how an ideal candidate would run the same station from start to finish. You can compare your approach against that standard at every step: your opening, your questioning sequence, your examination discussion, and your management plan. Each demo is station-specific, not a generic consultation template.
Doctor-created, exam-aligned content
Every station in the BlitzBuddy library is created by a doctor who prepared for and passed the AMC Clinical Exam. The content is not adapted from a UK OSCE bank or a US Step 2 CS collection. It is written from scratch for the AMC exam, reflecting Australian clinical guidelines and the specific expectations of AMC examiners.
The station library covers the full range of the AMC Clinical Exam blueprint, with new stations added regularly.
Try it yourself
The best way to understand how AI patient practice works is to try a station. Pick a scenario, talk to the AI patient, and see your scored assessment. No credit card required.