23 Dec 2025 — Six short items on AI that matter for clinicians. These cover major clinical advances, genetic discovery powered by AI, regulatory shifts, federal action you can influence, executive expectations for 2026, and legal risk from whistleblowers. Scan the bullets, then jump to the sections you need.
- AI boosts neuromuscular diagnosis and shortens EMG time
- Genetics + AI map 166 loci for aortic stenosis
- 2026: pilots must prove clinical results and ROI
- HHS RFI — an open chance to shape clinical AI policy
- Administration proposes dropping AI model‑card rule
- Whistleblowers and legal risk from clinical AI
Have suggestions? Reply to this email.
AI boosts neuromuscular diagnosis to 99.5% and can cut test time to 5 seconds
AI tools now match or beat experts on several neuromuscular tasks. EMG models in published reviews report accuracies from about 67% up to 99.5% for conditions such as ALS and myopathy. Applied to neuromuscular ultrasound, AI reaches >90% accuracy for nerve entrapment and ~87% for inflammatory myopathies, per a Cleveland Clinic summary. Video analysis can pick up facial signs of myasthenia gravis and has outperformed experts in some tests. Experimental systems can produce accurate needle EMG reads in five seconds or less, shortening an uncomfortable test for patients. One caution: early models trained on narrow data sets underperformed in small subgroups, so diverse training data is essential to avoid equity gaps (see the sketch below). AI imaging can match radiologists on some muscle MRI tasks, which supports wider deployment in low‑resource settings.
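The equity caution deserves a concrete illustration. Below is a minimal, hypothetical Python sketch (synthetic data and invented accuracy figures, not from the studies above) showing how a headline accuracy can hide a weak subgroup: the overall number stays high because the underrepresented group is small.

```python
# Minimal sketch: check whether a diagnostic model that looks strong
# overall hides weak performance in a small subgroup. All names and
# numbers here are illustrative, not from any published study.
import numpy as np
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def evaluate_by_subgroup(y_true, y_pred, groups):
    """Report overall accuracy, then accuracy within each subgroup."""
    print(f"overall: {accuracy_score(y_true, y_pred):.3f}")
    for g in np.unique(groups):
        mask = groups == g
        acc = accuracy_score(y_true[mask], y_pred[mask])
        print(f"  {g} (n={mask.sum()}): {acc:.3f}")

# Simulate 1,000 patients: 95% from the majority population the model
# was trained on, 5% from an underrepresented group.
n = 1000
groups = np.where(rng.random(n) < 0.95, "majority", "minority")
y_true = rng.integers(0, 2, n)

# Hypothetical model: 95% accurate on the majority, 70% on the minority.
correct_rate = np.where(groups == "majority", 0.95, 0.70)
flip = rng.random(n) > correct_rate
y_pred = np.where(flip, 1 - y_true, y_true)

evaluate_by_subgroup(y_true, y_pred, groups)
```

On this simulated data the overall accuracy prints near 94% while the minority subgroup sits near 70%, which is exactly the gap a subgroup-stratified report would catch and a single headline metric would not.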
Genetics and AI found 166 gene sites linked to aortic stenosis — new paths for early detection
Researchers used AI to extract valve measurements from roughly 60,000 UK Biobank cardiac MRIs and combined those traits with very large GWAS to find disease links. GWAS of the AI‑derived continuous valve traits yielded 61 loci; a disease GWAS meta‑analysis (~40,000 cases, ~1.5 million controls) found 91; multi‑trait analysis brought the total to 166 loci tied to valve function and aortic stenosis. Genetic correlation between AI‑measured valve traits in people without disease and clinical aortic stenosis was strong (r≈0.50–0.64), supporting shared biology and possible drug targets in lipid and phosphate pathways. UCSF and Broad Institute summaries note that this creates a large resource for target discovery, but clinical testing is needed before it changes care.
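To make the r≈0.50–0.64 figure concrete: genetic correlation asks whether the same variants push both a trait and a disease in the same direction. The toy simulation below shows the intuition only; the numbers are invented, and real studies estimate genetic correlation with methods such as LD score regression, not a raw Pearson correlation on effect estimates.

```python
# Toy intuition for genetic correlation: if the same variants shift both
# an AI-measured valve trait and aortic stenosis risk the same way,
# their per-variant effect sizes correlate across the genome.
# Purely simulated; not the study's method.
import numpy as np

rng = np.random.default_rng(42)
n_variants = 5000
r_g = 0.6  # assumed true genetic correlation, echoing the reported 0.50-0.64

# Draw correlated true effects: column 0 = valve trait, column 1 = disease.
cov = [[1.0, r_g], [r_g, 1.0]]
true_effects = rng.multivariate_normal([0.0, 0.0], cov, size=n_variants)

# Each GWAS observes the effects with sampling noise.
noise = 0.3
obs = true_effects + rng.normal(0.0, noise, size=(n_variants, 2))

est = np.corrcoef(obs[:, 0], obs[:, 1])[0, 1]
print(f"observed effect-size correlation: {est:.2f}")  # attenuated toward 0 by noise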
2026: AI moves past pilots and must prove real results in clinics
Executives expect 2026 to be the year AI tools move from pilots to measurable, governed systems. Regulators and payers (CMS ACCESS, FDA programs) are nudging adoption, while internal use of more agentic AI and orchestrated workflows is rising. Providers should tie pilots to clear metrics such as cost, readmission, and ED use, and build governance, human escape hatches, and integration plans before scaling. Back‑office automation (coding, claims, denials) is highlighted as a safe place to get fast ROI while clinical safety data accumulates.
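As one example of tying a pilot to a clear metric, the sketch below compares readmission counts before and during a hypothetical pilot with a simple two-proportion z-test. The counts are invented, and a real evaluation would also adjust for case mix and seasonality; this only shows the shape of the check.

```python
# Minimal sketch: compare 30-day readmission rates at baseline vs during
# an AI pilot with a two-proportion z-test. Counts are invented.
from math import sqrt
from statistics import NormalDist

def two_proportion_z(x1, n1, x2, n2):
    """Return (rate1, rate2, z, two-sided p) for x events out of n patients."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p1, p2, z, p_value

# Hypothetical pilot: 180/1500 readmissions at baseline, 140/1480 during pilot.
p1, p2, z, p = two_proportion_z(180, 1500, 140, 1480)
print(f"baseline {p1:.1%} vs pilot {p2:.1%}, z={z:.2f}, p={p:.3f}")
```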
HHS asks stakeholders how to speed safe AI into clinical care — tell them what works
HHS has opened a request for information (RFI) to learn how its regulatory, payment, and research tools can speed safe AI into routine care. The RFI asks for input on patient experience, quality, clinician burden, and cost savings. Responses that focus on reimbursement models, safety monitoring, real‑world validation, and deployment barriers are likely to be the most influential. If your team sells or buys clinical AI, submit focused comments now. HHS has signaled that the RFI fits into its OneHHS AI strategy and could shape payment and approval levers.
Administration proposes dropping AI “model card” rule — less paper, less transparency
The Department of Health and Human Services has proposed removing the federal rule that would require developers to publish “model cards,” short documents that summarize how a model was built, how it was tested, and how it performs. The change is framed as cutting developer burden and speeding releases, but critics warn it reduces public oversight of tools that touch patient care. The proposal is open for public comment. If the public rule is dropped, buyers and hospitals should plan to demand equivalent documentation in contracts.
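If the public rule goes away, contracts become the place to pin down what a model card would have disclosed. The sketch below lists illustrative fields; the structure, names, and figures are assumptions loosely modeled on common model-card templates, not the text of the rescinded rule.

```python
# Illustrative fields a hospital might require contractually in place of
# a public model card. Every value here is hypothetical.
model_card = {
    "model_name": "example-sepsis-alert",          # hypothetical product
    "developer": "Example Vendor, Inc.",
    "intended_use": "Early warning for adult inpatient sepsis",
    "out_of_scope": ["pediatrics", "outpatient settings"],
    "training_data": {
        "sources": ["EHR data from 3 health systems"],
        "date_range": "2019-2023",
        "demographics_reported": True,
    },
    "performance": {
        "auroc_overall": 0.87,                     # invented figure
        "subgroup_results_available": True,
    },
    "monitoring": "Quarterly drift and subgroup performance reports",
    "update_policy": "Customer notification before any model change",
}

for field_name, value in model_card.items():
    print(f"{field_name}: {value}")
```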
AI in healthcare: whistleblowers can trigger big legal and business risks — act now
Worker complaints about AI that affects care, billing, or jobs can trigger whistleblower claims under several federal laws. False Claims Act suits may follow if AI‑driven billing or documentation produces false claims. OSHA protections cover employees who report safety or legal violations, and HHS‑OIG pursues healthcare fraud matters arising from AI use. Federal guidance, including the White House AI Bill of Rights, stresses explainable, safe AI and protections for staff. Practical steps: map AI uses tied to billing or care, create clear reporting channels, audit models for bias and explainability, and document tests and fixes to limit legal exposure.
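The "map AI uses" step can start as something as small as a structured inventory. Below is a minimal Python sketch; the fields, the one-year risk rule, and the example records are hypothetical and would need tailoring to local compliance requirements.

```python
# Minimal sketch of an AI-use inventory: one record per deployed tool,
# flagging whether it touches billing or care and when it was last
# audited. Fields and examples are hypothetical.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIUseRecord:
    name: str
    vendor: str
    touches_billing: bool
    touches_care: bool
    last_bias_audit: date | None = None
    open_issues: list[str] = field(default_factory=list)

    def is_high_risk(self) -> bool:
        """Billing- or care-facing tools without a recent audit are high risk."""
        stale = (self.last_bias_audit is None
                 or (date.today() - self.last_bias_audit).days > 365)
        return (self.touches_billing or self.touches_care) and stale

inventory = [
    AIUseRecord("coding-assistant", "Vendor A", True, False, date(2025, 3, 1)),
    AIUseRecord("triage-model", "Vendor B", False, True, None,
                open_issues=["no subgroup report"]),
]

for rec in inventory:
    print(rec.name, "HIGH RISK" if rec.is_high_risk() else "ok")
```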
Conclusion: Clinical AI is now clinical change. Tools can speed diagnosis, extend specialist reach, and uncover biology at scale. At the same time, 2026 will force tools to show impact, regulators and HHS want input on safe scaling, and transparency or legal gaps could shift who adopts what. Clinicians should push for diverse data, measurable outcomes, and contract clauses that require vendor evidence and explainability before wide use.