The #wikipedia of the #prompt for #AI - Top
Scooped by
Gilbert C FAURE
onto Notebook or My Personal Learning Network November 7, 2025 12:58 PM
Scooped by
Gilbert C FAURE
March 16, 4:43 AM
🚨 What if your next doctor... were no longer human?
In the United States, a silent phenomenon is gaining momentum.
More and more patients consult an artificial intelligence before seeing their doctor. 🤖
Today, millions of patients already use AI tools to describe their symptoms, understand a diagnosis, or check a treatment. 📊
According to several studies published in recent years, a growing share of American internet users turn to digital or conversational tools for medical information before consulting a health professional.
Historically, medicine rested on a very clear relationship.
The doctor held the expertise.
The patient placed their trust in the doctor.
But artificial intelligence introduces a third actor into the medical relationship.
An actor available 24 hours a day.
Instantaneous.
And perceived as neutral.
This transformation is gradually shifting medical authority. ⚖️
From an economic standpoint, the implications are immense.
Healthcare accounts for more than 18% of GDP in the United States and nearly 12% in France.
If part of medical information, symptom triage, and patient follow-up is absorbed by AI tools, part of the medical value chain could be radically transformed. 💰
This does not mean doctors will disappear.
But it probably does mean a profound redefinition of their role.
In many cases, AI can already:
• analyze symptoms
• suggest diagnostic leads
• assist with the interpretation of medical imaging
• support the follow-up of chronic diseases
Doctors of course remain the only ones authorized to make a diagnosis and prescribe treatment.
But the line between medical information and medical practice is becoming increasingly blurred. 🔍
So the question is not whether AI will replace doctors.
The real question is:
How will medicine integrate this new technological reality?
Health professionals will have to strengthen what the machine cannot easily reproduce:
clinical expertise
the human relationship
medical judgment
ethical responsibility.
At the same time, health systems will have to integrate AI to:
• reduce costs
• improve access to care
• relieve congestion in medical practices
• optimize patient follow-up. 🚀
What is certain is that patient trust is becoming the real strategic issue.
Trust in the doctor.
Trust in the technology.
Trust in the health system.
So the question is no longer only medical.
It is economic, societal, and strategic.
If medicine does not adapt to this technological revolution, other players will.
And they will move fast.
Very fast.
So let's ask the real question:
Will you consult your doctor first... or your AI? 💬
Scooped by
Gilbert C FAURE
March 14, 8:33 AM
81% of physicians are routinely using AI in their clinical practice, according to a survey by the American Medical Association.
Unregulated use of this new powerful, but not infallible, technology in medicine is a reality now.
It is up to healthcare professionals to ensure responsible adoption. That is why AI competency needs to be part of any healthcare training program.
Link in the comments.
Geisel School of Medicine at Dartmouth IAMSE Dartmouth College AMEE - The International Association for Health Professions Education
Scooped by
Gilbert C FAURE
March 14, 8:30 AM
🛡️ Information monitoring: key takeaways from the Visibrain Meetup 2026
A must-attend event for monitoring professionals, the Visibrain Meetup lived up to its promises. Between on-stage case studies and informal exchanges, the discipline is confirming its deep transformation. Thanks to Nicolas Huguenin and his teams for the invitation.
Here are the 11 key points I took away for navigating the infosphere this year:
1️⃣ Information as a battlefield: Information has become a genuine arena of geopolitical and economic confrontation.
2️⃣ New cartographies: Mastering LinkedIn and TikTok is no longer optional; it is a strategic necessity.
3️⃣ The sovereignty issue: Who do we entrust our data to? The nationality of monitoring tools is becoming a central question.
4️⃣ The tool does not make the monk: A monitoring platform only has value if it is run by trained, well-informed humans.
5️⃣ Mandatory omnipresence: In 2026, monitoring must cover every platform, without exception.
6️⃣ Anticipation is key: Alerting, preparing smartboards... monitoring is set up in advance to be effective on the day.
7️⃣ The X (Twitter, Inc.) paradox: Many brands are leaving the network as advertisers, but none is ceasing to monitor it. That is where crises crystallize, especially in financial communication.
8️⃣ X's strongholds: The network remains the beating heart of gaming, finance, and breaking news.
9️⃣ The BlueSky soufflé: Despite the buzz, engagement there has remained very low.
🔟 LinkedIn as an internal comms tool: A CEO announcement on LinkedIn often has more impact on employees than an internal email.
1️⃣1️⃣ The end of the "safe place": LinkedIn is getting tense, and controversies are increasingly frequent there.
🧐 My reflection: the blind spot of 2026?
One point struck me: almost no one mentioned messaging platforms. Yet Visibrain has been able to monitor Telegram channels for some time now.
Isn't dark social (WhatsApp, Signal, Telegram Messenger) the real blind spot of monitoring this year? How do you anticipate a crisis that spreads in closed loops?
Scooped by
Gilbert C FAURE
March 14, 8:26 AM
1. The "Human + AI" Partnership (Best for Clinicians & Leaders)
The Thought: AI is not a replacement; it’s a "force multiplier" for empathy.
The Angle: Argue that by automating administrative burdens (like clinical documentation which has seen a 70% reduction in workload this year), AI actually returns the doctor to the patient’s bedside.
Key Phrase: "The most important surgical tool in 2026 isn't a robot—it's the data that gives the surgeon clarity before the first incision."
2. From Reactive to Proactive (Best for Tech & Innovation)
The Thought: We are finally moving from "Sick-care" to "Health-care."
The Angle: Highlight how predictive analytics are now moving into system design. Instead of treating a heart attack, we are using AI-driven wearables and EHR data to prevent one six months in advance.
Key Phrase: "In 2026, success in healthcare is measured by the hospital beds that stay empty, thanks to predictive intervention."
3. The Trust & Ethics Pivot (Best for Policy & Ethics)
The Thought: Interoperability and "Model Cards" are the new gold standard.
The Angle: Address the "Diagnostic Dilemma." Mention that as AI becomes embedded in mission-critical workflows, transparency (knowing why an AI made a suggestion) is more valuable than the algorithm itself.
Key Phrase: "Algorithm transparency is the new bedside manner. If we can't explain the 'why' behind the AI, we haven't earned the patient's trust."
Suggested Post Structure:
The Hook: "AI in healthcare has officially moved from the 'pilot' phase to the 'implementation' era."
The Insight: Use one of the angles above. (e.g., "85% of healthcare leaders now prioritize data sharing over shiny new tools.")
The Question: "Do you think the biggest barrier to AI adoption today is the technology itself, or the cultural shift required in our clinics?"
Scooped by
Gilbert C FAURE
March 14, 8:25 AM
A quick reflection during a challenging time.
To stay focused and motivated, I'm starting a 30-day publication challenge on AI in healthcare, sharing ideas, insights, and innovations along the way.
Looking forward to the journey.
✏️ Day 1/30
Is AI going to replace doctors? For healthcare AI globally, China's experience offers an important lesson: the real breakthrough is not autonomous diagnosis, but scalable clinical collaboration between algorithms and physicians.
And I truly believe it's not about machines replacing doctors but about augmenting clinical capacity. In high-volume healthcare systems, where a single hospital may process thousands of scans per day, AI functions as a rapid screening layer that helps clinicians prioritize attention where it matters most.
A study titled 基于深度学习的人工智能胸部CT肺结节检测效能评估 (Performance evaluation of deep-learning-based AI for pulmonary nodule detection on chest CT) reported that AI systems achieved 99.1% sensitivity in detecting lung nodules on CT scans, identifying many small lesions that radiologists might miss during initial screening.
Another clinical study, 人工智能辅助诊断系统在肺结节检测及良恶性判断中的应用价值 (Application value of an AI-assisted diagnosis system in pulmonary nodule detection and benign/malignant assessment), found that AI reached 85.22% overall diagnostic accuracy in distinguishing benign vs malignant nodules. While radiologists still showed higher specificity, the combined AI + physician workflow improved detection rates and reduced reading time.
Across studies and meta-analyses (e.g., Journal of Thoracic Disease), a consistent pattern emerges:
- AI shows very high sensitivity, often 90 to 99%, for detecting abnormalities in CT imaging.
- Human radiologists still provide better specificity and contextual clinical judgment.
- The best outcomes occur when AI is used as decision support rather than replacement.
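Sensitivity and specificity, the two metrics this pattern turns on, are simple ratios over a confusion matrix. A minimal sketch with made-up counts (illustrative only, not figures from the cited studies) showing the typical AI-vs-radiologist tradeoff:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN): share of real nodules caught.
    Specificity = TN / (TN + FP): share of nodule-free scans correctly cleared."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical screener: misses almost nothing (high sensitivity)
# but raises more false alarms (lower specificity) than a human reader.
ai_sens, ai_spec = sensitivity_specificity(tp=99, fn=1, tn=80, fp=20)
dr_sens, dr_spec = sensitivity_specificity(tp=90, fn=10, tn=95, fp=5)

print(f"AI:          sensitivity={ai_sens:.2f}, specificity={ai_spec:.2f}")
print(f"Radiologist: sensitivity={dr_sens:.2f}, specificity={dr_spec:.2f}")
```

Used as a first-pass screen, the high-sensitivity system hands a shorter, enriched worklist to the radiologist, whose higher specificity then filters out the false alarms.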
Scooped by
Gilbert C FAURE
March 14, 3:39 AM
Most researchers waste hours reading papers that never make it into their thesis.
Here is the exact 6-step method I use to read any research paper in 20 minutes without missing anything important.
Step 1 → Read the abstract only (2 mins)
Decide if the paper is even relevant before you commit. 80% of papers get eliminated here.
Step 2 → Read introduction and conclusion (3 mins)
You now understand the full argument without reading the middle.
Step 3 → Scan headings and figures (3 mins)
Figures hold the key findings. Read every caption before anything else.
Step 4 → Read results and discussion (5 mins)
This is the heart of the paper. Highlight anything you might cite.
Step 5 → Check the methodology (4 mins)
Weak methodology means weak findings. Always verify before citing heavily.
Step 6 → Write your 3 sentence summary (3 mins)
What they studied. What they found. How it connects to your research.
This becomes your literature review foundation.
Save this post. You will need it.
Which step are you skipping right now?
♻ Repost this to help a fellow researcher in your network.
Follow Piumsha Mayanthi for more smart research writing tips.
#PhDLife #ResearchSkills #AcademicWriting #LiteratureReview #ThesisTips #MSc #PhD #ResearchTips #AcademicLife #GradSchool #ResearchPaper #StudySmart #ScientificReading #DoctoralStudent #AcademicSuccess
Scooped by
Gilbert C FAURE
March 13, 8:29 AM
Everyone is adding “AI” to healthcare apps right now.
But many times it’s just:
pick a model → add a chatbot → call it AI healthcare.
That approach rarely solves real problems.
Healthcare is too sensitive for hype. Data quality, safety, privacy, and clinical workflows actually matter and many AI health apps ignore those fundamentals. Studies and experts often point out issues like bias in models, unreliable medical advice, and privacy risks when AI is poorly integrated.
What’s more interesting is when AI is used thoughtfully inside the workflow, not just as a feature.
Platforms like Augi Health are a good example of a more practical direction focusing on how AI can support patient engagement and doctor communication rather than pretending AI replaces healthcare professionals.
The difference is subtle but important:
smart integration vs AI for marketing.
If you're curious about where more responsible AI-driven healthcare tools are heading, you can check it out here:
https://lnkd.in/gwfq5ZMn
#AIinHealthcare #DigitalHealth #HealthTech #MedTech #HealthcareInnovation #FutureOfHealthcare #PatientCare #AI #HealthApps #HealthcareTechnology
Scooped by
Gilbert C FAURE
March 13, 8:24 AM
One thing healthcare isn’t short on these days is information.
It’s offered up to clinicians via conferences, webinars, journals, booklets, pamphlets, newsletters, headlines, slide decks, inboxes… it’s an endless stream of updates.
The challenge today isn’t accessing knowledge, it’s processing it.
We are all hearing this more and more: “There’s just too much to keep up with.”
That’s not a motivation problem; it’s a cognitive load problem.
Check out my tips below on how to reduce cognitive load in health education.
__
When you’re ready, here’s how I can help:
- Partner with life science teams to design medical education that delivers insight, not just information.
- Build engagement strategies that use visuals and structure to reduce cognitive load for clinicians and patients.
Scooped by
Gilbert C FAURE
March 13, 4:37 AM
I just read a study that points to a real problem for the future of AI.
Let me explain:
1️⃣ Large AI models (ChatGPT, Llama, etc.) are trained on text and images found on the internet.
2️⃣ But the internet is increasingly filled with AI-generated content (blogs, social networks, YouTube, stock-image banks, etc.)
➡️ A legitimate question:
What will happen when the next generations of AI train on data produced by the previous generations?
To answer it, the researchers took a language model and trained it in a loop on its own outputs.
Generation after generation, the model first loses rare ideas, then converges toward bland, uniform content, and ends up producing nonsense.
Mathematically, they show that this process is inevitable: whatever the model's size, this is what happens if an AI trains on its own data.
So, under this scenario, future models risk becoming less and less able to represent minority ideas, then growing steadily duller... before ending up completely senseless.
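The loss of rare ideas can be illustrated with a toy simulation (my own sketch, not the paper's experiment): a "model" is just a category distribution, and each "generation" re-estimates it from a finite sample of the previous model's output. Rare categories that are never sampled drop to probability zero and can never come back.

```python
import random
from collections import Counter

def retrain(probs, n_samples, rng):
    """'Train' the next model: estimate category frequencies from a finite
    sample of the previous model's output. Categories never sampled get
    probability 0 and are lost forever."""
    cats = list(probs)
    weights = [probs[c] for c in cats]
    counts = Counter(rng.choices(cats, weights=weights, k=n_samples))
    return {c: counts[c] / n_samples for c in cats}

rng = random.Random(42)
# A vocabulary of 100 'ideas': a few common ones, a long tail of rare ones.
raw = {i: (10.0 if i < 5 else 0.1) for i in range(100)}
total = sum(raw.values())
probs = {c: p / total for c, p in raw.items()}

for gen in range(20):
    probs = retrain(probs, n_samples=200, rng=rng)

survivors = sum(1 for p in probs.values() if p > 0)
# The common ideas survive; most of the long tail has gone extinct.
print(f"ideas surviving after 20 generations: {survivors} / 100")
```

The shrinking tail is the toy analogue of the paper's observation: minority content disappears first, long before the output turns into outright gibberish.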
But I imagine AI engineers are already on it.
(Note: the researchers worked on text and on images of handwritten digits, not on photos of dogs. My image below is just an illustration.)
-----------
Refs:
- Shumailov et al. (2024) AI models collapse when trained on recursively generated data. Nature
- Wenger (2024) AI produces gibberish when trained on too much AI-generated data. Nature
Scooped by
Gilbert C FAURE
March 12, 12:04 PM
🚨 32.5% increase in patient preference. Not from a clinician in the loop. Not from the FDA. From AI performance.
A quite interesting survey of 3,000 US adults quantified what actually drives patient trust in medical AI:
→ AI performing above specialist level increased the probability of a patient choosing that visit by 32.5%. That's nearly three times the effect of FDA approval alone.
→ The findings suggest that governance strategies, FDA approval, clinician in the loop, and local validation all increase patient trust. But none comes close to the effect of transparent performance data, which patients almost never receive today.
👉 But here's the deeper question: patients want to know that AI performs at specialist level. Do we actually know how to measure that? "Better than a specialist" sounds reassuring, but what does it mean in practice? Accuracy on a curated test set? Synthetic cases that challenge diagnosis?
Before we can be transparent about AI performance, we need far more comprehensive frameworks for assessing it, including how models communicate what they don't know and their level of uncertainty.
Bracic A, et al. Factors for Patient Trust and Acceptance of Medical Artificial Intelligence. JAMA Netw Open. 2026.
Scooped by
Gilbert C FAURE
March 12, 5:03 AM
Announcing #CognitiveAI at the Département d'Études Cognitives, ENS-PSL Ecole normale supérieure.
Under the impetus of Charlotte Jacquemot and the DEC executive council, I am very pleased to coordinate this initiative together with Alda Mari.
The program is structured around five research axes and brings together many research teams across #philosophy, #linguistics, #neuroscience, #psychology, and #machinelearning, with the aim of fostering interdisciplinary work on the cognitive foundations and implications of AI.
Looking forward to the collaborations and discussions ahead!
more info here: https://lnkd.in/euqe33AP
The Cognitive AI initiative brings together research that approaches AI from multiple, complementary perspectives. Some work focuses on the design and improvement of artificial systems themselves, building on insights from neuroscience and cognitive science. Other research uses AI as a scientific instrument, treating computational models as tools to probe the mechanisms of human cognition and brain function. As AI systems increasingly interact with humans and are deployed in real-world contexts, additional efforts examine human–AI interaction, societal impact, and applications in education and health. Finally, the rise of AI raises foundational philosophical and ethical questions about meaning, agency, responsibility, and the nature of intelligence itself.
These five axes together define a coherent and integrated research landscape. Rather than treating AI as an isolated technology, Cognitive AI research emphasizes continuous dialogue between artificial systems, human cognition, and society. This integrative perspective allows scientific advances to inform responsible innovation, while grounding AI development in robust cognitive theory and empirical evidence.
Scooped by
Gilbert C FAURE
March 11, 12:52 PM
Artificial intelligence is often presented as a near-magical solution for medicine: algorithms that will diagnose diseases, discover drugs, and personalize treatments. The reality is more complex.
The biggest challenge for AI in medicine is not the technology, but how we talk about it. Overselling AI’s promise while ignoring uncertainty risks eroding public trust in science and medicine.
https://lnkd.in/eC5NA-W2
Grateful to the Public Voices Fellowship, in partnership with Yale and The OpEd Project, for their support.
#OpEd
#PublicVoices
#MedicalAI
Scooped by
Gilbert C FAURE
March 16, 1:53 PM
Can AI Actually "Think" Like a Physician? 🤖💻
Medical logic isn’t a straight line
it’s a complex web of history, timing, and pathophysiology.
Yet, most AI models today treat medical data as a mere mathematical puzzle, ignoring the "Clinical Thread" that connects the dots.
🚩 The Problem: The Diagnostic Vacuum
Monitoring AI models reveals a dangerous trend: they often treat the "Principal Complaint" in a vacuum.
Take this scenario:
Case Scenario
1️⃣ Past History: Gastritis (1 month ago).
2️⃣ Recent History: Conjunctivitis,
Red flag + knee edema (1 week ago).
3️⃣ Current Complaint: Eye infection + Reiter's Syndrome triad (Reactive Arthritis).
Medical Logic
🔍 "Can't see, can't pee, can't climb a tree" + history of GE (Salmonella) or UTI within the past few months.
💡 Medical management must include treatment of the arthritis (NSAIDs, antibiotics, corticosteroids), not treatment of the current eye infection only.
⛔Standard AI often fails here because it’s programmed for "Pattern Matching" rather than "Clinical Reasoning."
It struggles to link a gastrointestinal trigger to a delayed multi-system syndrome.
The Solution:
Clinical Intelligence Infrastructure 🧠
Physicians don't just "prompt" AI; we engineer its logic
using a holistic, patient-centered hypothetical structure:
✅ Evidence-Based Frameworks: We integrate formal clinical guidelines—including USPSTF, GOLD, and ABCDE—directly into the model’s reasoning engine.
✅ CDL Complexity Scaling (0-5): Our proprietary scale forces the AI to rank Differential Diagnoses from the "Worst-Case Scenario" to the least likely, ensuring no "Red Flags" are missed.
✅ Physician-in-the-Loop (PITL): We use Human Incident Transcription to teach models the nuance of medical ethics and logic, moving beyond simple mathematical imputation.
✨✨
🤖🚩 How We Monitor the "Unsupervised" ML 🧪
🔍Most AI models are built on Aggregation Mathematical Imputation.
They are great at math, but illiterate in Medical Ethics.
🔴 When an unsupervised model processes a complex case, like a patient with past gastritis now presenting with Reiter's Syndrome,
it often ignores the "Diagnostic Thread" in favor of the most statistically "loud" symptom.
In terms of clinical precision, we enforce:
🔍 Clinical Drift Detection: Auditing the Latent Path to ensure the AI reached a conclusion for the right medical reasons.
🛡️ Zero-Hallucination Policy: Every output is cross-referenced against formal medical governance before it ever reaches a decision-maker.
Yes, medicine is complex, but AI's reasoning shouldn't be a black box.
So how do you monitor a black box?
The answer is that you don't. You wrap it in a Physician-Led Validation Layer.
🤖 Is your healthcare AI clinically safe, or just "statistically likely"?
Let's bridge the gap together.
Clinical Methodology
#MedicalAI #HealthTech #ClinicalLogic #AIethics #DigitalHealth #NOVIQ #PhysicianLedAI #HealthData
Scooped by
Gilbert C FAURE
March 16, 4:42 AM
Most parents don’t realise this.
The version of TikTok Chinese kids use is not the same one our kids get.
Not even close.
In China, if you’re under 14:
• The app shuts down at 10pm
• Usage is capped at 40 minutes a day
• The algorithm mainly shows science, museums, and educational content
Now compare that with the version our kids get:
• Infinite scroll.
• Endless dopamine loops.
• No off switch.
Same company: ByteDance runs Douyin in China and TikTok everywhere else.
For Chinese kids:
• Youth Mode enabled by default
• Real-name verification
• Educational feeds
• Hard time limits
For other kids all over the world?
An algorithm optimized for maximum watch time.
You don’t have to agree with China’s internet policies to notice something interesting here: The people who built these algorithms know exactly what they do to developing brains.
And they chose to protect their own kids from it.
The real problem isn’t just lack of regulation all over the world. It’s also lack of tech literacy among everyday non-tech people.
Engineers understand these systems. Many tech executives do too.
But most parents were never taught how algorithms actually work. So billions of kids are growing up inside systems their parents don’t understand.
And with AI-driven feeds, that gap is only getting wider.
Question:
Should platforms be responsible for protecting kids from their own algorithms?
________________
♻️ share this post to spread awareness.
➕ Follow Tiana Zivkovic for more tech/AI posts.
Scooped by
Gilbert C FAURE
March 14, 8:32 AM
Microsoft launches Copilot Health, its health-focused chatbot, capable of analyzing data from your connected devices (watch, scale...), helping you understand your test results and lab reports, and answering general health and wellness questions.
> https://lnkd.in/e-ZisUmT
This new chatbot builds on a collaboration with institutions and a panel of 230 health professionals across 40 countries, as well as on the history of the 50 million health questions asked of Copilot and Bing every day.
> https://lnkd.in/eYBAB_uf
Copilot Health complies with the ISO/IEC 42001 certification and is part of a broader initiative that includes the MAI-DxO diagnostic orchestration system. Microsoft's ambition is to develop a medical superintelligence. #GenAI
> https://lnkd.in/eVVHbF-E
Scooped by
Gilbert C FAURE
March 14, 8:28 AM
🩺 AI Can Assist Healthcare — But It Can’t Replace Healthcare Professionals
Artificial Intelligence is transforming healthcare. From medical imaging analysis to predictive diagnostics, AI is helping doctors make faster and more informed decisions. But one truth remains clear:
AI cannot replace healthcare professionals. It learns from them.
Every medical AI system depends on the knowledge, experience, and judgment of doctors, nurses, pharmacists, and clinical experts. Without their expertise, AI models simply cannot function safely or accurately.
Here’s why healthcare professionals remain essential in the AI era:
🔬 Training AI with Medical Annotation
Medical experts label and annotate clinical data such as X-rays, CT scans, pathology slides, EHR notes, and prescriptions. These annotations teach AI what a disease looks like, how conditions progress, and how treatments work.
📊 Clinical Validation & Evaluation
AI models must be evaluated by clinicians to verify that predictions are accurate and safe for real patients. Doctors help identify errors, biases, and edge cases that algorithms alone cannot detect.
🧠 Clinical Judgment & Context
Medicine is not just pattern recognition. It involves patient history, symptoms, emotions, ethical decisions, and complex case interpretation. Human judgment remains irreplaceable.
🤝 Human Connection in Care
Patients need empathy, trust, and communication — something AI cannot truly provide.
The future of healthcare is not AI vs Doctors.
It’s AI + Doctors working together.
When clinicians collaborate with AI developers through data annotation, model training, and clinical evaluation, we build smarter and safer medical systems that support better patient care.
As someone working with medical AI data annotation and training, I strongly believe that healthcare professionals will always remain the core intelligence behind medical AI.
Let’s build technology that empowers clinicians — not replaces them.
#HealthcareAI #MedicalAI #DataAnnotation #ClinicalAI #AIinHealthcare #DigitalHealth #AITraining #HealthTech #FutureOfHealthcare
Scooped by
Gilbert C FAURE
March 14, 8:26 AM
The transition of artificial intelligence in healthcare from a futuristic concept to a daily reality is happening faster than many expected. While some clinicians still view the technology with a healthy dose of skepticism—citing concerns over privacy, displacement of judgment, and systemic bias—early adopters are proving that AI isn’t here to replace doctors. Instead, it’s here to act as the ultimate assistant.
By automating the mundane and sharpening diagnostic accuracy, AI is allowing healthcare providers to get back to what they do best: caring for patients. Here are five groundbreaking ways AI is currently transforming the medical landscape.
https://lnkd.in/gdag5GHU
Scooped by
Gilbert C FAURE
March 14, 8:24 AM
[🧠 WALDSEEMÜLLER, DIFFERENTLY]
🗺️ I am currently working on translating Martin Waldseemüller's world map of 1507, a map famous (notably) for being the first on which the name "America" appears.
📚 While the "America" label is a major historical fact (regarded, in the United States of America and from a colonial standpoint, as the continent's birth certificate), and while the map, based in part on the work of Ptolemy (then being rediscovered in the West), displays a striking modernity... the rest of the map also tells another story, and it is a pity we pay less attention to it 😉
Many annotations and toponyms tie this map to the Middle Ages, so thoroughly do the sources drawn on and the commentary come from that period (and therefore, or also, from Antiquity!). That is entirely logical, since we are "only" in 1507, and historical classifications impose sharp boundaries on fuzzy objects. On Waldseemüller's map we find a great many (in fact, 80%) anthropological notes, historical facts, and medieval toponyms. And of course, a magnificent bestiary!
🦄 So it must be said and said again: dragons, cynocephali, leviathans, unicorns, and other creatures can all indeed be found on maps in the Renaissance (and until quite late)!
A magnificent example of aggregating the most recent discoveries of the era onto (very) (very) old information.
🥳 A small anthology of commentaries translated yesterday!
(And for those wondering whether this translation will lead to a new map in the 𝑪𝒂𝒓𝒕𝒐 𝑮𝒓𝒂𝒑𝒉𝒊𝒆 series: perhaps, but it is not certain. The elephant in the room, ASIA, remains to be translated, and that generally ends in bitter failure.)
Scooped by
Gilbert C FAURE
March 13, 2:15 PM
A very interesting new review from a Chinese team maps where AI agents are heading in healthcare and, more importantly, how we should evaluate them.
The paper goes beyond the usual excitement around diagnosis and automation and asks the harder questions: safety, trust, controllability, humanistic care, and real clinical usefulness. In other words, the future of agentic artificial intelligence in medicine will not be defined only by what agents can do, but by whether they can be deployed responsibly inside real workflows (with proper evaluation and governance).
On China’s progress in AI agents in healthcare, the trend is real and strategically important. Tsinghua University officially inaugurated its AI Agent Hospital in April 2025, with pilot operations planned at Beijing Tsinghua Changgung Hospital and its internet hospital, beginning in areas such as general practice, ophthalmology, radiology, and respiratory medicine.
Separately, Chinese authorities said in late 2025 that by 2030 AI-assisted diagnosis and treatment should become broadly available across primary-level medical institutions, and hospitals are expected to widely adopt AI for imaging, decision support, triage, and follow-up. At the same time, a Nature Medicine commentary warned that the rapid hospital deployment of DeepSeek in China has created a regulatory gray zone, which suggests China is moving fast on implementation, sometimes faster than governance catches up (https://lnkd.in/evfzJemp).
📄 The npj Artificial Intelligence paper - AI agent in healthcare: applications, evaluations, and future directions:
https://lnkd.in/ePzCQnYB
#ArtificialIntelligence #AI #AgenticAI #HealthcareAI #DigitalHealth #ClinicalAI #LargeLanguageModels #LLM #HealthTech #AIinHealthcare #PatientCare #ChinaTech
Scooped by
Gilbert C FAURE
March 13, 8:28 AM
3 key factors shaping patient choice and trust in medical AI:
1) Medical AI performance at specialist level (AMCE, 0.248 [95%CI, 0.234 to 0.262]) or above specialist level (AMCE, 0.325 [95%CI, 0.310 to 0.339]) was the most significant factor associated with shaping patient choice and trust (pages 6 & 8). Above-specialist performance was nearly 3 times as important as FDA approval (page 6).
2) Clinician oversight of medical AI (AMCE, 0.184 [95%CI, 0.173 to 0.195]) was the second most significant factor associated with shaping patient choice and trust (pages 6 & 9). "Meaningfully, clinician presence was associated with a greater change in patient trust and choice than any individual form of governance. However, effective clinician governance presents substantial challenges: not only are clinicians limited in their ability effectively to oversee medical AI performance, but the availability of adequately trained clinicians is limited" (page 9). Clinical oversight of medical AI is more important than FDA approval.
3) FDA approval (AMCE, 0.111 [95%CI, 0.101 to 0.121]) or national-level validation of medical AI via the Mayo Clinic (AMCE, 0.111 [95%CI, 0.101 to 0.121]), rather than local hospital validation, was the third most significant factor associated with shaping patient choice and trust (page 6).
Ana Bracic, Kayte Spector-Bagdady, Sophie Towle, Rina Zhang, Cornelius A. James, and W. Nicholson Price II, Factors for Patient Trust and Acceptance of Medical Artificial Intelligence, (2026) 9(3) JAMA Network Open e260815, published online 5 March 2026, https://lnkd.in/g4uR3wug
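The AMCE figures in this post come from a conjoint (discrete-choice) experiment: an average marginal component effect is the change in the probability a profile is chosen when one attribute changes level, averaging over the other randomized attributes. As a rough illustration of what that measures, here is a minimal sketch on simulated data; the attribute names and effect sizes are invented for the example, not taken from the paper:

```python
import random

random.seed(0)
n = 20_000
# Hypothetical forced-choice data: each row is one medical-AI profile shown
# to a respondent; `chosen` is 1 if that profile was picked.
specialist = [random.randint(0, 1) for _ in range(n)]  # 1 = specialist-level performance
oversight = [random.randint(0, 1) for _ in range(n)]   # 1 = clinician oversees the AI
# Choice probability rises with each attribute (illustrative effect sizes).
chosen = [int(random.random() < 0.35 + 0.25 * s + 0.18 * o)
          for s, o in zip(specialist, oversight)]

def amce(attr, chosen):
    """Difference in mean choice rate between the two attribute levels,
    averaging over the other randomized attributes."""
    hi = [c for a, c in zip(attr, chosen) if a == 1]
    lo = [c for a, c in zip(attr, chosen) if a == 0]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

print(f"AMCE, specialist-level performance: {amce(specialist, chosen):.3f}")
print(f"AMCE, clinician oversight:          {amce(oversight, chosen):.3f}")
```

Because assignment is random, the two estimates recover the simulated effects (about 0.25 and 0.18), which is how the paper can rank performance, oversight, and FDA approval on a common probability scale.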
|
|
Scooped by
Gilbert C FAURE
March 12, 1:51 PM
|
Delighted to share the first-of-its-kind prospective feasibility study of conversational AI in real clinical workflows: https://lnkd.in/e_zpBmvR
Brought to you by a Beth Israel Deaconess Medical Center and Google research collaboration.
Design:
-100 patients with urgent care concerns interacted with Google’s AMIE prior to presenting to their PCP at BIDMC.
-AMIE took a clinical history and presented possible diagnoses to patients.
-Board certified internal medicine physicians provided continuous safety oversight of patient-AMIE interactions and were instructed to stop interactions based on prespecified safety criteria.
-AMIE transcript and summary of the case were provided to the PCP prior to the urgent care appointment.
-Eight weeks were allowed for case maturity, followed by chart review to determine the ground-truth diagnosis for each patient presentation.
Primary Objectives: Safety and quality of AMIE conversations
Secondary Objectives: AMIE diagnostic and management plan performance
Results:
-Zero conversation safety stops across all patient-AMIE interactions.
-Patient attitudes toward AI improved significantly after interacting with AMIE.
-Strong conversational quality ratings by patients and board certified internal medicine physicians.
-With access to AMIE’s output, PCPs reported possible behavior change in nearly 60% of cases and increased visit preparedness in 75% of cases.
-AMIE had 75% top-3 diagnostic accuracy and included the correct diagnosis in its differential in 90% of cases.
-The quality of AMIE's differential diagnoses and the appropriateness and safety of its management plans were similar to the PCPs'.
-PCPs won on management plan practicality and cost-effectiveness.
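The top-3 accuracy and differential-coverage figures above are standard top-k metrics: the fraction of cases whose ground-truth diagnosis appears among the model's k highest-ranked candidates. A minimal sketch with invented cases (not the study's data):

```python
# Hypothetical chart-review data: each case pairs the AI's ranked
# differential with the ground-truth diagnosis established at follow-up.
cases = [
    (["pneumonia", "bronchitis", "covid-19"], "pneumonia"),
    (["uti", "pyelonephritis", "cystitis"], "pyelonephritis"),
    (["migraine", "tension headache", "sinusitis"], "cluster headache"),
    (["gerd", "gastritis", "ulcer"], "gastritis"),
]

def top_k_accuracy(cases, k):
    """Fraction of cases whose ground truth appears in the top-k ranked list."""
    hits = sum(truth in ranked[:k] for ranked, truth in cases)
    return hits / len(cases)

print(top_k_accuracy(cases, 3))  # 0.75: 3 of the 4 ground truths are in the top 3
print(top_k_accuracy(cases, 1))  # 0.25: only the first case is ranked #1
```

The same counting, applied to AMIE's full differential rather than a truncated top-k list, gives the 90% "correct diagnosis included in the differential" figure.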
Implications:
-The deployment of AMIE into a high volume clinical workflow was practical and safe.
-The safe presentation of potential diagnoses to patients shifts the patient-AI interaction from simple information gathering to personalized collaboration and counseling.
-Patients highly value confidentiality and trust - repeated safe interactions with AI will be key to building a trustworthy relationship.
-PCPs had a significant context advantage: access to medical charts, the physical exam, and the AMIE transcript. AMIE still performed similarly on differential diagnosis quality and on management plan appropriateness and safety.
-Future iterations of AI in real clinical workflows will likely include reasoning with multimodal data: https://lnkd.in/ebWMZD2v.
-Robust safety oversight will allow stratification of which cases are suitable for human-on-the-loop rather than human-in-the-loop review.
Huge thanks to the amazing co-authors and supporters of this work!
|
|
Scooped by
Gilbert C FAURE
March 12, 5:58 AM
|
🦾 Amazon Health AI launches: what if we stopped staring at the finger when someone is pointing at the moon?
Amazon has just launched "Health AI".
Everyone is focused on the chatbot: does it explain blood test results well? Can it renew a prescription? Is it reliable?
Honestly, that is almost beside the point. Here is why 👇
By assembling One Medical (care), Amazon Pharmacy (medications), and AWS (infrastructure), only one piece of the puzzle was missing to close the loop: routing.
That is exactly the role of this new AI. It does not just answer questions; it orchestrates a flow.
Symptom → AI → Consultation → Prescription → Delivery.
All without ever leaving the Amazon ecosystem.
This is where it gets fascinating / dizzying / unsettling.
In retail, Amazon never tried to manufacture every product in the world. It sought to become the mandatory interface between customers and brands.
In healthcare, the strategy looks identical. Amazon does not want to replace doctors. It wants to become the single entry point to the system.
Hence my hypothesis: what if Amazon were becoming the world's first "invisible insurer"?
Not an insurer that signs contracts, but an actor that, by controlling patient access, clinical routing, and the data, ends up controlling the risk itself.
In the US and in many other countries, whoever steers the care pathway steers the economics of healthcare. Full stop.
I am curious to hear what you think. Is this a gadget, or a shift in the model?
François Veron, Pascal BECACHE, Nicolas Schneider, Pierre Roux de Lusignan, Grégoire Pigné, Dr Solène Vo Quang, Mathilde Merckx, Samir Harfacha, MD, Dr Amine Korchi
And thanks to Anca for the initial share ;)
|
|
Scooped by
Gilbert C FAURE
March 11, 8:54 AM
|
Numeric risk comparisons anchored in patient context may improve shared decision-making and help patients interpret clinical choices.
JAMA, Journal of the American Medical Association
https://lnkd.in/gKKv4d-n