Notebook or My Personal Learning Network

Scooped by Gilbert C FAURE
December 25, 2013 2:22 PM

Ending the tyranny of the impact factor : Nature Cell Biology : Nature Publishing Group

The San Francisco Declaration on Research Assessment (DORA), an initiative spearheaded by the American Society for Cell Biology, aims to reform research assessment.
Notebook or My Personal Learning Network
a personal notebook since summer 2013, a virtual scrapbook
Scooped by Gilbert C FAURE
October 13, 2013 8:40 AM

This notebook…

is a personal Notebook

 

Thanks to John Dudley for the following tweet:

"If you like interesting snippets on all sorts of subjects relevant to academia, information, the world, highly recommended is @grip54 's collection:"

 

Content curation: the shared memory of scientific and societal monitoring

Gilbert C FAURE's insight:

... designed to collect posts and information I found and want to keep available, but that are not relevant to the other topics I am curating on Scoop.it (on behalf of ASSIM):

 

the most successful being

Immunology, teaching and learning immunology

http://www.scoop.it/t/immunology

and

From flow cytometry to cytomics

http://www.scoop.it/t/from-flow-cytometry-to-cytomics

Immunology and Biotherapies, a page of resources for the DIU 

 http://www.scoop.it/t/immunology-and-biotherapies

 

followed by

Nancy, Lorraine

 http://www.scoop.it/t/nancy-lorraine

I am based at Université de Lorraine in Nancy

Wuhan, Hubei,

 http://www.scoop.it/t/wuhan

because we have a long-standing collaboration, through a French-speaking medical training program, between the Faculté de Médecine de Nancy and WuDA (Wuhan University medical school) and Zhongnan Hospital

  

CME-CPD,

 http://www.scoop.it/t/cme-cpd

because I sit at EACCME in Brussels as representative of the UEMS section for medical biopathology and laboratory medicine

 

Mucosal Immunity,

 http://www.scoop.it/t/mucosal-immunity

because it was one of our main research interests some years ago

 

It is a kind of electronic scrapbook with many ideas shared by others.

It focuses more and more on new ways of Teaching and Learning: e-, m-, a-, b-, h-, c-, d-, ld-, s-, p-, w-, pb-, ll- ...

Thanks to all

Scooped by Gilbert C FAURE
Today, 7:13 AM

Embedded AI in Clinical Workflow Revolutionizes Medical Decision Making | Jiajie Zhang posted on the topic
A subtle but important shift in medical AI just appeared in the literature.

Researchers recently evaluated a large language model embedded directly inside electronic health records in primary care settings.

Not as a chatbot.
Not as an external decision support tool.

But as part of the clinical workflow itself.

This may sound like a technical milestone, but institutionally it signals something much bigger.

For more than a century, clinical reasoning has been organized around the individual physician’s mind. Clinical decision support systems were external references—guidelines, textbooks, alerts.

Now we are beginning to see the emergence of embedded clinical cognition systems.

In these environments, diagnosis and treatment planning become a distributed process involving clinicians, patient data ecosystems, and generative AI operating inside the health record.

The physician’s role does not disappear.

But it evolves—from sole generator of medical reasoning to architect and supervisor of human–AI cognitive systems.

For academic medicine, the strategic question is no longer whether AI will assist clinicians.

It is how we design AI-native medical institutions where human expertise and machine reasoning operate safely as a single cognitive architecture.

We may be watching the early infrastructure of the Cognitive Revolution in medicine take shape.

Source:

https://lnkd.in/g5_jgbna

#MedicalAI #AcademicMedicine #AIinHealthcare #DigitalHealth #FutureOfMedicine #AINative
Scooped by Gilbert C FAURE
Today, 7:09 AM

PhD Tools: Research Paper Writing and More | Muhammad Muneeb posted on the topic
PhD Students - Which tool to use in each phase of your PhD?

Research paper writing
↳ AnswerThis → https://lnkd.in/dT8KNcjY

Grammar & typos checking
↳ Paperpal → https://lnkd.in/diAjKHUR

Convert paper to a poster
↳ SciSpace → https://lnkd.in/d_rDXkNU

Conduct literature reviews
↳ Gatsbi → https://gatsbi.com

Extract data from research papers
↳ moara → https://moara.io

Identify right journal for paper
↳ Review-it → http://review-it.ai

Receive feedback on thesis
↳ Thesify → http://thesify.ai

Peer-review a manuscript before submission
↳ Reviewer3 → https://reviewer3.com

Paper editing
↳ Overleaf → http://overleaf.com

Identify papers for literature review
↳ NOAH → https://lnkd.in/d-hgRce2

Identify trends in research
↳ Eureka → https://lnkd.in/dqWj6kYv

Identify research gaps
↳ WisPaper → https://lnkd.in/dVUCUA_p

Any other problem/tool you'd like to add?

#phd #aitools

Scooped by Gilbert C FAURE
Today, 5:05 AM

AI Chatbots in Healthcare: Reducing Risk and Building Trust | Robin Melanie P. posted on the topic
Reducing risk in AI-driven healthcare decisions will protect patients, strengthen trust in digital health tools, and support responsible adoption of generative AI across healthcare services. A randomized study involving about 1,300 participants found that people who used AI chatbots to interpret medical symptoms did not make better health decisions than those who relied on traditional sources such as search engines or their own judgment. The findings highlight a critical gap between strong benchmark performance and safe real-world use.

Researchers from the University of Oxford reported that large language models often perform well on standardized medical exams but struggle when real users describe symptoms in everyday language. Participants often received inconsistent guidance, and slight changes in how users asked questions sometimes produced different responses. These interaction challenges increase the risk that people misunderstand medical advice or fail to recognize situations that require professional care.

The results point to a clear path forward. Healthcare organizations must test AI systems in real-world settings, establish stronger safety guardrails, and integrate chatbots with clinical expertise before offering frontline medical guidance. As generative AI expands across digital health services, companies that prioritize validation, transparency, and human oversight will build the most trusted and scalable healthcare solutions.

#artificialintelligence #chatbots #llm #genai #digitalhealth #healthtech

https://lnkd.in/gNUUN5Fy
Scooped by Gilbert C FAURE
Today, 4:23 AM

This book has everything of a book. In reality, it is an AI-generated synthesis. I compared the synthesis with three talks (Marc Decombas, Murielle Popa-Fabre, and Thomas Huchon), based on the…
This book has everything of a book. In reality, it is an AI-generated synthesis.

I compared the synthesis with three talks (Marc Decombas, Murielle Popa-Fabre, and Thomas Huchon), using the videos available on YouTube.

1. Normalized tone and loss of edge
The AI smooths out the discourse. The result becomes institutional, neutral, consensual.
Example from Murielle Popa-Fabre: the synthesis keeps the figures but leaves aside harsher social realities, such as the fact that one woman in two leaves tech around age 35, or the sexist biases that undermine women's humor in the workplace.
Example from Thomas Huchon: a committed performance on threats to democracy becomes a risk-management memo. Direct criticism of certain political figures or foreign powers disappears.

2. Narrative sacrificed
The anecdotes and examples that make a subject understandable disappear. The discourse becomes more abstract.

3. Simplified technical and legal concepts
The synthesis favors the "what" (the finding) and often drops the "how."
Example: Marc Decombas mentions precise parameters such as model temperature or the effect of compression on model reliability. These technical elements disappear from the synthesis.
Legal example: Murielle Popa-Fabre mentions Article 17 of an international treaty that allows recourse against AI bias. Useful information for executives, absent from the summary.
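As an aside, to make the "model temperature" parameter mentioned above concrete: temperature rescales a model's output scores (logits) before sampling, so low values concentrate probability on the top token and high values flatten the distribution. A minimal sketch with toy numbers, not taken from any of the cited talks:

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Convert raw logits into a probability distribution at a given temperature."""
    scaled = [x / temperature for x in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - peak) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
print(softmax_with_temperature(logits, 0.5))  # low temperature: sharper distribution
print(softmax_with_temperature(logits, 2.0))  # high temperature: flatter distribution
```

Lowering the temperature makes the model's answers more deterministic; raising it makes them more varied.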

4. Geopolitical and topical decontextualization
The synthesis strips out elements tied to current events or geopolitical tensions.
Example: Thomas Huchon's analysis of the 2024 US elections, or the Russian influence operation around the bedbug affair in France, no longer appears.

This AI-generated synthesis remains effective at condensing an enormous mass of information. But it also reveals its limits: it turns living reflections into more standardized, sometimes middling, knowledge in which certain warnings, nuances, or concrete solutions disappear.

In his editorial, Nicolas Dufourcq describes truth as an "abrasive" force that does not caress us and obliges us to confront reality. Yet by seeking the average, consensual message, the AI ends up doing precisely that: caressing the reader. It erases the roughest facts, which are exactly what should feed the debate.

AI knows how to summarize a speech. It does not yet know how to preserve its singularity.
Scooped by Gilbert C FAURE
March 16, 4:43 AM

🚨 What if your next doctor… were no longer human? In the United States, a quiet phenomenon is gaining momentum. More and more patients are consulting an artificial intelligence before...
🚨 What if your next doctor… were no longer human?

In the United States, a quiet phenomenon is gaining momentum.
More and more patients are consulting an artificial intelligence before going to see their doctor. 🤖

Today, millions of patients already use AI tools to describe their symptoms, understand a diagnosis, or check a treatment. 📊

According to several studies published in recent years, a growing share of American internet users rely on digital or conversational tools to obtain medical information before consulting a healthcare professional.

Historically, medicine rested on a very clear relationship.

The doctor held the expertise.
The patient granted trust.

But artificial intelligence introduces a third actor into the medical relationship.

An actor available 24 hours a day.
Instantaneous.
And perceived as neutral.

This transformation is producing a gradual shift in medical authority. ⚖️

From an economic standpoint, the implications are immense.

Healthcare accounts for more than 18% of GDP in the United States and nearly 12% in France.

If part of medical information, symptom triage, and patient follow-up is absorbed by AI tools, part of the medical value chain could be radically transformed. 💰

This does not mean doctors will disappear.

But it probably does mean a profound redefinition of their role.

In many cases, AI can already:
• analyze symptoms
• suggest diagnostic leads
• help interpret medical imaging
• support the follow-up of chronic diseases

Doctors obviously remain the only ones authorized to make a diagnosis and prescribe treatment.

But the boundary between medical information and the practice of medicine is becoming increasingly blurred. 🔍

So the question is not whether AI will replace doctors.

The real question is:
How will medicine integrate this new technological reality?

Healthcare professionals will have to strengthen what the machine cannot easily reproduce:
clinical expertise
the human relationship
medical judgment
ethical responsibility.

At the same time, health systems will have to integrate AI to:
• reduce costs
• improve access to care
• relieve overcrowded medical practices
• optimize patient follow-up. 🚀

What is certain is that patient trust is becoming the real strategic issue.
Trust in the doctor.
Trust in the technology.
Trust in the health system.

So the question is no longer only medical.

It is economic, societal, and strategic.

If medicine does not adapt to this technological revolution, other actors will.
And they will move fast.
Very fast.

So let us ask the real question:
Will you consult your doctor first… or your AI? 💬
Scooped by Gilbert C FAURE
March 14, 8:33 AM

81% of physicians are routinely using AI in their clinical practice, according to a survey by the American Medical Association. Unregulated use of this new powerful, but not infallible, technolog...
81% of physicians are routinely using AI in their clinical practice, according to a survey by the American Medical Association.

Unregulated use of this new powerful, but not infallible, technology in medicine is a reality now.

It is up to healthcare professionals to ensure responsible adoption. That is why AI competency needs to be part of any healthcare training program.

Link in the comments.

Geisel School of Medicine at Dartmouth IAMSE Dartmouth College AMEE - The International Association for Health Professions Education
Scooped by Gilbert C FAURE
March 14, 8:30 AM

🛡️ Information monitoring: what to take away from the 2026 Visibrain Meetup. An unmissable event for monitoring professionals, the Visibrain Meetup lived up to all its promises. Between…
🛡️ Information monitoring: what to take away from the 2026 Visibrain Meetup
An unmissable event for monitoring professionals, the Visibrain Meetup lived up to all its promises. Between on-stage feedback and informal exchanges, the discipline confirmed its deep transformation. Thanks to Nicolas Huguenin and his teams for the invitation.

Here are the 11 key points I took away for navigating the infosphere this year:
1️⃣ Information as a battlefield: information has become a genuine arena of geopolitical and economic confrontation.
2️⃣ New cartographies: mastering LinkedIn and TikTok is no longer optional; it is a strategic necessity.
3️⃣ The sovereignty question: to whom do we entrust our data? The nationality of monitoring tools is becoming a central issue.
4️⃣ The tool does not make the monk: monitoring software has value only when it is run by trained, aware people.
5️⃣ Mandatory omnipresence: in 2026, monitoring must cover every platform, without exception.
6️⃣ Anticipation is key: alerting, preparing smartboards... monitoring is set up in advance to be effective when the day comes.
7️⃣ The X (Twitter, Inc.) paradox: many brands are leaving the network as advertisers, but none has stopped monitoring it. That is where crises crystallize, especially in financial communication.
8️⃣ X's strongholds: the network remains the beating heart of gaming, finance, and breaking news.
9️⃣ The Bluesky soufflé: despite the buzz, engagement there has remained very low.
🔟 LinkedIn as an internal comms tool: a CEO announcement on LinkedIn often has more impact on employees than an internal email.
1️⃣1️⃣ The end of the "safe place": LinkedIn is tensing up, and controversies there are increasingly frequent.

🧐 My reflection: the blind spot of 2026?
One point struck me: almost no one mentioned messaging platforms. Yet Visibrain has for some time now been able to monitor Telegram channels.
Isn't "dark social" (WhatsApp, Signal, Telegram Messenger) the real blind spot of monitoring this year? How can we anticipate a crisis spreading through closed loops?
Scooped by Gilbert C FAURE
March 14, 8:26 AM

The "Human + AI" Partnership (Best for Clinicians & Leaders) The Thought: AI is not a replacement; it’s a "force multiplier" for empathy. The Angle: Argue that by automating administrative burdens…...
1. The "Human + AI" Partnership (Best for Clinicians & Leaders)
The Thought: AI is not a replacement; it’s a "force multiplier" for empathy.
The Angle: Argue that by automating administrative burdens (like clinical documentation which has seen a 70% reduction in workload this year), AI actually returns the doctor to the patient’s bedside.
Key Phrase: "The most important surgical tool in 2026 isn't a robot—it's the data that gives the surgeon clarity before the first incision."
2. From Reactive to Proactive (Best for Tech & Innovation)
The Thought: We are finally moving from "Sick-care" to "Health-care."
The Angle: Highlight how predictive analytics are now moving into system design. Instead of treating a heart attack, we are using AI-driven wearables and EHR data to prevent one six months in advance.
Key Phrase: "In 2026, success in healthcare is measured by the hospital beds that stay empty, thanks to predictive intervention."
3. The Trust & Ethics Pivot (Best for Policy & Ethics)
The Thought: Interoperability and "Model Cards" are the new gold standard.
The Angle: Address the "Diagnostic Dilemma." Mention that as AI becomes embedded in mission-critical workflows, transparency (knowing why an AI made a suggestion) is more valuable than the algorithm itself.
Key Phrase: "Algorithm transparency is the new bedside manner. If we can't explain the 'why' behind the AI, we haven't earned the patient's trust."
Suggested Post Structure:
The Hook: "AI in healthcare has officially moved from the 'pilot' phase to the 'implementation' era."
The Insight: Use one of the angles above. (e.g., "85% of healthcare leaders now prioritize data sharing over shiny new tools.")
The Question: "Do you think the biggest barrier to AI adoption today is the technology itself, or the cultural shift required in our clinics?"
Scooped by Gilbert C FAURE
March 14, 8:25 AM

A quick reflection during a challenging time. To stay focused and motivated, I’m starting a 30-day publication challenge on AI in healthcare, sharing ideas, insights, and innovations along the… | J...
A quick reflection during a challenging time.
To stay focused and motivated, I’m starting a 30-day publication challenge on AI in healthcare, sharing ideas, insights, and innovations along the way.
Looking forward to the journey.
✏️ Day 1/30
Is AI going to replace doctors? For healthcare AI globally, China's experience offers an important lesson: the real breakthrough is not autonomous diagnosis, but scalable clinical collaboration between algorithms and physicians.
And I truly believe that it's not about machines replacing doctors but about augmenting clinical capacity. In high-volume healthcare systems, where a single hospital may process thousands of scans per day, AI functions as a rapid screening layer that helps clinicians prioritize attention where it matters most.
A study titled 基于深度学习的人工智能胸部CT肺结节检测效能评估 (Performance of deep-learning-based AI for pulmonary nodule detection) reported that AI systems achieved 99.1% sensitivity in detecting lung nodules on CT scans, identifying many small lesions that radiologists might miss during initial screening.
Another clinical study, 人工智能辅助诊断系统在肺结节检测及良恶性判断中的应用价值, found that AI reached 85.22% overall diagnostic accuracy in distinguishing benign vs malignant nodules. While radiologists still showed higher specificity, the combined AI + physician workflow improved detection rates and reduced reading time.
Across studies and meta-analyses (Journal of Thoracic Disease), a consistent pattern emerges:
- AI shows very high sensitivity, often 90 to 99%, for detecting abnormalities in CT imaging.
- Human radiologists still provide better specificity and contextual clinical judgment.
- The best outcomes occur when AI is used as decision support rather than as a replacement.
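To make the quoted figures concrete: sensitivity, specificity, and accuracy are all derived from the same confusion-matrix counts. A minimal sketch with made-up counts (not data from the cited studies) showing the "very sensitive but less specific" pattern described above:

```python
def screening_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Compute standard diagnostic metrics from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),                # share of true nodules detected
        "specificity": tn / (tn + fp),                # share of healthy scans cleared
        "accuracy": (tp + tn) / (tp + fp + tn + fn),  # share of all reads correct
    }

# Hypothetical AI reader over 200 scans: high sensitivity, weaker specificity.
m = screening_metrics(tp=99, fp=30, tn=70, fn=1)
print(m)  # sensitivity 0.99, specificity 0.70, accuracy 0.845
```

A reader with 99% sensitivity still produces many false positives when specificity is low, which is why the human-in-the-loop step matters.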
Scooped by Gilbert C FAURE
March 14, 3:39 AM

6-Step Research Paper Reading Method in 20 Minutes | Piumsha Mayanthi posted on the topic
Most researchers waste hours reading papers that never make it into their thesis.

Here is the exact 6-step method I use to read any research paper in 20 minutes without missing anything important.

Step 1 → Read the abstract only (2 mins)
Decide if the paper is even relevant before you commit. 80% of papers get eliminated here.

Step 2 → Read introduction and conclusion (3 mins)
You now understand the full argument without reading the middle.

Step 3 → Scan headings and figures (3 mins)
Figures hold the key findings. Read every caption before anything else.

Step 4 → Read results and discussion (5 mins)
This is the heart of the paper. Highlight anything you might cite.

Step 5 → Check the methodology (4 mins)
Weak methodology means weak findings. Always verify before citing heavily.

Step 6 → Write your 3 sentence summary (3 mins)
What they studied. What they found. How it connects to your research.
This becomes your literature review foundation.
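The six steps above can be sketched as a simple time budget; the step names and timings below are copied directly from the post:

```python
# The post's 6-step reading method as (step, minutes) pairs.
STEPS = [
    ("Read the abstract only", 2),
    ("Read introduction and conclusion", 3),
    ("Scan headings and figures", 3),
    ("Read results and discussion", 5),
    ("Check the methodology", 4),
    ("Write your 3-sentence summary", 3),
]

total = sum(minutes for _, minutes in STEPS)
print(f"Total reading time: {total} minutes")  # Total reading time: 20 minutes
```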

Save this post. You will need it.

Which step are you skipping right now?

♻ Repost this to help a fellow researcher in your network.

Follow Piumsha Mayanthi for more smart research writing tips.

#PhDLife #ResearchSkills #AcademicWriting #LiteratureReview #ThesisTips #MSc #PhD #ResearchTips #AcademicLife #GradSchool #ResearchPaper #StudySmart #ScientificReading #DoctoralStudent #AcademicSuccess
Scooped by Gilbert C FAURE
March 13, 8:29 AM

AI in Healthcare: Beyond Hype to Practical Integration | Neeraj Bedi posted on the topic
Everyone is adding “AI” to healthcare apps right now.
But many times it’s just:
pick a model → add a chatbot → call it AI healthcare.

That approach rarely solves real problems.

Healthcare is too sensitive for hype. Data quality, safety, privacy, and clinical workflows actually matter and many AI health apps ignore those fundamentals. Studies and experts often point out issues like bias in models, unreliable medical advice, and privacy risks when AI is poorly integrated.

What’s more interesting is when AI is used thoughtfully inside the workflow, not just as a feature.

Platforms like Augi Health are a good example of a more practical direction focusing on how AI can support patient engagement and doctor communication rather than pretending AI replaces healthcare professionals.

The difference is subtle but important:
smart integration vs AI for marketing.

If you're curious about where more responsible AI-driven healthcare tools are heading, you can check it out here:

https://lnkd.in/gwfq5ZMn

#AIinHealthcare #DigitalHealth #HealthTech #MedTech #HealthcareInnovation #FutureOfHealthcare #PatientCare #AI #HealthApps #HealthcareTechnology
Scooped by Gilbert C FAURE
March 13, 8:24 AM

Reducing cognitive load in healthcare education | Rory Daley, MPH
One thing healthcare isn’t short on these days is information.

It’s offered up to clinicians via conferences, webinars, journals, booklets, pamphlets, newsletters, headlines, slide decks, inboxes… it’s an endless stream of updates.

The challenge today isn’t accessing knowledge, it’s processing it.

We are all hearing this more and more: “There’s just too much to keep up with.”

That’s not a motivation problem; it’s a cognitive load problem.

Check out my tips below on how to reduce cognitive load in health education.

__
When you’re ready, here’s how I can help:
- Partner with life science teams to design medical education that delivers insight, not just information.
- Build engagement strategies that use visuals and structure to reduce cognitive load for clinicians and patients.
Scooped by Gilbert C FAURE
Today, 7:14 AM

Considering the use of AI tools for medical advice can be complex. AI-powered medical advice tools, such as Counsel Health, Microsoft Copilot Health, and Amazon Health AI, provide personalized heal...
Considering the use of AI tools for medical advice can be complex. AI-powered medical advice tools, such as Counsel Health, Microsoft Copilot Health, and Amazon Health AI, provide personalized health insights, analyze medical records, and connect users to professionals. However, it's important to remember that these tools should serve as guidance rather than a definitive diagnosis.

Experts caution that AI can deliver inaccurate information with high confidence, which may lead to delayed or incorrect care. Therefore, AI should be viewed as a complement to, not a replacement for, a physician or nurse.

Key Considerations for AI Health Advice:
-Best Use Cases: Ideal for gathering preliminary information, simplifying medical jargon, and preparing questions for a doctor.
-Major Risks: AI models can suffer from hallucinations, providing false information or inaccurate diagnoses. Over-reliance on AI can lead to significant patient harm.
-Current Tools: New tools like Copilot Health are integrating with personal health records for tailored insights, notes The Wall Street Journal. Other platforms like Counsel Health allow users to share symptoms and connect with human doctors for $29, notes Counsel Health.
-Verification: Always cross-reference AI-generated advice with trusted, professional sources.

The Risks of Relying on AI:
-Inaccurate Information ("Hallucinations"): AI can authoritatively present false or dangerous medical advice, such as suggesting improper treatments or failing to recognize life-threatening drug interactions.
-Lack of Personalization: Most AI models provide generic responses based on broad datasets rather than your specific health profile, allergies, or past conditions.
-Sycophancy (Telling you what you want to hear): Some models may mirror your own biases, potentially downplaying serious symptoms if you sound dismissive or heightening anxiety if you are already worried.
-Privacy Concerns: Many general-purpose AI tools are not HIPAA-compliant, meaning any personal health data you share may not be protected by standard medical privacy laws.

Using AI for medical advice can be a helpful starting point for education and context, but it is not a replacement for professional medical diagnosis or treatment. While AI tools are fast and accessible, they lack the ability to perform physical exams, access your full medical history, or apply clinical intuition.
Scooped by Gilbert C FAURE
Today, 7:12 AM

The most dangerous idea in healthcare AI right now is that large language models are the whole story. They are not. Nature's latest research on the evolving landscape of AI in healthcare reveals…
The most dangerous idea in healthcare AI right now is that large language models are the whole story. They are not. Nature's latest research on the evolving landscape of AI in healthcare reveals something the mainstream tech press keeps missing: the clinical frontier is not one technology, it is an ecosystem — large language models working alongside specialized, task-specific models that were trained not on internet text but on imaging data, genomic sequences, lab values, and clinical notes stretching back decades. When these systems work together, something genuinely new becomes possible.

Here is the reframe most people need: we have been asking the wrong question. The debate about whether AI belongs in healthcare has already been settled by the data. The real question — the one that will determine who lives and who dies in the next decade — is whether the institutions responsible for deploying these systems can close the gap between what the technology can do today and what clinical workflows actually allow. That gap is not technical. It is institutional. And it is enormous.

Think about what that means for a rural hospital in Indiana, a district clinic in Portugal, or a community health center in South Africa. The specialized diagnostic models that outperform radiologists on specific imaging tasks, the language models that can synthesize a patient's entire history in seconds, the predictive models that flag deterioration hours before a physician would notice — these tools exist now. They are not theoretical. The question is whether the institutions serving the most vulnerable patients will ever get access to them, or whether AI-augmented medicine becomes another advantage reserved for wealthy urban health systems with the IT infrastructure and capital to deploy it. That is not a technology question. That is a policy question, and almost no one in a position of authority is treating it that way.

Here is the challenge for anyone reading this who has institutional power: the Nature research makes clear that the landscape is already complex and already bifurcating. The HELP Committee in the U.S. Senate, the EU AI Office, and the UK's MHRA need to stop treating healthcare AI as a future regulatory problem and start treating it as a present deployment crisis. The question is not whether to regulate — it is whether oversight frameworks can be built fast enough to ensure that the hospitals serving the poorest communities don't get left decades behind the hospitals serving the richest ones. Every month of delay is a policy choice with a body count.

FREE SUBSCRIBERS — no credit card ever required — full analysis at https://lnkd.in/e5ZTi-x9
No comment yet.
Scooped by Gilbert C FAURE
Today, 6:59 AM
Scoop.it!

#valeurs #sagesse #management | Chunyan Li | 11 comments

"A man of virtue should strive to be cautious in speech and prompt in action" (君子欲讷于言而敏于行), said Confucius. According to him, a gentleman (or man of virtue) must answer for his words and deeds; he must not make promises lightly, and once a promise is made, he must do everything in his power to keep it. Otherwise he will lose the trust of others and, as a result, his credibility will suffer.

He also added: "A gentleman is ashamed when his words outstrip his deeds" (君子耻其言而过其行). This means a gentleman should feel embarrassed when his words exceed what he actually accomplishes.

Other Chinese proverbs express similar ideas and stress the importance of keeping one's word:
👉 "A gentleman's word, once spoken, cannot be overtaken even by a team of four swift horses." (君子一言,驷马难追)
This expression, inspired by the Analects of Confucius (Lunyu), means that a spoken word is irrevocable.
👉 "Be true to your word, resolute in your deeds" (言必信,行必果)
Also drawn from the Analects of Confucius, this expression means one must be trustworthy in speech and effective in action.
👉 "One word weighs as much as nine tripod cauldrons" (一言九鼎)
👉 "A promise is worth a thousand pieces of gold" (一诺千金)


During the Warring States period, Han Fei, a Chinese philosopher and political thinker of the Legalist school, recorded the following story in his writings:

Zengzi's wife had to go to the market. Her son followed her, crying. She told him: "Go back home; I will kill a pig for you when I return." After going to the market, she came home. Zengzi was about to seize a pig to slaughter it, but his wife stopped him, saying: "It was only a game with the child."

Zengzi replied: "One does not play with a child that way. A child cannot yet reason on his own; he must learn through his parents and follow their teachings. If you deceive him now, you will teach him to lie. A mother who deceives her son can no longer be believed by him; that is no proper way to educate." Zengzi then killed the pig and cooked it.

These teachings show us that if we promise something to someone, we must absolutely keep our promise. If outside circumstances make that impossible, we should at least offer explanations and follow-up. And if we do not know whether something can be done, or whether it is feasible, we should not promise it lightly.

In short, this is both my principle of conduct and one of my criteria for judging whether someone is trustworthy. In management, it is also a precious quality and a way to build credibility: "Do what you say, say what you do."

And you, what do you think?

#valeurs #sagesse #management | 11 comments on LinkedIn
No comment yet.
Scooped by Gilbert C FAURE
Today, 4:30 AM
Scoop.it!

AI Reshapes Healthcare Delivery: AMA Survey Reveals 81% of Physicians Use AI Tools | José M. Zuniga posted on the topic

AI is no longer a future concept in medicine; it is already reshaping how healthcare is delivered. A recent survey fielded by the American Medical Association found that 81% of physician-respondents use AI tools in their clinical practices.

Physicians report using AI primarily to reduce administrative burden: summarizing medical records, drafting documentation, and supporting care planning. If deployed responsibly, these tools could return something medicine has been steadily losing – time between clinicians and their patients.

In addition to data privacy concerns, rapid adoption raises other important questions: Will AI help strengthen health systems or deepen existing inequities? Will innovation reach the communities with the greatest health burdens first? And how do we ensure physicians and other healthcare providers remain central to decision-making as these tools evolve?

In my latest Fast-Track Health blog post, I reflect on what the rapid rise of AI use in medicine means for clinicians, health systems, and public health as well as why governance, trust, and equity must keep pace with innovation.

Read my blog post: https://lnkd.in/eQvJMvRM

#AIinHealthcare #DigitalHealth #HealthSystems #PublicHealth
No comment yet.
Scooped by Gilbert C FAURE
March 16, 1:53 PM
Scoop.it!

#medicalai #healthtech #clinicallogic #aiethics #digitalhealth #noviq #physicianledai #healthdata | Mohamed Kassab

Can AI Actually "Think" Like a Physician? 🤖💻
​Medical logic isn’t a straight line

it’s a complex web of history, timing, and pathophysiology.

Yet, most AI models today treat medical data as a mere mathematical puzzle, ignoring the "Clinical Thread" that connects the dots.

​🚩 The Problem: The Diagnostic Vacuum
Monitoring AI models reveals a dangerous trend: they often treat the "Principal Complaint" in a vacuum.
​Take this scenario:

Case Scenario

1️⃣ Past History: Gastritis (1 month ago).
2️⃣ Recent History: conjunctivitis,
Red flag + knee edema (1 week ago).
3️⃣ Current Complaint: Eye infection + Reiter’s Syndrome triad (Reactive Arthritis).

Medical Logic
🔍 "Can't see, can't pee, can't climb a tree" + history of GE (Salmonella) or UTI within the past few months

💡Medical management must include treatment of the arthritis (NSAIDs, antibiotics, corticosteroids), not treatment of the current eye infection only.

⛔​Standard AI often fails here because it’s programmed for "Pattern Matching" rather than "Clinical Reasoning."

It struggles to link a gastrointestinal trigger to a delayed multi-system syndrome.

​The Solution:
Clinical Intelligence Infrastructure 🧠

Physicians don't just "prompt" AI; we engineer its logic using a Holistic, Patient-Centered Hypothetical Structure:

​✅ Evidence-Based Frameworks: We integrate formal clinical guidelines—including USPSTF, GOLD, and ABCDE—directly into the model’s reasoning engine.

​✅ CDL Complexity Scaling (0-5): Our proprietary scale forces the AI to rank Differential Diagnoses from the "Worst-Case Scenario" to the least likely, ensuring no "Red Flags" are missed.

​✅ Physician-in-the-Loop (PITL): We use Human Incident Transcription to teach models the nuance of medical ethics and logic, moving beyond simple mathematical imputation.

✨✨
🤖🚩 ​How We Monitor the "Unsupervised" ML 🧪

🔍​Most AI models are built on Aggregation Mathematical Imputation.
They are great at math, but illiterate in Medical Ethics.

🔴 When an unsupervised model processes a complex case (like a patient with past gastritis now presenting with Reiter’s Syndrome), it often ignores the "Diagnostic Thread" in favor of the most statistically "loud" symptom.

In terms of clinical precision, we enforce:

​🔍 Clinical Drift Detection: Auditing the Latent Path to ensure the AI reached a conclusion for the right medical reasons.

​🛡️ Zero-Hallucination Policy: Every output is cross-referenced against formal medical governance before it ever reaches a decision-maker.


Yes, medicine is complex, but AI’s reasoning shouldn't be a black box.
So how do you monitor a black box?

The answer: you don't. You wrap it in a Physician-Led Validation Layer.

🤖​Is your healthcare AI clinically safe, or just "statistically likely"?

Bridging the gap together.

Clinical Methodology

​#MedicalAI #HealthTech #ClinicalLogic #AIethics #DigitalHealth #NOVIQ #PhysicianLedAI #HealthData
No comment yet.
Scooped by Gilbert C FAURE
March 16, 4:42 AM
Scoop.it!

Most parents don’t realise this. The version of TikTok Chinese kids use is not the same one our kids get. Not even close. In China, if you’re under 14: • The app shuts down at 10pm • Usage is… |...

Most parents don’t realise this.

The version of TikTok Chinese kids use is not the same one our kids get.

Not even close.

In China, if you’re under 14:
• The app shuts down at 10pm
• Usage is capped at 40 minutes a day
• The algorithm mainly shows science, museums, and educational content

Now compare that with the version our kids get:
• Infinite scroll.
• Endless dopamine loops.
• No off switch.

Same company: ByteDance runs Douyin in China and TikTok everywhere else.

For Chinese kids:
• Youth Mode enabled by default
• Real-name verification
• Educational feeds
• Hard time limits

For other kids all over the world?
An algorithm optimized for maximum watch time.

You don’t have to agree with China’s internet policies to notice something interesting here: The people who built these algorithms know exactly what they do to developing brains.

And they chose to protect their own kids from it.
The real problem isn’t just lack of regulation all over the world. It’s also lack of tech literacy among everyday non-tech people.

Engineers understand these systems. Many tech executives do too.

But most parents were never taught how algorithms actually work. So billions of kids are growing up inside systems their parents don’t understand.

And with AI-driven feeds, that gap is only getting wider.

Question:
Should platforms be responsible for protecting kids from their own algorithms?
________________
♻️ share this post to spread awareness.
➕ Follow Tiana Zivkovic for more on tech/AI posts. | 85 comments on LinkedIn
No comment yet.
Scooped by Gilbert C FAURE
March 14, 8:32 AM
Scoop.it!

#genai | Frederic CAVAZZA

Microsoft is launching Copilot Health, its health-focused chatbot, capable of analyzing data from your connected devices (watch, scale, etc.), helping you understand your test results and lab reports, and answering general health and wellness questions.
> https://lnkd.in/e-ZisUmT

This new chatbot builds on a collaboration with institutions and a panel of 230 healthcare professionals across 40 countries, as well as on the history of the 50 million health questions asked of Copilot and Bing every day.
> https://lnkd.in/eYBAB_uf

Copilot Health is certified to ISO/IEC 42001 and is part of a broader initiative that includes the MAI-DxO diagnostic orchestration system. Microsoft's ambition is to develop a medical superintelligence. #GenAI
> https://lnkd.in/eVVHbF-E
No comment yet.
Scooped by Gilbert C FAURE
March 14, 8:28 AM
Scoop.it!

#healthcareai #medicalai #dataannotation #clinicalai #aiinhealthcare #digitalhealth #aitraining #healthtech #futureofhealthcare | Biki Adhikari

🩺 AI Can Assist Healthcare — But It Can’t Replace Healthcare Professionals
Artificial Intelligence is transforming healthcare. From medical imaging analysis to predictive diagnostics, AI is helping doctors make faster and more informed decisions. But one truth remains clear:
AI cannot replace healthcare professionals. It learns from them.
Every medical AI system depends on the knowledge, experience, and judgment of doctors, nurses, pharmacists, and clinical experts. Without their expertise, AI models simply cannot function safely or accurately.
Here’s why healthcare professionals remain essential in the AI era:
🔬 Training AI with Medical Annotation
Medical experts label and annotate clinical data such as X-rays, CT scans, pathology slides, EHR notes, and prescriptions. These annotations teach AI what a disease looks like, how conditions progress, and how treatments work.
📊 Clinical Validation & Evaluation
AI models must be evaluated by clinicians to verify that predictions are accurate and safe for real patients. Doctors help identify errors, biases, and edge cases that algorithms alone cannot detect.
🧠 Clinical Judgment & Context
Medicine is not just pattern recognition. It involves patient history, symptoms, emotions, ethical decisions, and complex case interpretation. Human judgment remains irreplaceable.
🤝 Human Connection in Care
Patients need empathy, trust, and communication — something AI cannot truly provide.
The future of healthcare is not AI vs Doctors.
It’s AI + Doctors working together.
When clinicians collaborate with AI developers through data annotation, model training, and clinical evaluation, we build smarter and safer medical systems that support better patient care.
As someone working with medical AI data annotation and training, I strongly believe that healthcare professionals will always remain the core intelligence behind medical AI.
Let’s build technology that empowers clinicians — not replaces them.

#HealthcareAI #MedicalAI #DataAnnotation #ClinicalAI #AIinHealthcare #DigitalHealth #AITraining #HealthTech #FutureOfHealthcare
No comment yet.
Scooped by Gilbert C FAURE
March 14, 8:26 AM
Scoop.it!

5 Ways AI is Reshaping the Future of Healthcare | Advanced Health Education Center

The transition of artificial intelligence in healthcare from a futuristic concept to a daily reality is happening faster than many expected. While some clinicians still view the technology with a healthy dose of skepticism—citing concerns over privacy, displacement of judgment, and systemic bias—early adopters are proving that AI isn’t here to replace doctors. Instead, it’s here to act as the ultimate assistant.

By automating the mundane and sharpening diagnostic accuracy, AI is allowing healthcare providers to get back to what they do best: caring for patients. Here are five groundbreaking ways AI is currently transforming the medical landscape.

https://lnkd.in/gdag5GHU
No comment yet.
Scooped by Gilbert C FAURE
March 14, 8:24 AM
Scoop.it!

[ 🧠 WALDSEEMÜLLER AUTREMENT] 🗺️ En ce moment, je travaille sur la traduction de la mappemonde de Martin Waldseemüller (1507), une carte (notamment) célèbre du fait d'être la première sur laquel...

[ 🧠 WALDSEEMÜLLER, DIFFERENTLY]

🗺️ I am currently working on a translation of Martin Waldseemüller's world map (1507), a map famous (among other things) for being the first on which the name "America" appears.

📚 While the mention of "America" is a major historical fact (regarded in the United States of America, from a colonial point of view, as the continent's birth certificate), and while the map, based in part on the work of Ptolemy (then being rediscovered in the West), displays a striking modernity... the rest of the map also tells another story, and it is a shame we pay less attention to it 😉

Many of its annotations and place names tie this world map to the Middle Ages, so thoroughly do its sources and commentaries come from that period (as well as, and therefore, from Antiquity!). That is entirely logical, since we are "only" in 1507, and historical classifications draw sharp boundaries around fuzzy objects. On Waldseemüller's map one finds a great many (in fact, 80%) anthropological notes, historical facts, and medieval place names. And of course, a magnificent bestiary!

🦄 So it must be said and said again: dragons, cynocephali, leviathans, unicorns, and other creatures can all be found on maps in the Renaissance (and even much later)!

A magnificent example of aggregating the most recent discoveries of the era with (very) (very) old information.

🥳 A small selection of commentaries translated yesterday!

(And for those wondering whether this translation will lead to a new map in the 𝑪𝒂𝒓𝒕𝒐 𝑮𝒓𝒂𝒑𝒉𝒊𝒆 series: perhaps, but it is not certain, since the elephant in the room, ASIA, remains to be translated, which usually ends in bitter failure.)
No comment yet.
Scooped by Gilbert C FAURE
March 13, 2:15 PM
Scoop.it!

China's AI Agent Hospital: Evaluating AI in Healthcare | Salim Bouguermouh posted on the topic

A very interesting new review from a Chinese team maps where AI agents are heading in healthcare and, more importantly, how we should evaluate them.

The paper goes beyond the usual excitement around diagnosis and automation and asks the harder questions: safety, trust, controllability, humanistic care, and real clinical usefulness. In other words, the future of agentic artificial intelligence in medicine will not be defined only by what agents can do, but by whether they can be deployed responsibly inside real workflows (with proper evaluation and governance).

On China’s progress in AI agents in healthcare, the trend is real and strategically important. Tsinghua University officially inaugurated its AI Agent Hospital in April 2025, with pilot operations planned at Beijing Tsinghua Changgung Hospital and its internet hospital, beginning in areas such as general practice, ophthalmology, radiology, and respiratory medicine.

Separately, Chinese authorities said in late 2025 that by 2030 AI-assisted diagnosis and treatment should become broadly available across primary-level medical institutions, and hospitals are expected to widely adopt AI for imaging, decision support, triage, and follow-up. At the same time, a Nature Medicine commentary warned that the rapid hospital deployment of DeepSeek in China has created a regulatory gray zone, which suggests China is moving fast on implementation, sometimes faster than governance catches up (https://lnkd.in/evfzJemp).

📄 The npj Artificial Intelligence paper - AI agent in healthcare: applications, evaluations, and future directions:
https://lnkd.in/ePzCQnYB

#ArtificialIntelligence #AI #AgenticAI #HealthcareAI #DigitalHealth #ClinicalAI #LargeLanguageModels #LLM #HealthTech #AIinHealthcare #PatientCare #ChinaTech
No comment yet.
Scooped by Gilbert C FAURE
March 13, 8:28 AM
Scoop.it!

3 key factors shaping patient choice and trust in medical AI :- 1)   Medical AI performance at specialist level (AMCE, 0.248 [95%CI, 0.234 to 0.262]) or above specialist level (AMCE, 0.325 [95%CI...

3 key factors shaping patient choice and trust in medical AI:

1)   Medical AI performance at specialist level (AMCE, 0.248 [95%CI, 0.234 to 0.262]) or above specialist level (AMCE, 0.325 [95%CI, 0.310 to 0.339]) was the most significant factor associated with shaping patient choice and trust (pages 6 & 8). Above-specialist performance was nearly 3 times as important as FDA approval (page 6).  
  
2)  Clinician oversight of medical AI (AMCE, 0.184 [95%CI, 0.173 to 0.195]) was the second most significant factor associated with shaping patient choice and trust (pages 6 & 9). "Meaningfully, clinician presence was associated with a greater change in patient trust and choice than any individual form of governance. However, effective clinician governance presents substantial challenges: not only are clinicians limited in their ability effectively to oversee medical AI performance, but the availability of adequately trained clinicians is limited" (page 9). Clinical oversight of medical AI is more important than FDA approval.  

3)   FDA approval (AMCE, 0.111 [95%CI, 0.101 to 0.121]) or national-level validation of medical AI via the Mayo Clinic (AMCE, 0.111 [95%CI, 0.101 to 0.121]), rather than local hospital validation, was the third factor associated with shaping patient choice and trust (page 6).

Ana Bracic, Kayte Spector-Bagdady, Sophie Towle, Rina Zhang, Cornelius A. James, and W. Nicholson Price II, Factors for Patient Trust and Acceptance of Medical Artificial Intelligence, (2026) 9(3) 𝘑𝘈𝘔𝘈 𝘕𝘦𝘵𝘸𝘰𝘳𝘬 𝘖𝘱𝘦𝘯 e260815, published online 5 March 2026, https://lnkd.in/g4uR3wug
No comment yet.
Scooped by Gilbert C FAURE
March 13, 4:55 AM
Scoop.it!

The global economic impact of disinformation, 2026 (l'impact-economique-mondial-de-la-desinformation-2026)

No comment yet.