... designed to collect posts and information I found and want to keep available that are not relevant to the other topics I am curating on Scoop.it (on behalf of ASSIM):
We talk about health literacy as if it lives inside people - as if the solution is to hand individuals better tools, clearer brochures, simpler language, and hope they can "navigate" the system. But that framing is fundamentally wrong. Health literacy is not an individual skill problem. It is a system design problem.
If a person struggles to understand, act, or make informed decisions, that is not a sign of their failure. It is a sign that the environment was not built to support them. It is an organizational failure, a policy failure, a leadership failure - a failure of design.
Health literacy is not about teaching people to try harder. It is about building systems that make health understanding, access, and action natural - not heroic. It is a matter of equity and power, not worksheets and pamphlets.
The true measure of a health-literate society is not how well individuals adapt to complexity - but how well institutions remove the complexity in the first place.
Until we shift the responsibility from people to systems, from coping to designing, from deficit to empowerment - we will keep treating symptoms while ignoring the root cause.
Review articles used to be essential to scientific publishing - writing them was an important academic exercise, and reading them was essential for anyone entering a field.
This week, I learned about Consensus App (thanks, Julian A. Serna), which seems to generate excellent, referenced summaries on any topic - often better than many "real" review papers.
So, my questions are:
- Does it still make sense for humans to write review articles, especially since no one today can realistically read and process all relevant papers in an active field?
- If AI can already produce (and will soon perfect) summaries that are comprehensive, accurate, and continuously updated - what unique value does a traditional human-written review (probably produced with the help of AI anyway) still add?
It's also interesting what this shift means for publishers like Wiley, Elsevier, or Springer, whose journal impact factors often rely heavily on review articles.
My prediction is that the traditional concept of a "review paper" will soon lose its relevance.
Once considered a luxury during times of scarcity in China, pork fat with rice (猪油拌饭) is a timeless comfort dish. A chunk of jade-like solidified lard melts over a steaming bowl of rice, releasing its rich aroma and silky flavor.
Discover how to cook this classic favorite, below: #FriedRiceDay
I discovered that a reading pack for my doctoral leadership subject contained fabricated references. Almost every entry was an AI-generated citation that either didn't exist or linked to the wrong paper.
When I raised it, the provider confirmed that AI had been used and that the material was shared before human review. They also reminded me that doctoral candidates should be able to verify their own sources.
That response was so disappointing. Doctoral candidates are expected to build on verified scholarship, not correct institutional errors. I've asked to withdraw from the course because the university doesn't seem to understand why this is a serious concern and has pushed the responsibility back on me.
Distributing unverified academic material in a foundation subject is a breach of academic integrity and sets entirely the wrong ethical tone for the course.
Am I overreacting? Or is this yet another symptom of the wider issues that are undermining confidence in the sector?
My first visit to Tbilisi, Georgia for the International Conference on Medical Education has been incredible and filled with thoughtful discussions, engaged learners, and the perfect mix of local and international perspectives. Thanks to Salome Voronovi for the invitation, and always nice to see David Taylor.
A concept that really struck a chord is what I've started calling the Suitcase Paradox:
In lifelong learning or curriculum design, just as when packing for a trip, you can't keep adding new things unless you take something out first. And everything has to fit in the overhead compartment - on a plane, the luggage bin; in our case, the anatomical one: the brain!
Healthcare professionals must continually unlearn outdated practices to make room for new evidence, new technologies, and new ways of thinking.
That's what lifelong learning, and particularly continuing professional development (CPD), is all about.
But to make it work, educators must evolve into learning facilitators, helping learners curate, adapt, and apply knowledge depending on where they are on the learning continuum.
And because healthcare doesn't happen in silos, neither should learning. Interprofessional education (IPE) brings students from different health professions together to learn with, from, and about each other.
Interprofessional continuing education (IPCE) extends that collaboration into practice. And when it's done right, it leads to interprofessional collaborative practice (IPCP), where the ultimate outcome is better patient care.
I even got in a mention of the Donald & Barbara Zucker School of Medicine curriculum!
Plenty more to come: I've still got a wine tour 🍷 ahead and a masterclass on lifelong learning and CPD on Monday!
This video is the FULL interview of the Business of Colorado segment featured on Studio Twelve.
From Studio Twelve: Business of Colorado, Frannie Matthews interviews Nicholas Sly of the Federal Reserve on Colorado's economic trends, inflation challenges, and the evolving job market in the AI era.
AI slop - low-quality, often fake AI-generated content - is proliferating at a staggering rate. So what do you and your students need to know to combat it?
"AI slop is the low-quality, often fake content, such as text, images, or videos, that is generated by AI. Itâs currently overwhelming social media and the internet"
New paper just published with Shaydanay Urbani and Eric Wang. We wanted to understand how people searched for health information on AI-powered technologies (specifically ChatGPT, Alexa, and AI Overviews on Google Search results), so we interviewed 27 people while observing their behavior and asking why they were doing certain things and what they would do with the results. https://lnkd.in/eMfZUcxD
tl;dr:
- Participants integrated AI tools into their broader search routines rather than using them in isolation.
- ChatGPT was valued for its clarity, speed, and ability to generate keywords or summarize complex topics, even by users skeptical of its accuracy.
- Trust and utility did not always align; participants often used ChatGPT despite concerns about sourcing and bias.
- Google's AI Overviews were met with caution - participants frequently skipped them to review traditional search results.
- Alexa was viewed as convenient but limited, particularly for in-depth health queries.
- Platform choice was influenced by the seriousness of the health issue, context of use, and prior experience.
- One-third of participants were multilingual, and they identified challenges with voice recognition, cultural relevance, and data provenance.
- Overall, users exhibited sophisticated "mix-and-match" behaviors, drawing on multiple tools depending on context, urgency, and familiarity.
Fascinating project, and as ever, people's behavior tends to be much more complex and nuanced than headlines would suggest.