The higher education community continues to grapple with questions related to using artificial intelligence (AI) in learning and work. In support of these efforts, we present the 2025 EDUCAUSE AI Landscape Study, summarizing our community's sentiments and experiences related to strategy and leadership, policies and guidelines, use cases, the higher education workforce, and the institutional digital divide. The survey for this research project was distributed from November 4 to November 18, 2024, building on insights previously published in the 2024 EDUCAUSE AI Landscape Study.
From May 2025, Norah O'Donnell's report on why China's spies are on the rise, and what happens when one gets caught in the U.S. From June 2025, Cecilia Vega’s report on the Americans spying for Cuba in the U.S. From July 2022, Scott Pelley's interview with a former top intelligence official in the Saudi Arabian government, Saad Aljabri, who claims the kingdom's ruler plotted to kill him and has taken his children hostage. From August 2019, Anderson Cooper’s interview with Justice and FBI officials, who reveal how they caught a former CIA officer spying for the Chinese. And from March 2025, Bill Whitaker's investigation into the mysterious swarms of drones that have been spotted in the sky above the United States.
Then Pedraza was introduced to Microsoft 365 Copilot Chat, the AI companion that helps with work tasks. A group of AI experts recently trained him on how to write effective prompts to quickly generate personalized activities for the students just by typing a few traits of each. He was amazed by the results.
From skepticism to success: How AI is helping teachers transform classrooms in Peru. A very positive report, as you would expect from Microsoft: https://news.microsoft.com/source/latam/features/ai/world-bank-peru-teachers-copilot/?lang=en
China leads in AI innovation in 2025 with DeepSeek V3, challenging the US in global AI dominance. Learn how cost drops and security risks shape this race. China's DeepSeek V3 0324 has become one of the top-performing non-reasoning models globally, highlighting the country's growing strength in open-weight AI development. Chinese AI models are often optimized for speed and cost efficiency, making them well suited to large-scale deployment. The report frames the rise of Chinese AI labs as a significant milestone: they are closing the gap with, and in some areas surpassing, their US counterparts. However, US-based labs such as OpenAI, Google, and xAI continue to lead in reasoning models, which are essential for complex tasks that require step-by-step problem-solving. OpenAI's o4-mini (high) currently tops the global intelligence index, but Chinese competitors are quickly narrowing the performance gap. If this trajectory continues, Chinese labs could challenge US leadership in AI innovation and become the dominant force in open-weight models.
Neurological conditions, including dementia, pose a major public health challenge, contributing to a significant and growing clinical, economic, and societal burden. Traditionally, research and clinical practice have focused on diseases like dementia in isolation.
AI, when intentionally integrated, offers unique opportunities to deepen critical thinking. AI-powered platforms can support inquiry-based learning, providing students with immediate feedback on the logic and coherence of their arguments or exposing them to multiple perspectives on contentious issues (Luckin et al., 2016). Advanced models can simulate debates, challenge students with counterarguments, and prompt metacognitive reflection: “Why do you believe this? What assumptions are you making? What evidence supports your claim?” In these cases, AI becomes a “thinking partner” rather than a shortcut or crutch.
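The "thinking partner" pattern described above can be made concrete with a minimal sketch. The probe list and the routing below are illustrative only; a real platform would send the student's claim to an LLM with instructions to respond Socratically rather than answer directly.

```python
# A minimal sketch of a "thinking partner" loop: instead of answering for the
# student, the tutor replies with metacognitive probes. The probes are taken
# from the examples above; everything else is a hypothetical stand-in for a
# real LLM-backed system.

PROBES = [
    "Why do you believe this?",
    "What assumptions are you making?",
    "What evidence supports your claim?",
]

def socratic_reply(claim: str, turn: int) -> str:
    """Return a reflection prompt rather than an answer.

    Cycles through the probes so each exchange pushes the student to
    examine a different facet of their own reasoning.
    """
    probe = PROBES[turn % len(PROBES)]
    return f'You wrote: "{claim}" {probe}'

def run_dialogue(claims: list[str]) -> list[str]:
    # One probe per student turn; no answers are ever handed over.
    return [socratic_reply(c, i) for i, c in enumerate(claims)]
```

The design choice worth noting is that the tutor never produces content the student could submit: every turn returns a question, which is what separates a "thinking partner" from a shortcut.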
Though once considered a bad business venture, the 8008 left a lasting legacy that went on to drive the technological world we live in today.
Richard Platt's insight:
Computer Terminal Corporation (later Datapoint, now defunct) launched the Datapoint 3300 computer terminal in 1969 as a platform to replace teleprinters. The machine was implemented in TTL logic using a mix of small- and medium-scale integrated circuits (ICs), which produced an enormous amount of heat during operation. When the terminal was announced in 1967, RAM was extremely expensive (and heavy). To address the excessive heat and other issues, CTC co-founder Austin Roche looked to Intel for help, as the company was well known at the time as a primary vendor of RAM chips. Intel had found promise with the production of its first programmable microprocessor, the 4-bit 4004. Roche took his processor design, reportedly drawn on tablecloths in a private club, and met with Intel founder Bob Noyce. Roche presented his design as a potentially revolutionary development and suggested that Intel develop the chip at its own expense and sell it to the companies that would surely come knocking, including CTC. Noyce was skeptical: the concept was intriguing and Intel could certainly manufacture the processor, but he thought it would be a dumb move. With a computer chip, you could sell only one chip per computer; with memory, you could sell hundreds of chips per computer. Noyce was also concerned about his existing customer base. Intel was already selling a healthy volume of RAM chips to computer manufacturers. If the company started making CPUs, would those existing customers see Intel as competition and go elsewhere for RAM?
Richard Platt's insight:
There’s a growing unease inside even the most well-run enterprises.
It starts with a sense that delivery is happening, but impact is not. That teams are busy, but progress is unclear. That AI pilots are everywhere, but transformation is nowhere.
Executives sit in dashboard reviews surrounded by metrics — velocity, throughput, feature counts — and still ask the same question: “Why aren’t we seeing results?”
The enterprise machine hums, but the outputs feel flat. Strategy offsites generate enthusiasm, but not traction.
Agile ceremonies are conducted with precision, but somehow, customers are still waiting, innovators are still frustrated, and priorities shift faster than teams can respond.
This is not a problem of tools or frameworks. It’s a problem of thinking. A problem of systems.
In recent years, the landscape of Parkinson’s disease management has been dramatically transformed by the advent of wearable technology. A groundbreaking systematic review, led by Packer, Debelle, Bailey, and colleagues, published in npj Parkinson’s Disease, illuminates how these devices are revolutionizing...
Schools are building innovative use cases for artificial intelligence that improve lesson planning and guide students into deeper creativity and critical thinking.
The governance of generative artificial intelligence (AI) technologies in medicine has become a major topic of discussion due to concerns about their rapid development and use, which have outpaced existing regulatory measures.1 Findings from a recent large-scale survey underscore the urgency of renewed oversight, revealing that one in five (20%) of the UK-based general practitioners surveyed use large language model (LLM) chatbots for clinical tasks.2 Although much attention has been focused on popular and widely available LLM-based chatbots, such as ChatGPT (OpenAI and Microsoft), addressing unresolved challenges related to privacy, bias, accuracy, and accountability requires a standardised framework that goes beyond the regulation of chatbots as conversational tools and considers the wider AI capabilities for data generation.
The core purpose of education is to foster meaningful learning: developing students’ knowledge, skills, and critical thinking. Thus, the most pressing question is how ubiquitous AI assistance affects student learning and engagement with course material. There are valid concerns that easy access to generative AI may encourage academic shortcutting at the expense of learning. Writing an essay or solving a problem set is not busy work; it is structured adversity that develops reasoning, creativity, and resilience. If AI tools simply hand students the answers, they risk short-circuiting that developmental journey. Indeed, early evidence suggests some students are becoming less engaged in the learning process when AI is there to do the heavy lifting. This attitude is troubling: if a generation of students concludes that studying is futile because a chatbot can do it for them, education could face a crisis of engagement.
We see increasing levels of disengagement from the curriculum. Fewer students carry on to higher education. The intellectual elites become smaller and more powerful, but we also see a disruption. Academia is peeled away. Innovation occurs outside of the walls of schools.
This article from the AI English Teacher looks at how we can educate students in the future so that we aren't just evaluating their use of AI, and it also touches on why this probably won't happen. Can you guess why? Well worth reading: https://theaienglishteacher.wordpress.com/2025/06/14/two-futures-a-choice-for-education-in-the-age-of-ai/
Clinical decision-making in oncology is challenging and requires the analysis of various data types – from medical imaging and genetic information to patient records and treatment guidelines.
Purpose: The use of Artificial Intelligence (AI) in education has the potential to further customise and personalise students’ learning, and encourage self-directed learning. It can also augment teachers’ professional practice by automating routine tasks and allowing teachers to spend more time...
AI, ChatGPT, and LLMs "have absolutely blown up what I try to accomplish with my teaching."
Nik Peachey's insight:
Some interesting comments from teachers in this article about how AI has now impacted their teaching.
In a landmark comparative study published in the Journal of Health Organization and Management, researchers from the University of Maine have embarked on a rigorous investigation to evaluate the diagnostic capabilities of artificial intelligence (AI) models against those of seasoned human clinicians.
Behavioral health doesn’t need tech that replaces people. It needs technology that respects them, amplifying what clinicians do best and helping more people get the care they deserve.
FaceAge, a face-reading AI tool that estimates biological age from facial photographs, predicts cancer outcomes with an impressive 81% accuracy rate. This surpasses traditional methods and outperforms doctors in survival assessments. Developed by researchers at Mass General Brigham, FaceAge shows early promise in predicting short-term life expectancy. The findings suggest the AI tool can identify high-risk patients and provide a low-cost, accessible way to inform care decisions, enhancing clinical decision-making.
This improvement was observed across multiple cancer types and stages, with FaceAge delivering a reliable prognostic signal, offering doctors more objective and precise survival estimates. FaceAge has drawn a mix of cautious optimism and concern from medical experts. Irbaz Riaz, assistant professor of medicine and artificial intelligence consultant at Mayo Clinic, called it “a promising early-stage tool.” Hugo Aerts, director of the AI in Medicine program at Mass General Brigham and a lead researcher on the tool, acknowledged its potential benefits but warned it could also cause harm if misused. He stressed that hospitals have strict rules and oversight to ensure AI is used properly and only to benefit patients, not insurers or other parties. -- Researchers say more studies are needed before FaceAge can be routinely used in clinical settings. Their goal is to turn it into an early detection system for various health issues, supported by strong ethical and regulatory standards.
Richard Platt's insight:
Multi-Agent Generative AI (Gen AI) systems are reshaping the way enterprises design intelligent automation, handle decision-making, and optimize workflows across knowledge-intensive functions. By orchestrating specialized agents — often powered by foundation models like OpenAI GPT-4, Claude Sonnet, Mistral, or Gemini — enterprises can now simulate complex human workflows with remarkable scalability and responsiveness. However, as promising as this landscape is, many early-stage implementations are riddled with anti-patterns — recurring design and operational mistakes that compromise performance, accuracy, scalability, security, and human trust. These issues often emerge due to premature architectural choices, misuse of tools like CrewAI, LangGraph, or AutoGen, and a poor understanding of enterprise needs, agent autonomy, and alignment boundaries. This article delves deep into these anti-patterns, identifies their root causes across enterprise use cases, and provides a blueprint of best practices for preventing, detecting, and remediating them. Whether you’re building agentic systems for legal workflow augmentation, customer service automation, knowledge retrieval, or product research, avoiding these pitfalls will be key to building robust, enterprise-grade solutions.
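The "orchestrating specialized agents" shape the article describes can be sketched without any framework at all. The agent names, routing rule, and canned handlers below are hypothetical; frameworks like CrewAI, LangGraph, or AutoGen wrap this same idea with real LLM calls, tool use, and state management.

```python
# A library-free sketch of a multi-agent orchestrator: an explicit router
# dispatches each task to exactly one specialized agent. All names and the
# keyword routing rule are illustrative assumptions, not any vendor's API.

from typing import Callable

def legal_agent(task: str) -> str:
    # Stand-in for an LLM prompted with legal-review instructions.
    return f"[legal] reviewed: {task}"

def research_agent(task: str) -> str:
    # Stand-in for an LLM prompted with research-summary instructions.
    return f"[research] summarized: {task}"

# Explicit routing table: which specialist owns which kind of task.
ROUTES: dict[str, Callable[[str], str]] = {
    "contract": legal_agent,
    "market": research_agent,
}

def orchestrate(task: str) -> str:
    """Route a task to one specialized agent based on a keyword match.

    Keeping routing explicit and centralized, rather than letting every
    agent call every other agent, is one simple guard against the
    coordination anti-patterns the article warns about.
    """
    for keyword, agent in ROUTES.items():
        if keyword in task.lower():
            return agent(task)
    return f"[fallback] no specialist for: {task}"
```

The design point is the boundary: each agent has a narrow mandate and the orchestrator alone decides who runs, which keeps autonomy and alignment questions inspectable in one place.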
While people who have spent years cultivating their writing skills might bemoan the arrival of AI-assisted writing, there is also a much more optimistic way to view these changes. Until now, the ability to write well was inherently elitist. People fortunate enough to have the time and financial capacity to pursue higher education were better positioned to produce excellent writing.
This is an interesting perspective: https://www.brookings.edu/articles/ai-has-rendered-traditional-writing-skills-obsolete-education-needs-to-adapt/
In following along on a conversation via LinkedIn, I saw Shana V. White ask this question: How do you define "ethically"? Later, someone asked the question that Shana may really be asking in regard to AI: "What is ethical AI?" From my perspective, these questions raise the bigger issues. It reminded me of a…
The current research landscape may be messy and contradictory, but it illuminates a crucial truth: the impact of AI on education isn’t predetermined by the technology itself—it’s determined by the educational system we choose to implement it within.
There have been extensive evaluations of artificial intelligence (AI) systems for narrow medical tasks, but more work is needed to systematically evaluate and deploy AI systems that can perform a broad range of medical tasks. The medical training process itself might offer a template for addressing this challenge. Clinicians undergo rigorous education and training, progressing through stages of increasing responsibility and autonomy. Similarly, generalist medical AI systems could be subjected to a phased certification model before they are granted greater autonomy in patient care.