The new frontier in large language models is the ability to “reason” their way through problems. New research from Apple says it's not quite what it's cracked up to be.
EDTECH@UTRGV's insight:
"For a while now, companies like OpenAI and Google have been touting advanced "reasoning" capabilities as the next big step in their latest artificial intelligence models. Now, though, a new study from six Apple engineers shows that the mathematical "reasoning" displayed by advanced large language models can be extremely brittle and unreliable in the face of seemingly trivial changes to common benchmark problems."
The latest news related to the meaningful and effective implementation of educational technology and e-learning in K-12, higher education, corporate and government sectors.
Watch this video to learn more about the fully online, accelerated, project-based Master of Education in Educational Technology at the University of Texas Rio Grande Valley. For more information, visit: https://www.utrgv.edu/edtech/index.htm
EDTECH@UTRGV's insight:
This 30-hour accelerated program is designed to prepare professionals in K-12, higher education, corporate, and military settings with the skills and knowledge necessary for the classrooms and boardrooms of tomorrow. Students in this program have the opportunity to earn one or more graduate certificates in E-Learning, Technology Leadership, and Online Instructional Design.
This is a fantastic program! It's practical, grounded in real-world application, and relevant to the many areas of industry where teaching and learning, training, and development take place.
One thing is clear in higher ed today: A growing focus on upskilling and career preparation is igniting a renewed focus on micro-credentials.
EDTECH@UTRGV's insight:
"A growing emphasis on upskilling and career preparation is igniting a renewed focus on micro-credentials and attracting a new wave of students to campus."
Apple's executives are thinking of acquiring Perplexity AI both to get more talent and to be able to offer an AI-based search engine in the future, according to Bloomberg.
EDTECH@UTRGV's insight:
"[T]he idea is to develop an AI search engine powered by Perplexity and to integrate Perplexity's technology into Siri."
Commentary on Stephen's Web ~ On Ethical AI Principles by Stephen Downes. Online learning, e-learning, new media, connectivism, MOOCs, personal learning environments, new literacy, and more
EDTECH@UTRGV's insight:
"Ethics is personal. It's based in our own sense of what's right and what's wrong (itself a product of culture and education and upbringing and experience and reflection) and is manifest in different ways in different people (and not at all in psychopaths) and for me is a combination of empathy and fear and loathing and - on my good days - of peace and harmony and balance. It consists of what I am willing to allow of myself, what guides my decisions, what I am willing to accept, and what will cause me to push back with a little force or all the might I possess."
Two veteran teachers give 4 rules for responsibly using chatbots in writing workshops.
EDTECH@UTRGV's insight:
With proper ethical guidance, ChatGPT has inspired 9th-grade English students to critically analyze its responses and deepen their own writing and thinking instead of using the tool to cut corners.
"This study explores the neural and behavioral consequences of LLM-assisted essay writing. Participants were divided into three groups: LLM, Search Engine, and Brain-only (no tools). Each completed three sessions under the same condition. In a fourth session, LLM users were reassigned to Brain-only group (LLM-to-Brain), and Brain-only users were reassigned to LLM condition (Brain-to-LLM)... Brain-to-LLM users exhibited higher memory recall and activation of occipito-parietal and prefrontal areas, similar to Search Engine users... While LLMs offer immediate convenience, our findings highlight potential cognitive costs."
EDTECH@UTRGV's insight:
Using LLMs to assist essay writing reduced participants’ brain connectivity, cognitive engagement, and sense of authorship compared with search‐engine or tool-free writing, suggesting that long-term reliance on AI may carry cognitive and educational costs.
In trying to personalize learning for students via AI, maybe we’ve focused too much on tailoring content and not on transforming context.
EDTECH@UTRGV's insight:
"What if the missing ingredient in student achievement isn’t better curriculum, tech, or teachers, but better motivation? What if the key to unlocking motivation isn’t something intrinsic to students, but something found in their relationships with peers, teachers, mentors, and communities? And what if the one thing AI can’t do is the one thing students need most?"
The use of AI in education has risks, but it could help personalize learning and free teachers to spend more time doing what only humans can do: connect, mentor, care. Let’s ensure we get this right — by aligning educators and tech experts around what matters most: student outcomes.
EDTECH@UTRGV's insight:
"Before we let AI teach our children, we must build the scaffolding for responsible AI use among professionals"
A recent academic study found that as organizations adopt AI tools, they're not just streamlining workflows — they're piling on new demands. Researchers suggested that "AI technostress" is driving burnout and disrupting personal lives, even as organizations hail productivity gains.
EDTECH@UTRGV's insight:
"The study explores AI's dual impact on employees' work and life well-being, finding that while it can increase productivity, it can also cause negative effects, such as the demand to always do more."
"By leveraging generative artificial intelligence to convert lengthy instructional videos into micro-lectures, educators can enhance efficiency while delivering more engaging and personalized learning experiences."
EDTECH@UTRGV's insight:
"A variety of tools are available to help educators streamline video production; create informative, engaging, and customized videos; and facilitate content mastery. When used appropriately, GenAI tools can add value to the higher education student's experience."
Gen Z is more mindful and less divided than some suggest, writes Jeff LeBlanc in a thoughtful commentary on his experiences teaching post-Millennials.
EDTECH@UTRGV's insight:
"What I haven’t seen is a loss of values. I’ve seen values under stress. And I’ve seen students rise to meet that stress with reflection, humor, honesty, and in some cases, the emotional clarity that many of us didn’t learn until adulthood. They’re not fractured so much as they’re adapting."
Learn how institutional leaders can develop mission-driven AI policies that balance innovation, ethics, and stakeholder needs in higher education.
EDTECH@UTRGV's insight:
"This article delves into the complexities of developing and implementing an AI framework that not only aligns with an institution’s unique mission but also addresses the diverse needs of its stakeholders, including faculty, students, staff, and administration."
Did you know that a recent survey found most students can’t tell when AI-generated content is wrong? Chapter 2 of Teaching and Learning in the Age of Generative AI—by Dr. Leticia De Leon—spotlights this blind spot and offers a “Nested Framework” to build the AI literacy and safeguards every classroom needs. Curious? Dive into Chapter 2 to see how you can turn this challenge into an opportunity for smarter, safer learning. Preview the book here: bit.ly/4jVce93
EDTECH@UTRGV's insight:
"This chapter proposes a framework—the Nested Framework for Implementing AI in Education—for evaluating the effectiveness of AI in education by utilizing a framework synthesis methodology to develop it." Preview the book here: bit.ly/4jVce93
In a new blog post, Altman laid out his vision for a hugely prosperous future powered by superintelligent AI. We'll figure things out as we go along, he argues.
EDTECH@UTRGV's insight:
"In his 2005 book 'The Singularity Is Near,' the futurist Ray Kurzweil predicted that the Singularity -- the moment in which machine intelligence surpasses our own -- would occur around the year 2045. Sam Altman believes it's much closer."
Arshitha S Ashok draws on this anecdote to highlight the phenomenon of “cognitive offloading” that’s been caused by the proliferation of AI tools. By using ChatGPT and other LLMs in our daily lives, she writes, we’re not just outsourcing our critical thinking and decision-making skills; we’re outsourcing our curiosity, too.
EDTECH@UTRGV's insight:
"Allain compares AI to an elevator, and cognitive problem-solving (like solving a physics problem by hand) to a stairmaster machine. The elevator will get you to your destination faster, but the stairmaster will make your mind stronger and more agile."
"AI bots are quietly overwhelming the digital infrastructure behind our cultural memory. In early 2025, libraries, museums, and archives around the world began reporting mysterious traffic surges on their websites. The culprit? Automated bots scraping entire online collections to fuel training datasets for large AI models. What started as a few isolated incidents is now becoming a global pattern."
EDTECH@UTRGV's insight:
"A new survey reveals that AI data extraction is overwhelming cultural institutions’ infrastructure, often leading to outages."
Did you know? Generative AI might feel brand-new, yet its roots stretch all the way back to the 1950 Turing Test and even earlier neural-network breakthroughs—decades before ChatGPT hit the scene.
Discover this surprising timeline, plus the ethical questions and classroom possibilities it unlocks, in Dr. Maria Elena Corbeil’s opening chapter of Teaching and Learning in the Age of Generative AI.
Preview the book here: bit.ly/4jVce93
EDTECH@UTRGV's insight:
Chapter 1 of Teaching and Learning in the Age of Generative AI offers readers a deeper understanding of the disruptive yet transformative role generative AI plays in modern education, as well as the balance required to navigate its opportunities and challenges responsibly.
As AI-generated text is becoming increasingly ubiquitous on the internet, some distinctive linguistic patterns are starting to emerge.
EDTECH@UTRGV's insight:
"As AI-generated text is becoming increasingly ubiquitous on the internet, some distinctive linguistic patterns are starting to emerge... Once you notice it, you start to see it everywhere. One teacher on Reddit even noticed that certain AI phrase structures are making the jump into spoken language."
"When generative AI entered classrooms, it promised a revolution. For many teachers, it delivered an avalanche of tools instead.
While edtech vendors race to integrate AI into every aspect of teaching and learning, educators are drawing clearer boundaries: AI should save them time, not replace their judgment. They want support for differentiation, not decision-making. Most of all, they want tools that align with the values and realities of teaching."
EDTECH@UTRGV's insight:
"Even as teachers adopt AI tools, they’re drawing clear lines in the sand. One of those lines? Relationships."
The AI ship has sailed. If you send home complex homework assignments, many of your students will most likely use AI to do the work. So what should you do? How can you ensure that students actually learn in your class?
EDTECH@UTRGV's insight:
"If you send home complex homework assignments, many of your students will most likely use AI to do the work. So what should you do? How can you ensure that students actually learn in your class?"
Generative artificial intelligence technology is rapidly changing the labor market. In response, colleges are increasingly looking for ways to offer AI courses to their students to keep up with employer demands.
EDTECH@UTRGV's insight:
"Generative AI technology is rapidly changing the labor market. Employers are increasingly posting job listings that include AI skills for positions even outside of the technology sector"
Darrell West discusses how the tech industry cannot assuage the public's concerns and backlash by pretending AI harms don't exist.
EDTECH@UTRGV's insight:
"Despite claims from some industry leaders that AI oversight is unnecessary, widespread public concerns and documented problems—including privacy risks, algorithmic biases, and security breaches—underscore the need for responsible regulation."
The increasing accessibility of AI technologies among K-12 and higher education students has raised concerns around academic integrity, although research shows that these tools may be used to supplement instruction, prioritize critical thinking, and promote digital literacy. The new book “Teaching and Learning in the Age of Generative AI,” edited by Joseph Rene Corbeil, Ed.D., and Maria Elena Corbeil, is a comprehensive resource providing evidence-based strategies for classroom implementation and helpful summaries of common benefits and risks.
"As educators we must strike a balance between harnessing generative AI's immense potential with upholding education's core values of fairness, integrity, and high standards."
How to equip teachers and staff with the necessary skills and confidence to integrate AI into their classrooms
EDTECH@UTRGV's insight:
"[W]hen we talk about artificial intelligence or anything that's associated with personalized learning, we need professional development consistently for teachers across the country to help build that capacity"
"As UX professionals, we need to be more like good doctors. When someone comes to us with “users are confused” or “conversion is low,” our first instinct shouldn’t be to grab our favorite “design treatment”. We should ask: I wonder what layers this problem lives in?"