A few years ago, I thought I was about to die. And while (spoiler alert) I didn't, my severe health anxiety and my tendency to always assume the worst have persisted. However, the increasing proliferation of health-tracking smart devices, and the new ways AI tries to make sense of our bodies' data, has led me to a decision: for my peace of mind, AI needs to stay far away from my health. And having just watched Samsung's Unpacked event, I'm more convinced of this than ever. I'll explain. Sometime around 2016, I had severe migraines that persisted for a couple of weeks. My anxiety steeply increased during this period due to the attendant worry, and when I eventually called the UK's NHS helpline and explained my various symptoms, they told me I needed to go to the nearest hospital and be seen within two hours. "Walk there with someone," I distinctly remember them telling me. "It'll be quicker than getting an ambulance to you." The call confirmed my worst fears: that death was imminent. As it turned out, my fears of an early demise were unfounded. The cause was severe muscle strain from having hung multiple heavy cameras around my neck for an entire day while photographing a wedding. But the helpline agent was simply working from the limited data I'd provided, and as a result they'd -- probably quite rightly -- taken a "better safe than sorry" approach and urged me to seek immediate medical attention. I've spent most of my adult life struggling with health anxiety, and episodes like this have taught me a lot about my ability to jump to the absolute worst conclusions despite there being no real evidence to support them. A ringing in my ears? Must be a brain tumor. A twinge in my stomach? Well, better get my affairs in order. I've learned to live with this over the years, and while I still have my ups and downs, I know my triggers better now. For one, I learned never to Google my symptoms.
No matter what my symptoms were, cancer was always one of the possibilities a search would throw up. Medical sites -- including the NHS's own website -- provided no comfort and usually only resulted in brain-shattering panic attacks. Sadly, I've found I have a similar response to many health-tracking tools. I liked my Apple Watch at first, and its ability to read my heart rate during workouts was helpful. Then I found I was checking it more and more often throughout the day. Then the doubt crept in: "Why is my heart rate high when I'm just sitting down? Is that normal? I'll try again in 5 minutes." When, inevitably, it wasn't different (or it was worse), panic would naturally ensue.
I've used Apple Watches multiple times, but I find heart rate tracking more stressful than helpful. (Vanessa Hand Orellana/CNET) Whether tracking heart rate, blood oxygen levels or even sleep scores, I'd obsess over what a "normal" range should be, and any time my data fell outside that range, I'd immediately assume it meant I was about to keel over right there and then. The more data these devices provided, the more things I felt I had to worry about. I've learned to keep my worries at bay and have continued to use smartwatches without them being much of a problem for my mental health (though I have to actively avoid any heart-related functions like ECGs), but AI-based health tools scare me. During its Unpacked keynote, Samsung talked about how its new Galaxy AI tools -- and Google's Gemini AI -- will supposedly help us in our daily lives. Samsung Health's algorithms will track your heart rate as it fluctuates throughout the day, notifying you of changes. It will offer personalized insights drawn from your diet and exercise to help with cardiovascular health, and you can even ask the AI agent questions about your health. To many, this may sound like a great holistic view of your health, but not to me. To me, it sounds like more data being collected and waved in front of me, forcing me to acknowledge it and creating an endless feedback loop of obsession, worry and, inevitably, panic. But it's the AI questions that are the biggest red flag for me. AI tools by their nature have to make "best guess" answers, usually based on information publicly available online. Asking an AI a question is really just a quick way of running a Google search, and as I've found, Googling health queries does not end well for me. Samsung showed off various ways AI will be used within its health app during the Unpacked keynote.
Much like the NHS phone operator who inadvertently caused me to panic about dying, an AI-based health assistant can only provide answers based on the limited information it has about me. Asking a question about my heart health could bring up a variety of information, just as looking on a health website about why I have a headache would. But much like how a headache can technically be a symptom of cancer, it's also far more likely to be a muscular twinge. Or that I haven't drunk enough water. Or that I need to look away from my screen for a bit. Or that I shouldn't have stayed up until 2 a.m. playing Yakuza: Infinite Wealth. Or a hundred other reasons, all of which are far more likely than the one I've already decided is the culprit. But will an AI give me the context I need to not worry and obsess? Or will it present every potential cause in an attempt at completeness, and instead feed that "what if" worry? And, like how Google's AI Overviews told people to put glue on pizza, will an AI health tool simply scour the internet and serve me a hash of an answer, with inaccurate inferences that could tip my anxiety into full panic-attack territory?
Or perhaps, much like the kind doctor at the hospital that day, who smiled gently at the sobbing man sitting opposite, a man who'd already drafted a goodbye note to his family on his phone in the waiting room, an AI tool might be able to see that data and simply say, "You're fine, Andy. Stop worrying and go to sleep." Maybe one day that'll be the case. Maybe health-tracking tools and AI insights will be able to offer me a much-needed dose of logic and reassurance to counter my anxiety, rather than being the cause of it. But until then, it's not a risk I'm willing to take.
When you roll your future TV out of sight into a little box, thank LG Display. (Well, they had the competitive help of Samsung also going after this, and both of these companies use TRIZ to come up with these systems; too bad other companies aren't savvy in this way, as they too could bring awesome new designs to the world.) The leader in big-screen OLED manufacturing, not satisfied to debut the first 88-inch 8K OLED TV, will show off another world's first at CES: a 65-inch 4K OLED display that's, get this, rollable. Although some concept big-screen TVs shown at past CES shows have been bendy, this is the first one flexible enough to roll up into a tube. LG's images depict it descending into a little box the size of a sound bar, but the company also talks about making the display portable. The secret, as usual, is its paper-thin organic light-emitting diode (OLED) display.
Learn why and how teachers can use generative AI to streamline lesson planning, personalize explanations, and automate retrieval practice—without losing instructional control.
Following a privacy-preserving framework, an open-source platform allows for continuous monitoring of a wide range of smartphone-based signals, including moment-by-moment capture of screenshots, application usage logs, interaction histories and phone sensor readings.
Why consciousness is more likely a property of life than of computation and why creating conscious, or even conscious-seeming AI, is a bad idea. -- Read the full article at: www.noemamag.com (Via Complexity Digest)
Researchers tested whether generative AI could handle complex medical datasets as well as human experts. In some cases, the AI matched or outperformed teams that had spent months building prediction models. By generating usable analytical code from precise prompts, the systems dramatically reduced the time needed to process health data. The findings hint at a future where AI helps scientists move faster from data to discovery.
These AI guidelines for teachers, now with checklists, are part of a series of steps designed to address that gap. The British Council is committed to supporting teachers with the skills, tools and resources they need for managing their classrooms and keeping up to date with key developments. These AI guidelines offer principles to help teachers make responsible choices when using AI in teaching or for continuing professional development. While primarily designed for English language teachers, these guidelines may also be useful for other subject teachers in English-medium education. We hope the guidelines are a valuable resource for teachers, empowering them to take an informed, ethical and context-specific approach to integrating AI into their teaching.
As OpenAI and Anthropic move deeper into healthcare, experts say AI chatbots are becoming the new front door to medicine. This shift is shaking things up for some health tech startups, redefining the patient-provider relationship, and intensifying debates over safety, privacy and accountability.
Anthropic CEO Dario Amodei gave his most honest interview yet. 10 lessons from 68 minutes on human-level AI, why he left OpenAI, what jobs survive, and where the next investment wave begins.
Quest Diagnostics is launching a chatbot tailored for laboratory test results as it seeks to give patients the ability to get a handle on their findings and perform deeper dives on their own health.
Salesforce is building out its library of pre-wired artificial intelligence agents to take on manual, administrative work on behalf of payers, providers and public health organizations.
Very useful PDF download - Includes research findings from 12 studies - The Portrait of a Teacher in the Age of AI - How will the role of a teacher evolve in a world transformed by AI? https://www.ed3global.org/portraitofateacher
The DfE published its updated Generative AI Product Safety Standards on 19 January 2026. They are, in principle, exactly what education needed. Guardrails for AI in schools. Protections for children. A clear statement that developers can't just ship consumer AI into classrooms and hope for the best. And overnight, they've made the AI tools most…
One of the promises of AI is that it can reduce workloads so employees can focus on higher-value and more engaging tasks. But according to new research, AI tools don't reduce work; they consistently intensify it. In the study, employees worked at a faster pace, took on a broader scope of tasks, and extended work into more hours of the day, often without being asked to do so. That may sound like a win, but it's not quite so simple. These changes can be unsustainable, leading to workload creep, cognitive fatigue, burnout, and weakened decision-making. The productivity surge enjoyed at the beginning can give way to lower-quality work, turnover, and other problems. To correct for this, companies need to adopt an "AI practice," a set of norms and standards around AI use that can include intentional pauses, sequencing work, and adding more human grounding.
Quiet quit your TEFL job? Here's how to fall back in love with teaching using Action Research - turn your classroom into a laboratory and rediscover curiosity.
Until a few months ago, for the vast majority of people, "using AI" meant talking to a chatbot in a back-and-forth conversation. But over the past few months, it has become practical to use AI as an agent: you can assign it a task and it carries it out, using tools as appropriate. Because of this change, you have to consider three things when deciding what AI to use: Models, Apps, and Harnesses. - Worth some of your time to get a better understanding of what you are using or should use.
Why you should not use VSM (Value Stream Mapping) to try to find the bottleneck in your factory. VSM is a powerful and widely used tool, and it can improve the speed and efficiency of producing one product. But it is not the right tool for identifying the bottleneck or capacity constraint in a production system. This short video explains why.
I recently encountered a term that stopped me in my tracks - cognitive outsourcing (thanks to Professor Miles Berry for that one!). We're all well-acquainted with cognitive offloading. That's the act of letting a tool take the weight of a task so we don't have to. But "outsourcing"? That feels different. It sounds distinctly middle-class,…
Cognitive offloading is often a total hand-off. Dumping the task and walking away. The responsibility transfers completely. You’ve got the answer, job done, move on.