Internet of Things - Technology focus

Scooped by Richard Platt
November 28, 2022 3:47 PM

Smartphone Addiction linked with Lower Cognitive Abilities, Less Self-Control, and Worse off Psychological Well-Being

Richard Platt's insight:

Published in the International Journal of Environmental Research and Public Health, researchers found that problematic smartphone use is linked with low self-esteem as well as negative cognitive outcomes.

The majority of people who live in industrialized countries have smartphones. The fear of being without one's smartphone is known as "nomophobia" and has become a social problem. Research shows that people who have smartphone addiction tend to report more loneliness and experience self-regulation deficits. Furthermore, people who have smartphone addictions are likely to experience withdrawal symptoms when their smartphone use is restricted. Researchers were interested in investigating the relationship between smartphone usage and behavioral and cognitive self-control deficits.

The researchers argue that their findings show that people with high levels of smartphone addiction display less self-control. Poor self-regulation could have negative consequences for people's daily lives, such as deficiencies in cognitive tasks and slower reaction times.

Results show that participants who had higher levels of smartphone addiction had a higher percentage of noncompliance. Participants with higher levels of smartphone addiction spent more time using their phones in all three phases, even when they were instructed to limit their smartphone use during the experimental phase. Results also show that participants with higher levels of smartphone addiction tended to exhibit worse working memory, visual reaction time, auditory reaction time, ability to inhibit motor response, and behavioral inhibition compared to participants with lower levels of smartphone addiction. A limitation of this study is that some of the original participants left the study when they found out they would have to limit their smartphone use to one hour a day for three consecutive days, so data from people with likely very high levels of smartphone addiction is missing.

The researchers recruited 111 participants ranging in age from 18 to 65; 28% of the participants were college students and 78% were workers. Each participant's phone data was retrieved via the "SocialStatsApp," which provides information about the use of TikTok, Facebook, Instagram, and WhatsApp. The Smartphone Addiction Scale – Short Version (SAS-SV) was used to determine each participant's risk of smartphone addiction and its severity. Participants also responded to items on the short version of the Psychological General Well-Being Index, the Fear of Missing Out Scale, and the Procrastination Scale.
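
To make the kind of analysis described here concrete, below is a minimal, purely illustrative sketch of how addiction scores from a scale like the SAS-SV might be related to reaction-time and usage measures. The column names, file name, and statistical choices are assumptions for illustration, not the authors' actual analysis.

```python
# Illustrative sketch only: relating SAS-SV addiction scores to task outcomes.
# Column names (sas_sv, visual_rt_ms, daily_use_min) and the CSV file are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("participants.csv")  # one row per participant

# Spearman correlation between addiction score and visual reaction time
rho, p = stats.spearmanr(df["sas_sv"], df["visual_rt_ms"])
print(f"SAS-SV vs visual RT: rho={rho:.2f}, p={p:.4f}")

# Compare daily phone use between high- and low-addiction groups (median split)
median = df["sas_sv"].median()
high = df.loc[df["sas_sv"] >= median, "daily_use_min"]
low = df.loc[df["sas_sv"] < median, "daily_use_min"]
u, p = stats.mannwhitneyu(high, low, alternative="greater")
print(f"High vs low addiction daily use: U={u:.0f}, p={p:.4f}")
```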

Scooped by Richard Platt
Today, 8:44 PM

A Short Guide to Fatigue Failure in Machine Design

Richard Platt's insight:

Fatigue failure is a common challenge in machine design. For engineers and designers alike, addressing fatigue failure is key to ensuring the integrity of structures and components throughout the lifecycle. 

1/- An overview of fatigue failure and how it affects mechanical systems.

2/- How FEA software enables engineers to predict fatigue failure points on structural designs, before the manufacturing process.

From the dawn of the Industrial Revolution, humanity has become increasingly reliant on machines in daily life. This frequent use means mechanical structures must contend not only with the erosion of time, but also with the fatigue stemming from repetitive loads placed on them. Defined as the initiation and propagation of cracks due to cyclic loads, fatigue failure can affect even the most well-built contraptions. If left unnoticed for long enough, these cracks can snowball into much larger structural deformities, leading to severe damage in mechanical components that might otherwise seem to be operating well within their limits.

This article provides an overview of what fatigue failure is, explores notable real-world examples and outlines the critical principles that every engineer should understand to help prevent this pervasive issue. It will also look at the relationship between fatigue failure and FEA software.
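
As a concrete illustration of the cyclic-load reasoning above, here is a minimal stress-life sketch using Basquin's relation and Miner's linear damage rule. The material constants and load blocks are placeholder values for illustration, not data from the article.

```python
# Minimal sketch of a stress-life (S-N) fatigue check using Basquin's relation and
# Miner's linear damage rule. Material constants and load blocks are placeholders.
def cycles_to_failure(stress_amplitude_mpa, sigma_f=900.0, b=-0.1):
    """Basquin: S = sigma_f * (2N)^b  ->  N = 0.5 * (S / sigma_f)**(1/b)."""
    return 0.5 * (stress_amplitude_mpa / sigma_f) ** (1.0 / b)

def miner_damage(load_blocks):
    """Sum n_i / N_i over load blocks; damage >= 1 predicts fatigue failure."""
    return sum(n / cycles_to_failure(s) for s, n in load_blocks)

# (stress amplitude in MPa, applied cycles) for a hypothetical duty cycle
blocks = [(450.0, 2e4), (300.0, 5e5), (200.0, 2e6)]
d = miner_damage(blocks)
print(f"Cumulative damage: {d:.2f} -> {'failure expected' if d >= 1 else 'survives'}")
```

FEA packages automate this same bookkeeping over every element of a meshed part, which is how fatigue hot spots can be flagged on a structural design before anything is manufactured.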

-- A Notable Example of Fatigue Failure in the Aerospace Industry.  --  Before segueing into a discussion of fatigue failure, a good exercise would be to consider the repercussions of mechanical failure. A notable real-world example to aid this is the 1954 crash of the De Havilland Comet. As the world’s first commercial jetliner, the Comet was the crown jewel of Britain’s aerospace industry, representing a symbol of British aviation prowess…until a string of accidents changed its course. Starting with BOAC Flight 781, two fatal Comet aircraft disasters were the subject of a multi-year investigation, with authorities concluding that metal fatigue due to design defects led to explosive cabin decompression mid-flight. In particular, the square design of forward-facing windows created opportunities for stress accumulation at the corners, the effect of which was exacerbated by riveted window supports (instead of glued).  Together, these design decisions triggered fatigue cracks following cyclic cabin pressurization, eventually leading to propagation of cracks and subsequent violent decompression of the aircraft.

Scooped by Richard Platt
February 10, 8:09 AM

8 Months into the Brave New World of Windows on ARM, and This is the State of Play for PC gaming outside of the x86 Arena

Qualcomm was serious in its gaming ambitions for the Snapdragon X Elite, so what does that look like half a year down the line?
Richard Platt's insight:

For years, we PC gamers have had it easy. No, really. Not quite as easy as console users, perhaps, but we could be reasonably sure that any software we tried to run was written for the processor architecture inside our box. And that's because there was only one: x86, as championed by Intel and AMD. Companies like Cyrix, VIA and even Fujitsu have produced x86 chips too, but when it comes down to home-use gaming PCs in the 21st century, there have only been two horses in this race. Then Qualcomm came along and ruined everything. Or possibly changed everything, potentially for the better. The ARM-based Snapdragon chips were released in June last year and have sparked a debate over whether x86 is dead, but there's a more pressing question fizzing in the brains of those for whom a controller is a more pressing concern than a spreadsheet: are Snapdragon chips any good for gaming?

Compatibility is pretty robust, but not all-pervasive. Still, the performance is relatively impressive when it does work, and impressively quiet, too. And that compatibility is only going to improve, as are the numbers of native apps, so we still think the future for ARM-based PCs is bright, even if right now it's a tough call for a PC gamer. But if other players get in the game alongside Qualcomm, things could start to look very different. I mean, Nvidia's surely going to release its own ARM-y CPU at some point, right? So all we need now are ARM-native Nvidia and AMD drivers for graphics cards, a desktop platform from Qualcomm and/or Nvidia, and PC gaming on ARM can really begin in earnest.

Scooped by Richard Platt
February 8, 6:55 PM

Soldiers Give the Army's New Rifle Optic Low Ratings

Richard Platt's insight:

The Army has officially fielded its brand-new rifles to soldiers, but the service is apparently still working out the kinks with the systems' advanced optic, according to a new assessment from the Defense Department's top weapons tester. Soldiers graded the XM157 Fire Control smart scope for the Army's Next Generation Squad Weapon rifles as "below average/failing." The fiscal 2024 report on the Army's Next-Generation Squad Weapon program from the Pentagon's Director, Operational Test and Evaluation, published last week, indicates that the XM157 Fire Control smart scope intended to augment the program's XM7 Next Generation Rifle and XM250 Next Generation Automatic Rifle received negative ratings from soldiers during testing last year. While the report doesn't contain specific feedback on the XM7 or XM250 rifles, which were developed by gunmaker Sig Sauer to replace the M4 carbine and M249 Squad Automatic Weapon in the Army's arsenal, it clearly states that soldiers "assessed the usability of the XM157 as below average/failing." "The XM7 with mounted XM157 demonstrated a low probability of completing one 72-hour wartime mission without incurring a critical failure," the Operational Test and Evaluation report adds. However, the report wasn't entirely negative: the assessment concluded that the specialized 6.8mm ammo for the XM7 and XM250 does, in fact, "provide increased lethality" over the legacy 5.56mm M855A1 Enhanced Performance Round used in the M4 and M249.

A 1-8x30 variable magnification direct view optic built by Vortex Optics subsidiary Sheltered Wings, the XM157 incorporates advanced technologies such as a laser rangefinder, aiming lasers, environmental sensors, a ballistic solver, a compass and a digital display overlay, all of which are designed to "increase the probability of hit and decrease the time to engage" with a computerized assist, according to the Army's fiscal 2025 budget request. The XM157 also features wireless connectivity that will purportedly allow it to integrate with heads-up displays like the Army's current Enhanced Night Vision Goggle-Binocular, or ENVG-B, and future Integrated Visual Augmentation System, or IVAS, do-it-all goggles, allowing soldiers to survey the battlefield from cover using a live video feed from their weapon optic. The Army plans on eventually fielding the optic alongside the XM7 and XM250 to close combat formations and security force assistance brigades, replacing the M68 Close Combat Optic and Advanced Combat Optical Gunsight.

The Army selected Vortex Optics and Sheltered Wings back in January 2022 to produce as many as 250,000 XM157 systems for around $2.7 billion over a decade. The service has so far spent roughly $584.64 million on 50,161 XM157 systems through fiscal 2025, according to budget documents, with plans on procuring a total of 124,749 of the optics in the coming years.

The dour XM157 assessment emerged as part of a classified combined operational demonstration and limited lethality assessment report in May, according to the DOT&E assessment. Ironically, that report was apparently authored shortly after the Army released video of a noncommissioned officer with the 101st Airborne Division praising the advanced optic following a month of testing -- praise the NCO himself later claimed was the "one nice thing" he had to say about the XM157 after 10 minutes critiquing the system.

Despite the documented issues detailed in the DOT&E report, the Army is still plowing ahead with the system's development. Indeed, the service released a sources sought notice in late January for "novel technologies or ongoing research that would be beneficial for the XM157 system as a module and/or software that provides enhanced capability."

Scooped by Richard Platt
February 8, 6:01 PM

Canada Detects China-linked Campaign Targeting Prime Minister Candidate Freeland

Richard Platt's insight:

A Canadian task force that tracks suspected foreign interference said on Friday it had detected "coordinated and malicious activity" originating from China, targeting Chrystia Freeland, a leadership candidate from the country's Liberal Party who is in the running to replace Justin Trudeau as prime minister, come March 9.

The activity has allegedly been traced to a WeChat account linked to the Chinese government, Global Affairs Canada's Rapid Response Mechanism said in a press release. The same probe found that China viewed Canada as a high-priority target and was the most active foreign party targeting all levels of government. The department is mandated to monitor the digital information ecosystem under the Security and Intelligence Threats to Elections (SITE) Task Force. In a post on social media platform X, Freeland said she will "not be intimidated by Chinese foreign interference." "Having spent years confronting authoritarian regimes, I know firsthand the importance of defending our freedoms. Canada's democracy is strong. My thanks to our national security agencies for protecting it." Formerly Canada's finance minister, Freeland abruptly resigned in December, forcing Trudeau to announce his resignation as prime minister and Liberal Party leader.

"The launch of this information operation was traced to WeChat's most popular news account -- an anonymous blog that has been previously linked by experts at the China Digital Times to the People's Republic of China," the release said. "RRM Canada identified over 30 WeChat news accounts taking part in the campaign. The campaign received very high levels of engagement and views." China's embassy in Ottawa did not immediately respond to the allegations, but Beijing has repeatedly denied attempting to interfere in Canadian affairs. The department had already briefed the Liberal Party and members of Freeland's campaign team.

This is not the first time: SITE's findings add to previous allegations Ottawa has made against China regarding election meddling. Just in January, an official investigation alleged that China had tried to meddle in previous Canadian elections, but that those attempts did not affect the outcome.

Scooped by Richard Platt
February 8, 11:27 AM

DOGE Is Coming for Your Social Security, States Prepare to Sue

As Musk's group heads to more agencies, more legal challenges are cropping up.
Richard Platt's insight:

Elon Musk’s Department of Government Efficiency continues to tear through the federal government like a virus. DOGE has moved from one bureaucratic organ to the next, stopping at each juncture to fire people, seize data, and generally cause mayhem and distress. Now, as various legal challenges begin to pile up, Musk’s pro-corporate, anti-government organization is turning to two agencies that directly impact millions of Americans: the Consumer Financial Protection Bureau and the Social Security Administration. On Thursday, an anonymous source communicated to Semafor that DOGE would soon be training its sights on the Social Security Administration. At least one DOGE staffer is preparing to “work with the agency,” the outlet reported. While there’s little information available as to what Musk’s group plans to do to America’s popular public benefits program, DOGE’s whole modus operandi has been to take a butcher’s knife to the agencies it “works” with, so it’s difficult to imagine that the changes will be helpful to the average American.  A recent press release stated: “The unelected Musk recently announced plans for a new payments platform run jointly by Visa and “X” (formerly Twitter). Now, he’s moved his power grab to the CFPB, in a clear attempt to attack union workers and defang the only agency that checks the greed of payment providers, as well as auto lenders like Tesla.”  CNN also reported on Thursday that one of DOGE’s minions—a “23-year-old former SpaceX intern”—had shown up at the Department of Energy, where—against the advice of the agency’s general counsel—he was given access to the department’s IT systems. God only knows who thought that was a good idea.  DOGE is now weathering multiple lawsuits and, on Thursday, it was announced that attorneys general for over a dozen states were readying a collective litigation effort against the organization over its access to U.S. Treasury data. An announcement related to the suit notes that DOGE “staffers’ access to payment systems containing Americans’ private information, bank account information and other sensitive data” is “unlawful.” The states involved—including New York, California, Nevada, Arizona, Illinois, Maine, Maryland, Minnesota, and others—say they want to “ensure those responsible are held accountable.”

“As the richest man in the world, Elon Musk is not used to being told ‘no,’ but in our country, no one is above the law,” the statement reads. “This level of access for unauthorized individuals is unlawful, unprecedented and unacceptable.”

Scooped by Richard Platt
February 7, 12:33 PM

ex-Intel CEO Gelsinger Invests In British AI Chip Startup Fractile

Richard Platt's insight:

“With the advent of reasoning models, which require memory-bound generation of thousands of output tokens, the limitations of existing hardware roadmaps have compounded. To achieve our aspirations for AI, we will need radically faster, cheaper and much lower power inference. I’m pleased to share that I’ve recently invested in Fractile, a U.K.-founded AI hardware company who are pursuing a path that’s radical enough to offer such a leap,” Gelsinger said.

Fractile was founded in 2022 by CEO Walter Goodwin. The company raised a $15 million seed round in summer 2024 and received a grant of $6.52 million from the U.K. government’s ARIA program in October. It will use the funding to develop its data center LLM inference accelerator for models with billions or hundreds of billions of parameters. Fractile’s projections have its accelerator running Llama2-70B 100× faster than Nvidia H100s (decode tokens/second) at 1/10th of the system cost. “That’s what led to me starting Fractile—I started to see that for inference, because of the memory bottleneck, you spend a lot of time not at the top of your roofline,” Goodwin told EE Times in an earlier interview.

Fractile’s AI accelerator concept uses in-memory compute. While Goodwin declined to say whether there are any analog compute elements to the design, he said it will use Fractile’s own CMOS SRAM cell design. Fractile’s Technical Design Authority, Tony Stansfield, is a 10-year veteran of SureCore, a British SRAM designer. “It’s transistor-level design, for sure,” Goodwin said. “This is not a fundamentally novel approach to memory, we’re using relatively standard, albeit somewhat modified memory cells, and doing a custom circuit layout and design for that. That’s part of how we’re driving up density and TOPS/W.”

“I was excited by the scaling hypothesis and the idea that deep learning would [change] from thousands of companies training thousands of different models to tackle particular problems; we’d all just say thank you to whoever spent the most money to train something on a web-scale dataset and then find ways to leverage that same backbone architecture to power different things,” he said.

In-memory compute as a technique to provide fast, high throughput matrix-vector multiplication is well known, and this increases in importance as we move to foundation models, Goodwin said.

While in-memory compute offers modest advantages for CNN inference, Goodwin argues, CNN workloads often require a mixture of matmul and other operations, and they have smaller matrices and kernels. In-memory accelerators keep weights stationary in memory so they do not need to keep transferring weights between processor and memory, but moving activations around the chip is still a relatively large part of the workload, so the performance advantages in-memory compute brings for CNNs are relatively modest.

For LLMs, there are many, many more weights than activations, and activations are smaller—the nature of the workload amplifies the advantages in-memory compute offers.  

“One of the hallmarks of multi-billion-parameter models is the matrix multiply, in particular the very wide matrix,” Goodwin said. “Because activations come out of the sides of those matrices, they’re dramatically smaller, one ten-thousandth as many activations as weights for inference. That’s a change in design point in terms of how far you can push the advantage from keeping matrices or weights stationary in memory.”
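
A rough NumPy sketch makes that ratio tangible. The layer widths below are hypothetical, roughly Llama-70B-like, and are not Fractile's figures.

```python
# Rough sketch of a single decode step through one wide transformer projection.
# During decode each token is a vector, so the op is matrix-vector, and the
# stationary weights vastly outnumber the activations that move per step.
import numpy as np

d_model, d_ff = 8192, 28672          # hypothetical Llama-70B-like layer widths
W = np.random.randn(d_ff, d_model).astype(np.float16)   # stationary weights
x = np.random.randn(d_model).astype(np.float16)         # one token's activations

y = W @ x                             # decode step: matrix-vector multiply

print(f"weights: {W.size:,} values, activations in+out: {x.size + y.size:,}")
print(f"weights per activation: {W.size / (x.size + y.size):,.0f}")
```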

While in-memory compute is well suited to LLMs, many existing in-memory compute architectures that were built for the CNN era are also at a disadvantage because unlike CNNs, LLMs are characterized by variable length inputs and outputs, Goodwin added.

“[The existing concept] is strained by LLMs even for a single user, where there are two distinct stages and those stages are of uncertain duration,” he said. “If your compiler paradigm expects a static list of what needs to be done in what order, and has compiled it to flow through the chip in a certain way, [and calculated when] things are going to trigger, you’re inherently going to be leaving some performance on the table because you’ll have to be padding things to fit that sequence length, and so on.”  

Existing architectures are built around matrix-matrix multiplication for better data reuse or because there is a systolic array of a particular size. For workloads that switch between prompt processing (long sequences of data) and the decode stage (one word at a time) matrix-vector multiplication is more flexible and, therefore, a better fit, Goodwin added, noting that flexibility is a key part of Fractile’s architecture.
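
A small sketch of the two stages, with hypothetical shapes, shows why decode naturally reduces to matrix-vector work while prompt processing does not.

```python
# Sketch of why prompt processing and decode stress hardware differently.
# Prefill pushes a whole prompt through at once (matrix-matrix), while decode
# handles one new token per step (matrix-vector); shapes are hypothetical.
import numpy as np

d_model, d_ff, prompt_len = 4096, 14336, 512
W = np.random.randn(d_ff, d_model).astype(np.float32)

# Prefill: all prompt tokens at once -> large matrix-matrix multiply, good data reuse
prompt_acts = np.random.randn(prompt_len, d_model).astype(np.float32)
prefill_out = prompt_acts @ W.T           # shape (512, 14336)

# Decode: one token per step -> a stream of matrix-vector multiplies
token = np.random.randn(d_model).astype(np.float32)
decode_out = W @ token                    # shape (14336,)

print("prefill output:", prefill_out.shape, "decode output:", decode_out.shape)
```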

“This sounds like an LLM-specific concept, but one of the things that is really sticky about AI is, even five years into the future, we’re still going to be operating on data as sequences, we’ll still be tokenizing everything,” he said. “In-memory compute, by not having the memory access bottleneck, allows efficient matrix-vector multiply, which allows you to create systems that are much more effective at sequence processing.”

With Goodwin’s background in robotics, can he see a need for efficient LLM inference at the edge, in robots in particular? While the company is open to talking with teams who are defining frontier models in every sector, Fractile’s solution may not be suitable for current edge applications, he said.

“When you can serve huge throughput, it makes sense to look for those places where there is throughput demand, and amortize your hardware and get those cost savings,” he said. “Right now, that’s quite clearly a data center grade solution. But the edge actually does have that all-you-can-eat appetite for token processing as well, it’s just that we’re only just starting to build those systems.”

The dawn of agentic AI in robotics could be the turning point, since it would require multiple times more throughput with fast latency.

“For future robot platforms, I think we should be rethinking what it means to have inference running there,” he said. “It’s not going to be something processing one image every 30 milliseconds. It might very well look like a multi-user data center grade inference server, with perhaps 32 separate threads running at thousands of tokens per second to come up with the next action that you take 20 milliseconds later.”

Scooped by Richard Platt
February 7, 5:26 AM

Why Being Intelligent Is Hated by Society

#philosophy
#psychology
#psychologyfacts
#stoicism

Richard Platt's insight:

Have you sat down and wondered why being smart sometimes feels like a curse? Why do the most intelligent people often end up alone, while those of average minds seem to thrive socially?  Arthur Schopenhauer, one of history's most brutally honest philosophers, cracked this riddle nearly 200 years ago.

Scooped by Richard Platt
February 6, 5:19 PM

Former Google CEO Eric Schmidt talks Artificial Intelligence

Richard Platt's insight:

Neither the public nor the tech giants pushing artificial intelligence understand its long-term implications, warns former Google CEO Eric Schmidt. Schmidt is thinking about how AI interacts with humans, and how it may reshape democracy. Or replace it. He coauthored Genesis with former Microsoft executive Craig Mundie and the late Henry Kissinger.

Schmidt talked about the book on NPR's Morning Edition. Here are six key takeaways from the discussion:

1/ AI will be available to almost anyone.  -- China's recent release of the new, less expensive, large language model DeepSeek, which startled the AI industry, creates "a problem of proliferation," Schmidt says. Almost anyone can enjoy the services of "a great philosopher and a great polymath and a great Leonardo da Vinci"—but that includes those of us who "are really, really evil."

2/ AI can be a tool for demagogues. --  These systems can become "the great addiction machines and the great persuaders," which a political leader could use to "promise everything to everyone," with messages "that are targeted to each individual individually."

3/ People are interacting with tech they don't fully grasp. -- The coauthors considered the way that humans may respond to the increasing power of computers. Two scenarios particularly concern them: People may begin to worship this new intelligence and "develop it into a religion," or else "they'll fight a war against it." 

4/ Tech leaders may not grasp the implications either. -- "The reason they don't get it right," Schmidt says, is that "these are really social and moral questions. The companies are doing what companies do. They're trying to maximize their revenue." What's missing, Schmidt says, is a social consensus "of what's right and what's wrong."

5/ People might allow themselves to be governed by AI. The book explores the ancient idea of a "philosopher king," a strong and absolute ruler, an ideal much discussed in history and rarely if ever achieved. What if the machine became that king?

6/ "Artificial intelligence should ultimately be able to follow reasoning better than humans. So let's imagine you take an AI system and you give it a constitution," Schmidt says, adding that the thought experiment runs up against a problem: Who gets to write the constitution? "Human societies have compromises. We allow for deviance. We allow for certain mistakes, but not others. If you wrote the perfect computer to tell people what to do, the people would revolt."

Having conducted this thought experiment, Schmidt says he prefers democracy. The recent presidential inauguration showed a concentration of power. "To some degree I was proud of my industry and impressed with my friends," Schmidt says of the tech executives who shared the inaugural stage with President Trump on Jan. 20. "On the other hand, I was concerned that there is a line that it's important business not cross. I would like our country to be run by our political leadership." The book Schmidt co-authored shows that the businesses represented on stage have enormous implications for politics.

Scooped by Richard Platt
February 6, 1:23 PM

Google launches Gemini 2.0 Pro, Flash-Lite and connects reasoning model Flash Thinking to YouTube, Maps and Search

Google has released a whole new range of AI-powered research and interactions that simply can't be matched by DeepSeek or OpenAI.
Richard Platt's insight:

Google’s Gemini series of AI large language models (LLMs) started off rough nearly a year ago with some embarrassing incidents of image generation gone awry, but it has steadily improved since then, and the company appears to be intent on making its second-generation effort — Gemini 2.0 — the biggest and best yet for consumers and enterprises. Today, the company announced the general release of Gemini 2.0 Flash, introduced Gemini 2.0 Flash-Lite, and rolled out an experimental version of Gemini 2.0 Pro. These models, designed to support developers and businesses, are now accessible through Google AI Studio and Vertex AI, with Flash-Lite in public preview and Pro available for early testing. “All of these models will feature multimodal input with text output on release, with more modalities ready for general availability in the coming months,” wrote Koray Kavukcuoglu, CTO of Google DeepMind. Neither DeepSeek-R1 nor OpenAI’s new o3-mini model can accept multimodal inputs — that is, images and file uploads or attachments. Multimodal input is thus a capability Google is bringing to the table even as competitors such as DeepSeek and OpenAI continue to launch powerful rivals.
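
For developers, access goes through the Gemini API. The hedged sketch below shows what a multimodal call might look like with the google-generativeai Python SDK; the model identifier string, the sample file name, and the prompt are assumptions for illustration rather than details from the article.

```python
# Hedged sketch: calling Gemini 2.0 Flash with a multimodal prompt via the
# google-generativeai Python SDK. The model name string and file name below are
# assumptions; check Google AI Studio / Vertex AI docs for exact identifiers.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-2.0-flash")

# Text plus an image in one request (multimodal input, text output)
image = genai.upload_file("circuit_board.jpg")
response = model.generate_content(
    ["Describe any visible defects on this circuit board.", image]
)
print(response.text)
```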

While R1 can accept image uploads on its website and mobile app chat, the model performs optical character recognition (OCR), a more than 60-year-old technology, to extract only the text from these uploads — not actually understanding or analyzing any of the other features contained therein. However, both are a new class of “reasoning” models that deliberately take more time to think through answers and reflect on “chains-of-thought” and the correctness of their responses. That’s opposed to typical LLMs like the Gemini 2.0 Pro series, so the comparison between Gemini 2.0, DeepSeek-R1 and OpenAI o3 is a bit of an apples-to-oranges one.

But there was some news on the reasoning front today from Google, too: Google CEO Sundar Pichai took to the social network X to declare that the Google Gemini mobile app for iOS and Android has been updated with Google’s own rival reasoning model Gemini 2.0 Flash Thinking. The model can be connected to Google Maps, YouTube and Google Search, allowing for a whole new range of AI-powered research and interactions that simply can’t be matched by upstarts without such services like DeepSeek and OpenAI.

Scooped by Richard Platt
February 5, 2:50 PM

Universities Augment Engineering Curricula To Boost Employability

Richard Platt's insight:

Increasing numbers of universities are offering semiconductor courses in their engineering programs, and also in math, physics, and business degrees.  Companies need engineers across all disciplines and universities are stepping up to deliver them; schools reap benefits, too. Most universities now offer a broad foundation so students can pivot to other industries during cyclical downturns, or when technology and science create entirely new and potentially lucrative opportunities, such as generative AI, advanced packaging, and at some point in the future, the rollout of quantum computing. Schools see this as a big opportunity, as well, and some are going all-in to prepare students for a career in microelectronics.

Scooped by Richard Platt
February 5, 2:44 PM

Rethinking innovation: a smarter way to solve problems

Richard Platt's insight:

Innovation often seems like a game of chance, but TRIZ—the Theory of Inventive Problem-Solving—offers a structured way to generate breakthrough ideas. Used by leading companies worldwide, TRIZ replaces trial-and-error with systematic methods to tackle complex challenges.  Unlike traditional brainstorming, it identifies patterns in innovation to find reliable, efficient solutions. Ivan Gekht, CEO of Gehtsoft, breaks it all down, outlining how TRIZ helps businesses streamline problem-solving and drive sustainable innovation. From engineering to business strategy, TRIZ is shaping the future. But how can companies integrate this methodology? The answer could redefine the way we approach innovation.
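
For readers unfamiliar with the mechanics, the sketch below is a toy illustration of the TRIZ workflow: state a contradiction between two engineering parameters, then look up candidate inventive principles. The matrix entries here are placeholders, not the published 39x39 contradiction matrix.

```python
# Toy sketch of the TRIZ workflow: express a problem as a contradiction between
# an improving and a worsening engineering parameter, then look up candidate
# inventive principles. Matrix entries below are illustrative placeholders only.
CONTRADICTION_MATRIX = {
    # (parameter to improve, parameter that worsens): suggested principle numbers
    ("weight of moving object", "strength"): [1, 8, 15, 40],
    ("speed", "energy use"): [2, 14, 19, 35],
}

PRINCIPLES = {
    1: "Segmentation", 2: "Taking out", 8: "Anti-weight", 14: "Spheroidality",
    15: "Dynamics", 19: "Periodic action", 35: "Parameter changes", 40: "Composite materials",
}

def suggest(improve: str, worsens: str) -> list[str]:
    """Return the named inventive principles for a given contradiction, if known."""
    return [PRINCIPLES[p] for p in CONTRADICTION_MATRIX.get((improve, worsens), [])]

print(suggest("weight of moving object", "strength"))
```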

Rescooped by Richard Platt from Learning & Technology News
February 3, 11:03 AM

AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking

Via Nik Peachey
Richard Platt's insight:

The proliferation of artificial intelligence (AI) tools has transformed numerous aspects of daily life, yet its impact on critical thinking remains underexplored. This study investigates the relationship between AI tool usage and critical thinking skills, focusing on cognitive offloading as a mediating factor.

Nik Peachey's curator insight, February 3, 6:08 AM

This looks like an interesting read - On my list when I find some time https://www.mdpi.com/2075-4698/15/1/6

Scooped by Richard Platt
February 3, 9:28 AM

Neuromorphic Semiconductor Chip Learns and Corrects Itself

Image: the KAIST research team behind the newly developed processor (Professor Young-Gyu Yoon, integrated master's and doctoral students Seungjae Han and Hakcheon Jeong, and Professor Shinhyun Choi).
Richard Platt's insight:

Existing computer systems have separate data processing and storage devices, making them inefficient for processing complex data like AI. A Korea Advanced Institute of Science and Technology (KAIST) research team has developed a memristor-based integrated system that is similar to the way our brain processes information.

It is now ready for use in various devices, including smart security cameras, allowing them to recognize suspicious activity immediately without having to rely on remote cloud servers, and medical devices that can help analyze health data in real time. What is special about this computing chip is that it can learn and correct errors that occur due to non-ideal characteristics that were difficult to solve in existing neuromorphic devices. For example, when processing a video stream, the chip learns to automatically separate a moving object from the background, and it becomes better at this task over time. This self-learning ability has been proven by achieving accuracy comparable to ideal computer simulations in real-time image processing. The research team's main achievement is that it has completed a system that is both reliable and practical, beyond the development of brain-like components. It can adapt to immediate environmental changes and presents an innovative solution that overcomes the limitations of existing technology.

At the heart of this innovation is a next-generation semiconductor device called a memristor. The variable resistance characteristics of this device can replace the role of synapses in neural networks, and by utilizing it, data storage and computation can be performed simultaneously, just like our brain cells do. The memristor is highly reliable and can precisely control resistance changes. This efficient system excludes complex compensation processes through self-learning.

The study is significant in that it experimentally verified the commercialization possibility of a next-generation neuromorphic semiconductor-based integrated system that supports real-time learning and inference. This technology will revolutionize the way artificial intelligence is used in everyday devices, allowing AI tasks to be processed locally without relying on remote cloud servers, making them faster, more privacy-protected, and more energy-efficient. "This system is like a smart workspace where everything is within arm's reach instead of having to go back and forth between desks and file cabinets," explained KAIST researchers Hakcheon Jeong and Seungjae Han, who led the development of this technology. "This is similar to the way our brain processes information, where everything is processed efficiently at once at one spot."
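
The core compute-in-memory idea can be sketched in a few lines: conductances stored in the array act as weights, applied voltages act as inputs, and the summed currents give a matrix-vector product (Ohm's law plus Kirchhoff's current law). This is only a conceptual model of a memristor crossbar, not the KAIST team's actual device or learning scheme.

```python
# Conceptual sketch of a memristor crossbar doing compute-in-memory:
# stored conductances = weights, applied voltages = inputs, and the currents
# summed on each output line form a matrix-vector product in one step.
import numpy as np

rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(4, 8))   # conductance of each memristor (siemens)
v = rng.uniform(0.0, 0.2, size=8)          # input voltages on the 8 input lines (volts)

i_out = G @ v                               # currents summed on the 4 output lines (amps)

# Device non-idealities (drift, variation) perturb G; on-chip learning nudges the
# stored conductances to compensate, which is the "self-correcting" behavior described.
G_drifted = G * rng.normal(1.0, 0.05, size=G.shape)
error = G_drifted @ v - i_out
print("output currents:", i_out, "\nerror from drift:", error)
```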

Scooped by Richard Platt
Today, 9:47 AM

Microsoft Study Finds AI Makes Human Cognition “Atrophied and Unprepared”

Researchers find that the more people use AI at their job, the less critical thinking they use.
Richard Platt's insight:

A new paper from researchers at Microsoft and Carnegie Mellon University finds that as humans increasingly rely on generative AI in their work, they use less critical thinking, which can “result in the deterioration of cognitive faculties that ought to be preserved.”

“[A] key irony of automation is that by mechanising routine tasks and leaving exception-handling to the human user, you deprive the user of the routine opportunities to practice their judgement and strengthen their cognitive musculature, leaving them atrophied and unprepared when the exceptions do arise,” the researchers wrote.

Overall, these workers self-reported that the more confidence they had in AI doing the task, the less critical thinking they reported enacting. When users had less confidence in the AI’s output, they used more critical thinking and had more confidence in their ability to evaluate and improve the quality of the AI’s output and mitigate the consequences of AI responses. “The data shows a shift in cognitive effort as knowledge workers increasingly move from task execution to oversight when using GenAI,” the researchers wrote. “Surprisingly, while AI can improve efficiency, it may also reduce critical engagement, particularly in routine or lower-stakes tasks in which users simply rely on AI, raising concerns about long-term reliance and diminished independent problem-solving.” The researchers also found that “users with access to GenAI tools produce a less diverse set of outcomes for the same task, compared to those without. This tendency for convergence reflects a lack of personal, contextualised, critical and reflective judgement of AI output and thus can be interpreted as a deterioration of critical thinking.”

The researchers also noted some unsurprising conditions that make workers use more or less critical thinking and pay attention to the quality of the AI outputs. For example, workers who felt crunched for time used less critical thinking, while workers in “high-stakes scenarios and workplaces” who were worried about harm caused by faulty outputs used more critical thinking.

The researchers recruited 319 knowledge workers for the study, who self-reported 936 first-hand examples of using generative AI in their job, and asked them to complete a survey about how they use generative AI (including what tools and prompts), how confident they are in the generative AI tools' ability to do the specific work task, how confident they are in evaluating the AI's output, and how confident they are in their abilities in completing the same work task without the AI tool. Some tasks cited in the paper include a teacher using the AI image generator DALL-E to create images for a presentation about hand washing at school, a commodities trader using ChatGPT to "generate recommendations for new resources and strategies to explore to hone my trading skills," and a nurse who "verified a ChatGPT-generated educational pamphlet for newly diagnosed diabetic patients."

So, does this mean AI is making us dumb, is inherently bad, and should be abolished to save humanity's collective intelligence from being atrophied? That’s an understandable response to evidence suggesting that AI tools are reducing critical thinking among nurses, teachers, and commodity traders, but the researchers’ perspective is not that simple. As they correctly point out, humanity has a long history of “offloading” cognitive tasks to new technologies as they emerge and that people are always worried these technologies will destroy human intelligence.

“Generative AI tools [...] are the latest in a long line of technologies that raise questions about their impact on the quality of human thought, a line that includes writing (objected to by Socrates), printing (objected to by Trithemius), calculators (objected to by teachers of arithmetic), and the Internet,” the researchers wrote. “Such consternation is not unfounded. Used improperly, technologies can and do result in the deterioration of cognitive faculties that ought to be preserved.”

I, for example, am old enough to remember a time when I memorized the phone numbers of many friends and family members. The only number I remember now that all those contacts are saved on my phone is my own. I also remember when I first moved to San Francisco for college I bought a little pocket map and eventually learned to navigate the city and which Muni busses to take where. There are very few places I can get to today without Google Maps.

I don’t feel particularly dumb for outsourcing my brain’s phonebook to a digital contacts list, but the same kind of outsourcing could be dangerous in a critical job where someone is overrelying on AI tools, stops using critical thinking, and incorporates bad outputs into their work. As one of the biggest tech companies in the world, and the biggest investor in OpenAI, Microsoft is pot committed to the rapid development of generative AI tools, so unsurprisingly the researchers here have some thoughts about how to develop AI tools without making us all incredibly dumb. To avoid that situation, the researchers suggest developing AI tools with this problem in mind and design them so they motivate users to use critical thinking.

“GenAI tools could incorporate features that facilitate user learning, such as providing explanations of AI reasoning, suggesting areas for user refinement, or offering guided critiques,” the researchers wrote. “The tool could help develop specific critical thinking skills, such as analysing arguments, or cross-referencing facts against authoritative sources. This would align with the motivation enhancing approach of positioning AI as a partner in skill development.”

Scooped by Richard Platt
February 10, 8:01 AM

Taiwan's Legacy Chip Industry Contemplates Future as China Eats into Share

Richard Platt's insight:

When Taiwan's Powerchip Technology entered a deal with the eastern Chinese city of Hefei in 2015 to set up a new chip foundry, it hoped the move would help provide better access to the promising Chinese market. Nine years later, however, that Chinese foundry, Nexchip, has become one of its biggest rivals in the legacy chip space, leveraging steep discounts after Beijing's localization call forced Powerchip to give up the once-lucrative business of making integrated circuits for Chinese flat panels. Nexchip is among Chinese foundries quickly winning market share in the crucial $56.3B industry of so-called legacy or mature node chips made on 28-nm technology and larger, a trend that prompted the Biden administration to initiate an investigation and is alarming Taiwanese industry.  These Chinese foundries, which include Hua Hong and SMIC, are threatening the long-held dominance of Powerchip, UMC, and Vanguard International in the market for chips used in cars and display panels by slashing prices and embarking on aggressive capacity expansion plans.

Taiwanese foundries are then forced to retreat or pursue more advanced and specialty processes, executives in Taiwan said.

“Mature-node foundries like us must transform; otherwise, Chinese price cuts will mess us up even further,” said Frank Huang, chairman of Powerchip Investment Holding and of its listed unit Powerchip Manufacturing Semiconductor Corporation, into which the company was reorganized in 2019. UMC told Reuters that the expansion of capacity globally had created "severe challenges" for the industry and that it was working with Intel to develop more advanced, smaller chips and diversify beyond legacy chipmaking.

Scooped by Richard Platt
February 8, 6:48 PM

Elon Musk Aide Is Now Working at the VA and Accessing Its Computer Systems

A VA spokesman said the Department of Government Efficiency employee was given access to find waste and improve operations, but lawmakers and advocates have raised alarms.
Richard Platt's insight:

A representative from Elon Musk's Department of Government Efficiency now works at the Department of Veterans Affairs, where they have been given access to contracting systems as well as information on VA operations and information technology systems.

Over the weekend, DOGE representatives and billionaire Elon Musk accessed Treasury Department systems that are used to make U.S. government payments and that hold the Social Security numbers of nearly every American, including military personnel and veterans. Wired reported that DOGE is largely staffed by 19- to 24-year-olds with previous connections to Musk's companies, and by Musk's own description, the entity aims to stop what its staff considers wasteful spending. Rumors began circulating that DOGE representatives visited the VA on Tuesday with an intent to mine data on disability compensation and benefits.

In response to questions from Military.com over DOGE activities at the VA, a VA spokesman said the department has a single DOGE employee who is "specifically focused on identifying wasteful contracts, improving VA operations and strengthening management of the department's IT projects." "The DOGE employee will be solely focused on improving VA performance and efficiency and will not have access to veterans' or VA beneficiaries' data," said VA Press Secretary Pete Kasperowicz.

The unfettered access to sensitive personal data has alarmed privacy advocates and cybersecurity experts who question the legality of allowing non-departmental personnel entry to financial information, bank accounts and more.  Sen. Richard Blumenthal of Connecticut, the ranking Democrat on the Senate Veterans Affairs Committee, said Tuesday that Musk's access to Treasury Department databases gives him access to information that includes addresses, bank accounts and disability benefits.  "Veterans risked their lives to defend this country, and they deserve more than to have unaccountable billionaires playing with the benefits they earned and rely on," Blumenthal wrote in a statement on Monday.  Members of Congress on Tuesday began raising concerns that Musk or his representatives at DOGE visited the VA with the intent to access all data from the VA to discern disability ratings and payments to veterans, diagnostic codes for health conditions and more. The Veterans Health Administration oversees  the medical treatment of veterans and the Veterans Benefits Administration is the VA's compensation arm.

Sen. Patty Murray, D-Wash., issued a press release Wednesday, saying she had heard that DOGE "may have barged into VA today."

"Musk and his associates already have the personal financial information of every veteran receiving disability or education benefits because of their illegal data mining at the Department of Treasury. Will they now look at private health records of veterans? What else will they do that could put the health and safety of our veterans at risk?" Murray, the ranking Democrat on the Senate Appropriations Committee, said in the statement.  Murray joined 22 other Democrats Tuesday in voting against the nomination of VA Secretary Doug Collins. Collins won handily in a 77-23 vote, but Murray, a former chairwoman of the Senate Veterans Affairs Committee, said she could not vote for him while President Donald Trump "dismantles government and breaks the law."  Trump signed an executive order Jan. 20 remaking a small government body that was charged with improving and streamlining federal websites, the U.S. Digital Service, into DOGE. Musk, the world's richest man, was a key funder of Trump's election campaign and has been given unprecedented and potentially illegal access to sensitive systems in the federal government, apparently with Trump's OK. Neither Trump nor Musk has the authority to dismantle agencies and funding created by Congress. Musk and his DOGE team already took control of the U.S. Agency for International Development, or USAID, by shutting out workers, freezing funding and removing leadership, depriving millions of people around the world of sometimes life-saving aid.

On Monday, the White House told The New York Times that Musk's team's access to the Treasury Department system is "read only," meaning they cannot make changes or stop payments themselves.

Alan Butler, executive director and president of the nonprofit advocacy group Electronic Privacy Information Center, told Military.com that nonetheless, the access is concerning.  "That is still a massive invasion of privacy, first of all, and still imposes significant ongoing risks," Butler said. "Because 'read' includes the ability to exfiltrate data. ... All of this just exponentially increases the odds that this data gets breached, that people's private information is breached, that national security is breached."  The VA maintains that DOGE will not have access to veterans' personal data and the department will help streamline the VA. "VA looks forward to working with DOGE to improve services to veterans, their families, caregivers and survivors," Kasperowicz said in his statement.

Scooped by Richard Platt
February 8, 11:37 AM

Automakers Urge USDOT to Quickly Restart Federal EV Charging Program

Richard Platt's insight:
A group representing automakers and electric vehicle charging companies on Friday urged the U.S. Transportation Department to quickly restart a $5 billion government EV infrastructure program.
On Thursday, the Trump administration said it was suspending the electric vehicle charging program and rescinding approval of state EV charging plans pending a new review. The Electric Drive Transportation Association, whose members include General Motors, Toyota, BorgWarner, EVGo, Stellantis, Walmart and others, said it urged the Trump administration "to quickly resume the critical work of the program and minimize uncertainty for states and their businesses, who have invested in infrastructure to serve local and national goals for advanced transportation."

On President Donald Trump's first day in office, he took aim at electric vehicles, saying he was halting distribution of unspent government funds for vehicle charging stations from the $5 billion National Electric Vehicle Infrastructure Fund. Trump also revoked a 2021 executive order signed by his predecessor Joe Biden that sought to ensure half of all new vehicles sold in the U.S. by 2030 were electric. Trump also called for ending a waiver for states to adopt zero-emission vehicle rules by 2035 and said his administration would consider ending EV tax credits. Biden's 50% target, which was not legally binding, had won the support of U.S. and foreign automakers. Trump has said he could take other actions on EVs, including seeking to repeal the $7,500 consumer tax credit for electric-vehicle purchases as part of broader tax-reform legislation.

Last week, U.S. Transportation Secretary Sean Duffy directed U.S. regulators to rescind landmark fuel economy standards issued under Biden that aimed to drastically reduce fuel use for cars and trucks, as well as highway climate rules. The National Highway Traffic Safety Administration said in June it would hike Corporate Average Fuel Economy requirements to about 50.4 miles per gallon (4.67 liters per 100 km) by 2031, from 39.1 mpg currently for light-duty vehicles.

Scooped by Richard Platt
February 7, 3:08 PM

ASML CEO says China is 10 to 15 years behind in chipmaking capabilities

Without EUV, Chinese semiconductor industry is over a decade behind Taiwan, U.S.
Richard Platt's insight:

Although advancements that SMIC and Huawei have made in the semiconductor sector in recent years are pretty impressive, the companies are 10 to 15 years behind industry giants like Intel, TSMC, and Samsung, said Christophe Fouquet, chief executive of toolmaker ASML. It's well known that even with the best-in-class DUV tools, Chinese fab SMIC will be unable to match TSMC's process technologies cost-effectively. This is because Chinese companies cannot access leading-edge EUV lithography tools.  "By banning the export of EUV, China will lag 10 - 15 years behind the West," said Christophe Fouquet in an interview with NRC (machine translated). "That really has an effect."  ASML has never shipped its EUV tools to China due to the Wassenaar Arrangement, despite SMIC's reported order for one EUV machine. The details remain unclear, but ASML did not deliver the machine to the Chinese foundry due to US sanctions. However, ASML kept shipping advanced DUV lithography tools, such as the Twinscan NXT:2000i, which are capable of producing chips on 5nm and 7nm-class process technologies.  As a result, SMIC has been producing chips for Huawei using its 1st-generation and 2nd-generation 7nm-class process technology for years now. This has certainly helped the Chinese high-tech giants weather U.S. government sanctions.  Having understood that EUV tools are not coming to China, Huawei and its partners have explored extreme ultraviolet lithography themselves with the aim of building their own lithography chipmaking tools and ecosystem, which will take 10 – 15 years at best. For reference, it has taken over 20 years for ASML and its partners from foundational work to complete commercial machines to build the EUV ecosystem. Keeping in mind that many of the technologies developed in the early/mid-1990s are openly known, Chinese companies will not have to develop everything from scratch. However, by the time the Chinese semiconductor industry develops Low-NA EUV tools, the Western chip industry will have High-NA EUV lithography and even Hyper-NA EUV equipment.

However, the main concern is not that Chinese companies may develop their own EUV lithography tools some 15 years down the road, but that they might copy ASML's mainstream DUV machines (such as Twinscan NXT:2000i) over the next several years.

The American government is pressuring ASML to halt the maintenance and repair of its advanced DUV systems in China, which would bring it in line with existing sanctions against China's semiconductor sector. However, the Dutch government has not agreed to this demand so far. ASML aims to retain control over its machines in China to prevent the risk of sensitive information leaking, which could happen if Chinese companies take over maintenance to keep their chip factories operational.

For now, Chinese companies are among the main customers of ASML, and the company earns billions selling DUV litho tools to SMIC, Hua Hong, and YMTC. What happens if (or rather when) Chinese makers of lithography equipment build their own DUV lithography systems (or just copy those developed by ASML) is unknown. On the one hand, they could reduce purchases from ASML, but on the other hand, they could start selling these tools outside of China, essentially competing with ASML. While it is unlikely that they will build a Twinscan NXT:2000i-like machine any time soon, replicating something less advanced could be much easier.

No comment yet.
Scooped by Richard Platt
February 7, 8:02 AM
Scoop.it!

Single-photon LiDAR Delivers Detailed 3D Images at Distances up to 1-km

Single-photon LiDAR Delivers Detailed 3D Images at Distances up to 1-km | Internet of Things - Technology focus | Scoop.it
Researchers have designed a single-photon time-of-flight LiDAR system that can acquire a high-resolution 3D image of an object or scene up to 1 kilometer away. The new system could help enhance security, monitoring, and remote sensing by enabling detailed imaging even in challenging environmental conditions or when objects are obscured by foliage or camouflage netting.
Richard Platt's insight:

Researchers have designed a single-photon time-of-flight LiDAR system that can acquire a high-resolution 3D image of an object or scene up to 1 kilometer away. The new system could help enhance security, monitoring, and remote sensing by enabling detailed imaging even in challenging environmental conditions or when objects are obscured by foliage or camouflage netting.  "Our system uses a single-photon detector approximately twice as efficient as detectors deployed in similar LiDAR systems reported by other research groups and has a system timing resolution at least 10 times better," said research team member Aongus McCarthy, from Heriot-Watt University in the UK. "The excellent depth resolution of the system means that it would be particularly well suited for imaging objects behind clutter, such as foliage or camouflage netting, a scenario that would be difficult for a digital camera," said McCarthy. "For example, it could distinguish an object located a few centimeters behind a camouflage netting while systems with poorer resolution would not be able to make out the object." While the field trials for the LiDAR system were limited to a range of 1 kilometer, researchers plan to test the system at distances of up to 10 km and explore imaging through atmospheric obscurants like smoke and fog. Future work will also focus on using advanced computational methods to accelerate data analysis and enable imaging of more distant scenes.

 

"These improvements allow the imaging system to collect more scattered photons from the target and achieve a much higher spatial resolution."

In Optica, a multi-institutional group of researchers from the UK and U.S. shows that the new system can construct a 3D image depicting a clearly recognizable human face of a person 325 meters away.

The researchers were from Gerald Buller's group at Heriot-Watt University, Robert Hadfield's group at the University of Glasgow, Matthew Shaw's group at the NASA Jet Propulsion Laboratory, and Karl Berggren's group at MIT.

"This type of measurement system could lead to improved security and monitoring systems that could, for example, acquire detailed depth images through smoke or fog and of cluttered scenes," said McCarthy, first author of the new paper.

"It could also enable the remote identification of objects in various environments and monitoring of movement of buildings or rock faces to assess subsidence or other potential hazards."

The researchers also used the new imaging system to image Lego characters from 32 m away. Credit: Aongus McCarthy, Heriot-Watt University
Light-based range finding
The single-photon time-of-flight depth imaging system uses the time it takes for a laser pulse to travel from the system to a point on an object and back to calculate the distance to the object. These time-of-flight measurements are then repeated for points across the object to obtain 3D information.

The new system uses an ultrasensitive detector called a superconducting nanowire single-photon detector (SNSPD) developed by the MIT and JPL research groups. The SNSPD can detect a single photon of light, which means that lasers with very low powers, including eye-safe lasers, can be used to perform measurements in a very short time and over long distances.

To reduce noise levels, the detector was cooled to just below 1 Kelvin in a compact cryocooler system designed and built by the University of Glasgow group.

The researchers combined the cooled SNSPD with a new custom single-pixel scanning transceiver operating at a 1550-nm wavelength that was designed by McCarthy at Heriot-Watt University. They also added advanced timing equipment to measure extremely precise time intervals—accurate down to trillionths of a second (picoseconds).

To put that into perspective, in just 1,000 picoseconds, light can travel about 300 millimeters (about 1 foot). This precision made it possible to distinguish surfaces separated by about 1 mm in depth from 325 meters away.
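To make the arithmetic concrete, here is a minimal sketch of the time-of-flight relations described above; the 10-picosecond timing figure in the example is an illustrative assumption, not a parameter quoted from the paper.

C = 299_792_458.0  # speed of light, m/s

def tof_to_distance(round_trip_time_s):
    # One-way distance: the laser pulse covers the range twice (out and back).
    return C * round_trip_time_s / 2.0

def depth_resolution(timing_resolution_s):
    # Smallest resolvable depth step for a given system timing resolution.
    return C * timing_resolution_s / 2.0

print(tof_to_distance(2.168e-6))   # a ~2.17 microsecond round trip -> ~325 m target
print(depth_resolution(10e-12))    # an assumed 10 ps resolution -> ~1.5 mm in depth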

Scans from 325 meters away of a life-sized polystyrene head and research co-author Gregor Taylor. Credit: Heriot-Watt University
"These factors all provide improved flexibility in the trade-off between standoff distance, laser power levels, data acquisition time and depth resolution," said McCarthy.

"Also, since SNSPD detectors can operate at wavelengths longer than 1550 nm, this design opens the door to developing a mid-infrared single-photon LiDAR system, which could further enhance imaging through fog and smoke and other obscurants."

3D measurements of distant objects
The researchers performed field tests of their LiDAR system on the Heriot-Watt University campus by taking measurements from objects that were 45 meters, 325 meters or 1 kilometer away.

To evaluate the spatial and depth resolution, they scanned a custom 3D-printed target with varying pillar sizes and heights. The system resolved features as small as 1 mm in daylight at 45 and 325 meters—a depth resolution approximately 10 times better than they had achieved previously. They also captured a 3D image of a human face at these distances using a 1 ms per-pixel acquisition time, an eye-safe 3.5 mW laser, and minimal data processing.

 

No comment yet.
Rescooped by Richard Platt from Learning & Technology News
February 6, 5:22 PM
Scoop.it!

How Easy is it for School Students to Cheat with AI? 

How Easy is it for School Students to Cheat with AI?  | Internet of Things - Technology focus | Scoop.it

Via Nik Peachey
Richard Platt's insight:

Humanities teacher Jack Dougall set 100 of his students the challenge of cheating on their homework with AI without being detected - and the results were shocking. When the children set themselves the specific aim of avoiding detection, 50% of the 12-year-olds bypassed detection, 38% of the 13-year-olds bypassed detection, and, no surprise, 100% of the 17-year-olds bypassed detection.

Nik Peachey's curator insight, February 6, 6:25 AM

When children set themselves the specific aim of avoiding detection: 50 per cent of the 12-year-olds bypassed detection. 38 per cent of the 13-year-olds bypassed detection. 100 per cent of the 17-year-olds bypassed detection.

Scooped by Richard Platt
February 6, 1:28 PM
Scoop.it!

Not every AI Prompt Deserves Multiple Seconds of Thinking: How Meta is Teaching Models to Prioritize

Not every AI Prompt Deserves Multiple Seconds of Thinking: How Meta is Teaching Models to Prioritize | Internet of Things - Technology focus | Scoop.it
Let models explore different solutions and they will find optimal solutions to properly allocate inference budget to AI reasoning problems.
Richard Platt's insight:

Reasoning models like OpenAI o1 and DeepSeek-R1 have a problem: They overthink. Ask them a simple question such as “What is 1+1?” and they will think for several seconds before answering.

Ideally, like humans, AI models should be able to tell when to give a direct answer and when to spend extra time and resources to reason before responding. A new technique presented by researchers at Meta AI and the University of Illinois Chicago trains models to allocate inference budgets based on the difficulty of the query. This results in faster responses, reduced costs, and better allocation of compute resources.

Costly reasoning
Large language models (LLMs) can improve their performance on reasoning problems when they produce longer reasoning chains, often referred to as "chain-of-thought" (CoT). The success of CoT has led to an entire range of inference-time scaling techniques that prompt the model to "think" longer about the problem, produce and review multiple answers, and choose the best one. One of the main ways used in reasoning models is to generate multiple answers and choose the one that recurs most often, also known as "majority voting" (MV). The problem with this approach is that the model adopts a uniform behavior, treating every prompt as a hard reasoning problem and spending unnecessary resources to generate multiple answers.

Smart reasoning
The new paper proposes a series of training techniques that make reasoning models more efficient at responding. The first step is "sequential voting" (SV), where the model aborts the reasoning process as soon as an answer appears a certain number of times. For example, the model is prompted to generate a maximum of eight answers and choose the answer that comes up at least three times. If the model is given the simple query mentioned above, the first three answers will probably be similar, which will trigger the early stopping, saving time and compute resources. Their experiments show that SV outperforms classic MV on math competition problems when it generates the same number of answers. However, SV requires extra instructions and token generation, which puts it on par with MV in terms of token-to-accuracy ratio. The second technique, "adaptive sequential voting" (ASV), improves SV by prompting the model to examine the problem and only generate multiple answers when the problem is difficult. For simple problems (such as the 1+1 prompt), the model simply generates a single answer without going through the voting process. This makes the model much more efficient at handling both simple and complex problems.

Reinforcement learning
While both SV and ASV improve the model's efficiency, they require a lot of hand-labeled data. To alleviate this problem, the researchers propose "Inference Budget-Constrained Policy Optimization" (IBPO), a reinforcement learning algorithm that teaches the model to adjust the length of reasoning traces based on the difficulty of the query. IBPO is designed to allow LLMs to optimize their responses while remaining within an inference budget constraint. The RL algorithm enables the model to surpass the gains obtained through training on manually labeled data by constantly generating ASV traces, evaluating the responses, and choosing outcomes that provide the correct answer and the optimal inference budget. Their experiments show that IBPO improves the Pareto front, meaning that for a fixed inference budget, a model trained with IBPO outperforms other baselines.

The findings come against the backdrop of researchers warning that current AI models are hitting a wall. Companies are struggling to find quality training data and are exploring alternative methods to improve their models. One promising solution is reinforcement learning, where the model is given an objective and allowed to find its own solutions, as opposed to supervised fine-tuning (SFT), where the model is trained on manually labeled examples. Surprisingly, the model often finds solutions that humans haven't thought of. This is a formula that seems to have worked well for DeepSeek-R1, which has challenged the dominance of U.S.-based AI labs.

The researchers note that "prompting-based and SFT-based methods struggle with both absolute improvement and efficiency, supporting the conjecture that SFT alone does not enable self-correction capabilities. This observation is also partially supported by concurrent work, which suggests that such self-correction behavior emerges automatically during RL rather than manually created by prompting or SFT."
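As a rough sketch of how the sequential and adaptive voting loops described above might look, here is a toy implementation; generate_answer and classify_difficulty are hypothetical stand-ins for calls to a reasoning model, not functions from the paper.

import random
from collections import Counter

def sequential_vote(generate_answer, max_samples=8, vote_threshold=3):
    # Sequential voting (SV): stop sampling as soon as one answer has been
    # produced vote_threshold times, instead of always generating max_samples
    # answers and then taking the majority.
    counts = Counter()
    for _ in range(max_samples):
        answer = generate_answer()          # one reasoning trace -> final answer
        counts[answer] += 1
        if counts[answer] >= vote_threshold:
            return answer                   # early stop: consensus reached
    return counts.most_common(1)[0][0]      # fall back to plain majority voting

def adaptive_sequential_vote(classify_difficulty, generate_answer):
    # Adaptive SV (ASV): only invoke voting when the prompt looks hard;
    # easy queries get a single direct answer.
    if classify_difficulty() == "easy":
        return generate_answer()
    return sequential_vote(generate_answer)

# Toy usage: a noisy stand-in "model" that answers 2 most of the time.
noisy_model = lambda: random.choice([2, 2, 2, 3])
print(sequential_vote(noisy_model))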

No comment yet.
Scooped by Richard Platt
February 6, 2:17 AM
Scoop.it!

Foundries in Europe 2024

Foundries in Europe 2024 | Internet of Things - Technology focus | Scoop.it
Europe will be ready to grow its market share again but should maintain the current status quo and still source about 50% of production abroad.
Richard Platt's insight:

In 2023, for the first time in a 50-year history, European semiconductor companies reached a 14% market share, departing from the mere 9% to 11% they had held over the last decades. Many had laughed at the 20% market share goal set by Thierry Breton, who just resigned from his position as commissioner. Considering those recent numbers, it was not unreachable after all. There is a little bit of Mark Twain in this situation: "they did it because they didn't know it was impossible." While European device companies sell chips and are part of a $530B market, the key question of securing supply chains has come to the forefront. In the wake of COVID-19, semiconductor shortages combined with geopolitical tensions led regulators in the US and EU to sign up for massive Chips Act subsidies amounting to a combined $100B. The goal is to "re-shore" semiconductor production with state-of-the-art foundries on both sides of the Atlantic instead of relying – too much – on Asian supplies. Adding up all the announcements around the world, Yole has come up with a picture (Overview of semiconductor foundry 2024) of the possible outcome in 2029. It is one thing to announce the construction of a "fab", quite another to ramp up production, bring yields under control, and output 25k wafers per month as the most modern 300mm foundries do. The recent pushback of Intel's Magdeburg fab by at least two years is a reminder that words are cheap and actions expensive, especially when it comes to semiconductor foundries. In 2024, Europe controls 7% of the world's foundry capacity, which means it must process about 50% of its wafers elsewhere, either through foundry ownership abroad or purchases from an open foundry such as TSMC or Samsung Semiconductor.


So far, European foundry capacity is expected to grow 20% by 2029, just enough to maintain the current capacity share. What is new is the effort from foreign players Intel and TSMC, which is outpacing that of the locals, Infineon Technologies, STMicroelectronics, and Bosch, who will only grow by 5%.

This matches the expected wafer demand growth at the world level. European players are acting carefully, and for good reason. As of October 2024, they are suffering the recession that others experienced a year ago. In their markets, automotive and industrial semiconductors, prices have gone down, and so has the fear of shortages, reducing stockpiling and thus orders.

These markets will start to grow again next year. Europe will be ready to grow its market share again but should maintain the current status quo and still source about 50% of production abroad. Maybe a little more will come from the U.S., if foundry construction proves more successful over there. But was that not the goal after all?

No comment yet.
Scooped by Richard Platt
February 5, 2:48 PM
Scoop.it!

Using AI In Semiconductor Inspection

Using AI In Semiconductor Inspection | Internet of Things - Technology focus | Scoop.it
Finding anomalies and defects faster in complex chip and package topographies.
Richard Platt's insight:

AI is exceptionally good at spotting anomalies in semiconductor inspection. The challenge is training different models for different inspection tools and topographies, and knowing which model to use at any particular time. Different textures in backgrounds are difficult for traditional algorithms, for example. But once machine learning models are trained properly, they have proven effective in identifying real and potential defects. The same is true for PCB segmentation. Charlie Zhu, vice president of research and development at Nordson Test & Inspection, talks about how AI is used in different types of inspection today, how it may be used in the future, and where it works best.
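As a purely illustrative sketch of the anomaly-detection idea described above (not Nordson's actual pipeline), one common pattern is to fit an outlier detector on features extracted from known-good image patches and flag anything that deviates; the feature values below are synthetic stand-ins for real texture statistics.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Pretend features extracted from wafer-image patches (e.g., texture statistics).
normal_patches = rng.normal(loc=0.0, scale=1.0, size=(500, 8))
defect_patches = rng.normal(loc=4.0, scale=1.0, size=(5, 8))   # clear outliers

# Fit on predominantly-normal data; the model learns what "typical" looks like.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_patches)

# Score new patches: -1 marks a suspected anomaly, 1 marks a normal patch.
print(model.predict(defect_patches))      # expected: mostly -1
print(model.predict(normal_patches[:5]))  # expected: mostly 1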

No comment yet.
Scooped by Richard Platt
February 5, 2:40 PM
Scoop.it!

Trucking, Retail, and Port Leaders Sound Alarm Regarding Tariff Impact on U.S. economy

Trucking, Retail, and Port Leaders Sound Alarm Regarding Tariff Impact on U.S. economy | Internet of Things - Technology focus | Scoop.it
Amid all of the tariff-related tumult seen in recent days—and, to be clear, there has been a lot of it—there has been no shortage of feedback from various industry leaders about the potential impact these tariffs can have on a whole host of things, including: economic conditions; import levels; and supply chain operations, among many others.
Richard Platt's insight:

Amid all of the tariff-related tumult seen in recent days—and, to be clear, there has been a lot of it—there has been no shortage of feedback from various industry leaders about the potential impact these tariffs can have on a whole host of things, including economic conditions, import levels, and supply chain operations, among many others. In looking at the comments issued by leadership across the various modes, one common theme was concern over how tariffs can translate directly into increased costs, coupled with the high potential for tariff actions to undermine supply chain stability and resilience, both of which took a hit following the pandemic.

American Trucking Associations (ATA) President & CEO Chris Spear explained that as the trucking industry continues to recover from a lengthy freight recession, replete with low freight volumes, depressed rates, and increasing operational costs, the ATA is concerned that tariffs could decrease freight volumes and increase costs for motor carriers—at a time when the industry is just starting to see signs of a recovery. “A 25% tariff levied on Mexico could see the price of a new tractor increase by as much as $35,000,” said Spear. “That is cost-prohibitive for many small carriers, and for larger fleets, it would add tens of millions of dollars in annual operating costs. Trucks move 85% of goods that cross our southern border and 67% of goods that cross our northern border, supporting hundreds of thousands of trucking jobs in the U.S. The trucking industry understands the crises motivating these tariff proposals, which is why we have been a leader in efforts to fight drug and human trafficking.”

No comment yet.
Scooped by Richard Platt
February 3, 9:32 AM
Scoop.it!

Integrating Fragile 2D Materials into IC Devices 

Integrating Fragile 2D Materials into IC Devices  | Internet of Things - Technology focus | Scoop.it
Richard Platt's insight:

Two-dimensional materials, which are only a few atoms thick, can exhibit some incredible properties, such as the ability to carry electric charge extremely efficiently, which could boost the performance of next-gen electronic devices. But integrating 2D materials into devices and systems like computer chips is notoriously difficult. These ultrathin structures can be damaged by conventional fabrication techniques, which often rely on the use of chemicals, high temperatures, or destructive processes like etching. An MIT team has developed a new technique to integrate 2D materials into devices in a single step while keeping the surfaces of the materials pristine.

The technique, developed by researchers from MIT and elsewhere, also keeps the resulting interfaces free from defects. Their method relies on engineering surface forces available at the nanoscale to allow the 2D material to be physically stacked onto other prebuilt device layers. Because the 2D material remains undamaged, the researchers can take full advantage of its unique optical and electrical properties. They used this approach to fabricate arrays of 2D transistors that achieved new functionalities compared to devices produced using conventional fabrication techniques. Their method, which is versatile enough to be used with many materials, could have diverse applications in high-performance computing, sensing, and flexible electronics.

Core to unlocking these new functionalities is the ability to form clean interfaces, held together by special forces that exist between all matter, called van der Waals forces. However, such van der Waals integration of materials into fully functional devices is not always easy, said Farnaz Niroui, Assistant Professor of Electrical Engineering and Computer Science (EECS), a member of the Research Laboratory of Electronics (RLE), and senior author of a new paper describing the work. "Van der Waals integration has a fundamental limit," she explained. "Since these forces depend on the intrinsic properties of the materials, they cannot be readily tuned. As a result, there are some materials that cannot be directly integrated with each other using their van der Waals interactions alone. We have come up with a platform to address this limit to help make van der Waals integration more versatile, to promote the development of 2D-materials-based devices with new and improved functionalities."

To make electronic devices, the researchers form a hybrid surface of metals and insulators on a carrier substrate. This surface is then peeled off and flipped over to reveal a completely smooth top surface that contains the building blocks of the desired device. This smoothness is important, since gaps between the surface and the 2D material can hamper van der Waals interactions. The researchers then prepare the 2D material separately, in a completely clean environment, and bring it into direct contact with the prepared device stack. This single-step process keeps the 2D material interface completely clean, which enables the material to reach its fundamental limits of performance without being held back by defects or contamination.

And because the surfaces also remain pristine, researchers can engineer the surface of the 2D material to form features or connections to other components. For example, they used this technique to create p-type transistors, which are generally challenging to make with 2D materials. Their transistors improve on those in previous studies and provide a platform for studying and achieving the performance needed for practical electronics. In the future, the researchers want to build on this platform to enable integration of a diverse library of 2D materials, to study their intrinsic properties without the influence of processing damage, and to develop new device platforms that leverage these superior functionalities.

No comment yet.