Educational Psychology, AI, & Emerging Technologies: Critical Thinking on Current Trends
This curated collection includes updates, resources, and research with critical perspectives on the intersections of educational psychology and emerging technologies in education. The page also serves as a research tool for organizing online content (the funnel-shaped icon allows keyword search). For more on the intersections of the privatization and technologization of education, with critiques of social impact finance and related technologies, please visit http://bit.ly/sibgamble and http://bit.ly/chart_look. For posts regarding screen time risks to health and development, see http://bit.ly/screen_time, and for updates related to AI and data concerns, please visit http://bit.ly/PreventDataHarms. [Note: Views presented on this page are re-shared from external websites. The content does not necessarily represent the views or official position of the curator or of the curator's employer.]
Scooped by Roxana Marachi, PhD
July 1, 2024 12:08 PM

Whistleblower: L.A. Schools’ Chatbot Misused Student Data as Tech Co. Crumbled // The 74


"Allhere, ed tech startup hired to build LAUSD's lauded AI chatbot "Ed", played fast and loose with sensitive records, ex-software engineer alleges." 

 

 

By Mark Keierleber, July 1, 2024

Just weeks before the implosion of AllHere, an education technology company that had been showered with cash from venture capitalists and featured in glowing profiles by the business press, America’s second-largest school district was warned about problems with AllHere’s product.

As the eight-year-old startup rolled out Los Angeles Unified School District’s flashy new AI-driven chatbot — an animated sun named “Ed” that AllHere was hired to build for $6 million — a former company executive was sending emails to the district and others that Ed’s workings violated bedrock student data privacy principles. 

 

Those emails were sent shortly before The 74 first reported last week that AllHere, with $12 million in investor capital, was in serious straits. A June 14 statement on the company’s website revealed a majority of its employees had been furloughed due to its “current financial position.” Company founder and CEO Joanna Smith-Griffin, a spokesperson for the Los Angeles district said, was no longer on the job. 

Smith-Griffin and L.A. Superintendent Alberto Carvalho went on the road together this spring to unveil Ed at a series of high-profile ed tech conferences, with the schools chief dubbing it the nation’s first “personal assistant” for students and leaning hard into LAUSD’s place in the K-12 AI vanguard. He called Ed’s ability to know students “unprecedented in American public education” at the ASU+GSV conference in April. 

Through an algorithm that analyzes troves of student information from multiple sources, the chatbot was designed to offer tailored responses to questions like “what grade does my child have in math?” The tool relies on vast amounts of students’ data, including their academic performance and special education accommodations, to function.

Meanwhile, Chris Whiteley, a former senior director of software engineering at AllHere who was laid off in April, had become a whistleblower. He told district officials, its independent inspector general’s office and state education officials that the tool processed student records in ways that likely ran afoul of L.A. Unified’s own data privacy rules and put sensitive information at risk of getting hacked. None of the agencies ever responded, Whiteley told The 74. 

“When AllHere started doing the work for LAUSD, that’s when, to me, all of the data privacy issues started popping up,” Whiteley said in an interview last week. The problem, he said, came down to a company in over its head and one that “was almost always on fire” in terms of its operations and management. LAUSD’s chatbot was unlike anything it had ever built before and — given the company’s precarious state — could be its last. 

If AllHere was in chaos and its bespoke chatbot beset by porous data practices, Carvalho was portraying the opposite. One day before The 74 broke the news of the company turmoil and Smith-Griffin’s departure, EdWeek Marketbrief spotlighted the schools chief at a Denver conference talking about how adroitly LAUSD managed its ed tech vendor relationships — “We force them to all play in the same sandbox” — while ensuring that “protecting data privacy is a top priority.”

In a statement on Friday, a district spokesperson said the school system “takes these concerns seriously and will continue to take any steps necessary to ensure that appropriate privacy and security protections are in place in the Ed platform.” 

“Pursuant to contract and applicable law, AllHere is not authorized to store student data outside the United States without prior written consent from the District,” the statement continued. “Any student data belonging to the District and residing in the Ed platform will continue to be subject to the same privacy and data security protections, regardless of what happens to AllHere as a company.” 

 

A district spokesperson, in response to earlier questioning from The 74 last week, said it was informed that Smith-Griffin was no longer with the company and that several businesses “are interested in acquiring AllHere.” Meanwhile Ed, the spokesperson said, “belongs to Los Angeles Unified and is for Los Angeles Unified.”

Officials in the inspector general’s office didn’t respond to requests for comment. The state education department “does not directly oversee the use of AI programs in schools or have the authority to decide which programs a district can utilize,” a spokesperson said in a statement.

It’s a radical turn of events for AllHere and the AI tool it markets as a “learning acceleration platform,” which were all the buzz just a few months ago. In April, Time Magazine named AllHere among the world’s top education technology companies. That same month, Inc. Magazine dubbed Smith-Griffin a global K-12 education leader in artificial intelligence in its Female Founders 250 list. 

 

 

Ed has been similarly blessed with celebrity treatment. 

“He’s going to talk to you in 100 different languages, he’s going to connect with you, he’s going to fall in love with you,” Carvalho said at ASU+GSV. “Hopefully you’ll love it, and in the process we are transforming a school system of 540,000 students into 540,000 ‘schools of one’ through absolute personalization and individualization.”

Smith-Griffin, who graduated from the Miami school district that Carvalho once led before going on to Harvard, couldn’t be reached for comment. Smith-Griffin’s LinkedIn page was recently deactivated and parts of the company website have gone dark. Attempts to reach AllHere were also unsuccessful.

‘The product worked, right, but it worked by cheating’

Smith-Griffin, a former Boston charter school teacher and family engagement director, founded AllHere in 2016. Since then, the company has primarily provided schools with a text messaging system that facilitates communication between parents and educators. Designed to reduce chronic student absences, the tool relies on attendance data and other information to deliver customized, text-based “nudges.” 

 

The work that AllHere provided the Los Angeles school district, Whiteley said, was on a whole different level — and the company wasn’t prepared to meet the demand and lacked expertise in data security. In L.A., AllHere operated as a consultant rather than a tech firm that was building its own product, according to its contract with LAUSD obtained by The 74. Ultimately, the district retained rights to the chatbot, according to the agreement, but AllHere was contractually obligated to “comply with the district information security policies.” 

 The contract notes that the chatbot would be “trained to detect any confidential or sensitive information” and to discourage parents and students from sharing with it any personal details. But the chatbot’s decision to share and process students’ individual information, Whiteley said, was outside of families’ control. 

In order to provide individualized prompts on details like student attendance and demographics, the tool connects to several data sources, according to the contract, including Welligent, an online tool used to track students’ special education services. The document notes that Ed also interfaces with the Whole Child Integrated Data stored on Snowflake, a cloud storage company. Launched in 2019, the Whole Child platform serves as a central repository for LAUSD student data designed to streamline data analysis to help educators monitor students’ progress and personalize instruction. 

Whiteley told officials the app included students’ personally identifiable information in all chatbot prompts, even in those where the data weren’t relevant. Prompts containing students’ personal information were also shared with other third-party companies unnecessarily, Whiteley alleges, and were processed on offshore servers. Seven out of eight Ed chatbot requests, he said, are sent to places like Japan, Sweden, the United Kingdom, France, Switzerland, Australia and Canada. 

Taken together, he argued the company’s practices ran afoul of data minimization principles, a standard cybersecurity practice that maintains that apps should collect and process the least amount of personal information necessary to accomplish a specific task. Playing fast and loose with the data, he said, unnecessarily exposed students’ information to potential cyberattacks and data breaches and, in cases where the data were processed overseas, could subject it to foreign governments’ data access and surveillance rules. 

Chatbot source code that Whiteley shared with The 74 outlines how prompts are processed on foreign servers by a Microsoft AI service that integrates with ChatGPT. The LAUSD chatbot is directed to serve as a “friendly, concise customer support agent” that replies “using simple language a third grader could understand.” When querying the simple prompt “Hello,” the chatbot provided the student’s grades, progress toward graduation and other personal information. 
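
To make the data-minimization concern concrete, here is a minimal, hypothetical sketch of a prompt-assembly step that passes along only whitelisted, task-relevant fields before a request is sent to an external language-model service. The field names, the ALLOWED_FIELDS whitelist, and the sample record are assumptions for illustration only, not AllHere's or LAUSD's actual code:

```python
# Hypothetical sketch of data minimization for chatbot prompts.
# Field names, the ALLOWED_FIELDS whitelist, and the sample record are
# illustrative assumptions, not AllHere's or LAUSD's actual implementation.

ALLOWED_FIELDS = {"grade_level", "attendance_summary"}  # only what this task needs

def build_prompt(question: str, student_record: dict) -> str:
    """Assemble a prompt that includes only whitelisted, task-relevant fields."""
    minimal_context = {k: v for k, v in student_record.items() if k in ALLOWED_FIELDS}
    return (
        "You are a friendly, concise customer support agent for a school district.\n"
        f"Context (minimized): {minimal_context}\n"
        f"Question: {question}"
    )

record = {
    "name": "Jane Doe",                     # direct identifier: excluded from prompts
    "student_id": "123456",                 # direct identifier: excluded from prompts
    "iep_accommodations": "extended time",  # sensitive detail: excluded by default
    "grade_level": 7,
    "attendance_summary": "2 absences this semester",
}

prompt = build_prompt("How is attendance looking this semester?", record)
print(prompt)  # a real system would send this to the external model service
```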

AllHere’s critical flaw, Whiteley said, is that senior executives “didn’t understand how to protect data.” 

“The issue is we’re sending data overseas, we’re sending too much data, and then the data were being logged by third parties,” he said, in violation of the district’s data use agreement. “The product worked, right, but it worked by cheating. It cheated by not doing things right the first time.”

In a 2017 policy bulletin, the district notes that all sensitive information “needs to be handled in a secure way that protects privacy,” and that contractors cannot disclose information to other parties without parental consent. A second policy bulletin, from April, outlines the district’s authorized use guidelines for artificial intelligence, which notes that officials, “Shall not share any confidential, sensitive, privileged or private information when using, prompting or communicating with any tools.” It’s important to refrain from using sensitive information in prompts, the policy notes, because AI tools “take whatever users enter into a prompt and incorporate it into their systems/knowledge base for other users.” 

“Well, that’s what AllHere was doing,” Whiteley said. 

‘Acid is dangerous’

Whiteley’s revelations present LAUSD with its third student data security debacle in the last month. In mid-June, a threat actor known as “Sp1d3r” began to sell for $150,000 a trove of data it claimed to have stolen from the Los Angeles district on Breach Forums, a dark web marketplace. LAUSD told Bloomberg that the compromised data had been stored by one of its third-party vendors on the cloud storage company Snowflake, the repository for the district’s Whole Child Integrated Data. The Snowflake data breach may be one of the largest in history. The threat actor claims that the L.A. schools data in its possession include student medical records, disability information, disciplinary details and parent login credentials. 

The chatbot interacted with data stored by Snowflake, according to the district’s contract with AllHere, though any connection between AllHere and the Snowflake data breach is unknown. 

In its statement Friday, the district spokesperson said an ongoing investigation has “revealed no connection between AllHere or the Ed platform and the Snowflake incident.” The spokesperson said there was no “direct integration” between Whole Child and AllHere and that Whole Child data was processed internally before being directed to AllHere.

The contract between AllHere and the district, however, notes that the tool should “seamlessly integrate” with the Whole Child Integrated Data “to receive updated student data regarding attendance, student grades, student testing data, parent contact information and demographics.”

 

Earlier in the month, a second threat actor known as Satanic Cloud claimed it had access to tens of thousands of L.A. students’ sensitive information and had posted it for sale on Breach Forums for $1,000. In 2022, the district was victim to a massive ransomware attack that exposed reams of sensitive data, including thousands of students’ psychological evaluations, to the dark web. 

With AllHere’s fate uncertain, Whiteley blasted the company’s leadership and protocols.

“Personally identifiable information should be considered acid in a company and you should only touch it if you have to because acid is dangerous,” he told The 74. “The errors that were made were so egregious around PII, you should not be in education if you don’t think PII is acid.” 

For original post, please visit:

https://www.the74million.org/article/whistleblower-l-a-schools-chatbot-misused-student-data-as-tech-co-crumbled/ 

 

 

Scooped by Roxana Marachi, PhD
February 10, 2023 2:27 PM

Resources to Learn about AI Hype, AI Harms, BigData, Blockchain Harms, and Data Justice 


Resources to Learn about AI, BigData, Blockchain, Algorithmic Harms, and Data Justice

Shortlink to share this page: http://bit.ly/DataJusticeLinks 

Scooped by Roxana Marachi, PhD
July 4, 2023 10:06 PM

Tokenizing Toddlers: Cradle-to-Career Behavioral Tracking on Blockchain, Web3, and the "Internet of Education" [Slidedeck] // Marachi, 2022


http://bit.ly/Tokenizing_Toddlers 

Scooped by Roxana Marachi, PhD
August 25, 2023 5:44 PM

Roblox facilitates “illegal gambling” for minors, according to new lawsuit // Ars Technica


By Kyle Orland

"A new proposed class-action lawsuit (as noticed by Bloomberg Law) accuses user-generated "metaverse" company Roblox of profiting from and helping to power third-party websites that use the platform's Robux currency for unregulated gambling activities. In doing so, the lawsuit says Roblox is effectively "work[ing] with and facilitat[ing] the Gambling Website Defendants... to offer illegal gambling opportunities to minor users."

 

The three gambling website companies named in the lawsuit—Satozuki, Studs Entertainment, and RBLXWild Entertainment—allow users to connect a Roblox account and convert an existing balance of Robux virtual currency into credits on the gambling site. Those credits act like virtual casino chips that can be used for simple wagers on those sites, ranging from Blackjack to "coin flip" games.

If a player wins, they can transfer their winnings back to the Roblox platform in the form of Robux. The gambling sites use fake purchases of worthless "dummy items" to facilitate these Robux transfers, according to the lawsuit, and Roblox takes a 30 percent transaction fee both when players "cash in" and "cash out" from the gambling sites. If the player loses, the transferred Robux are retained by the gambling website through a "stock" account on the Roblox platform.

 

In either case, the Robux can be converted back to actual money through the Developer Exchange Program. For individuals, this requires a player to be at least 13 years old, to file tax paperwork (in the US), and to have a balance of at least 30,000 Robux (currently worth $105, or $0.0035 per Robux).
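
As a rough illustration of the money flow described above, the sketch below works through a hypothetical round trip using the figures reported in the article (the 30 percent fee on each transfer, the 30,000-Robux Developer Exchange minimum, and the $0.0035-per-Robux rate). The starting amount and winnings are assumptions chosen only for illustration:

```python
# Back-of-the-envelope sketch of the Robux flows described in the article.
# The 30% transfer fee, the 30,000-Robux DevEx minimum, and the $0.0035 rate
# come from the article; the starting amount and winnings are assumptions.

DEVEX_RATE_USD = 0.0035   # dollars per Robux via the Developer Exchange Program
DEVEX_MINIMUM = 30_000    # minimum Robux balance required to convert to cash
TRANSFER_FEE = 0.30       # fee Roblox reportedly takes on each cash-in/cash-out

cash_in = 10_000                          # Robux a player moves to a gambling site
credited = cash_in * (1 - TRANSFER_FEE)   # 7,000 Robux worth of site credits
winnings = credited * 2                   # assume the player doubles their credits
returned = winnings * (1 - TRANSFER_FEE)  # 9,800 Robux after the cash-out fee

print(f"Robux back on the platform: {returned:,.0f}")
print(f"Nominal dollar value: ${returned * DEVEX_RATE_USD:,.2f}")        # $34.30
print(f"Meets DevEx minimum: {returned >= DEVEX_MINIMUM}")               # False
print(f"DevEx minimum is worth: ${DEVEX_MINIMUM * DEVEX_RATE_USD:,.2f}")  # $105.00
```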

The gambling websites also use the Developer Exchange Program to convert their Robux balances to real money, according to the lawsuit. And the real money involved isn't chump change, either; the lawsuit cites a claim from RBXFlip's owners that 7 billion Robux (worth over $70 million) was wagered on the site in 2021 and that the site's revenues increased 10 times in 2022. The sites are also frequently promoted by Roblox-focused social media influencers to drum up business, according to the lawsuit.


Who’s really responsible?

Roblox's terms of service explicitly bar "experiences that include simulated gambling, including playing with virtual chips, simulated betting, or exchanging real money, Robux, or in-experience items of value." But the gambling sites get around this prohibition by hosting their games away from Roblox's platform of user-created "experiences" while still using Robux transfers to take advantage of players' virtual currency balances from the platform.

 

This can be a problem for parents who buy Robux for their children thinking they're simply being used for in-game cosmetics and other gameplay items (over half of Roblox players were 12 or under as of 2020). Two parents cited in the lawsuit say their children have lost "thousands of Robux" to the gambling sites, which allegedly have nonexistent or ineffective age-verification controls.

Through its maintenance of the Robux currency platform that powers these sites, the lawsuit alleges that Roblox "monitors and records each of these illegal transactions, yet does nothing to prevent them from happening." Allowing these sites to profit from minors gambling with Robux amounts to "tacitly approv[ing] the Illegal Gambling Websites’ use of [Robux] that Roblox’s minor users can utilize to place bets on the Illegal Gambling Websites." This amounts to a violation of the federal RICO act, as well as California's Unfair Competition Law and New York's General Business Law, among other alleged violations.

In a statement provided to Bloomberg Law, Roblox said that "these are third-party sites and have no legal affiliation to Roblox whatsoever. Bad actors make illegal use of Roblox’s intellectual property and branding to operate such sites in violation of our standards.”


This isn't the first time a game platform has run into problems with its virtual currency powering gambling. In 2016, Valve faced a lawsuit and government attention from Washington state over third-party sites that use Counter-Strike skins as currency for gambling games. The lawsuit against Steam was eventually dismissed last year."

 

For original post, please visit:

https://arstechnica.com/gaming/2023/08/roblox-facilitates-illegal-gambling-for-minors-according-to-new-lawsuit/ 

Scooped by Roxana Marachi, PhD
April 27, 2023 7:07 PM

The Complications of Regulating AI // Marketplace.org Interview with Elizabeth Renieris

Podcast produced by Lydia Morell
"The idea that advancing technology outpaces regulation serves the industry's interests, says Elizabeth Renieris of Oxford. Current oversight methods apply to specific issues, but "general purpose" AI is harder to keep in check."...

 

https://www.marketplace.org/shows/marketplace-tech/the-complications-of-regulating-ai/ 

 

Scooped by Roxana Marachi, PhD
May 19, 2022 8:34 PM

Algorithmic personalization is disrupting a healthy teaching environment // LSE

By Velislava Hillman and Molly Esquivel

"The UK government has given no sign of when it plans to regulate digital technology companies. In contrast, the US Federal Trade Commission will tomorrow consider whether to make changes on the Children’s Online Privacy Protection Act to address the risks emanating from the growing power of digital technology companies, many of which already play substantial roles in children’s lives and schooling. The free rein offered thus far has so far led many businesses to infiltrate education, slowly degrading the teaching profession and spying on children, argue LSE Visiting fellow Dr Velislava Hillman and junior high school teacher and Doctor of Education candidate Molly Esquivel. They take a look here at what they describe as the mess that digitalized classrooms have become, due to the lack of regulation and absence of support if businesses cause harm."

 

"Any teacher would attest to the years of specialized schooling, teaching practice, code of ethics and standards they face to obtain a license to teach; those in higher education also need a high-level degree, published scholarship, postgraduate certificates such as PGCE and more. In contrast, businesses offering education technologies enter the classroom with virtually no demonstration of any licensing or standards.

The teaching profession has now become an ironic joke of sorts. If teachers in their college years once dreamed of inspiring their future students, today these dreamers are facing a different reality: one in which they are required to curate and operate with all kinds of applications and platforms; collect edtech badges of competency (fig1); monitor data; navigate students through yet more edtech products.

Unlicensed and unregulated, without years in college and special teaching credentials, edtech products not only override teachers’ competencies and roles; they now dictate them.

  

Figure 1: Teachers race to collect edtech badges

[See original article for image]

Wellbeing indexes and Karma Points

“Your efforts are being noticed” is how Thrively, an application that monitors students and claims to be used by over 120,000 educators across the US, greets its user. In the UK, Symanto, an AI-based software that analyses texts to make inferences about the psychological state of an individual, is used for a similar purpose.

 

The Thrively software gathers metrics on attendance, library use, grades, online learning activities and makes inferences about students – how engaged they are or how they feel. Solutionpath, offering support for struggling students, is used in several universities in the UK. ClassDojo claims to be used by 85% of UK primary schools and a global community of over 50 million teachers and families.

 

Classroom management software Impero offers teachers remote control of children’s devices. The company claims to provide direct access to over 2 million devices in more than 90 countries. Among other things, the software has a ‘wellbeing keyword library index’ which seeks to identify students who may need emotional support. A form of policing: “with ‘who, what, when and why’ information staff members can build a full picture of the capture and intervene early if necessary”.

These products and others adopt the methodology of algorithm-based monitoring and profiling of students’ mental health. Such products steer not only student behavior but that of teachers too. One reviewer says of Impero: “My teachers always watch our screens with this instead of teaching”.

 

When working in Thrively, each interaction with a student earns “Karma Points”. The application lists teacher goals – immediately playing on an educator’s deep-seated passion to be their best for their students (fig2). Failure to obtain such points becomes internalized as failure in the teaching profession. Thrively’s algorithms could also trigger an all-out battle of who on the teaching staff can earn the most Points. Similarly, ClassDojo offers a ‘mentor’ program to teachers and awards them ‘mentor badges’.

Figure 2: Thrively nudges teachers to engage with it to earn badges and “Karma points”; its tutorial states: “It’s OK to brag when you are elevating humanity.” [See original article for image]

 

The teacher becomes a ‘line operator’ on a conveyor belt run by algorithms. The amassed data triggers algorithmic diagnostics from each application, carving up the curriculum, controlling students and teachers. Inferential software like Thrively throws teachers into rabbit holes by asking them not only to assess students’ personal interests, but their mental state, too. Its Wellbeing Index takes “pulse checks” to tell how students feel, as though teachers were incapable of connecting directly with their students. In the UK, lax legislation around biometric data collection can further enable such data to be exploited for mental health prediction and psychometric analytics. Such practices not only increase the risks of harm to children and students in general; they dehumanize the whole educational process.

Many other technology-infused, surveillance-based applications are thrust into the classroom. Thrively captures data on 12- to 14-year-olds and, beyond inferring how they feel, suggests career pathways. It shares the captured data with third parties such as YouTube Kids and game-based and coding apps – outside vendors that Thrively curates. Impero enables integration with platforms like Clever, used by over 20 million teachers and students, and Microsoft, thus expanding the tech giant’s own reach by millions of individuals. As technology intersects with education, teachers become merely an afterthought in curriculum design and classroom leadership.

Teachers must remain central in children’s education, not businesses

The digitalization of education has swiftly moved towards an algorithmic hegemony which is degrading the teaching profession. Edtech companies are judging how students learn, how teachers work – and how they both feel. Public-private partnerships are granting untested, experimental beta software built on arbitrary algorithms the unwarranted title of “school official,” undermining teachers. Ironically, teachers still carry the responsibility for what happens in class.

Parents should ask what software is used to judge how their children feel or do in class and why. At universities, students should enquire what inferences are made about their work or their mental health that emerges from algorithms. Alas, this means heaping yet more responsibility on individuals – parents, children, students, teachers – to fend for themselves. Therefore, at least two things must also happen. First, edtech products and companies must be licensed to operate, the way banks, hospitals or teachers are. And second, educational institutions should consider transparency about how mental health or academic profiling in general is assessed. If and when software analytics play a part, educators (through enquiry) as well as policymakers (through law) should insist on transparency and be critical about the data points collected and the algorithms that process them.

This article represents the views of the author and not the position of the Media@LSE blog, nor of the London School of Economics and Political Science.

 

For original post, please visit: 

https://blogs.lse.ac.uk/medialse/2022/05/18/algorithmic-personalization-is-disrupting-a-healthy-teaching-environment/ 

Scooped by Roxana Marachi, PhD
November 28, 2023 3:20 PM

At Meta, Millions of Underage Users Were an ‘Open Secret,’ States Say // The New York Times

By Natasha Singer
Nov. 25, 2023

"Meta has received more than 1.1 million reports of users under the age of 13 on its Instagram platform since early 2019 yet it “disabled only a fraction” of those accounts, according to a newly unsealed legal complaint against the company brought by the attorneys general of 33 states.

Instead, the social media giant “routinely continued to collect” children’s personal information, like their locations and email addresses, without parental permission, in violation of a federal children’s privacy law, according to the court filing. Meta could face hundreds of millions of dollars, or more, in civil penalties should the states prove the allegations.

“Within the company, Meta’s actual knowledge that millions of Instagram users are under the age of 13 is an open secret that is routinely documented, rigorously analyzed and confirmed,” the complaint said, “and zealously protected from disclosure to the public.”

The privacy charges are part of a larger federal lawsuit, filed last month by California, Colorado and 31 other states in U.S. District Court for the Northern District of California. The lawsuit accuses Meta of unfairly ensnaring young people on its Instagram and Facebook platforms while concealing internal studies showing user harms. And it seeks to force Meta to stop using certain features that the states say have harmed young users.

 

But much of the evidence cited by the states was blacked out by redactions in the initial filing.

Now the unsealed complaint, filed on Wednesday evening, provides new details from the states’ lawsuit. Using snippets from internal emails, employee chats and company presentations, the complaint contends that Instagram for years “coveted and pursued” underage users even as the company “failed” to comply with the children’s privacy law.

The unsealed filing said that Meta “continually failed” to make effective age-checking systems a priority and instead used approaches that enabled users under 13 to lie about their age to set up Instagram accounts. It also accused Meta executives of publicly stating in congressional testimony that the company’s age-checking process was effective and that the company removed underage accounts when it learned of them — even as the executives knew there were millions of underage users on Instagram.

“Tweens want access to Instagram, and they lie about their age to get it now,” Adam Mosseri, the head of Instagram, said in an internal company chat in November 2021, according to the court filing.

 

In Senate testimony the following month, Mr. Mosseri said: “If a child is under the age of 13, they are not permitted on Instagram.”

In a statement on Saturday, Meta said that it had spent a decade working to make online experiences safe and age-appropriate for teenagers and that the states’ complaint “mischaracterizes our work using selective quotes and cherry-picked documents.”

 

The statement also noted that Instagram’s terms of use prohibit users under the age of 13 in the United States. And it said that the company had “measures in place to remove these accounts when we identify them.”

The company added that verifying people’s ages was a “complex” challenge for online services, especially with younger users who may not have school IDs or driver’s licenses. Meta said it would like to see federal legislation that would require “app stores to get parents’ approval whenever their teens under 16 download apps” rather than having young people or their parents supply personal details like birth dates to many different apps.

The privacy charges in the case center on a 1998 federal law, the Children’s Online Privacy Protection Act. That law requires that online services with content aimed at children obtain verifiable permission from a parent before collecting personal details — like names, email addresses or selfies — from users under 13. Fines for violating the law can run to more than $50,000 per violation.

The lawsuit argues that Meta elected not to build systems to effectively detect and exclude such underage users because it viewed children as a crucial demographic — the next generation of users — that the company needed to capture to assure continued growth.

Meta had many indicators of underage users, according to the Wednesday filing. An internal company chart displayed in the unsealed material, for example, showed how Meta tracked the percentage of 11- and 12-year-olds who used Instagram daily, the complaint said.

 

Meta also knew about accounts belonging to specific underage Instagram users through company reporting channels. But it “automatically” ignored certain reports of users under 13 and allowed them to continue using their accounts, the complaint said, as long as the accounts did not contain a user biography or photos.

In one case in 2019, Meta employees discussed in emails why the company had not deleted four accounts belonging to a 12-year-old, despite requests and “complaints from the girl’s mother stating her daughter was 12,” according to the complaint. The employees concluded that the accounts were “ignored” partly because Meta representatives “couldn’t tell for sure the user was underage,” the legal filing said.

This is not the first time the social media giant has faced allegations of privacy violations. In 2019, the company agreed to pay a record $5 billion, and to alter its data practices, to settle charges from the Federal Trade Commission of deceiving users about their ability to control their privacy.

It may be easier for the states to pursue Meta for children’s privacy violations than to prove that the company encouraged compulsive social media use — a relatively new phenomenon — among young people. Since 2019, the F.T.C. has successfully brought similar children’s privacy complaints against tech giants including Google and its YouTube platform, Amazon, Microsoft and Epic Games, the creator of Fortnite."

 

For full/original post, please visit:


 

Franck's curator insight, November 30, 2023 6:20 AM
The Irish Data Protection Commission found that TikTok had violated several articles of the General Data Protection Regulation (GDPR), notably by failing to protect children's privacy. As a result, the European privacy watchdog fined TikTok approximately $367 million. (Oct. 13, 2023)
Scooped by Roxana Marachi, PhD
April 8, 2024 9:00 PM

What's in a Name? Auditing Large Language Models for Race and Gender Bias // (Haim, Salinas, & Nyarko, 2024) // Stanford Law School via arxiv.org  

What's in a Name? Auditing Large Language Models for Race and Gender Bias.
By Amit Haim, Alejandro Salinas, Julian Nyarko

ABSTRACT

"We employ an audit design to investigate biases in state-of-the-art large language models, including GPT-4. In our study, we prompt the models for advice involving a named individual across a variety of scenarios, such as during car purchase negotiations or election outcome predictions. We find that the advice systematically disadvantages names that are commonly associated with racial minorities and women. Names associated with Black women receive the least advantageous outcomes. The biases are consistent across 42 prompt templates and several models, indicating a systemic issue rather than isolated incidents. While providing numerical, decision-relevant anchors in the prompt can successfully counteract the biases, qualitative details have inconsistent effects and may even increase disparities. Our findings underscore the importance of conducting audits at the point of LLM deployment and implementation to mitigate their potential for harm against marginalized communities."

 

Please visit the following for the abstract on arxiv.org and a link to download:

https://arxiv.org/abs/2402.14875 

 

Scooped by Roxana Marachi, PhD
January 14, 2023 5:39 PM

Controversy erupts over non-consensual AI mental health experiment [Updated] // ArsTechnica


"Koko let 4,000 people get therapeutic help from GPT-3 without telling them first."

 

By Benj Edwards

"On Friday, Koko co-founder Rob Morris announced on Twitter that his company ran an experiment to provide AI-written mental health counseling for 4,000 people without informing them first, Vice reports. Critics have called the experiment deeply unethical because Koko did not obtain informed consent from people seeking counseling.

 

Koko is a nonprofit mental health platform that connects teens and adults who need mental health help to volunteers through messaging apps like Telegram and Discord.

 

On Discord, users sign in to the Koko Cares server and send direct messages to a Koko bot that asks several multiple-choice questions (e.g., "What's the darkest thought you have about this?"). It then shares a person's concerns—written as a few sentences of text—anonymously with someone else on the server who can reply anonymously with a short message of their own."...

 

For full article, please visit:

https://arstechnica.com/information-technology/2023/01/contoversy-erupts-over-non-consensual-ai-mental-health-experiment/ 

Scooped by Roxana Marachi, PhD
July 18, 2023 7:53 PM

Securly Sued Over Surveillance of Students on School Chromebooks // 7/17/23


Securly Sued Over Surveillance of Students on School Chromebooks - by Christopher Brown

https://news.bloomberglaw.com/privacy-and-data-security/securly-sued-over-surveillance-of-students-on-school-chromebooks 

 

Court case docket - public complaint file

https://www.bloomberglaw.com/public/desktop/document/BateetalvSecurlyIncDocketNo323cv01304SDCalJul172023CourtDocket?doc_id=X6CKD5LOJEB9EMRO1KU3RFCR8K7

Scooped by Roxana Marachi, PhD
September 17, 2022 11:48 AM

Report – Hidden Harms: The Misleading Promise of Monitoring Students Online // Center for Democracy and Technology 


By Elizabeth Laird, Hugh Grant-Chapman, Cody Venzke, Hannah Quay-de la Vallee

 

"The pressure on schools to keep students safe, especially to protect them physically and support their mental health, has never been greater. The mental health crisis, which has been exacerbated by the COVID-19 pandemic, and concerns about the increasing number of school shootings have led to questions about the role of technology in meeting these goals. From monitoring students’ public social media posts to tracking what they do in real-time on their devices, technology aimed at keeping students safe is growing in popularity. However, the harms that such technology inflicts are increasingly coming to light. 

 

CDT conducted survey research among high school students and middle and high school parents and teachers to better understand the promise of technologies aimed at keeping students safe and the risks that they pose, as reported by those most directly interacting with such tools. In particular, the research focused on student activity monitoring, the nearly ubiquitous practice of schools using technology to monitor students’ activities online, especially on devices provided by the school. CDT built on its previous research, which showed that this monitoring is conducted primarily to comply with perceived legal requirements and to keep students safe. While stakeholders are optimistic that student activity monitoring will keep students safe, in practice it creates significant efficacy and equity gaps: 

  • Monitoring is used for discipline more often than for student safety: Despite assurances and hopes that student activity monitoring will be used to keep students safe, teachers report that it is more frequently used for disciplinary purposes in spite of parent and student concerns. 

  • Teachers bear considerable responsibility but lack training for student activity monitoring: Teachers are generally tasked with responding to alerts generated by student activity monitoring, despite only a small percentage having received training on how to do so privately and securely. 

  • Monitoring is often not limited to school hours despite parent and student concerns: Students and parents are the most comfortable with monitoring being limited to when school is in session, but monitoring frequently occurs outside of that time frame. 

  • Stakeholders demonstrate large knowledge gaps in how monitoring software functions: There are significant gaps between what teachers report is communicated about student activity monitoring, often via a form provided along with a school-issued device, and what parents and students retain and report about it. 

Additionally, certain groups of students, especially those who are already more at risk than their peers, disproportionately experience the hidden harms of student activity monitoring: 

  • Students are at risk of increased interactions with law enforcement: Schools are sending student data collected from monitoring software to law enforcement officials, who use it to contact students. 

  • LGBTQ+ students are disproportionately targeted for action: The use of student activity monitoring software is resulting in the nonconsensual disclosure of students’ sexual orientation and gender identity (i.e., “outing”), as well as more LGBTQ+ students reporting they are being disciplined or contacted by law enforcement for concerns about committing a crime compared to their peers. 

  • Students’ mental health could suffer: While students report they are being referred to school counselors, social workers, and other adults for mental health support, they are also experiencing detrimental effects from being monitored online. These effects include avoiding expressing their thoughts and feelings online, as well as not accessing important resources that could help them. 

  • Students from low-income families, Black students, and Hispanic students are at greater risk of harm: Previous CDT research showed that certain groups of students, including students from low-income families, Black students, and Hispanic students, rely more heavily on school-issued devices. Therefore, they are subject to more surveillance and the aforementioned harms, including interacting with law enforcement, being disciplined, and being outed, than those using personal devices. 


Given that the implementation of student activity monitoring falls short of its promises, this research suggests that education leaders should consider alternative strategies to keep students safe that do not simultaneously put students’ safety and well-being in jeopardy.


See below for our complete report, summary brief, and in-depth research slide deck. For more information, see our letter calling for action from the U.S. Department of Education’s Office for Civil Rights — jointly signed by multiple civil society groups — as well as our related press release and recent blog post discussing findings from our parent and student focus groups.

Read the full report here.

Read the summary brief here.

Read the research slide deck here.

 
 
Scooped by Roxana Marachi, PhD
May 22, 2023 9:32 PM

FTC Says EdTech Provider Edmodo Unlawfully Used Children’s Personal Information for Advertising and Outsourced Compliance to School Districts


"The Federal Trade Commission has obtained an order against education technology provider Edmodo for collecting personal data from children without obtaining their parent’s consent and using that data for advertising, in violation of the Children’s Online Privacy Protection Act Rule (COPPA Rule), and for unlawfully outsourcing its COPPA compliance responsibilities to schools. 

Under the proposed order, filed by the Department of Justice on behalf of the FTC, Edmodo, Inc. will be prohibited from requiring students to hand over more personal data than is necessary in order to participate in an online educational activity. This is a first for an FTC order and is in line with a policy statement the FTC issued in May 2022 that warned education technology companies about forcing parents and schools to provide personal data about children in order to participate in online education. During the course of the FTC’s investigation, Edmodo suspended operations in the United States. The order, if approved by the court, will bind the company, including if it resumes U.S. operations.

“This order makes clear that ed tech providers cannot outsource compliance responsibilities to schools, or force students to choose between their privacy and education,” said Samuel Levine, Director of the FTC’s Bureau of Consumer Protection. “Other ed tech providers should carefully examine their practices to ensure they’re not compromising students’ privacy.”

In a complaint, also filed by DOJ, the FTC says Edmodo violated the COPPA Rule by failing to provide information about the company’s data collection practices to schools and teachers, and failing to obtain verifiable parental consent. The COPPA Rule requires online services and websites directed to children under 13 to notify parents about the personal information they collect and obtain verifiable parental consent for the collection and use of that information.

Until approximately September 2022, California-based Edmodo offered an online platform and mobile app with virtual class spaces to host discussions, share materials and other online resources for teachers and schools in the United States via a free and subscription-based service. The company collected personal information about students including their name, email address, date of birth and phone number as well as persistent identifiers, which it used to provide ads.

Under the COPPA Rule, schools can authorize collection of children’s personal information on behalf of parents. But a website operator must provide notice to the school of the operator’s collection, use and disclosure practices, and the school can only authorize collection and use of personal information for an educational purpose.

Edmodo required schools and teachers to authorize data collection on behalf of parents or to notify parents about Edmodo’s data collection practices and obtain their consent to that collection. Edmodo, however, failed to provide schools and teachers with the information they would need to comply in either scenario as required by the COPPA Rule, according to the complaint. For example, during the signup process for Edmodo’s free service, Edmodo provided minimal information about the COPPA Rule to teachers—providing only a link to the company’s terms of service and privacy policy, which teachers were not required to review before signing up for the company’s service.

Those teachers and schools that did read Edmodo’s terms of service were falsely told that they were “solely” responsible for complying with the COPPA Rule. The terms of service also failed to adequately disclose what personal information the company actually collects or indicate how schools or teachers should go about obtaining parental consent.

These failures led to the illegal collection of personal information from children, according to the complaint.

In addition, Edmodo could not rely on schools to authorize collection on behalf of parents because the company used the personal information it collected from children for a non-educational purpose—to serve advertising. For such commercial uses, the COPPA Rule required Edmodo to obtain consent directly from parents. 

Edmodo also violated the COPPA Rule by retaining personal information indefinitely until at least 2020 when it put in place a policy to delete the data after two years, according to the complaint. COPPA prohibits retaining personal information about children for longer than is reasonably necessary to fulfill the purpose for which it was collected.

In addition to violating the COPPA Rule, the FTC says Edmodo violated the FTC Act’s prohibition on unfair practices by relying on schools to obtain verifiable parental consent. Specifically, the FTC says that Edmodo outsourced its COPPA compliance responsibilities to schools and teachers while providing confusing and inaccurate information about obtaining consent. This is the first time the FTC has alleged an unfair trade practice in the context of an operator’s interaction with schools.

Proposed Order

The proposed order with Edmodo includes a $6 million monetary penalty, which will be suspended due to the company’s inability to pay. Other order provisions, which will provide protections for children’s data should Edmodo resume operations in the United States, include:

  • prohibiting Edmodo from conditioning a child’s participation in an activity on the child disclosing more information than is reasonably necessary to participate in such activity;

  • requiring the company to complete several requirements before obtaining school authorization to collect information from a child;

  • prohibiting the company from using children’s information for non-educational purposes such as advertising or building user profiles;

  • banning the company from using schools as intermediaries in the parental consent process;

  • requiring the company to implement and adhere to a retention schedule that details what information it collects, what the data is used for and a time frame for deleting it; and

  • requiring Edmodo to delete models or algorithms developed using personal information collected from children without verifiable parental consent or school authorization.

The Commission voted 3-0 to refer the civil penalty complaint and proposed federal order to the Department of Justice. The DOJ filed the complaint and stipulated order in the U.S. District Court for the Northern District of California.

NOTE: The Commission authorizes the filing of a complaint when it has “reason to believe” that the named defendant is violating or is about to violate the law and it appears to the Commission that a proceeding is in the public interest. Stipulated orders have the force of law when approved and signed by the District Court judge.

The lead FTC attorneys on this matter are Gorana Neskovic and Peder Magee from the FTC’s Bureau of Consumer Protection.

The Federal Trade Commission works to promote competition and protect and educate consumers. Learn more about consumer topics at consumer.ftc.gov, or report fraud, scams, and bad business practices at ReportFraud.ftc.gov. Follow the FTC on social media, read consumer alerts and the business blog, and sign up to get the latest FTC news and alerts."

 

For original post, please visit: 

https://www.ftc.gov/news-events/news/press-releases/2023/05/ftc-says-ed-tech-provider-edmodo-unlawfully-used-childrens-personal-information-advertising?utm_source=govdelivery 

No comment yet.
Scooped by Roxana Marachi, PhD
May 2, 2023 9:57 PM
Scoop.it!

Ransomware Gang Claims Edison Learning Data Theft // THE Journal

Ransomware Gang Claims Edison Learning Data Theft // THE Journal | Educational Psychology, AI, & Emerging Technologies: Critical Thinking on Current Trends | Scoop.it

By Kristal Kuykendall 05/02/23

"The Royal Ransomware is claiming to have infiltrated public school management and virtual learning provider Edison Learning, posting on its dark web data leak site on Wednesday, April 26, that it had stolen 20GB of the company’s data “including personal information of employees and students” and threatening to post the data “early next week.”

Typically, when Royal and similar ransomware groups post such warnings, it indicates they have likely made a ransom demand and may be in negotiations with the targeted organization, said cybersecurity expert Doug Levin, who is national director at K12 Security Information Exchange and sits on CISA’s Cybersecurity Advisory Committee.

Edison Learning confirmed a cyber incident has occurred and said it could not divulge anything else. "Our investigation into this incident is ongoing, and we are unable to provide additional details at this time," Edison Learning Director of Communications Michael Serpe told THE Journal in an email. "We do not have any student data on impacted systems." 


Based in Fort Lauderdale, Florida, Edison Learning was founded in 1992 as the Edison Project to provide school management services for public charter schools and struggling districts in the United States and United Kingdom. 

According to an archived 2015 website page, Edison Learning has managed hundreds of schools in 32 states, serving millions of students over the years. A 2012 Edison Learning sales presentation found online by THE Journal states that during the 2009–2010 school year, the company’s services were providing schooling for 400,000 children in 25 states, the U.K., and the United Arab Emirates.

More recently, Edison Learning has expanded to provide virtual schooling for middle and high school students as well as CTE courses for high school students, social-emotional learning courses for middle and high school, and more. The company operates its own in-house learning management system, called eSchoolware, and on its website touts other services such as “management solutions, alternative education, personal learning plans, and turnaround services for underperforming schools.”

The Royal ransomware gang — whose tactics were the subject of a CISA cybersecurity advisory in March 2023 — wrote on its data leak site on the dark web: “Looks like knowledge providers missed some lessons of cyber security [sic]. Recently we gave one to EdisonLearning and they have failed.”

Levin at K12SIX said that while “occasionally, these groups list victims they didn’t actually compromise,” the opposite is true more often than not. For example, on Royal’s data leak site, scores of companies — including a handful of public school districts, community colleges, and universities — are listed as victims targeted since the beginning of this year, and many include links to the stolen data files for the respective victims, who presumably did not pay the ransom.... 

 

For full post, please visit: 

https://thejournal.com/articles/2023/05/01/ransomware-gang-claims-edison-learning-data-theft.aspx?s=the_nu_020523&oly_enc_id=8831J2755401H5M 

No comment yet.
Scooped by Roxana Marachi, PhD
January 29, 2024 2:16 PM
Scoop.it!

Could a court really order the destruction of ChatGPT? The New York Times thinks so, and it may be right // The Conversation

Could a court really order the destruction of ChatGPT? The New York Times thinks so, and it may be right // The Conversation | Educational Psychology, AI, & Emerging Technologies: Critical Thinking on Current Trends | Scoop.it

By João Marinotti

"On Dec. 27, 2023, The New York Times filed a lawsuit against OpenAI alleging that the company committed willful copyright infringement through its generative AI tool ChatGPT. The Times claimed both that ChatGPT was unlawfully trained on vast amounts of text from its articles and that ChatGPT’s output contained language directly taken from its articles.

 

To remedy this, the Times asked for more than just money: It asked a federal court to order the “destruction” of ChatGPT.

If granted, this request would force OpenAI to delete its trained large language models, such as GPT-4, as well as its training data, which would prevent the company from rebuilding its technology.

This prospect is alarming to the 100 million people who use ChatGPT every week. And it raises two questions that interest me as a law professor. First, can a federal court actually order the destruction of ChatGPT? And second, if it can, will it?

 

Destruction in the court

The answer to the first question is yes. Under copyright law, courts do have the power to issue destruction orders.

To understand why, consider vinyl records. Their resurging popularity has attracted counterfeiters who sell pirated records.

 

If a record label sues a counterfeiter for copyright infringement and wins, what happens to the counterfeiter’s inventory? What happens to the master and stamper disks used to mass-produce the counterfeits, and the machinery used to create those disks in the first place?

 

To address these questions, copyright law grants courts the power to destroy infringing goods and the equipment used to create them. From the law’s perspective, there’s no legal use for a pirated vinyl record. There’s also no legitimate reason for a counterfeiter to keep a pirated master disk. Letting them keep these items would only enable more lawbreaking.

 

So in some cases, destruction is the only logical legal solution. And if a court decides ChatGPT is like an infringing good or pirating equipment, it could order that it be destroyed. In its complaint, the Times offered arguments that ChatGPT fits both analogies.

Copyright law has never been used to destroy AI models, but OpenAI shouldn’t take solace in this fact. The law has been increasingly open to the idea of targeting AI.

Consider the Federal Trade Commission’s recent use of algorithmic disgorgement as an example. The FTC has forced companies such as WeightWatchers to delete not only unlawfully collected data but also the algorithms and AI models trained on such data.

Why ChatGPT will likely live another day

It seems to be only a matter of time before copyright law is used to order the destruction of AI models and datasets. But I don’t think that’s going to happen in this case. Instead, I see three more likely outcomes.

The first and most straightforward is that the two parties could settle. In the case of a successful settlement, which may be likely, the lawsuit would be dismissed and no destruction would be ordered.

The second is that the court might side with OpenAI, agreeing that ChatGPT is protected by the copyright doctrine of “fair use.” If OpenAI can argue that ChatGPT is transformative and that its service does not provide a substitute for The New York Times’ content, it just might win.

The third possibility is that OpenAI loses but the law saves ChatGPT anyway. Courts can order destruction only if two requirements are met: First, destruction must not prevent lawful activities, and second, it must be “the only remedy” that could prevent infringement.

That means OpenAI could save ChatGPT by proving either that ChatGPT has legitimate, noninfringing uses or that destroying it isn’t necessary to prevent further copyright violations.

Both outcomes seem possible, but for the sake of argument, imagine that the first requirement for destruction is met. The court could conclude that, because of the articles in ChatGPT’s training data, all uses infringe on the Times’ copyrights – an argument put forth in various other lawsuits against generative AI companies.

 

In this scenario, the court would issue an injunction ordering OpenAI to stop infringing on copyrights. Would OpenAI violate this order? Probably not. A single counterfeiter in a shady warehouse might try to get away with that, but that’s less likely with a US$100 billion company.

Instead, it might try to retrain its AI models without using articles from the Times, or it might develop other software guardrails to prevent further problems. With these possibilities in mind, OpenAI would likely succeed on the second requirement, and the court wouldn’t order the destruction of ChatGPT.

Given all of these hurdles, I think it’s extremely unlikely that any court would order OpenAI to destroy ChatGPT and its training data. But developers should know that courts do have the power to destroy unlawful AI, and they seem increasingly willing to use it."

 

For original post, please visit: 
https://theconversation.com/could-a-court-really-order-the-destruction-of-chatgpt-the-new-york-times-thinks-so-and-it-may-be-right-221717 

No comment yet.
Scooped by Roxana Marachi, PhD
October 30, 2023 11:36 AM
Scoop.it!

Research at a Glance: Data Privacy and Children // Children and Screens: Institute of Digital Media and Child Development

https://www.childrenandscreens.com/data-privacy-and-children/ 

No comment yet.
Scooped by Roxana Marachi, PhD
April 2, 2024 6:57 PM
Scoop.it!

AI: Two reports reveal a massive enterprise pause over security and ethics // Diginomica

AI: Two reports reveal a massive enterprise pause over security and ethics // Diginomica | Educational Psychology, AI, & Emerging Technologies: Critical Thinking on Current Trends | Scoop.it

by Chris Middleton 

"No one doubts that artificial intelligence is a strategic boardroom issue, though diginomica revealed last year that much of the initial buzz was individuals using free cloud tools as shadow IT, while many business leaders talked up AI in their earnings calls just to keep investors happy. 

In 2024, those caveats remain amidst the hype. As one of my stories from KubeCon + CloudNativeCon last week showed, the reality for many software engineering teams is the C-suite demanding an AI ‘hammer’ with little idea of what business nail they want to hit with it. 

Or, as Intel Vice President and General Manager for Open Ecosystem Arun Gupta put it: 

"When we go into a CIO discussion, it’s ‘How can I use Gen-AI?’ And I’m like, ‘I don’t know. What do you want to do with it?’ And the answer is, ‘I don’t know, you figure it out!’"


So, now that AI Spring is in full bloom, what is the reality of enterprise adoption? Two reports this week unveil some surprising new findings, many of which show that the hype cycle is ending more quickly than the industry would like.

First up is a white paper from $2 billion cloud incident-response provider, PagerDuty. According to its survey of 100 Fortune 1,000 IT leaders, 100% are concerned about the security risks of the technology, and 98% have paused Gen-AI projects as a result. 

 

Those are extraordinary figures. However, the perceived threats are not solely about cybersecurity (with phishing, deep fakes, complex fraud, and automated attacks on the rise), but are rooted in what PagerDuty calls the “moral implications”. These include worries over copyright theft in training data and any legal exposure that may arise from that. 

As previously reported (see diginomica, passim), multiple IP infringement lawsuits are ongoing in the US, while in the UK, the House of Lords’ Communications and Digital Committee was clear, in its inquiry into Large Language Models, that copyright theft had taken place. A conclusion that peers arrived at after interviewing expert witnesses from all sides of the debate, including vendors and lawyers.

According to PagerDuty, unease over these issues keeps more than half of respondents (51%) awake at night, with nearly as many concerned about the disclosure of sensitive information (48%), data privacy violations (47%), and social engineering attacks (46%). They are right to be cautious: last year, diginomica reported that source code is the most common form of privileged data disclosed to cloud-based AI tools.

The white paper adds:
"Any of these security risks could damage the company’s public image, which explains why Gen-AI’s risk to the organization’s reputation tops the list of concerns for 50% of respondents. More than two in five also worry about the ethics of the technology (42%). Among the executives with these moral concerns, inherent societal biases of training data (26%) and lack of regulation (26%) top the list."

Despite this, only 25% of IT leaders actively mistrust the technology, adds the white paper – cold comfort for vendors, perhaps. Even so, it is hard to avoid the implication that, while some providers might have first- or big-mover advantage in generative AI, any that trained their systems unethically may have stored up a world of problems for themselves.

However, with nearly all Fortune 1,000 companies pausing their AI programmes until clear guidelines can be put in place – though the figure of 98% seems implausibly high – the white paper adds:

 

"Executives value these policies, so much so that a majority (51%) believe they should adopt Gen-AI only after they have the right guidelines in place. [But] others believe they risk falling behind if they don’t adopt Gen-AI as quickly as possible, regardless of parameters (46%)."

 

Those figures suggest a familiar pattern in enterprise tech adoption: early movers stepping back from their decisions, while the pack of followers is just getting started. 

 

Yet the report continues:
"Despite the emphasis and clear need, only 29% of companies have established formal guidelines. Instead, 66% are currently setting up these policies, which means leaders may need to keep pausing Gen-AI until they roll out a course of action."

 

That said, the white paper’s findings are inconsistent in some respects, and thus present a confusing picture – conceivably, one of customers confirming a security researcher’s line of questioning. Imagine that: confirmation bias in a Gen-AI report!

 

For example, if 98% of IT leaders say they have paused enterprise AI programmes until organizational guidelines are put in place, how are 64% of the same survey base able to report that Gen-AI is still being used in “some or all” of their departments? 

 

One answer may be that, as diginomica found last year, ‘departmental’ use may in fact be individuals experimenting with cloud-based tools as shadow IT. That aside, the white paper confirms that early enterprise adopters may be reconsidering their incautious rush."...

 

For full post, please visit: 

https://diginomica.com/ai-two-reports-reveal-massive-enterprise-pause-over-security-and-ethics  

 
No comment yet.
Scooped by Roxana Marachi, PhD
August 6, 2023 1:10 PM
Scoop.it!

Breaking Free from Blockchaining Babies & "Cradle-to-Career" Data Traps // #CivicsOfTech23 Closing Keynote

The 2nd Annual Civics of Technology conference, held on August 3rd and 4th, 2023, closed with a keynote address by Dr. Roxana Marachi, Professor of Education at San José State University.

The video for this talk is available here, accompanying slides are available at http://bit.ly/MarachiKeynote23, and links to additional resources can also be found at http://bit.ly/DataJusticeLinks.   

 

A blogpost introducing the Civics of Technology community to some of the research and trends discussed in the keynote can be found here.


Thanks to the entire Civics of Technology team for their efforts in organizing the conference, to all the presenters and participants, and to the University of North Texas College of Education and Loyola University Maryland School of Education for their generous support. For more information about Civics of Technology events and discussions, please visit http://civicsoftechnology.org

 

https://www.youtube.com/watch?v=OiuKT-LwH4Q

No comment yet.
Scooped by Roxana Marachi, PhD
May 21, 2023 4:36 PM
Scoop.it!

A patchwork of platforms: mapping data infrastructures in schools // Pangrazio, Selwyn, & Cumbo (2022)

A patchwork of platforms: mapping data infrastructures in schools // Pangrazio, Selwyn, & Cumbo (2022) | Educational Psychology, AI, & Emerging Technologies: Critical Thinking on Current Trends | Scoop.it

"Abstract
This paper explores the significance of schools’ data infrastructures as a site of institutional power and (re)configuration. Using ‘infrastructure studies’ as a theoretical framework and drawing on in-depth studies of three contrasting Australian secondary schools, the paper takes a holistic look at schools’ data infrastructures. In contrast to the notion of the ‘platformatised’ school, the paper details the ad hoc and compromised ways that these school data infrastructures have developed – highlighting a number of underlying sociotechnical conditions that lead to an ongoing process of data infrastructuring. These include issues of limited technical interoperability and differences between educational requirements and commercially-led designs. Also apparent is the disjuncture between the imagined benefits of institutional data use and the ongoing maintenance and repair required to make the infrastructures function. Taking an institutional perspective, the paper explores why digital technologies continue to complicate (rather than simplify) school processes and practices."

 

For journal access, please visit: 

https://www.tandfonline.com/doi/abs/10.1080/17439884.2022.2035395?journalCode=cjem20 

No comment yet.
Scooped by Roxana Marachi, PhD
June 2, 2023 1:55 PM
Scoop.it!

Generating Harms: Generative AI's Impact and Paths Forward // EPIC, Electronic Privacy Information Center

https://epic.org/wp-content/uploads/2023/05/EPIC-Generative-AI-White-Paper-May2023.pdf 

No comment yet.
Scooped by Roxana Marachi, PhD
July 24, 2023 10:49 AM
Scoop.it!

K-12 Dealmaking: Kahoot Acquired by Investor Group That Includes Goldman Sachs // EdWeek Market Brief

K-12 Dealmaking: Kahoot Acquired by Investor Group That Includes Goldman Sachs // EdWeek Market Brief | Educational Psychology, AI, & Emerging Technologies: Critical Thinking on Current Trends | Scoop.it

By Michelle Caffrey

"Norwegian ed-tech company Kahoot agreed to be acquired by a group of investors led by Goldman Sachs Asset Management for $1.7 billion, or 17.2 Norwegian Krones.

Lego’s parent company, Kirkbi, is among the other co-investors, alongside private equity firm General Atlantic, Kahoot CEO Eilert Hanoa’s investment vehicle Glitrafjord, and other investors and management shareholders, Kahoot said.

The deal is expected to close by the end of the year, pending regulatory approvals.

In announcing the acquisition, Kahoot, which is traded on the Oslo Stock Exchange, also shared preliminary financial results for the second quarter. The company reported recognized revenue of more than $41 million for the quarter, up 14 percent year-over-year. The company also said its adjusted earnings before interest, taxes, depreciation, and amortization was $11 million for the quarter, an increase of 60 percent from the prior year’s period.

 

The offer to take Kahoot private gives shareholders about $3.48 a share, a 51 percent premium to its closing price of $2.28 in May, when the investors disclosed their shareholding positions.

Kahoot was founded in 2013 and designed to offer students a game-based platform to learn a range of subjects. It has acquired seven companies since it launched, one of the largest being a $500 million acquisition of digital learning platform Clever in May 2021.

“As the need for engaging learning, across home, school and work, continues to grow, I am excited about the opportunities this partnership represents for our users, our ecosystem of partners, and for the talented team across the Kahoot! Group, to advance education for hundreds of millions of learners everywhere,” Hanoa, Kahoot’s CEO, said in a statement.

Goldman Sachs noted Kahoot’s unique brand, extensive reach, and scalable technology and operations in its announcement of the deal, as well as its focus on a wide range of customers, from school children to enterprise clients.

The acquisition will allow Kahoot to benefit from operating as a private company, Goldman Sachs and co-investors said, noting that it plans to invest in product innovation and growth both organically and through acquisitions. Having access to private capital will allow the company to significantly boost its go-to-market strategy, it added.

“Kahoot is unlocking learning potential for children, students and employees across the world. The company has a clear mission and value proposition and our investment will help to grow its impact and accelerate value for all stakeholders,” Michael Bruun, global co-head of private equity at Goldman Sachs Asset Management, said in a statement.

“Through this transaction, we are pleased to partner with a fantastic leadership team and group of co-investors to expand a mission-critical learning and engagement platform and contribute to its further growth and innovation.”

The investment is another move by Lego parent company Kirkbi to grow its presence in the ed-tech market, after the company acquired BrainPop, maker of video-based learning tools, in October 2022.

In a statement, Kirkbi Chief Investment Officer Thomas Lau Schleicher said Kahoot’s mission resonates with his organization’s “core values” and it finds “the investment fits very well with Kirkbi’s long-term investment strategy.”...

 

For full post:

https://marketbrief.edweek.org/marketplace-k-12/k-12-dealmaking-kahoot-acquired-investor-group-includes-goldman-sachs/?cmp=eml-enl-mb+20230724&id=1712683 

No comment yet.
Scooped by Roxana Marachi, PhD
July 21, 2023 6:43 PM
Scoop.it!

Districts, Take Note: Privacy Is Rare in Apps Used in Schools // EdWeek 

Districts, Take Note: Privacy Is Rare in Apps Used in Schools // EdWeek  | Educational Psychology, AI, & Emerging Technologies: Critical Thinking on Current Trends | Scoop.it

By Arianna Prothero


"Schools are falling short on vetting the apps and internet services they require or recommend that students use.


That’s among the findings of a comprehensive analysis of school technology practices by Internet Safety Labs, a nonprofit group that researches tech product safety.

Researchers analyzed more than 1,300 apps used in 600 schools across the country looking at what information the apps—and the browser versions of those apps—are collecting on students and who that information is shared with or sold to.

 

Not protecting students’ personal information in the digital space can cause real-world harms, said Lisa LeVasseur, the founder and executive director of Internet Safety Labs and one of the co-authors of the report.

Strangers can glean a lot of sensitive information about individuals, she said, from even just their location and calendar data.

“It’s like pulling a thread,” LeVasseur said. “Even data that may seem innocuous can be used maliciously, potentially—certainly in ways unanticipated and undesired. These kids are not signing up for data broker profiles. None of us are, actually.” (Data brokers are companies that collect people’s personal data from various sources, package it together into profiles, and sell it to other companies for marketing purposes.)

Only 29 percent of schools appear to be vetting all apps used by students, the analysis found. Schools that systematically vet all apps were less likely to recommend or require students use apps that feature ads.

 

But in an unusual twist, those schools that vet their tech were actually more likely to require students to use apps with poor safety ratings from Internet Safety Labs.

Although LeVasseur said she’s not sure why that is the case, it might be because schools with systematic vetting procedures wound up requiring that students use more apps, giving schools a false sense of security that the apps they approved were safe to use.

It’s also hard for families to find information online about the technology their children are required to use for school and difficult to opt out of using that tech, according to the report.

Less than half of schools—45 percent—provide a technology notice that clearly lists all of the technology products students must use, the researchers found. While not required under federal or most state laws, it is considered a best practice, the report said.

Only 14 percent of schools gave parents and students older than 18 years of age the opportunity to consent to technology use.

Certifications can give a false sense of security

Researchers for the Internet Safety Labs also found that apps with the third-party COPPA certification called Safe Harbor—which indicates that an app follows federal privacy-protection laws for children—are frequently sharing student data with the likes of Facebook and Twitter. Safe Harbor-certified apps also have more advertising than the overall sample of apps the report examined.

The certification verifies that apps abstain from certain data practices, like behavioral advertising, said LeVasseur. But school leaders may not be getting the data privacy protections for students that they believe they are.

 

“Third-party certifications may not be doing what you think they are,” said LeVasseur.

But overall, apps with third-party certifications, such as 1EdTech, and pledges or promises, such as the Student Privacy Pledge or the Student Data Privacy Consortium, received better data privacy safety ratings under the rubric developed by the Internet Safety Labs.

In all, the Internet Safety Labs examined and tested 1,357 apps that schools across the country either recommend or require students and families to use. It created its sample of apps by assessing the apps recommended or required in a random sample of 13 schools from each of the 50 states and the District of Columbia, totaling 663 schools serving 456,000 students.

While researchers for Internet Safety Labs were only able to analyze the off-the-shelf versions of the apps schools used (they did not have access to school versions of these apps), the group estimates that 8 out of every 10 apps recommended by schools to students are of the off-the-shelf variety.

This is the second report from an ambitious evaluation of the technology used in schools by Internet Safety Labs. The first report, released in December, labeled the vast majority of those apps—96 percent—as not safe for children to use because they share information with third parties or contain ads."...

 

For full post, please visit:

https://www.edweek.org/technology/districts-take-note-privacy-is-rare-in-apps-used-in-schools/2023/07 

 

For report published by Internet Safety Labs, visit here:
https://internetsafetylabs.org/wp-content/uploads/2023/06/2022-K12-Edtech-Safety-Benchmark-Findings-Report-2.pdf 

No comment yet.
Scooped by Roxana Marachi, PhD
September 22, 2022 10:59 AM
Scoop.it!

Tracked: How colleges use AI to monitor student protests // Dallas Morning News

Tracked: How colleges use AI to monitor student protests // Dallas Morning News | Educational Psychology, AI, & Emerging Technologies: Critical Thinking on Current Trends | Scoop.it

By Ari Sen & Derêka K. Bennett

 

The pitch was attractive and simple.

 

For a few thousand dollars a year, Social Sentinel offered schools across the country sophisticated technology to scan social media posts from students at risk of harming themselves or others. Used correctly, the tool could help save lives, the company said.

For some colleges that bought the service, it also served a different purpose — allowing campus police to surveil student protests.

 

During demonstrations over a Confederate statue at UNC-Chapel Hill, a Social Sentinel employee entered keywords into the company’s monitoring tool to find posts related to the protests. At Kennesaw State University in Georgia five years ago, authorities used the service to track protesters at a town hall with a U.S. senator, records show. And at North Carolina A&T, a campus official told a Social Sentinel employee to enter keywords to find posts related to a cheerleader’s allegation that the school mishandled her rape complaint.
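
[Note: The monitoring described above is, at its core, keyword matching against collected posts. The sketch below is a hypothetical illustration of that idea, not Social Sentinel's actual code; the post format and keywords are assumptions for illustration only.]

```python
# Hypothetical illustration of keyword-based post monitoring; not Social
# Sentinel's actual implementation.
def match_posts(posts, keywords):
    """Return posts containing any of the operator-supplied keywords.

    `posts` is an iterable of dicts with 'author' and 'text' keys,
    an assumed, simplified format for illustration.
    """
    terms = [k.lower() for k in keywords]
    return [p for p in posts if any(t in p["text"].lower() for t in terms)]

# Example: an operator searching for protest-related posts.
example_posts = [
    {"author": "student_a", "text": "Rally at the statue tonight, bring signs"},
    {"author": "student_b", "text": "Study group in the library at 7"},
]
print(match_posts(example_posts, ["rally", "protest"]))
```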

 

An investigation by The Dallas Morning News and the Investigative Reporting Program at UC Berkeley’s Graduate School of Journalism reveals for the first time that as more students have embraced social media as a digital town square to express opinions and organize demonstrations, many college police departments have been using taxpayer dollars to pay for Social Sentinel’s services to monitor what they say. At least 37 colleges, including four in North Texas, collectively educating hundreds of thousands of students, have used Social Sentinel since 2015.

The true number of colleges that used the tool could be far higher. In an email to a UT Dallas police lieutenant, the company’s co-founder, Gary Margolis, said it was used by “hundreds of colleges and universities in 36 states.” Margolis declined to comment on this story.

 

The News examined thousands of pages of emails, contracts and marketing material from colleges around the country, and spoke to school officials, campus police, activists and experts. The investigation shows that, despite publicly saying its service was not a surveillance tool, Social Sentinel representatives promoted the tool to universities for “mitigating” and “forestalling” protests. The documents also show the company has been moving in a new and potentially more invasive direction — allowing schools to monitor student emails on university accounts.

 

For colleges struggling to respond to high-profile school shootings and a worsening campus mental health crisis, Social Sentinel’s low-cost tool can seem like a good deal. In addition to the dozens of colleges that use the service, a News investigation last year revealed that at least 52 school districts in Texas have adopted Social Sentinel as an additional security measure since 2015, including Uvalde CISD, where a gunman killed 19 children and two teachers in May. The company’s current CEO, J.P. Guilbault, also said its services are used by one in four K-12 schools in the country.

 

Some experts said AI tools like Social Sentinel are untested, and even if they are adopted for a worthwhile purpose, they have the potential to be abused.

 

For public colleges, the use of the service sets up an additional conflict between protecting students' Constitutional rights of free speech and privacy and schools’ duty to keep them safe on campus, said Andrew Guthrie Ferguson, a law professor at American University’s Washington College of Law.

 

“What the technology allows you to do is identify individuals who are associated together or are associated with a place or location,” said Ferguson. “That is obviously somewhat chilling for First Amendment freedoms of people who believe in a right to protest and dissent.”

 

Navigate360, the private Ohio-based company that acquired Social Sentinel in 2020, called The News’ investigation “inaccurate, speculative or by opinion in many instances and significantly outdated.” The company also changed the name of the service from Social Sentinel to Navigate360 Detect earlier this year."...

 

For full post, please visit: 

https://interactives.dallasnews.com/2022/social-sentinel/ 

 

 

Scooped by Roxana Marachi, PhD
April 26, 2023 4:24 PM
Scoop.it!

Students’ psychological reports, abuse allegations leaked by ransomware hackers // NBC News

Students’ psychological reports, abuse allegations leaked by ransomware hackers // NBC News | Educational Psychology, AI, & Emerging Technologies: Critical Thinking on Current Trends | Scoop.it

"The leak is a stark reminder of the reams of sensitive information held by schools, and that such leaks often leave parents and administrators with little recourse."

 

By Kevin Collier

Hackers who broke into the Minneapolis Public Schools earlier this year have circulated an enormous cache of files that appear to include highly sensitive documents on schoolchildren and teachers, including allegations of teacher abuse and students’ psychological reports.

The files appeared online in March after the school district announced that it had been the victim of a ransomware cyberattack. NBC News was able to download the cache of documents and reviewed about 500 files. Some were printed on school letterheads. Many were listed in folder sets named after Minneapolis schools. 

 

NBC News was able to view the leaked files after downloading them from links posted to the hacker group’s Telegram account. NBC News has not verified the authenticity of the cache, which totals about 200,000 files, and Minneapolis Public Schools declined to answer specific questions about the documents, instead pointing to its previous public statements.

The files reviewed by NBC News include everything from relatively benign data like contact information to far more sensitive information including descriptions of students’ behavioral problems and teachers’ Social Security numbers. 

In addition to leaking the documents, the hacking group appeared to go a step further, posting about the documents on Twitter and Facebook as well as on a website, which hosted a video that opens with an animated short of a flaming motorcycle, followed by 50 minutes of screengrabs of the stolen files. NBC News is not naming the group.

It’s a stark reminder that schools often hold reams of sensitive information, and that such leaks often leave parents and administrators with little recourse once their information is released.

“The fact of the matter is, school districts really should be treating this more like nuclear waste, where they need to identify it and contain it and make sure that access to it is restricted,” said Doug Levin, the director of the K12 Security Information Exchange, a nonprofit that helps schools protect themselves from hackers.

 

“Organizations that are supposed to be helping to uplift children and prepare them for the future could instead be introducing significant headwinds to their lives just for participating in public school.”


 

In an update published to the Minneapolis Public Schools website on April 11, Interim Superintendent Rochelle Cox said the school district was working with “external specialists and law enforcement to review the data” that was posted online. Cox also said the district was reaching out to individuals whose information had been found in the leak, and she warned about reports that people had received messages telling them their information had been leaked.

 

“This week, we’re seeing an uptick in reports of messages — sometimes multiple messages — sent to people in our community stating something like ‘your social security number has been posted on the dark web,’” Cox wrote. “First — I want to remind everyone to NOT interact with such messages unless you KNOW the sender.”

Cybersecurity experts who are familiar with the leak have said it is among the worst they can remember.

“It’s awful. As bad as I’ve seen,” Brett Callow, an analyst who tracks ransomware attacks for the cybersecurity company Emsisoft, said about the breach.

Ransomware attacks on schools, which often end with the hackers releasing sensitive information, have become frequent across the U.S. since 2015. 

At least 122 public school districts in the U.S. have been hit with ransomware since 2021, Callow said, with more than half — 76 — resulting in the hackers leaking sensitive school and student data.

In such cases, districts often provide parents and students with identity theft protection services, though it’s impossible for them to keep the files from being shared after they’re posted.

The leak has left some Minneapolis parents wondering what to do next.

“I feel like my hands are tied and I feel like the information that the district is giving us is just very limited,” said Heather Paulson, who teaches high school in the district and is the mother of a younger child who attends school in Minneapolis.

 

Lydia Kauppi, a parent of a student in the district, said it’s unsettling to know that her family’s private information may have been shared by hackers.

“It causes anxiety on multiple, multiple fronts for everybody involved,” she said. “And it’s just kind of one of those weird, vague, unsettling feelings because you just don’t know how long do I have to worry about it?”

 


Minneapolis Public Schools, which oversees around 30,000 students across 68 schools, said on April 11 it was continuing to notify people who had been affected by the breach, and that it was offering free credit monitoring and identity theft protection services to victims.

 

Ransomware hackers have drastically escalated their tactics in recent years, increasing how much they ask for and launching efforts to pressure schools to pay up — including by contacting people whose information has been leaked. The group that hacked the Minneapolis schools publicly demanded $1 million. The district announced in March that it had not paid, and ransomware gangs usually only leak large datasets of victims who refuse to pay.


Since last year, various criminal hacker groups have leaked troves of files on some of the largest school districts in the country, including in Los Angeles and Chicago.

The leaked Minneapolis files appear to include dossiers on hundreds of children with special needs, identifying each by name, birthday and school. Those dossiers often include pages of details about students, including problems at home like divorcing or incarcerated parents, conditions like Attention Deficit Disorder, documented indications where they appear to have been injured, results of intelligence tests and what medications they take.

Other files include databases of instances where teachers have written up students for behavioral issues, sorted by school, student ID number, behavioral issue and the students’ race. 

The leaked files also include hundreds of forms documenting times when faculty learned that a student had been potentially mistreated. Most of those are allegations that a student had suffered neglect or was physically harmed by a teacher or student. Some are extraordinarily sensitive, and allege incidents like a student being sexually abused by a teacher or by another student. Each report names the victim and cites their birthday and address.

 


In one report, a special education student claimed her bus driver groped her and made her touch him. Minnesota police later charged a man whose name matches the driver named in the report and the date of the incident.

Others describe a teacher accused of having had romantic relationships with two students. Another describes a student whom faculty suspected was the victim of female genital mutilation. NBC News was able to verify that faculty listed in those reports worked for Minneapolis schools, but has not verified those reports.

 

Those files have been promoted online in what experts said is an unorthodox and particularly aggressive manner."... 

 

For full post, please visit: 

https://www.nbcnews.com/tech/security/students-psychological-reports-abuse-allegations-leaked-ransomware-hac-rcna79414 

 

No comment yet.
Scooped by Roxana Marachi, PhD
June 6, 2023 12:28 PM
Scoop.it!

Microsoft to pay $20 million over FTC charges surrounding kids' data collection // CBS News

Microsoft to pay $20 million over FTC charges surrounding kids' data collection // CBS News | Educational Psychology, AI, & Emerging Technologies: Critical Thinking on Current Trends | Scoop.it

[CBSNews]

"Microsoft will pay a fine of $20 million to settle Federal Trade Commission charges that it illegally collected and retained the data of children who signed up to use its Xbox video game console.

The agency charged that Microsoft gathered the data without notifying parents or obtaining their consent, and that it also illegally held onto the data. Those actions violated the Children's Online Privacy Protection Act, which limits data collection on kids under 13, the FTC stated.

In a blog post, Microsoft corporate vice president for Xbox Dave McCarthy outlined additional steps the company is now taking to improve its age verification systems and to ensure that parents are involved in the creation of children's accounts for the service. These mostly concern efforts to improve age verification technology and to educate children and parents about privacy issues.

McCarthy also said the company had identified and fixed a technical glitch that failed to delete child accounts in cases where the account creation process never finished. Microsoft policy was to hold that data no longer than 14 days in order to allow players to pick up account creation where they left off if they were interrupted.
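
[Note: The sketch below illustrates the kind of cleanup pass such a 14-day policy implies, and the condition the reported glitch apparently failed to enforce. It is a hypothetical illustration, not Microsoft's actual code; the record fields are assumptions.]

```python
from datetime import datetime, timedelta, timezone

GRACE_PERIOD = timedelta(days=14)  # window described in the article; the rest is assumed

def stale_incomplete_accounts(pending_accounts, now=None):
    """Select unfinished account records older than the grace period.

    `pending_accounts` is an iterable of dicts with 'completed' (bool) and
    'started_at' (timezone-aware datetime) keys, an assumed data model.
    The reported glitch amounted to records like these never being deleted.
    """
    now = now or datetime.now(timezone.utc)
    return [a for a in pending_accounts
            if not a["completed"] and now - a["started_at"] > GRACE_PERIOD]
```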

The settlement must be approved by a federal court before it can go into effect, the FTC said.

British regulators in April blocked Microsoft's $69 billion deal to buy video game maker Activision Blizzard over worries that the move would stifle competition in the cloud gaming market. The company is now "in search of solutions," Microsoft President Brad Smith said at a tech conference in London Tuesday."

 


For original post, please visit:

https://www.cbsnews.com/news/microsoft-settlement-ftc-charges-childrens-data-collection-20-million-dollars/ 

No comment yet.
Scooped by Roxana Marachi, PhD
February 23, 2023 9:32 AM
Scoop.it!

Trove of L.A. Students’ Mental Health Records Posted to Dark Web After Cyber Hack – The 74

Trove of L.A. Students’ Mental Health Records Posted to Dark Web After Cyber Hack – The 74 | Educational Psychology, AI, & Emerging Technologies: Critical Thinking on Current Trends | Scoop.it

By Mark Keierleber

"Update: After this story published, the Los Angeles school district acknowledged in a statement that “approximately 2,000” student psychological evaluations — including those of 60 current students — had been uploaded to the dark web.

Detailed and highly sensitive mental health records of hundreds — and likely thousands — of former Los Angeles students were published online after the city’s school district fell victim to a massive ransomware attack last year, an investigation by The 74 has revealed. 

The student psychological evaluations, published to a “dark web” leak site by the Russian-speaking ransomware gang Vice Society, offer a startling degree of personally identifiable information about students who received special education services, including their detailed medical histories, academic performance and disciplinary records. 


But people are likely unaware their sensitive information is readily available online because the Los Angeles Unified School District hasn’t alerted them, a district spokesperson confirmed, and leaders haven’t acknowledged the trove of records even exists. In contrast, the district publicly acknowledged last month that the sensitive information of district contractors had been leaked. 

Cybersecurity experts said the revelation that student psychological records were exposed en masse and a lack of transparency by the district highlight a gap in existing federal privacy laws. Rules that pertain to sensitive health records maintained by hospitals and health insurers, which are protected by stringent data breach notification policies, differ from those that apply to education records kept by schools — even when the files themselves are virtually identical. Under existing federal privacy rules, school districts are not required to notify the public when students’ personal information, including medical records, is exposed."... 

 

For full article, please visit:

https://www.the74million.org/article/trove-of-l-a-students-mental-health-records-posted-to-dark-web-after-cyber-hack/ 

 

No comment yet.