Educational Psychology & Technology: Critical Perspectives and Resources
Scooped by Roxana Marachi, PhD

Top 20 Principles from Psychology for PreK-12 Teaching and Learning // Coalition for Psychology in Schools and Education

Main page at:


"Psychological science has much to contribute to enhancing teaching and learning in the everyday classroom by providing key insights on:


* Effective instruction

* Classroom environments that promote learning

* Appropriate use of assessment — including data, tests, measurement and research methods that inform practice.

We present here the most important principles from psychology — the Top 20 — that would be of greatest use in the context of pre-K to 12 classroom teaching and learning. We encourage consideration and practice of the Top 20 throughout all teacher preparation programs to ensure a solid foundation of psychological knowledge in pre-K to 12 instruction."


Download the full report (PDF, 453 KB) to find supporting research and learn why each principle is relevant in the classroom.

This curated collection includes news, resources, and research related to the intersections of educational psychology and technology. The page also serves as a research tool to organize online content; the grey funnel-shaped icon at the top allows searching by keyword. Related collections cover research specific to tech, screen time, and health/safety concerns; the next wave of privatization involving technology intersections with Pay For Success, Social Impact Bonds, and Results-Based Financing (often marketed with language promoting "public-private partnerships"); and additional educator resources [links to external sites].

When "Innovation" is Exploitation: Data Ethics, Data Harms and Why We Need to Demand Data Justice // Marachi, 2019, Summer Institute of A Black Education Network 

To download the PDF, please click on the title above.


For more on the data brokers selling personal information from a variety of platforms, including education, please see: 


Please also visit: Parent Coalition for Student Privacy


the Data Justice Lab:


and the Algorithmic Justice League:  



The Surveillant University: Remote Proctoring, AI, and Human Rights // Tessa Scassa, Canada Research Chair in Information Law and Policy, University of Ottawa


Please visit the link below to access the document:


Algorithmic personalization is disrupting a healthy teaching environment // LSE

The UK government has given no sign of when it plans to regulate digital technology companies. In contrast, the US Federal Trade Commission will tomorrow consider whether to make changes to the Children’s Online Privacy Protection Act to address the risks emanating from the growing power of digital technology companies, many of which already play substantial roles in children’s lives and schooling. The free rein offered thus far has led many businesses to infiltrate education, slowly degrading the teaching profession and spying on children, argue LSE Visiting Fellow Dr Velislava Hillman and junior high school teacher and Doctor of Education candidate Molly Esquivel. They take a look here at what they describe as the mess that digitalized classrooms have become, due to the lack of regulation and the absence of support if businesses cause harm.


"Any teacher would attest to the years of specialized schooling, teaching practice, code of ethics and standards they face to obtain a license to teach; those in higher education also need a high-level degree, published scholarship, postgraduate certificates such as PGCE and more. In contrast, businesses offering education technologies enter the classroom with virtually no demonstration of any licensing or standards.

The teaching profession has now become an ironic joke of sorts. If teachers in their college years once dreamed of inspiring their future students, today these dreamers are facing a different reality: one in which they are required to curate and operate all kinds of applications and platforms; collect edtech badges of competency (fig. 1); monitor data; and navigate students through yet more edtech products.

Unlicensed and unregulated, without years in college and special teaching credentials, edtech products not only override teachers’ competencies and roles; they now dictate them.


Figure 1: Teachers race to collect edtech badges

[See original article for image]

Wellbeing indexes and Karma Points

“Your efforts are being noticed” is how Thrively, an application that monitors students and claims to be used by over 120,000 educators across the US, greets its user. In the UK, Symanto, an AI-based software that analyses texts to infer an individual’s psychological state, is used for a similar purpose. The Thrively software gathers metrics on attendance, library use, grades, and online learning activities, and makes inferences about students – how engaged they are or how they feel. Solutionpath, offering support for struggling students, is used in several universities in the UK. ClassDojo claims to be used by 85% of UK primary schools and a global community of over 50 million teachers and families. Classroom management software Impero offers teachers remote control of children’s devices. The company claims to provide direct access to over 2 million devices in more than 90 countries. Among other things, the software has a ‘wellbeing keyword library index’ which seeks to identify students who may need emotional support. A form of policing: “with ‘who, what, when and why’ information staff members can build a full picture of the capture and intervene early if necessary”.

These products and others adopt the methodology of algorithm-based monitoring and profiling of students’ mental health. Such products steer not only student behavior but that of teachers too. One reviewer says of Impero: “My teachers always watch our screens with this instead of teaching”. When working in Thrively, each interaction with a student earns “Karma Points”. The application lists teacher goals – immediately playing on an educator’s deep-seated passion to be their best for their students (fig. 2). Failure to obtain such points becomes internalized as failure in the teaching profession. Thrively’s algorithms could also trigger an all-out battle over who on the teaching staff can earn the most points. Similarly, ClassDojo offers a ‘mentor’ program to teachers and awards them ‘mentor badges’.

Figure 2: Thrively nudges teachers to engage with it to earn badges and “Karma Points”; its tutorial states: “It’s OK to brag when you are elevating humanity.” [See original article for image]


The teacher becomes a ‘line operator’ on a conveyor belt run by algorithms. The amassed data triggers algorithmic diagnostics from each application, carving up the curriculum and controlling students and teachers. Inferential software like Thrively throws teachers into rabbit holes by asking them not only to assess students’ personal interests, but their mental state, too. Its Wellbeing Index takes “pulse checks” to tell how students feel, as though teachers are incapable of direct connection with their students. In the UK, lax legislation regarding biometric data collection can further enable technologies to exploit such data for mental health prediction and psychometric analytics. Such practices not only increase the risks of harm towards children and students in general; they dehumanize the whole educational process.

Many other technology-infused, surveillance-based applications are thrust into the classroom. Thrively captures data of 12–14-year-olds and suggests career pathways alongside inferences about how they feel. It shares the captured data with third parties such as YouTube Kids and game-based and coding apps – outside vendors that Thrively curates. Impero enables integration with platforms like Clever, used by over 20 million teachers and students, and Microsoft, thus expanding the tech giant’s own reach by millions of individuals. As technology intersects with education, teachers are merely an afterthought in curriculum design and leading the classroom.

Teachers must remain central in children’s education, not businesses

The digitalization of education has swiftly moved towards an algorithmic hegemony which is degrading the teaching profession. Edtech companies are judging how students learn, how teachers work – and how they both feel. Public-private partnerships are granting untested beta programmes built on arbitrary algorithms the unwarranted title of “school official”, undermining teachers. Ironically, teachers still carry the responsibility for what happens in class.

Parents should ask what software is used to judge how their children feel or do in class, and why. At universities, students should enquire what inferences algorithms make about their work or their mental health. Alas, this means heaping yet more responsibility on individuals – parents, children, students, teachers – to fend for themselves. Therefore, at least two things must also happen. First, edtech products and companies must be licensed to operate, the way banks, hospitals or teachers are. And second, educational institutions should be transparent about how mental health, or academic profiling in general, is assessed. If and when software analytics play a part, educators (through enquiry) as well as policymakers (through law) should insist on transparency and be critical about the data points collected and the algorithms that process them.

This article represents the views of the author and not the position of the Media@LSE blog, nor of the London School of Economics and Political Science.


For original post, please visit: 


A teen girl sexually exploited on Snapchat takes on American tech // The Washington Post 


A 16-year-old girl is leading a class-action lawsuit against one of the country's most popular apps — claiming its designers have done almost nothing to prevent the sexual exploitation of girls like her.


By Drew Harwell

"She was 12 when he started demanding nude photos, saying she was pretty, that he was her friend. She believed, because they had connected on Snapchat, that her photos and videos would disappear.

Now, at 16, she is leading a class-action lawsuit against an app that has become a mainstay of American teen life — claiming its designers have done almost nothing to prevent the sexual exploitation of girls like her.


Her case against Snapchat reveals a haunting story of shame and abuse inside a video-messaging app that has for years flown under lawmakers’ radar, even as it has surpassed 300 million active users and built a reputation as a safe space for young people to trade their most intimate images and thoughts.


But it also raises difficult questions about privacy and safety, and it throws a harsh spotlight on the tech industry’s biggest giants, arguing that the systems they depend on to root out sexually abusive images of children are fatally flawed.


“There isn’t a kid in the world who doesn’t have this app,” the girl’s mother told The Washington Post, “and yet an adult can be in correspondence with them, manipulating them, over the course of many years, and the company does nothing. How does that happen?”

In the lawsuit, filed Monday in a California federal court, the girl — requesting anonymity as a victim of sexual abuse and referred to only as L.W. — and her mother accuse Snapchat of negligently failing to design a platform that could protect its users from “egregious harm.”


The man — an active-duty Marine who was convicted last year of charges related to child pornography and sexual abuse in a military court — saved her Snapchat photos and videos and shared them with others around the Web, a criminal investigation found.


Snapchat’s parent company, Snap, has defended its app’s core features of self-deleting messages and instant video chats as helping young people speak openly about important parts of their lives.


In a statement to The Post, the company said it employs “the latest technologies” and develops its own software “to help us find and remove content that exploits or abuses minors.”

“While we cannot comment on active litigation, this is tragic, and we are glad the perpetrator has been caught and convicted,” Snap spokeswoman Rachel Racusen said. “Nothing is more important to us than the safety of our community.”


Founded in 2011, the Santa Monica, Calif., company told investors last month that it now has 100 million daily active users in North America, more than double Twitter’s following in the United States, and that it is used by 90 percent of U.S. residents aged 13 to 24 — a group it designated the “Snapchat Generation.”

For every user in North America, the company said, it received about $31 in advertising revenue last year. Now worth nearly $50 billion, the public company has expanded its offerings to include augmented-reality camera glasses and auto-flying selfie drones.


But the lawsuit likens Snapchat to a defective product, saying it has focused more on innovations to capture children’s attention than on effective tools to keep them safe.


The app relies on “an inherently reactive approach that waits until a child is harmed and places the burden on the child to voluntarily report their own abuse,” the girl’s lawyers wrote. “These tools and policies are more effective in making these companies wealthier than [in] protecting the children and teens who use them.”


Apple and Google are also listed as defendants in the case because of their role in hosting an app, Chitter, that the man had used to distribute the girl’s images. Both companies said they removed the app Wednesday from their stores following questions from The Post.

Apple spokesman Fred Sainz said in a statement that the app had repeatedly broken Apple’s rules around “proper moderation of all user-generated content.” Google spokesman José Castañeda said the company is “deeply committed to fighting online child sexual exploitation” and has invested in techniques to find and remove abusive content. Chitter’s developers did not respond to requests for comment.


The suit seeks at least $5 million in damages and assurances that Snap will invest more in protection. But it could send ripple effects through not just Silicon Valley but Washington, by calling out how the failures of federal lawmakers to pass tech regulation have left the industry to police itself.

“We cannot expect the same companies that benefit from children being harmed to go and protect them,” Juyoun Han, the girl’s attorney, said in a statement. “That’s what the law is for.”

Brian Levine, a professor at the University of Massachusetts at Amherst who studies children’s online safety and digital forensics and is not involved in the litigation, said the legal challenge adds to the evidence that the country’s lack of tech regulation has left young people at risk.


“How is it that all of the carmakers and all of the other industries have regulations for child safety, and one of the most important industries in America has next to nothing?” Levine said.


“Exploitation results in lifelong victimization for these kids,” and it’s being fostered on online platforms developed by “what are essentially the biggest toymakers in the world, Apple and Google,” he added. “They’re making money off these apps and operating like absentee landlords. … After some point, don’t they bear some responsibility?”

An anti-Facebook

While most social networks focus on a central feed, Snapchat revolves around a user’s inbox of private “snaps” — the photos and videos they exchange with friends, each of which self-destructs after being viewed.


The simple concept of vanishing messages has been celebrated as a kind of anti-Facebook, creating a low-stakes refuge where anyone can express themselves as freely as they want without worrying how others might react.

Snapchat, in its early years, was often derided as a “sexting app,” and for some users the label still fits. But its popularity has also solidified it as a more broadly accepted part of digital adolescence — a place for joking, flirting, organizing and working through the joys and awkwardness of teenage life.


In the first three months of this year, Snapchat was the seventh-most-downloaded app in the world, installed twice as often as Amazon, Netflix, Twitter or YouTube, estimates from the analytics firm Sensor Tower show. Jennifer Stout, Snap’s vice president of global public policy, told a Senate panel last year that Snapchat was an “antidote” to mainstream social media and its “endless feed of unvetted content.”


Snapchat photos, videos and messages are designed to automatically vanish once the recipient sees them or after 24 hours. But Snapchat’s carefree culture has raised fears that it’s made it too easy for young people to share images they may one day regret.

Snapchat allows recipients to save some photos or videos within the app, and it notifies the sender if a recipient tries to capture a photo or video marked for self-deletion. But third-party workarounds are rampant, allowing recipients to capture them undetected.


Parent groups also worry the app is drawing in adults looking to prey on a younger audience. Snap has said it accounts for “the unique sensitivities and considerations of minors” when developing the app, which now bans users younger than 18 from posting publicly in places such as Snap Maps and limits how often children and teens are served up as “Quick Add” friend suggestions in other users’ accounts. The app encourages people to talk with friends they know from real life and only allows someone to communicate with a recipient who has marked them as a friend.


The company said that it takes fears of child exploitation seriously. In the second half of 2021, the company deleted roughly 5 million pieces of content and nearly 2 million accounts for breaking its rules around sexually explicit content, a transparency report said last month. About 200,000 of those accounts were axed after sharing photos or videos of child sexual abuse.

But Snap representatives have argued they’re limited in their abilities when a user meets someone elsewhere and brings that connection to Snapchat. They’ve also cautioned against more aggressively scanning personal messages, saying it could devastate users’ sense of privacy and trust.

Some of its safeguards, however, are fairly minimal. Snap says users must be 13 or older, but the app, like many other platforms, doesn’t use an age-verification system, so any child who knows how to type a fake birthday can create an account. Snap said it works to identify and delete the accounts of users younger than 13 — and the Children’s Online Privacy Protection Act, or COPPA, bans companies from tracking or targeting users under that age.


Snap says its servers delete most photos, videos and messages once both sides have viewed them, and all unopened snaps after 30 days. Snap said it preserves some account information, including reported content, and shares it with law enforcement when legally requested. But it also tells police that much of its content is “permanently deleted and unavailable,” limiting what it can turn over as part of a search warrant or investigation.


In 2014, the company agreed to settle charges from the Federal Trade Commission alleging Snapchat had deceived users about the “disappearing nature” of their photos and videos, and collected geolocation and contact data from their phones without their knowledge or consent.

Snapchat, the FTC said, had also failed to implement basic safeguards, such as verifying people’s phone numbers. Some users had ended up sending “personal snaps to complete strangers” who had registered with phone numbers that weren’t actually theirs.

A Snapchat representative said at the time that “while we were focused on building, some things didn’t get the attention they could have.” The FTC required the company to submit to monitoring from an “independent privacy professional” until 2034.

‘Breaking point’

Like many major tech companies, Snapchat uses automated systems to patrol for sexually exploitative content: PhotoDNA, built in 2009, to scan still images, and CSAI Match, developed by YouTube engineers in 2014, to analyze videos.

The systems work by looking for matches against a database of previously reported sexual-abuse material run by the government-funded National Center for Missing and Exploited Children (NCMEC).

But neither system is built to identify abuse in newly captured photos or videos, even though those have become the primary ways Snapchat and other messaging apps are used today.
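The detection approach described above is, at its core, a lookup against a database of fingerprints of previously reported material. The sketch below illustrates that lookup step and its blind spot; it is a simplification, using an exact SHA-256 digest where real systems like PhotoDNA compute a perceptual hash robust to resizing and re-encoding, and the `reported_images` database here is a hypothetical stand-in for the NCMEC-maintained list.

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    # Real systems compute a *perceptual* hash that survives resizing and
    # re-encoding; a plain SHA-256 digest is used here only to illustrate
    # the lookup step.
    return hashlib.sha256(image_bytes).hexdigest()

# Hypothetical database of fingerprints of previously reported images,
# standing in for the NCMEC-maintained database described in the article.
reported_images = [b"previously-reported-image"]
known_hashes = {fingerprint(img) for img in reported_images}

def is_flagged(image_bytes: bytes) -> bool:
    # Detection is a set-membership test: only content already in the
    # database can match. Newly captured imagery has no entry, which is
    # the limitation the article describes.
    return fingerprint(image_bytes) in known_hashes
```

The membership test catches only re-shared copies of known material; content never reported before passes through unmatched, which is why the researchers cited below argued these blacklist-based systems had reached a "breaking point."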

When the girl began sending and receiving explicit content in 2018, Snap didn’t scan videos at all. The company started using CSAI Match only in 2020.

In 2019, a team of researchers at Google, the NCMEC and the anti-abuse nonprofit Thorn had argued that even systems like those had reached a “breaking point.” The “exponential growth and the frequency of unique images,” they argued, required a “reimagining” of child-sexual-abuse-imagery defenses away from the blacklist-based systems tech companies had relied on for years.

They urged the companies to use recent advances in facial-detection, image-classification and age-prediction software to automatically flag scenes where a child appears at risk of abuse and alert human investigators for further review.

“Absent new protections, society will be unable to adequately protect victims of child sexual abuse,” the researchers wrote.

Three years later, such systems remain unused. Some similar efforts have also been halted due to criticism they could improperly pry into people’s private conversations or raise the risks of a false match.


In September, Apple indefinitely postponed a proposed system — to detect possible sexual-abuse images stored online — following a firestorm that the technology could be misused for surveillance or censorship.

But the company has since released a separate child-safety feature designed to blur out nude photos sent or received in its Messages app. The feature shows underage users a warning that the image is sensitive and lets them choose to view it, block the sender or to message a parent or guardian for help.

Privacy advocates have cautioned that more-rigorous online policing could end up penalizing kids for being kids. They’ve also worried that such concerns could further fuel a moral panic, in which some conservative activists have called for the firings of LGBTQ teachers who discuss gender or sexual orientation with their students, falsely equating it to child abuse.

But the case adds to a growing wave of lawsuits challenging tech companies to take more responsibility for their users’ safety — and arguing that past precedents should no longer apply.

The companies have traditionally argued in court that one law, Section 230 of the Communications Decency Act, should shield them from legal liability related to the content their users post. But lawyers have increasingly argued that the protection should not inoculate the company from punishment for design choices that promoted harmful use.

In one case filed in 2019, the parents of two boys killed when their car smashed into a tree at 113 mph while recording a Snapchat video sued the company, saying its “negligent design” decision to allow users to imprint real-time speedometers on their videos had encouraged reckless driving.

A California judge dismissed the suit, citing Section 230, but a federal appeals court revived the case last year, saying it centered on the “predictable consequences of designing Snapchat in such a way that it allegedly encouraged dangerous behavior.” Snap has since removed the “Speed Filter.” The case is ongoing.

In a separate lawsuit, the mother of an 11-year-old Connecticut girl sued Snap and Instagram parent company Meta this year, alleging she had been routinely pressured by men on the apps to send sexually explicit photos of herself — some of which were later shared around her school. The girl killed herself last summer, the mother said, due in part to her depression and shame from the episode.


Congress has voiced some interest in passing more-robust regulation, with a bipartisan group of senators writing a letter to Snap and dozens of other tech companies in 2019 asking about what proactive steps they had taken to detect and stop online abuse.

But the few proposed tech bills have faced immense criticism, with no guarantee of becoming law. The most notable — the Earn It Act, which was introduced in 2020 and passed a Senate committee vote in February — would open tech companies to more lawsuits over child-sexual-abuse imagery, but technology and civil rights advocates have criticized it as potentially weakening online privacy for everyone.

Some tech experts note that predators can contact children on any communications medium and that there is no simple way to make every app completely safe. Snap’s defenders say applying some traditional safeguards — such as the nudity filters used to screen out pornography around the Web — to personal messages between consenting friends would raise its own privacy concerns.

But some still question why Snap and other tech companies have struggled to design new tools for detecting abuse.

Hany Farid, an image-forensics expert at the University of California, Berkeley, who helped develop PhotoDNA, said safety and privacy have for years taken a “back seat to engagement and profits.”

The fact that PhotoDNA, now more than a decade old, remains the industry standard “tells you something about the investment in these technologies,” he said. “The companies are so lethargic in terms of enforcement and thinking about these risks … at the same time, they’re marketing their products to younger and younger kids.”

Farid, who has worked as a paid adviser to Snap on online safety, said that he believes the company could do more but that the problem of child exploitation is industry-wide.

“We don’t treat the harms from technology the same way we treat the harms of romaine lettuce,” he said. “One person dies, and we pull every single head of romaine lettuce out of every store,” yet the children’s exploitation problem is decades old. “Why do we not have spectacular technologies to protect kids online?”

‘I thought this would be a secret’

The girl said the man messaged her randomly one day on Instagram in 2018, just before her 13th birthday. He fawned over her, she said, at a time when she was feeling self-conscious. Then he asked for her Snapchat account.

“Every girl has insecurities,” said the girl, who lives in California. “With me, he fed on those insecurities to boost me up, which built a connection between us. Then he used that connection to pull strings.” The Post does not identify victims of sexual abuse without their permission.

He started asking for photos of her in her underwear, then pressured her to send videos of herself nude, then more explicit videos to match the ones he sent of himself. When she refused, he berated her until she complied, the lawsuit states. He always demanded more.

She blocked him several times, but he messaged her through Instagram or via fake Snapchat accounts until she started talking to him again, the lawyers wrote. Hundreds of photos and videos were exchanged over a three-year span.

She felt ashamed, but she was afraid to tell her parents, the girl told The Post. She also worried what he might do if she stopped. She thought reporting him through Snapchat would do nothing, or that it could lead to her name getting out, the photos following her for the rest of her life.

“I thought this would be a secret,” she said. “That I would just keep this to myself forever.” (Snap officials said users can anonymously report concerning messages or behaviors, and that its “trust and safety” teams respond to most reports within two hours.)


Last spring, she told The Post, she saw some boys at school laughing at nude photos of young girls and realized it could have been her. She built up her confidence over the next week. Then she sat with her mother in her bedroom and told her what had happened.

Her mother told The Post that she had tried to follow the girl’s public social media accounts and saw no red flags. She had known her daughter used Snapchat, like all of her friends, but the app is designed to give no indication of who someone is talking to or what they’ve sent. In the app, when she looked at her daughter’s profile, all she could see was her cartoon avatar.


The lawyers cite Snapchat’s privacy policy to show that the app collects troves of data about its users, including their location and who they communicate with — enough, they argue, that Snap should be able to prevent more users from being “exposed to unsafe and unprotected situations.”


Stout, the Snap executive, told the Senate Commerce, Science and Transportation Committee’s consumer protection panel in October that the company was building tools to “give parents more oversight without sacrificing privacy,” including letting them see their children’s friends list and who they’re talking to. A company spokesman told The Post those features are slated for release this summer.


Thinking back to those years, the mother said she’s devastated. The Snapchat app, she believes, should have known everything, including that her daughter was a young girl. Why did it not flag that her account was sending and receiving so many explicit photos and videos? Why was no one alerted that an older man was constantly messaging her using overtly sexual phrases, telling her things like “lick it up”?


After the family called the police, the man was charged with sexual abuse of a child involving indecent exposure as well as the production, distribution and possession of child pornography.

At the time, the man had been a U.S. Marine Corps lance corporal stationed at a military base, according to court-martial records obtained by The Post.


As part of the Marine Corps’ criminal investigation, the man was found to have coerced other underage girls into sending sexually explicit videos that he then traded with other accounts on Chitter. The lawsuit cites a number of Apple App Store reviews from users saying the app was rife with “creeps” and “pedophiles” sharing sexual photos of children.


The man told investigators he used Snapchat because he knew the “chats will go away.” In October, he was dishonorably discharged and sentenced to seven years in prison, the court-martial records show.


The girl said she has suffered from guilt, anxiety and depression after years of quietly enduring the exploitation and has attempted suicide. The pain “is killing me faster than life is killing me,” she said in the suit.


Her mother said that the last year has been devastating, and that she worries about teens like her daughter — the funny girl with the messy room, who loves to dance, who wants to study psychology so she can understand how people think.


“The criminal gets punished, but the platform doesn’t. It doesn’t make sense,” the mother said. “They’re making billions of dollars on the backs of their victims, and the burden is all on us.”


For original post, please visit: 

No comment yet.
Scooped by Roxana Marachi, PhD!

Illuminate Data Breach Impact in Colorado Grows to 7 Districts Plus 1 California District and 3 in Connecticut // THE Journal


By Kristal Kuykendall

"The impact of the Illuminate Education data breach that occurred in January continues growing as more K–12 school districts in Colorado and Connecticut and one in California have notified parents that their students, too, had their private information stolen.

Seven school districts in Colorado — with total current enrollment of about 132,000 students — have recently alerted parents that current and former students were impacted in the breach, which Illuminate has said was discovered after it began investigating suspicious access to its systems in early January.

The incident at Illuminate resulted in a week-long outage of all Illuminate’s K–12 school solutions, including IO Classroom (previously named Skedula), PupilPath, EduClimber, IO Education, SchoolCity, and others, according to its service status site. The company’s website states that its software products serve over 5,000 schools nationally with a total enrollment of about 17 million U.S. students.


The New York State Education Department last week told THE Journal that 565 schools in the state — with “at least” 1 million current and former students — were among those impacted by the Illuminate data breach, and data privacy officials there opened an investigation on April 1.

The list of all New York schools impacted by the data breach was sent to THE Journal in response to a Freedom of Information request; NYSED officials said the list came from Illuminate. Each impacted district was working to confirm how many current and former students were among those whose data were compromised, and each is required by law to report those totals to NYSED, so the total number of students affected was expected to grow, the department said last week.

Since late April, the following school districts have confirmed in letters to parents or on their websites that current and/or former students were impacted by the data breach:

Colorado Districts Known to be Impacted by Data Breach:

California Districts Known to be Impacted by Data Breach:

Connecticut Districts Known to be Impacted by Data Breach:

  • Coventry Public Schools, current enrollment 1,650; the district did not specify the total impacted.
  • Pomperaug Regional School District 15, current enrollment about 3,600; said the breach affected students enrolled during 2017–2019 school years; the district ceased using Illuminate Education in 2019.
  • Cheshire Public Schools, current enrollment about 1,500; said the breach affected students enrolled during the 2017–2019 school years.

New York's Investigation of Illuminate Breach

As of last week, 17 local education agencies in New York — 15 districts and two charter school groups — had filed their data breach reports with NYSED showing that 179,377 current and former students had their private data stolen during the incident, according to the document sent to THE Journal. That total does not include the number impacted at NYC Schools, where officials said in late March that about 820,000 current and former students had been impacted by the Illuminate breach.

All but one of the agencies whose data breach reports have been filed with the state said that more students were impacted than are currently enrolled, meaning both current and former students were affected by the breach. For example, Success Academy Charter Schools, which has nearly three dozen schools in its network, reported 55,595 students affected by the breach, while its current enrollment is just under 20,000."
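The New York totals reported above can be sanity-checked with simple arithmetic: the 179,377 students in the 17 filed reports plus NYC Schools' roughly 820,000 land just shy of the "at least 1 million" figure NYSED cited. A minimal sketch (editorial note; the input figures are the article's):

```python
# Cross-checking New York's reported Illuminate breach totals (figures from the article).
lea_reported = 179_377   # students in the 17 local education agencies' filed reports
nyc_estimate = 820_000   # NYC Schools' late-March estimate

statewide = lea_reported + nyc_estimate
print(f"{statewide:,}")  # 999,377 -- consistent with NYSED's "at least 1 million"
```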



Roblox Metaverse Playing Games with Consumers: Truth In Advertising files complaint with the FTC concerning deceptive advertising on Roblox //


"Roblox, a multibillion-dollar public company based in California, says its mission is to “bring the world together through play.” The Roblox platform is an immersive virtual space consisting of 3D worlds in which users can play games, attend concerts and throw birthday parties, among a host of other activities. With more than 54 million daily users and over 40 million games and experiences, it’s not surprising that in 2021 alone, users from 180 different countries spent more than 40 billion hours in this closed platform metaverse.

But according to an investigation by Truth In Advertising, advertising is being surreptitiously pushed in front of millions of users on Roblox by a multitude of companies and avatar influencers. Such digital deception is possible because Roblox has failed to establish any meaningful guardrails to ensure compliance with truth in advertising laws. As a result, the brands Roblox has invited into its metaverse, including but not limited to DC Entertainment, Hasbro, Hyundai, Mattel, Netflix, NFL Enterprise, Nike and Paramount Game Studios, along with undisclosed avatar brand influencers and AI-controlled brand bots, are running roughshod on the platform, manipulating and exploiting consumers, including its most vulnerable players – more than 25 million children.

Roblox community standards dictate that “[a]ds may not contain content intended for users under the age of 13,” presumably because this vulnerable age group, which makes up nearly half of Roblox’s daily users, can’t identify advertisements disguised as games (also known as advergames). In fact, even adults can have trouble accurately identifying advergames, which are found on Roblox in ever-increasing numbers. And as brands exploit unsuspecting consumers, tricking them into taking part in immersive advertising experiences, the companies, including Roblox, are taking users’ time, attention and money while extracting their personal data. And to make matters worse, Roblox lures consumers, including minors, to its platform with atypical earnings representations including claims that users can make millions of dollars as game developers, despite the fact that the vast majority of Roblox game developers will never make any money.

On April 19, Truth In Advertising filed a complaint with the FTC concerning Roblox and a multitude of other companies and sponsored avatar influencers on the platform, urging the agency to open an investigation into the deceptive advertising on and by Roblox and take appropriate enforcement action. At a minimum, Roblox needs to stop breaching its own community standards and uphold its promise to parents that it will keep children as safe as possible online by enforcing its own rule prohibiting ads from containing content intended for users under the age of 13.

In a world…

Advergames or branded worlds are everywhere on Roblox. Or maybe not. It is difficult to say exactly how many there are given the lack of clear and conspicuous disclosures on the platform. Take, for example, the following search results on Roblox for experiences based on the Netflix series “Stranger Things.” It is not at all clear which, if any, of these experiences are sponsored.


Clicking on an individual thumbnail image provides little clarity.

The only indication that the second experience in the above search results – Stranger Things: Starcourt Mall – is an advergame is the small print under the name of the game that says “By Netflix,” which is not likely to be seen by most Roblox users (and even if they do notice this fine-print disclosure, they may not understand what it means).

And while the other experiences in the search results have the brand – Stranger Things – in their name, and brand imagery, none of those games are sponsored. So just because a brand is in the name of a game or experience doesn’t necessarily mean it is an advergame. Indeed, a search for “sports worlds” brings up more than a dozen Vans Worlds, only one of which is sponsored.


Additional examples of undisclosed advergames in the Roblox metaverse include Nikeland, which has been visited more than 13 million times since Nike launched the branded world last fall. In Nike’s advertisement, users can “[e]xplore the world of sport, swim in Lake Nike, race your friends on the track [and] discover hidden secrets.” Then there’s Vans World (the sponsored one), which has been visited more than 63 million times since its launch last April, where users “[e]xplore different skate sites” and “[k]it out [their] Avatar in Vans Apparel.” Like with many worlds on Roblox, the apparel will cost you, as it must be purchased using Robux, Roblox’s virtual currency that powers its digital economy and has been crucial to the company’s success. (More on that later.)

Venturing outside their own branded worlds

In addition to creating their own undisclosed advergames, brands have also deceptively infiltrated organic games. For example, in May 2020, to coincide with the release of the studio’s “Scoob!” movie that month, Warner Brothers’ Scooby-Doo brand made a limited promotional appearance in the organic pet-raising game, Adopt Me! which is the most popular game on Roblox of all time with more than 28 billion visits. During the promotional event in the family-friendly game, players could adopt Scooby as a pet and take a spin in the Mystery Machine. However, there was never any discernible disclosure to its audience that this was a sponsored event, nor did it comply with Roblox criteria that ads not be directed at children under the age of 13.

Avatar influencers

Perhaps even more insidious than the use of advergames and sponsored content within organic games is the use of undisclosed avatar influencer marketing. These avatars look and act like any other avatar you might run into on Roblox but these avatars, controlled by paid brand influencers, have a hidden agenda: to promote brands throughout the Roblox metaverse. And this means that there are potentially millions of players seeing, communicating with, and interacting with brand endorsers in the Roblox metaverse without ever knowing it.

For example, one (of at least a dozen) undisclosed Nike avatar influencers was complimented on his Nike gear by another player in Nikeland, who wrote in the chat bar “how doyou get the gear” and “that nike hat is drippy,” while another player spotted the popular avatar in Nikeland and wrote, “TW dessi??? omgomg.”


In addition to these avatar influencers (which, besides Nike, are used by numerous other brands including Vans, Hyundai and Forever 21) are Roblox’s army of more than 670 influencers, known as Roblox Video Stars. Roblox Video Stars are Roblox users who have large followings on social media and who Roblox has invited to participate in its influencer program in which the company rewards the Stars with a number of benefits, including free Roblox Premium memberships, early access to certain Roblox events and features, and the ability to earn commissions on Robux sales to users. And while Roblox requires the Stars to disclose their material connection to the platform in their social media posts, it does not require Stars to disclose their material connection to Roblox while on the platform itself even though the law requires such disclosure when brand influencers are interacting with users within the platform’s ecosystem.


Brands are also using undisclosed AI-controlled avatars in advergames to promote consumer engagement and spending, among other things. In the Hot Wheels Open World (an undisclosed Mattel advergame), for example, AI bots urge players to upgrade their cars using virtual currency. In the NASCAR showroom in Jailbreak, a popular organic game with a cops-and-robbers theme that hosted an undisclosed sponsored event by NASCAR in February, AI bots let players know that NASCAR was giving away a car for free. In Nikeland, there were even AI bots modeled after real-life NBA stars Giannis Antetokounmpo and LeBron James, each of which was giving away Nike gear to players. While Antetokounmpo tweeted to more than 2 million followers and posted to more than 12 million Instagram fans late last year that they should “[c]ome find me” in Nikeland because he was giving away “free gifts,” it appears that neither Antetokounmpo nor James ever controlled their avatars in Nikeland – rather, the look-alike avatars interacting with other users were simply AI-controlled agents of Nike. In none of these examples did the brands inform users that they were seeing and interacting with AI-controlled brand avatars.



In its complaint letter to the FTC, Truth In Advertising reiterated its position that consumers have a right to know when they are interacting with bots that are used by brands in their advertisements. In fact, wherever endorsements take place, advertisers must fulfill their duty to ensure that the form, content and disclosure used by any influencer, at a minimum, complies with the law. Even in the metaverse, companies are legally responsible for ensuring that consumers, whatever their age may be, know that what they are viewing or interacting with is an endorsement. And despite the transitory nature of avatar influencers participating as walking and talking endorsements within the Roblox metaverse, no brand (including Roblox) should ignore its legal obligation to disclose these endorsements. Indeed, earlier this year, Roblox and many other companies, including Nike, Hyundai, VF Corp. (which owns Vans) and Mattel (which owns Hot Wheels), were formally reminded by the FTC that material connections between endorsers and brands must be clearly and conspicuously disclosed in a manner that will be easily understood by the intended audience. Now, after receiving this notice, violations can carry with them penalties of up to $46,517 per violation.

Vulnerable consumers

Of the platform’s more than 50 million daily users, more than 25 million are children aged 13 and under, an age group that generally cannot identify advertisements disguised as games, of which there is an ever-increasing number on Roblox as brands have been eager to enter the Roblox ecosystem. Roblox community standards dictate that “[a]ds may not contain content intended for users under the age of 13.” But the reality is many of these advergames are aimed at precisely this age group.

In fact, even adults can have trouble accurately identifying undisclosed advergames. But rather than requiring brands to follow the law and clearly and conspicuously disclose advergames, avatar influencers and other promotional content as marketing – so that consumers aren’t tricked into taking part in immersive advertising experiences without their knowledge – Roblox has failed to establish any meaningful guardrails to ensure compliance with truth in advertising laws. Instead, it has generally abdicated this responsibility to its developers and brands.

Strike it rich on Roblox? Think again

Excerpt from Roblox webpage (August 2, 2021)

As of December 2020, there were more than 8 million active developers on Roblox. One of the ways that Roblox persuades these developers (which include minors) to create games for free is by deceptively representing that games can earn them real world cash despite the fact that the typical Roblox developer earns no money.

“More and more, our developers and creators are starting to make a living on Roblox,” Roblox Founder and CEO Dave Baszucki said on a Roblox Investor Day call in February 2021, adding:

"What used to be a hobby has become a job for an individual person … Developers have enjoyed meaningful earnings expansion over time. … People [are] starting to fund their college education. … [This is] an amazing economic opportunity … We can imagine developers making $100 million a year and more."

But the reality is that it is incredibly difficult for a developer to get their game noticed on Roblox and begin earning cash (which is why some Roblox developers – and brands – have resorted to buying likes to enhance their game’s visibility on the platform, a deceptive tactic that Roblox says it does not permit but apparently does not adequately monitor or prevent). The numbers don’t lie: Only 0.1 percent of Roblox developers and creators gross $10,000 or more and only 0.02 percent gross $100,000 or more annually, according to a Roblox SEC filing.
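Scaling those percentages against the article's figure of more than 8 million active developers gives a rough sense of how few earn meaningful money. A back-of-the-envelope sketch (editorial note; the developer count and percentages are the article's, the implied headcounts are our approximations):

```python
# Rough headcounts implied by the article's developer-earnings percentages.
active_developers = 8_000_000   # "more than 8 million active developers" (Dec 2020)

gross_10k_plus = round(active_developers * 0.001)    # 0.1% gross $10,000+ annually
gross_100k_plus = round(active_developers * 0.0002)  # 0.02% gross $100,000+ annually

print(gross_10k_plus, gross_100k_plus)  # 8000 1600
```

In other words, on the order of 8,000 developers clear $10,000 a year, and roughly 1,600 clear $100,000, out of millions who build games for free.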

Robux: Roblox’s virtual currency

Whether users come to Roblox as developers or players, there is one thing they both need in order to maximize their experience: Robux. Robux, which can be purchased with U.S. dollars, are used by players to buy accessories and clothing for their avatar, special abilities within games, access to pay-to-play games and Roblox Premium subscriptions, among other things. Robux can be purchased in various increments, from 400 for $4.99 to 10,000 for $99.99.

And, as Roblox likes to advertise, Robux can also be earned by creators and developers in a variety of ways, including creating and selling accessories and clothes for avatars; selling special abilities within games; driving engagement, meaning that developers are rewarded by Roblox for the amount of time Premium subscribers spend in their games; and selling content and tools, such as plugins, to other developers. Not only are Robux hard to earn, but for every dollar a user spends on something developers have created, developers get, on average, 28 cents. And to make matters worse, the exchange rate for earned Robux is 0.0035 USD per Robux, meaning that earned Robux are worth nearly 300 percent less than purchased Robux.
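The gap between buying and cashing out Robux can be made concrete with the article's numbers (400 Robux for $4.99, 10,000 for $99.99, and $0.0035 per earned Robux). A minimal sketch of the per-Robux rates (editorial note, not part of the original article):

```python
# Dollars per Robux when buying, versus dollars per Robux when cashing out earnings.
buy_small = 4.99 / 400       # ~$0.0125 per Robux at the smallest bundle
buy_large = 99.99 / 10_000   # ~$0.0100 per Robux at the largest bundle
cash_out = 0.0035            # cash-out rate for earned Robux, per the article

# Purchased Robux cost roughly 2.9x to 3.6x the cash-out rate -- the disparity
# the article loosely describes as "nearly 300 percent".
print(round(buy_small / cash_out, 2), round(buy_large / cash_out, 2))  # 3.56 2.86
```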

In addition, unlike other metaverse platforms, Roblox virtual items and its currency are not created or secured using blockchain technology, which means Roblox objects are not NFTs (non-fungible tokens) and Robux is not a cryptocurrency. As a result, when a Roblox user loses their account for whatever reason, they also lose every asset that was in the account, an occurrence that appears to happen with some frequency according to consumer complaints filed against Roblox with the FTC. (While the FTC said it has received nearly 1,300 consumer complaints against Roblox, the agency only provided Truth In Advertising with a sampling of 200 complaints in response to its FOIA request, citing FTC policy. Truth In Advertising has appealed the decision in order to gain access to all 1,291 of the complaints.)

Action cannot wait

Roblox, one of the largest gaming companies in the world, and the brands it has invited into its metaverse are actively exploiting and manipulating its users, including millions of children who cannot tell the difference between advertising and organic content, for their own financial gain. The result is that kids and other consumers are spending an enormous amount of money, attention and time on the platform. The FTC must act now, before an entire generation of minors is harmed by the numerous forms of deceptive advertising occurring on the Roblox platform."


For original post, please visit: 


Don’t go ‘Along’ with corporate schemes to gather up student data // Boninger & Molnar (2022), Phi Delta Kappan  



New *Privacy Not Included Research Finds #MentalHealth Apps Are Terrible at Privacy // Mozilla


The following is text from an email announcement received from Mozilla on May 2, 2022

"May is Mental Health Awareness Month. With so many people struggling with mental health issues recently, there has been a rapid rise in the use of mental health apps. These apps help people access online therapists, meditate to reduce stress, or find someone (or something) to listen to their problems. Unfortunately, *Privacy Not Included learned through our research that mental health apps are a growing industry with a lot of growing pains — privacy concerns are at the top of that list.

For our mental health app guide, we reviewed the privacy and security of 32 popular mental health and prayer apps, including Better, and Glorify.

What we found was frightening. In the six years we’ve been doing privacy research for *Privacy Not Included, we've never seen any group of products as bad at protecting and respecting privacy as mental health apps. 

A few of our findings:

  • 28 of the 32 apps we reviewed earned our *Privacy Not Included warning label.
  • Weak passwords ranging from “1” to “11111111” were allowed on numerous apps.
  • Two apps earned our Best Of distinction — PTSD Coach, an app created by the US Department of Veterans Affairs to help treat PTSD in veterans, and Wysa, an AI chatbot app with a very solid privacy policy.

Our main concerns with these apps are:

  • They collect an incredible amount of very personal information. Things like if you’re feeling depressed, how often you feel anxious, if you are having suicidal thoughts, if you are seeing a therapist and how often, and personally identifying information like name, location, sexual orientation, gender identity, and so much more. This is information that should be handled with extreme caution and care.
  • But it’s not. Too many of these mental health and prayer apps use your personal information as a business asset to help them make money. They can share this information with third-party advertisers to target you with interest-based advertising, they can sell your information, they can combine your information with additional information they gather from outside sources such as social media platforms and data brokers.
  • They don’t always give users the option to access and delete the personal information they collect.
  • And too often they don’t take appropriate measures to secure and protect all this extremely sensitive personal information you share.

So, what can you do? We thought very carefully about what guidance to offer people because these mental health apps do have benefits. They provide access to more affordable and accessible therapists, they offer research-backed strategies to help cope with everything from anxiety and depression to suicidal thoughts, OCD, and eating disorders, they help us meditate to calm down and reduce stress. Unfortunately, too often this comes at the cost of privacy. If you use a mental health-related app, please consider doing the following.

  • Read the reviews of the apps we provide in our new *Privacy Not Included guide to educate yourself on what data is collected and how it can be used. Choose apps that respect your privacy whenever possible.
  • Never sign up to or link your social media account such as Facebook or Google to these apps as that allows these social media companies to collect even more data on you, like the fact you’re using a mental health app, how often, and when.
  • Provide as little personal information as possible to access the service.
  • Do not give consent to allow your data to be shared, sold, or used for advertising purposes if you are able to opt out. Unfortunately, you won’t always be able to opt out.
  • Always use a strong password to protect the information stored in these apps.
  • If your employer offers you access to an app through a wellness program, ask them what data they can receive from the app and how your privacy as an employee will be respected.

Mental health is an incredibly important issue in our world today. We want to do what we can to help you access the help you need while understanding the risks to your privacy. Thank you for reading and supporting our *Privacy Not Included mental health apps guide.

Please share this important information with your friends, family, and colleagues. 




Thank you,
Jen Caltrider & Misha Rykov
Your *Privacy Not Included Team



For more, please see: 




The State of Biometrics 2022: A Review of Policy and Practice in UK Education // 


Big Tech Makes Big Data Out of Your Child: The FERPA Loophole EdTech Exploits to Monetize Student Data // Rhoades 2020 

Rhoades, Amy. "Big Tech Makes Big Data Out of Your Child: The FERPA Loophole EdTech Exploits to Monetize Student Data," American University Business Law Review, Vol. 9, No. 3 (2020). Available at: 


Surveillance and the Tyrant Test // Ferguson, 2022 // Georgetown Law Journal 


"How should society respond to police surveillance technologies? This question has been at the center of national debates around facial recognition, predictive policing, and digital tracking technologies. It is a debate that has divided activists, law enforcement officials, and academics and will be a central question for years to come as police surveillance technology grows in scale and scope. Do you trust police to use the technology without regulation? Do you ban surveillance technology as a manifestation of discriminatory carceral power that cannot be reformed? Can you regulate police surveillance with a combination of technocratic rules, policies, audits, and legal reforms? This Article explores the taxonomy of past approaches to policing technologies and—finding them all lacking—offers the “tyrant test” as an alternative.

The tyrant test focuses on power. Because surveillance technology offers government a new power to monitor and control citizens, the response must check that power. The question is how, and the answer is to assume the worst. Power will be abused, and constraints must work backwards from that cynical starting point. The tyrant test requires institutional checks that decenter government power into overlapping community institutions with real authority and enforceable individual rights.

The tyrant test borrows its structure from an existing legal framework also designed to address the rise of a potentially tyrannical power—the United States Constitution and, more specifically, the Fourth Amendment. Fearful of a centralized federal government with privacy invading intentions, the Fourth Amendment—as metaphor and methodology—offers a guide to approaching surveillance; it allows some technologies but only within a self-reinforcing system of structural checks and balances with power centered in opposition to government. The fear of tyrannical power motivated the original Fourth Amendment and still offers lessons for how society should address the growth of powerful, new surveillance technologies."



Ferguson, Andrew Guthrie. Surveillance and the Tyrant Test (January 24, 2022). Georgetown Law Journal, Vol. 110, No. 2, 2021. Available at SSRN:


How Big Tech Sees Big Profits in Social-Emotional Learning at School // Telegraf


By Anna Noble

"In June 2021, as students and teachers were finishing up a difficult school year, Priscilla Chan, wife of Facebook founder and CEO Mark Zuckerberg, made a live virtual appearance on the “Today” show, announcing that the Chan Zuckerberg Initiative (CZI), along with its “partner” Gradient Learning, was launching Along, a new digital tool to help students and teachers create meaningful connections in the aftermath of the pandemic.


According to CZI and Gradient Learning, the science of Along shows that students who form deep connections with teachers are more likely to be successful in school and less likely to show “disruptive behaviors,” resulting in fewer suspensions and lower school dropout rates. To help form those deep connections, the Along platform offers prompts such as “What is something that you really value and why?” or “When you feel stressed out, what helps?” Then, students may, on their “own time, in a space where they feel safe,” record a video of themselves responding to these questions and upload the video to the Along program.


CZI, the LLC foundation set up by Zuckerberg and Chan to give away 99 percent of his Facebook stock, is one of many technology companies that have created software products that claim to address the social and emotional needs of children. And school districts appear to be rapidly adopting these products to help integrate the social and emotional skills of students into the school curriculum, a practice commonly called social-emotional learning (SEL).

Panorama Education—whose financial backers also include CZI as well as other Silicon Valley venture capitalists such as the Emerson Collective, founded by Laurene Powell Jobs, the widow of Apple cofounder Steve Jobs—markets a survey application for collecting data on students’ social-emotional state that is used by 23,000 schools serving a quarter of the nation’s students, according to TechCrunch.


Gaggle, which uses students’ Google and Microsoft accounts to scan for keywords and collect social-emotional-related data, has contracts with at least 1,500 school districts, Education Week reports.


Before the pandemic temporarily shuttered school buildings, the demand for tracking what students do while they’re online, and how that activity might inform schools about how to address students’ social and emotional needs, was mostly driven by desires to prevent bullying and school shootings, according to a December 2019 report by Vice.

Tech companies that make and market popular software products such as GoGuardian, Securly, and Bark claim to alert schools of any troubling social-emotional behaviors students might exhibit when they’re online so that educators can intervene, Vice reports, but “[t]here is, however, no independent research that backs up these claims.”


COVID-19 and its associated school closures led to even more concerns about students’ “anxiety, depression and other serious mental health conditions,” reports EdSource. The article points to a survey conducted from April 25 to May 1, 2020, by the American Civil Liberties Union (ACLU) of Southern California, which found that 68 percent of students said they were in need of mental health support post-pandemic.


A major focus of CZI’s investment in education is its partnership with Summit Public Schools to “co-build the Summit Learning Platform to be shared with schools across the U.S.” As Valerie Strauss reported in the Washington Post following the release of a critical research brief by the National Education Policy Center at the University of Colorado Boulder, Summit Public Schools spun off TLP Education in 2019 to manage the Summit Learning program, which includes the Summit Learning Platform, according to Summit Learning’s user agreement. TLP Education has since become Gradient Learning, which has at this point placed both the Summit Learning program and Along in 400 schools that serve 80,000 students.

Since 2015, CZI has invested more than $280 million in developing the Summit Learning program. This total includes $134 million in reported contributions revenue to Summit Public Schools 501(c)(3) from 2015 to 2018 and another $140 million in reported awards to Summit Public Schools, Gradient Learning, and TLP Education (as well as organizations that helped in their SEL tools’ development) posted since 2018; a further $8 million has been given to “partner” organizations listed on the Along website—which include GripTape, Character Lab, Black Teacher Collaborative, and others—and their evaluations by universities.

An enticement that education technology companies are using to get schools to adopt Along and other student monitoring products is to offer these products for free, at least for a trial period, or for longer terms depending on the level of service. But “free” doesn’t mean without cost.

As CZI funds and collaborates with its nonprofit partners to expand the scope of student monitoring software in schools, Facebook (aka Meta) is actively working to recruit and retain young users on its Facebook and Instagram applications.


That CZI’s success at getting schools to adopt Along might come at the cost of exploiting children became clear when Facebook whistleblower Frances Haugen, a former employee who made tens of thousands of pages of the company’s internal documents public, disclosed that Facebook is heavily invested in creating commercial products for younger users, including an Instagram Kids application intended for children under 13. While Facebook executives discussed the known harms of their products to “tweens,” they nevertheless forged ahead, ignoring researchers’ suggestions on ways to reduce the harm. As Haugen explained, “they have put their astronomical profits before people.”

The information gathered from SEL applications such as Along will likely be used to build out the data infrastructure that generates knowledge used to make behavioral predictions. This information is valuable to corporations seeking a competitive edge in developing technology products for young users.

Schools provide a useful testing ground to experiment with ways to hold the attention of children, develop nudges, and elicit desirable behavioral responses. What these tech companies learn from students using their SEL platforms can be shared with their own product developers and other companies developing commercial products for children, including social media applications.

Yet Facebook’s own internal research confirms social media is negatively associated with teen mental health, and this association is strongest for those who are already vulnerable—such as teens with preexisting mental health conditions, those who are from socially marginalized groups, and those who have disabilities.

Although Facebook claimed it was putting the Instagram Kids app “on hold” in September 2021, a November 2021 study suggests the company continues to harvest data on children.

There are legislative restrictions governing the collection and use of student data.

The Family Educational Rights and Privacy Act (FERPA) protects the privacy of student data collected by educational institutions, and the Children’s Online Privacy Protection Act (COPPA) requires commercial businesses to obtain parental consent before gathering data from “children under 13 years of age.” Unfortunately, if a commercial contract with a school or district designates the business a “school official,” that business can extract child data, and the responsibility to obtain consent shifts to the school district.

While these agreements contain information relating to “privacy,” the obfuscatory language and lack of alternative options mean the “parental consent” obtained is neither informed nor voluntary.

Although these privacy policies contain data privacy provisions, there’s a caveat: those provisions don’t apply to “de-identified” data, i.e., personal data from which “unique identifiers” (e.g., names and ID numbers) have been removed. De-identified data is valuable to tech corporations because it can be used for research, product development, and improvement of services; however, such data is relatively easy to re-identify. “Privacy protection” just means it might be a little more difficult to find an individual.
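The ease of re-identification the author describes is often demonstrated with a "linkage attack": joining de-identified records to a public dataset on shared quasi-identifiers such as ZIP code, birth date, and gender. The sketch below is a minimal, hypothetical illustration; all names, values, and field choices are invented for the example.

```python
# Hypothetical linkage attack: "de-identified" records (names removed) are
# re-identified by joining them with a public dataset on quasi-identifiers.
# All data below is invented.

deidentified_records = [
    {"zip": "80302", "birth": "2008-04-12", "gender": "F",
     "note": "felt afraid during lockdown"},
    {"zip": "80305", "birth": "2007-11-02", "gender": "M",
     "note": "reported bullying"},
]

# A public source with names attached (e.g., a yearbook or social profile).
public_records = [
    {"name": "Jane Doe", "zip": "80302", "birth": "2008-04-12", "gender": "F"},
    {"name": "John Roe", "zip": "80305", "birth": "2007-11-02", "gender": "M"},
]

def reidentify(deidentified, public):
    """Join the two datasets on the shared quasi-identifiers."""
    matches = []
    for record in deidentified:
        for person in public:
            if all(record[k] == person[k] for k in ("zip", "birth", "gender")):
                matches.append({"name": person["name"], "note": record["note"]})
    return matches

for match in reidentify(deidentified_records, public_records):
    print(match)
```

No names ever appear in the "de-identified" file, yet every sensitive note is relinked to a person, which is why removing unique identifiers alone is weak protection.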

What privacy protection doesn’t mean is that the privacy of children is protected from the “personalized” content delivered to them by machine algorithms. It doesn’t mean the video of a child talking about “the time I felt afraid” isn’t out there floating in the ether, feeding the machines to adjust their future.

The connections between the Along platform and corporate technology giant Facebook are a good example of how these companies can operate in schools while maintaining their right to use personal information of children for their own business purposes.

Given concerns that arose in a congressional hearing in December 2021 about Meta’s Instagram Kids application, as reported by NPR, there is reason to believe these companies will continue to skirt key questions about how they play fast and loose with children’s data and substitute a “trust us” doctrine for meaningful protections.

As schools ramp up these SEL digital tools, parents and students are increasingly concerned about how school-related data can be exploited. According to a recent survey by the Center for Democracy and Technology, 69 percent of parents are concerned about their children’s privacy and security protection, and large majorities of students want more knowledge and control of how their data is used.

Schools are commonly understood to be places where children can make mistakes and express their emotions without their actions and expressions being used for profit, and school leaders are customarily charged with the responsibility to protect children from any kind of exploitation. Digital SEL products, including Along, may be changing those expectations.


Anna L. Noble is a doctoral student in the School of Education at the University of Colorado Boulder.

Scooped by Roxana Marachi, PhD!

Hackers prey on public schools, adding stress amid pandemic // AP News


By Cedar Attanasio
"ALBUQUERQUE, N.M. (AP) — For teachers at a middle school in New Mexico’s largest city, the first inkling of a widespread tech problem came during an early morning staff call.

On the video, there were shout-outs for a new custodian for his hard work, and the typical announcements from administrators and the union rep. But in the chat, there were hints of a looming crisis. Nobody could open attendance records, and everyone was locked out of class rosters and grades.

Albuquerque administrators later confirmed the outage that blocked access to the district’s student database — which also includes emergency contacts and lists of which adults are authorized to pick up which children — was due to a ransomware attack.

“I didn’t realize how important it was until I couldn’t use it,” said Sarah Hager, a Cleveland Middle School art teacher.

Cyberattacks like the one that canceled classes for two days in Albuquerque’s biggest school district have become a growing threat to U.S. schools, with several high-profile incidents reported since last year. And the coronavirus pandemic has compounded their effects: More money has been demanded, and more schools have had to shut down as they scramble to recover data or even manually wipe all laptops."...

For full post, please visit: 

Scooped by Roxana Marachi, PhD!

Remote learning apps shared children’s data at a ‘dizzying scale’ // The Washington Post


By Drew Harwell

"Millions of children had their online behaviors and personal information tracked by the apps and websites they used for school during the pandemic, according to an international investigation that raises concerns about the impact remote learning had on children’s privacy online.


The educational tools were recommended by school districts and offered interactive math and reading lessons to children as young as prekindergarten. But many of them also collected students’ information and shared it with marketers and data brokers, who could then build data profiles used to target the children with ads that follow them around the Web.

Those findings come from the most comprehensive study to date on the technology that children and parents relied on for nearly two years as basic education shifted from schools to homes.


Researchers with the advocacy group Human Rights Watch analyzed 164 educational apps and websites used in 49 countries, and they shared their findings with The Washington Post and 12 other news organizations around the world. The consortium, EdTech Exposed, was coordinated by the investigative nonprofit the Signals Network and conducted further reporting and technical review.

What the researchers found was alarming: nearly 90 percent of the educational tools were designed to send the information they collected to ad-technology companies, which could use it to estimate students’ interests and predict what they might want to buy.

Researchers found that the tools sent information to nearly 200 ad-tech companies, but that few of the programs disclosed to parents how the companies would use it. Some apps hinted at the monitoring in technical terms in their privacy policies, the researchers said, while many others made no mention at all.


The websites, the researchers said, shared users’ data with online ad giants including Facebook and Google. They also requested access to students’ cameras, contacts or locations, even when it seemed unnecessary to their schoolwork. Some recorded students’ keystrokes, even before they hit “submit.”

The “dizzying scale” of the tracking, the researchers said, showed how the financial incentives of the data economy had exposed even the youngest Internet users to “inescapable” privacy risks — even as the companies benefited from a major revenue stream.

“Children,” lead researcher Hye Jung Han wrote, were “just as likely to be surveilled in their virtual classrooms as adults shopping in the world’s largest virtual malls.”

School districts and the sites’ creators defended their use, with some companies saying researchers had erred by including in their study homepages for the programs, which included tracking codes, instead of limiting their analysis to the internal student pages, which they said contained fewer or no trackers. The researchers defended the work by noting that students often had to sign in on the homepages before their lessons could begin.


The coronavirus pandemic abruptly upended the lives of children around the world, shuttering schools for more than 1.5 billion students within the span of just a few weeks. Though some classrooms have reopened, tens of millions of students remain remote, and many now depend on education apps for the bulk of their school days.

Yet there has been little public discussion of how the companies that provided the programs remote schooling depends on may have profited from the pandemic windfall of student data.

The learning app Schoology, for example, says it has more than 20 million users and is used by 60,000 schools across some of the United States’ largest school districts. The study identified code in the app that would have allowed it to extract a unique identifier from the student’s phone, known as an advertising ID, that marketers often use to track people across different apps and devices and to build a profile on what products they might want to buy.
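The value of an advertising ID lies in linkage: because the same device-level identifier is readable by different apps, separate event streams can be merged into a single behavioral profile. The sketch below is a hypothetical illustration of that grouping step; all identifiers, app names, and events are invented.

```python
# Hedged sketch (all data invented): grouping ad-tech event logs by
# advertising ID merges activity from unrelated apps on one device
# into a single profile.
from collections import defaultdict

# Events reported by two unrelated apps on the same phone, plus one
# event from a different device.
events = [
    {"ad_id": "38400000-8cf0-11bd", "app": "math_app", "event": "opened fractions lesson"},
    {"ad_id": "38400000-8cf0-11bd", "app": "game_app", "event": "viewed toy ad"},
    {"ad_id": "99999999-0000-1111", "app": "math_app", "event": "opened reading lesson"},
]

# One profile per advertising ID, spanning every app that reported it.
profiles = defaultdict(list)
for e in events:
    profiles[e["ad_id"]].append((e["app"], e["event"]))

for ad_id, activity in profiles.items():
    print(ad_id, activity)
```

The first profile now combines a school lesson with ad-viewing behavior from a game, which is exactly the cross-app tracking the identifier enables.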


A representative for PowerSchool, which developed the app, referred all questions to the company’s privacy policy, which said it does not collect advertising IDs or provide student data to companies for marketing purposes. But the policy also says the company’s website uses third-party tools to show targeted ads to users based on their “browsing history on other websites or on other devices.” The policy did not say which third-party companies had received users’ data.

The policy also said that it “does not knowingly collect any information from children under the age of 13,” in keeping with the Children’s Online Privacy Protection Act, or COPPA, the U.S. law that requires special restrictions on data collected from young children. The company’s software, however, is marketed for classrooms as early as kindergarten, which for many children starts around age 4.

The investigation acknowledged that it could not determine exactly what student data would have been collected during real-world use. But the study did reveal how the software was designed to work, what data it had been programmed to seek access to, and where that data would have been sent.


School districts and public authorities that had recommended the tools, Han wrote, had “offloaded the true costs of providing education online onto children, who were forced to pay for their learning with their fundamental rights to privacy.”

The researchers said they found a number of trackers on websites common among U.S. schools. The website of ST Math, a “visual instructional program” for prekindergarten, elementary and middle school students, was shown to have shared user data with 19 third-party trackers, including Facebook, Google, Twitter and the e-commerce site Shopify.

Kelsey Skaggs, a spokeswoman for the California-based MIND Research Institute, which runs ST Math, said in a statement that the company does not “share any personally identifiable information in student records for the purposes of targeted advertising or other commercial purposes” and does not use the same trackers on its student platform as it does on its homepage.


But the researchers said they found trackers not just on ST Math’s main site but on pages offering math games for prekindergarten and the first grade.

Google spokesperson Christa Muldoon said the company is investigating the researchers’ claims and will take action if they find any violations of their data privacy rules, which include bans on personalized ads aimed at minors’ accounts. A spokesperson for Facebook’s parent company Meta said it restricts how businesses share children’s data and how advertisers can target children and teens.

The study comes as concern grows over the privacy risks of the educational-technology industry. The Federal Trade Commission voted last week on a policy statement urging stronger enforcement of COPPA, with Chair Lina Khan arguing that the law should help “ensure that children can do their schoolwork without having to surrender to commercial surveillance practices.” 


COPPA requires apps and websites to get parents’ consent before collecting children’s data, but schools can consent on their behalf if the information is designated for educational use.

In an announcement, the FTC said it would work to “vigilantly enforce” provisions of the law, including bans against requiring children to provide more information than is needed and restrictions against using personal data for marketing purposes. Companies that break the law, it said, could face fines and civil penalties.

Clearly, the tools have wide impact. In Los Angeles, for example, more than 447,000 students are using Schoology and 79,000 are using ST Math. Roughly 70,000 students in Miami-Dade County Public Schools use Schoology.


Both districts said they’ve taken steps to limit privacy risks, with Los Angeles requiring software companies to submit a plan showing how student information will be protected while Miami-Dade said it had conducted a “thorough and extensive” evaluation process before bringing on Schoology last year.


The researchers said most school districts they examined had conducted no technical privacy evaluations before endorsing the educational tools. Because the companies’ privacy policies often obscured the extent of their monitoring, the researchers said, district officials and parents often were left in the dark on how students’ data would be collected or used.

Some popular apps reviewed by the researchers didn’t track children at all, showing that it is possible to build an educational tool without sacrificing privacy. Apps such as Math Kids and African Storybook didn’t serve ads to children, collect their identifying details, access their cameras, request more software permissions than necessary or send their data to ad-tech companies, the analysis found. They just offered simple learning lessons, the kind that students have relied on for decades.

Vivek Dave, a father of three in Texas whose company RV AppStudios makes Math Kids, said the company charges for in-app purchases on some word-search and puzzle games designed for adults and then uses that money to help build ad-free educational apps. Since launching an alphabet game seven years ago, the company has built 14 educational apps that have been installed 150 million times this year and are now available in more than 35 languages.

“If you have the passion and just try to understand them, you don’t need to do all this level of tracking to be able to connect with kids,” he said. “My first beta testers were my kids. And I didn’t want that for my kids, period.”

The researchers argued that governments should conduct data-privacy audits of children’s apps, remove the most invasive, and help guide teachers, parents and children on how best to prevent data over-collection or misuse.

Companies, they said, should work to ensure that children’s information is treated differently than everyone else’s, including by being siloed away from ads and trackers. And lawmakers should encode these kinds of protections into regulation, so the companies aren’t allowed to police themselves.

Bill Fitzgerald, a privacy researcher and former high school teacher who was not involved in the study, sees apps’ tracking of students not only as a loss of privacy but as a lost opportunity to use the best of technology for their benefit. Instead of rehashing old ways to vacuum up user data, schools and software developers could have been pursuing fresher, more creative ideas to get children excited to learn.

“We have outsourced our collective imagination and our vision as to what innovation with technology could be to third-party product offerings that aren’t remotely close to the classroom and don’t have our best interests at heart,” Fitzgerald said.

“The conversation the industry wants us to have is: What’s the harm?” he added. “The right conversation, the ethical conversation is: What’s the need? Why does a fourth-grader need to be tracked by a third-party vendor to learn math?”



Abby Rufer, a high school algebra teacher in Dallas, said she’s worked with a few of the tested apps and many others during a frustratingly complicated two years of remote education.

School districts felt pressured during the pandemic to quickly replace the classroom with online alternatives, she said, but most teachers didn’t have the time or technical ability to uncover how much data they gobbled up.

“If the school is telling you to use this app and you don’t have the knowledge that it might be recording your students’ information, that to me is a huge concern,” Rufer said.

Many of her students are immigrants from Latin America or refugees from Afghanistan, she said, and some are already fearful of how information on their locations and families could be used against them.

“They’re being expected to jump into a world that is all technological,” she said, “and for many of them it’s just another obstacle they’re expected to overcome.” 


For original post, please visit: 

Rescooped by Roxana Marachi, PhD from Social Impact Bonds, "Pay For Success," Results-Based Contracting, and Blockchain Digital Identity Systems!

Ableism And Disability Discrimination In New Surveillance Technologies: How new surveillance technologies in education, policing, health care, and the workplace disproportionately harm disabled peo...


Full report – PDF 

Plain language version – PDF

By Lydia X. Z. Brown, Ridhi Shetty, Matt Scherer, and Andrew Crawford

"Algorithmic technologies are everywhere. At this very moment, you can be sure students around the world are complaining about homework, sharing gossip, and talking about politics — all while computer programs observe every web search they make and every social media post they create, sending information about their activities to school officials who might punish them for what they look at. Other things happening right now likely include:

  • Delivery workers are trawling up and down streets near you while computer programs monitor their location and speed to optimize schedules, routes, and evaluate their performance;
  • People working from home are looking at their computers while their computers are staring back at them, timing their bathroom breaks, recording their computer screens, and potentially listening to them through their microphones;
  • Your neighbors – in your community or the next one over – are being tracked and designated by algorithms targeting police attention and resources to some neighborhoods but not others;
  • Your own phone may be tracking data about your heart rate, blood oxygen level, steps walked, menstrual cycle, and diet, and that information might be going to for-profit companies or your employer. Your social media content might even be mined and used to diagnose a mental health disability.

This ubiquity of algorithmic technologies has pervaded every aspect of modern life, and the algorithms are improving. But while algorithmic technologies may become better at predicting which restaurants someone might like or which music a person might enjoy listening to, not all of their possible applications are benign, helpful, or just.

Scholars and advocates have demonstrated myriad harms that can arise from the types of encoded prejudices and self-perpetuating cycles of discrimination, bias, and oppression that may result from automated decision-makers. These potentially harmful technologies are routinely deployed by government entities, private enterprises, and individuals to make assessments and recommendations about everything from rental applications to hiring, allocation of medical resources, and whom to target with specific ads. They have been deployed in a variety of settings including education and the workplace, often with the goal of surveilling activities, habits, and efficiency.

Disabled people comprise one such community that experiences discrimination, bias, and oppression resulting from automated decision-making technology. Disabled people continually experience marginalization in society, especially those who belong to other marginalized communities such as disabled women of color. Yet, not enough scholars or researchers have addressed the specific harms and disproportionate negative impacts that surveillance and algorithmic tools can have on disabled people. This is in part because algorithmic technologies that are trained on data that already embeds ableist (or relatedly racist or sexist) outcomes will entrench and replicate the same ableist (and racial or gendered) bias in the computer system. For example, a tenant screening tool that considers rental applicants’ credit scores, past evictions, and criminal history may prevent poor people, survivors of domestic violence, and people of color from getting an apartment because they are disproportionately likely to have lower credit scores, past evictions, and criminal records due to biases in the credit and housing systems and in policing disparities.

This report examines four areas where algorithmic and/or surveillance technologies are used to surveil, control, discipline, and punish people, with particularly harmful impacts on disabled people. They include: (1) education; (2) the criminal legal system; (3) health care; and (4) the workplace. In each section, we describe several examples of technologies that can violate people’s privacy, contribute to or accelerate existing harm and discrimination, and undermine broader public policy objectives (such as public safety or academic integrity).




Scooped by Roxana Marachi, PhD!

Policy Statement of the Federal Trade Commission on Education Technology // FTC 

Scooped by Roxana Marachi, PhD!

EdTech Tools Coming Under FTC Scrutiny Over Children’s Privacy // BloombergLaw




By Andrea Vittorio
"The Federal Trade Commission is planning to scrutinize educational technology in its enforcement of children’s online privacy rules.


The commission is slated to vote at a May 19 meeting on a policy statement related to how the Children’s Online Privacy Protection Act applies to edtech tools, according to an agenda issued Thursday.


The law, known as COPPA, gives parents control over what information online platforms can collect about their kids. Parents concerned about data that digital learning tools collect from children have called for stronger oversight of technology increasingly used in schools.

The FTC’s policy statement “makes clear that parents and schools must not be required to sign up for surveillance as a condition of access to tools needed to learn,” the meeting notice said.

It’s the first agency meeting since Georgetown University law professor Alvaro Bedoya was confirmed as a member of the five-seat commission, giving Chair Lina Khan the Democratic majority needed to pursue policy goals. Bedoya has said he wants to strengthen protections for children’s digital data.

Companies that violate COPPA can face fines from the FTC. Past enforcement actions under the law have been brought against companies including TikTok and Google’s YouTube.

Alphabet Inc.‘s Google has come under legal scrutiny for collecting data on users of its educational tools and relying on schools to give consent for data collection on parents’ behalf.

New Mexico’s attorney general recently settled a lawsuit against Google that alleged COPPA violations. Since the suit was filed in 2020, Google has launched new features to protect children’s data.




Scooped by Roxana Marachi, PhD!

Face up to it – this surveillance of kids in school is creepy // Stephanie Hare // The Guardian


"Facial recognition technology doesn’t just allow children to make cashless payments – it can gauge their mood and behaviour in class"



By Stephanie Hare

"A few days ago, a friend sent me a screenshot of an online survey sent by his children’s school and a company called ParentPay, which provides technology for cashless payments in schools. “To help speed up school meal service, some areas of the UK are trialling using biometric technology such as facial identity scanners to process payments. Is this something you’d be happy to see used in your child’s school?” One of three responses was allowed: yes, no and “I would like more information before agreeing”.

My friend selected “no”, but I wondered what would have happened if he had asked for more information before agreeing. Who would provide it? The company that stands to profit from his children’s faces? Fortunately, Defend Digital Me’s report, The State of Biometrics 2022: A Review of Policy and Practice in UK Education, was published last week, introduced by Fraser Sampson, the UK’s biometrics and surveillance camera commissioner. It is essential reading for anyone who cares about children.


First, it reminds us that the Protection of Freedoms Act 2012, which protects children’s biometrics (such as face and fingerprints), applies only in England and Wales. Second, it reveals that the information commissioner’s office has still not ruled on the use of facial recognition technology in nine schools in Ayrshire, which was reported in the media in October 2021, much less the legality of the other 70 schools known to be using the technology across the country. Third, it notes that the suppliers of the technology are private companies based in the UK, the US, Canada and Israel.



The report also highlights some gaping holes in our knowledge about the use of facial recognition technology in British schools. For instance, who in government approved these contracts? How much has this cost the taxpayer? Why is the government using a technology that is banned in several US states and which regulators in France, Sweden, Poland and Bulgaria have ruled unlawful on the grounds that it is neither necessary nor proportionate and does not respect children’s privacy? Why are British children’s rights not held to the same standard as their continental counterparts?


The report also warns that this technology does not just identify children or allow them to transact with their bodies. It can be used to assess their classroom engagement, mood, attentiveness and behaviour. One of the suppliers, CRB Cunninghams, advertises that it scans children’s faces every three months and that its algorithm “constantly evolves to match the child’s growth and change of appearance”.

So far, MPs have been strikingly silent on the use of such technology in schools. Instead, two members of the House of Lords have sounded the alarm. In 2019, Lord Clement-Jones put forward a private member’s bill for a moratorium and review of all uses of facial recognition technology in the UK. The government has yet to give this any serious consideration. Undaunted, his colleague Lord Scriven said last week that he would put forward a private member’s bill to ban its use in British schools.

It’s difficult not to wish the two lords well when you return to CRB Cunninghams’ boasts about its technology. “The algorithm grows with the child,” it proclaims. That’s great, then: what could go wrong?

Stephanie Hare is a researcher and broadcaster. Her new book is Technology Is Not Neutral: A Short Guide to Technology Ethics




For original post, please visit: 

Scooped by Roxana Marachi, PhD!

The Datafication of Student Life and the Consequences for Student Data Privacy // Kyle M.L. Jones 

By Kyle M. L. Jones (MLIS, PhD)
Indiana University–Purdue University Indianapolis (IUPUI)

"The COVID-19 pandemic changed American higher education in more ways than many people realize: beyond forcing schools to transition overnight to fully online learning, the health crisis has indirectly fueled institutions’ desire to datafy students in order to track, measure, and intervene in their lives. Higher education institutions now collect enormous amounts of student data, by tracking students’ performance and behaviors through learning management systems, learning analytic systems, keystroke clicks, radio frequency identification, and card swipes throughout campus locations. How do institutions use all this data, and what are the implications for student data privacy? Are the technologies as effective as institutions claim? This blog explores these questions and calls for higher education institutions to better protect students, their individuality, and their power to make the best choices for their education and lives.

When the pandemic prevented faculty and students from accessing their common campus haunts, including offices and classrooms, they relied on technologies to fill their information, communication, and education needs. Higher education was arguably better prepared than other organizations and institutions for immersive online education. For decades, universities and colleges have invested significant resources in networking infrastructures and applications to support constant communication and information sharing. Educational technologies, such as learning management systems (LMSs) licensed by Instructure (Canvas) and Blackboard, and productivity tools such as Microsoft’s Office365 are ubiquitous in higher education. So, while the transition to online education was difficult for some in pedagogical terms, the technological ability to do so was not: higher education was prepared.


Datafication Explained: How Institutions Quantify Students

The same technological ubiquity that has helped higher education succeed during the pandemic has also fueled institutions’ growing desire to datafy students for the purposes of observing, measuring, and intervening in their lives. These practices are not new to universities and colleges, who have long held that creating education records about students supports administrative record keeping and instruction. But data and informational conditions today are much different than just 10 to 20 years ago: the ability to track, capture, and analyze a student’s online information behaviors, communications, and system actions (e.g., clicks, keystrokes, facial movements), not to mention their granular academic history, is possible.

In non-pandemic times, when students are immersed in campus life, myriad sensors (e.g., WiFi, RFID) and systems (e.g., building and transactional card swipes) associated with a specific location also make it possible to analyze a student’s physical movements. These data points enable institutions to track where a student has been and with whom that student has associated, by examining similar patterns in the data.
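The kind of association analysis described above can be illustrated with a small sketch: grouping timestamped swipe records by location and flagging students who appear together within a short window. Everything here is hypothetical — the records, the ten-minute window, and the function names are invented for illustration, not drawn from any institution's actual system.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical card-swipe records: (student_id, location, timestamp).
swipes = [
    ("s1", "library", datetime(2024, 3, 1, 9, 0)),
    ("s2", "library", datetime(2024, 3, 1, 9, 4)),
    ("s3", "gym",     datetime(2024, 3, 1, 9, 5)),
    ("s1", "dining",  datetime(2024, 3, 2, 12, 0)),
    ("s2", "dining",  datetime(2024, 3, 2, 12, 3)),
]

def co_locations(records, window=timedelta(minutes=10)):
    """Count how often pairs of students swipe into the same
    location within `window` of each other."""
    pairs = defaultdict(int)
    by_location = defaultdict(list)
    for student, loc, ts in records:
        by_location[loc].append((student, ts))
    for entries in by_location.values():
        entries.sort(key=lambda e: e[1])
        for i, (a, ta) in enumerate(entries):
            for b, tb in entries[i + 1:]:
                if tb - ta > window:
                    break  # entries are sorted, so no later match
                if a != b:
                    pairs[tuple(sorted((a, b)))] += 1
    return dict(pairs)

print(co_locations(swipes))  # {('s1', 's2'): 2}
```

Even this toy version shows how quickly "transactional" records become a social graph: two co-occurrences already suggest an association between s1 and s2.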

How are institutions and the educational technology (edtech) companies they rely on using their growing stores of data? There have been infamous cases over the years, such as the Mount St. Mary’s “drown the bunnies” fiasco, when the previous president attempted to use predictive measures to identify and force out students unlikely to achieve academic success and be retained. Then-president Simon Newman, who was eventually fired, argued, “This is hard for [the faculty] because you think of the students as cuddly bunnies, but you can’t…. You just have to drown the bunnies…. put a Glock to their heads.” At the University of Arizona, its “Smart Campus research” aims to “repurpose the data already being captured from student ID cards to identify those most at risk for not returning after their first year of college.” It used student ID card data to track and measure social interactions through time-stamp and geolocation metadata. The analysis enabled the university to map student interactions and their social networks, all for the purpose of predicting a student’s likelihood of being retained. 

Edtech has also invested heavily in descriptive and predictive analytic capabilities, sometimes referred to as learning analytics. Common LMSs often record and share descriptive statistics with instructors concerning which pages and resources (e.g., PDFs, quizzes, etc.) a student has clicked on; some instructors use the data to create visualizations to make students aware of their engagement levels in comparison to their peers in a course. Other companies use their access to real-time system data, and to the students who create it, to run experiments. Pearson gained attention for its use of social-psychological interventions on over 9,000 students at 165 institutions to test “whether students who received the messages attempted and completed more problems than their counterparts at other institutions.” While some characterize Pearson’s efforts as simple A/B testing, often used to examine interface tweaks on websites and applications, Pearson did the interventions based on its own ethical review, without input from any of the 165 institutions and without students’ consent.
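The A/B-style comparison mentioned here can be sketched as a simple two-proportion test: did a larger share of students in the "message" arm complete problems than in the control arm? The counts below are invented for illustration and are not Pearson's actual data.

```python
import math

# Hypothetical counts: completions out of students in each arm.
treated_completed, treated_n = 620, 1000   # saw the messages
control_completed, control_n = 570, 1000   # did not

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for the difference between two proportions,
    using the pooled estimate under the null of no difference."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

z = two_proportion_z(treated_completed, treated_n,
                     control_completed, control_n)
print(round(z, 2))  # 2.28
```

A z around 2.28 would conventionally be read as a statistically significant lift — which is exactly why the ethical question (who consented to being in either arm?) matters more than the arithmetic.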


Is Datafication Worth It? Privacy Considerations

The higher education data ecosystem and the paths it opens for universities, edtech, and other third-party actors to use it raise significant questions about the effects on students’ privacy. The datafication of student life may lead institutions to improve student learning as well as retention and graduation rates. Maybe studying student life at a granular, identifiable level, or even at broader subgroup levels, improves institutional decision making and improves an institution’s financial situation. But what are the costs of these gains? The examples above, many of which I have more comprehensively summarized and analyzed elsewhere, point to clear issues. 

Chief among them is privacy. It is not normative for institutions—or the companies they contract for services—to expose a student’s life, regardless of the purposes or justifications. Yet, universities and colleges continue to push the point that they can do so and are often justified in doing so if it improves student success. But student success is a broad term. Whose success matters and warrants the intrusion? Often an analytic, especially a predictive measure, requires historical data, meaning that one student’s life is made analyzable only for another student downstream to benefit months or years later. And how do institutions define success? Student success may be learning gains, but education institutions often construe it as retention and graduation, which are just proxies. 

When institutions datafy student life for some purpose other than to directly help students, they treat students as objects—not human beings with unique interests, goals, and autonomy over their lives. Institutions and others can use data and related artifacts to guide, nudge, and even manipulate student choices with an invisible hand, since students are rarely aware of the full reach of an institution’s data infrastructure. Students trust that institutions will protect identifiable data and information, but that trust is misplaced if institutions 1) are not transparent about their data practices and 2) do not enable students to make their own privacy choices to the greatest extent possible. Student privacy policy is often difficult for students to understand and locate. 


Moreover, institutions need to justify their analytic practices. They should provide an overview of the intention of their practice and explain the empirical support for that justification. If the practice is experimental, institutions must communicate that they have no clear evidence that the practice will produce benefits for students. If science supports the practice, institutions should provide that science to students to review and summarize. 

Many other policy and practice recommendations are relevant, as the literature outlines ethics codes, philosophical arguments, and useful principles for practice. The key point here is that the datafication of student life and the privacy problems it creates are justified only if higher education institutions protect students and put their interests first, treat students as humans, and respect their choices about their lives."


To view original post, please visit: 

Scooped by Roxana Marachi, PhD!

Parents Nationwide File Complaints with U.S. Department of Education; Seek to Address Massive Student Data Privacy Protection Failures // Parents' Coalition of Montgomery County, MD

 "On July 9, 2021, parents of school-age children from Maryland to Alaska, in collaboration with the Student Data Privacy Project (SDPP), will file over a dozen complaints with the U.S. Department of Education (DoE) demanding accountability for the student data that schools share with Educational Technology (EdTech) vendors.
Formed during the pandemic, SDPP comprises parents concerned about how their children’s personally identifiable information (PII) is increasingly being mined by EdTech vendors, with the consent of our schools, and without parental consent or school oversight.
With assistance and support from SDPP, 14 parents from 9 states filed requests with their school districts under the Family Educational Rights and Privacy Act (FERPA) seeking access to the PII collected about their children by EdTech vendors. No SDPP parents were able to obtain all of the requested PII held by EdTech vendors, a clear violation of FERPA.
One parent in Maryland never received a response. A New Jersey parent received a generic reply with no date, school name or district identification. Yet a Minnesota parent received over 2,000 files, none of which contained the metadata requested, but did reveal a disturbing amount of personal information held by an EdTech vendor, including the child’s baby pictures, videos of her in an online yoga class, her artwork and answers to in-class questions.
Lisa Cline, SDPP co-founder and parent in Maryland said, “When I tried to obtain data gathered by one app my child uses in class, the school district said, ‘Talk to the vendor.’ The vendor said, ‘Talk to the school.’ This is classic passing of the buck. And the DoE appears to be looking the other way.”
FERPA, a statute enacted in 1974 — almost two decades before the Internet came into existence, at a time when technology in schools was limited to mimeograph machines and calculators  — affords parents the right to obtain their children’s education records, to seek to have those records amended, and to have control over the disclosure of the PII in those records.
Unfortunately, this law is now outdated. Since the digital revolution, schools are either unaware, unable or unwilling to apply FERPA to EdTech vendors. Before the pandemic, the average school used 400-1,000 online tools, according to the Student Data Privacy Consortium. Remote learning has increased this number exponentially.
SDPP co-founder, privacy consultant, law professor and parent Joel Schwarz noted that “DOE’s failure to enforce FERPA means that EdTech providers are putting the privacy of millions of children at risk, leaving these vendors free to collect, use and monetize student PII, and share it with third parties at will.”
A research study released by the Me2B Alliance in May 2021, showed that 60% of school apps send student data to potentially high-risk third parties without knowledge or consent. SDPP reached out to Me2B and requested an audit of the apps used by schools in the districts involved in the Project. Almost 70% of the apps reviewed used Software Development Kits (SDKs) that posed a “High Risk” to student data privacy, and almost 40% of the apps were rated “Very High Risk,” meaning the code used is known to be associated with registered Data Brokers. Even more concerning, Google showed up in approximately 80% of the apps that included an SDK, and Facebook ran a close second, showing up in about 60% of the apps.
Emily Cherkin, an SDPP co-founder who writes and speaks nationally about screen use as The Screentime Consultant, noted, “because these schools failed to provide the data requested, we don’t know what information is being collected about our children, how long these records are maintained, who has access to them, and with whom they’re being shared.”
“FERPA says that parents have a right to know what information is being collected about their children, and how that data is being used,” according to Andy Liddell, a federal court litigator in Austin, TX and another SDPP co-founder. “But those rights are being trampled because neither the schools nor the DoE are focused on this issue.”

The relief sought of the DoE includes requiring schools to:
•  actively oversee their EdTech vendors, including regular audits of vendors’ access, use and disclosure of student PII and publicly posting the results of those audits so that parents can validate that their children’s data is being adequately protected;

•  provide meaningful access to records held by EdTech in response to a FERPA request, clarifying that merely providing a student’s account log-in credentials, or referring the requester to the Vendor, does not satisfy the school’s obligations under FERPA;

•  ensure that when their EdTech vendors share student PII with third parties, the Vendor and the school maintain oversight of third-party access and use of that PII, and apply all FERPA rights and protections to that data, including honoring FERPA access requests;

•  protect all of a student’s digital footprints — including browsing history, searches performed, websites visited, etc. (i.e., metadata) — under FERPA, and ensure that all of this data is provided in response to a FERPA access request.

# # # 

If you would like more information, please contact Joel Schwarz at

Parents are invited to join the Student Data Privacy Project. A template letter to school districts can be downloaded from the SDPP website:

SDPP is an independent parent-led organization founded by Joel Schwarz, Andy Liddell, Emily Cherkin and Lisa Cline. Research and filing assistance provided pro bono by recent George Washington University Law School graduate Gina McKlaveen.
Scooped by Roxana Marachi, PhD!

Stop Invasive Remote Proctoring: Pass California’s Student Test Taker Privacy Protection Act



THE OFFICIAL ANDREASCY's comment, April 13, 4:33 PM
Keep sharing Roxana!
Scooped by Roxana Marachi, PhD!

Dismantling the "Black Opticon": Race Equity and Online Data Privacy Regulation // Anita L. Allen, JD, PhD

February 9, 2022
Lecture by Professor Anita L. Allen of the University of Pennsylvania, co-hosted by the Penn Program on Regulation (PPR) and the Center for Technology, Innovation and Competition. Part of PPR’s 2021-2022 Lecture Series on Race and Regulation.

"In the opening decades of the 21st century, popular online platforms rapidly transformed the world. These platforms have come with benefits, but at a heavy price to information privacy and data protection. African Americans online face three distinguishable but related categories of vulnerability to bias and discrimination that Prof. Allen dubs the "Black Opticon": discriminatory over-surveillance (panoptic vulnerabilities to, for example, AI-empowered facial recognition and geolocation technologies); discriminatory exclusion (ban-optic vulnerabilities to, for example, unequal access to goods, services, and public accommodations advertised or offered online); and discriminatory predation (con-optic vulnerabilities to, for example, con-jobs, scams, and exploitation relating to credit, employment, business and educational opportunities). Escaping the Black Opticon is unlikely without acknowledgement of privacy's unequal distribution and privacy law's outmoded and unduly race-neutral façade.

African Americans could benefit from race-conscious efforts to shape a more equitable digital public sphere through improved laws and legal institutions. Prof. Allen discusses the Black Opticon triad and considers whether the Virginia Consumer Data Protection Act (2021), the federal Data Protection Act (2021), and new resources for the Federal Trade Commission proposed in 2021 possibly meet imperatives of a race-conscious African American Online Equity Agenda, specifically designed to help dismantle the Black Opticon. The 2021 enacted Virginia law and the bill proposing a new federal data protection agency include civil rights and non-discrimination provisions, and the Federal Trade Commission has an impressive stated commitment to marginalized peoples within the bounds of its authority.

Nonetheless, the limited scope and pro-business orientation of the Virginia law, and barriers to follow-through on federal measures, are substantial hurdles in the road to true platform equity. The path forward requires jumping those hurdles, regulating platforms, and indeed all of the digital economy, in the interests of nondiscrimination, anti-racism, and anti-subordination." 

Scooped by Roxana Marachi, PhD!

California: Speak Up For Biometric and Student Privacy //


By Hayley Tsukayama

"California has shown itself to be a national privacy leader. But there is still work to do. That’s why EFF is proud to sponsor two bills in this year’s legislature—both with the co-sponsorship of Privacy Rights Clearinghouse—that would strengthen privacy protections in the state. These bills are focused on two particularly pernicious forms of data collection. Both will be heard on April 5 in the California Senate Judiciary Committee, and we’re asking Californians to tell committee members to pass these bills.

Advancing Biometric Privacy

Authored by Senator Bob Wieckowski, S.B. 1189 requires private entities to obtain your opt-in consent before collecting your biometric information. Biometric information is incredibly sensitive and, by its very nature, is tied immutably to our identities. While you can change a password, you can’t easily change your face, the rhythm of your walk, or the ridges of your fingerprints. Despite this, some companies collect and share this information without asking first—by, for example, taking faceprints from every person who walks into a store. They may then go on to share or sell that information.


This is wrong. People should have control over who they trust with their biometric information. And companies must be held accountable if they break that trust. Like the landmark Illinois Biometric Information Privacy Act (BIPA), S.B. 1189 gives individuals the right to sue companies that violate the law. This is the same type of provision that allowed Facebook users in Illinois to take the company to task for collecting their faceprints without permission. That case ended in a $650 million settlement for Illinois’ Facebook users.

This bill has the support of a broad range of both California and national organizations active on surveillance issues, which speaks to the importance of implementing this privacy protection. The Greenlining Institute, Media Alliance, Oakland Privacy, the Consumer Federation of California, the Consumer Federation of America, Consumer Action and Fairplay are all in support. If you'd like to join them in supporting this bill, take our action to support S.B. 1189.

Protecting Student Privacy

EFF is also proud to sponsor S.B. 1172, the Student Test Taker Privacy Protection Act (STTPPA), a first-of-its-kind piece of legislation aimed at curbing some of the worst practices of remote proctoring companies. Authored by Senator Dr. Richard Pan, this bill places limits on what proctoring companies collect and provides students the right to their day in court for privacy violations. There has been a 500% increase in the use of these proctoring tools during the pandemic—in 2020, more than half of higher education institutions used remote proctoring services and another 23% were considering doing so.

Proctoring companies have also suffered data breaches, and federal lawmakers and California’s Supreme Court have raised questions about proctoring company practices. But no meaningful data protections have been put into place to protect the privacy of test takers. Given their widespread use, proctoring companies must be held accountable, and this bill will do that.


The STTPPA directs proctoring companies not to collect, use, retain, or disclose test takers’ personal information except as strictly necessary to provide proctoring services. If they do not, the student has the opportunity to take the proctoring company to court. This simple bill gives those directly harmed by privacy violations—test takers—the opportunity to protect their data and privacy.


Leading student and privacy advocates have lent their support to the bill, including: Center for Digital Democracy, Citizens Privacy Coalition of Santa Clara County (CPCSCC), Common Sense, Fairplay, The Greenlining Institute, The Parent Coalition for Student Privacy, Media Alliance, and Oakland Privacy.


If you believe that companies should have limits on the information they collect and that people should have ways to hold them accountable, please tell the California Senate Judiciary Committee to vote “yes” on S.B. 1189 and S.B. 1172."






Scooped by Roxana Marachi, PhD!

Bringing in the technological, ethical, educational and social-structural for a new education data governance // Hillman, 2022 // Learning, Media and Technology


By Velislava Hillman



"The need for a comprehensive education data governance – the regulation of who collects what data, how it is used and why – continues to grow. Technologically, data can be collected by third parties, rendering schools unable to control their use. Legal frameworks partially achieve data governance as businesses continue to exploit existing loopholes. Education data use practices undergo no prior ethical reviews. And at a personal level, students have no agency over these practices. In other words, there is no coherent and meaningful oversight and data governance framework that ensures accountable data use in an increasingly digitalised education sector. In this article, I contextualise the issues arising from education data transactions in one school district in the United States. The case study helps to contextualise what needs governance, who may access education data and how the district governs data use and transactions, emphasising the need for a coherent education data governance but also the limitations of such isolated efforts." 

Scooped by Roxana Marachi, PhD!

Proctorio Is Going After Digital Rights Groups That Bash Their Proctoring Software //

"The controversial exam surveillance company has filed a subpoena against Fight For The Future, continuing an aggressive campaign against its critics." 

By Janus Rose

"Proctorio, the company behind invasive exam monitoring software that has drawn the ire of students throughout the pandemic, has subpoenaed a prominent digital rights group in what privacy advocates are calling another attempt by the notoriously litigious company to silence its critics.

Fight For The Future, a group which has run a campaign opposing the use of online proctoring software, said it received a broad subpoena demanding internal communications related to the company. The subpoena demands that the group surrender communications between Fight For The Future and other critics of the company, including the Electronic Frontier Foundation (EFF) and Ian Linkletter, a researcher who was sued by the company in 2020 after posting a critical analysis of the software that linked to public training videos on YouTube.


The subpoena was issued as part of an ongoing lawsuit against Proctorio filed by the EFF on behalf of Erik Johnson, a student at Miami University who publicly criticized the company. Johnson’s tweets criticizing Proctorio were removed after the company claimed they had violated copyright under the Digital Millennium Copyright Act (DMCA), and Proctorio’s subpoena specifically demands that Fight For The Future hand over “all documents and communications related to [Johnson].”

Notably, Fight for the Future is not a party in this lawsuit.

The subpoena also calls on Fight For The Future to surrender documents related to the online proctoring industry as a whole, as well as communications between the organization and Linkletter.

Fight For The Future released a statement condemning the move, saying the group “will not be silenced or intimidated.”

“Proctorio’s attempts to bully us through their legal team will not change our principled view that surveillance-based eproctoring is inherently harmful and incompatible with students’ basic human rights and safety,” the group said in a statement published to its website and emailed to Motherboard. “Nor will it deter us from running campaigns pressuring institutions to cut ties with Proctorio and other eproctoring companies."


In response to a request for comment, lawyers representing Proctorio scoffed at the claims, stating that the scope of their subpoena is reasonable.

“We seek these highly-relevant documents solely to defend against Mr. Johnson’s baseless claims and to support Proctorio’s counterclaims, in full accordance with the Federal Rules of Civil Procedure,” wrote Justin Kingsolver, a representative for Crowell & Moring LLP, which is representing Proctorio in the case. “Our subpoena is narrowly tailored to seek only the most essential documents, and we have made every reasonable attempt to work with FFTF’s counsel to narrow the requests even further, but FFTF has refused to engage.”

Evan Greer, the deputy director of Fight For The Future, pushed back against Proctorio’s claims that the subpoena is “narrowly tailored” in an email sent to Motherboard.

“Even after we pushed back, Proctorio is still demanding that we hand over internal communications about our advocacy work that have no bearing on Proctorio's case with Erik Johnson,” Greer told Motherboard. “Proctorio says our internal communications will aid in their lawsuit against Johnson. They won’t – but they could be used to harass us and other privacy advocates.”

Like other companies offering exam surveillance tools, Proctorio has faced widespread criticism from students, teachers, parents, and digital security experts since its rise to prominence during the pandemic-era shift to remote learning. The software’s predictive algorithms automatically flag students for “abnormal” and “suspicious” behavior based on head and eye movements, mouse scrolling, and other metrics, raising fears of discrimination against neurodivergent students. The software also uses facial recognition algorithms that have been shown to be racially biased; one researcher found that Proctorio failed to detect Black faces 57 percent of the time.

Some schools have responded to the criticism by dropping support for Proctorio, and both universities and students have reported that the monitoring software doesn’t actually prevent cheating. Students using Proctorio and similar exam monitoring tools have also said that they often need to jump through hoops just to get the software to work."... 

For full post, please visit: 


Scooped by Roxana Marachi, PhD!

Information Privacy and the Inference Economy by Alicia Solow-Niederman // SSRN


"Information privacy is in trouble. Contemporary information privacy protections emphasize individuals’ control over their own personal information. But machine learning, the leading form of artificial intelligence, facilitates an inference economy that strains this protective approach past its breaking point. Machine learning provides pathways to use data and make probabilistic predictions—inferences—that are inadequately addressed by the current regime. For one, seemingly innocuous or irrelevant data can generate machine learning insights, making it impossible for an individual to anticipate what kinds of data warrant protection. Moreover, it is possible to aggregate myriad individuals’ data within machine learning models, identify patterns, and then apply the patterns to make inferences about other people who may or may not be part of the original data set. The inferential pathways created by such models shift away from “your” data, and towards a new category of “information that might be about you.” And because our law assumes that privacy is about personal, identifiable information, we miss the privacy interests implicated when aggregated data that is neither personal nor identifiable can be used to make inferences about you, me, and others.
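The aggregation-and-inference pathway described in the abstract can be illustrated with a minimal sketch: a toy model fit on aggregated records makes a prediction about a person who contributed no data at all. The records, labels, and feature names below are entirely hypothetical.

```python
# Hypothetical aggregated records: (hours_online, pages_clicked) -> outcome.
train = [
    ((1, 10), "at_risk"),
    ((2, 15), "at_risk"),
    ((3, 20), "at_risk"),
    ((7, 70), "retained"),
    ((8, 80), "retained"),
    ((9, 95), "retained"),
]

def predict(point):
    """1-nearest-neighbour: label a new person by the closest
    pattern found in the aggregated data."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda rec: sq_dist(rec[0], point))[1]

# A student who contributed no training data still gets an
# inference made about them -- "information that might be about you."
print(predict((8, 85)))  # retained
```

The point of the sketch is the asymmetry the article describes: the person being classified never appears in the data set, so a consent-and-control regime keyed to "your" data never touches this prediction.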

This Article contends that accounting for the power and peril of inferences requires reframing information privacy governance as a network of organizational relationships to manage—not merely a set of data flows to constrain. The status quo magnifies the power of organizations that collect and process data, while disempowering the people who provide data and who are affected by data-driven decisions. It ignores the triangular relationship among collectors, processors, and people and, in particular, disregards the co-dependencies between organizations that collect data and organizations that process data to draw inferences. It is past time to rework the structure of our regulatory protections. This Article provides a framework to move forward. Accounting for organizational relationships reveals new sites for regulatory intervention and offers a more auspicious strategy to contend with the impact of data on human lives in our inference economy."


Keywords: Information Privacy, Data Privacy, Privacy Law, Law and Technology, AI and Law, Administrative Law, Regulation, Legislation, Machine Learning


To view original abstract and download full paper, please visit: 
