Educational Psychology & Technology: Critical Perspectives and Resources
This curated collection includes news, resources, and research related to the intersections of Educational Psychology and Technology. The page also serves as a research tool for organizing online content; a similar curation of posts may also be found at: . The grey funnel-shaped icon at the top allows searching by keyword. For research more specific to tech, screen time, and health/safety concerns, please see: . To learn about the next wave of privatization involving technology intersections with Pay For Success, Social Impact Bonds, and Results-Based Financing (often marketed with language promoting 'public-private partnerships'), see: . For additional Educator Resources, please visit: 
Scooped by Roxana Marachi, PhD!

When "Innovation" is Exploitation: Data Ethics, Data Harms and Why We Need to Demand Data Justice // Marachi, 2019, Summer Institute of A Black Education Network 

To download pdf, please click on title or arrow above.


For more on the data brokers selling personal information from a variety of platforms, including education, please see: 


Please also visit: Parent Coalition for Student Privacy


the Data Justice Lab:


and the Algorithmic Justice League:  


Scooped by Roxana Marachi, PhD!

Me2B Alliance Product Testing Report: Deeper Look at K-12 School Utility Apps Surprisingly Uncovers Global Advertising Company From CBS/Viacom, Unexpected Security Risks // Spotlight Report #4, Dec...

"TLDR: Common technical frameworks and app templates used by hundreds of organizations, when combined with technical weaknesses built into devices and operating systems from Google and Apple, are leading to unregulated, out-of-control student and parent data sharing with unexpected online advertising companies.


In our Spotlight Report #1 from May 2021, the Me2B Alliance Product Testing team audited and analyzed a random sample of 73 mobile applications on Android and iOS used by 38 schools in 14 states across the U.S. — apps used by at least half a million people (students, their families, educators, etc.).


The audit methodology in Spotlight Report #1 primarily consisted of evaluating the third-party code packages (also known as software development kits, or “SDKs”) included in each app, using an external database with historical data on SDKs in apps, combined with a Me2BA risk-scoring process.


After publishing Spotlight Report #1, we were contacted by the Student Data Privacy Project to examine apps used by 18 schools/districts for their FERPA complaint with the Department of Education. Our data supply testers noticed significant network traffic well beyond the SDK channels, as well as certain legacy development tactics that relied on in-app browsers opening websites within the apps. Our hypothesis was that many of the school utility apps were using in-app WebView methods to display content, and this was, indeed, the culprit. The WebView development technique allows external websites to open within an app without launching a separate browser (see Appendix A for an example and guidance on how to spot this technique). As a result, all of the vendors integrated into a website receive user data in the context of the app that opened that webpage in its in-app browser. For school utility apps, this so-called “context” typically includes the name of the user’s school or school district. We took a closer look at the network traffic to confirm the assumption and to determine the scope and scale of this data sharing.


Within the domains in the sample, we noticed significant amounts of network traffic associated with school sports pages and discovered a vendor providing sports scores for K-12 schools across the U.S., monetizing its “free service” with extremely aggressive advertising schemes baked right into these taxpayer-funded school utility apps.


The company providing this “free service for sports coaches,” monetized with online advertising, is MaxPreps, a subsidiary of CBS/Viacom, which also owns the popular kids’ television channel Nickelodeon.


We did not expect that our deep dive into WebView in-app browsers within K-12 school utility apps would require us to spend significant time researching a data supply chain owned by one of the largest media companies in the U.S., but we followed the data pipelines and the facts. As noted in Spotlight Report #1, in April 2021, Disney, CBS/Viacom, and about a dozen other companies were parties to the largest settlement against brokers of kids’ data in U.S. history. CBS and other parties were required to make changes to some of their products and delete certain data. Yet MaxPreps seems never to have come up in that lawsuit, it has never come up in any significant public reporting or research, and any changes to other CBS products resulting from the settlement do not appear to have made their way to MaxPreps products. While MaxPreps was never mentioned in the California settlement, and its details seemingly would not have required CBS/Viacom to make changes to MaxPreps, it is clear that the behavior the settlement called out, which CBS/Viacom agreed to stop, is similar to, if not worse than, what happens within this subsidiary that offers free products for schools.


Our research took another unexpected turn in the course of the deep dive into MaxPreps, when we came across a handful of “dangling domains.” We wrote about one such dangling domain that Apple quietly purchased for $3,695 in late September 2021. The domain for sale was previously owned by a company that went bankrupt, and it was integrated into a legacy SDK product across 159 mobile apps (155 of them on Apple’s iOS marketplace) with a potential install base of tens of millions of devices.


In addition to the dangling domains, we also observed several hijacked domains leading to malicious sites. In at least one instance, we watched in dismay as a dangling domain was purchased by an unknown actor over the course of a few days.


The following apps/domains fell prey to hijackers before we could intervene: 

  • The Santa Monica-Malibu USD Android app from Blackboard Inc. had a dangling domain of “” – to this day, this domain hosts a fake legal website, and there could still be risks from Business Email Compromise schemes or other abuses of the fact that this was a real domain used by a school district in one of the wealthiest counties in the United States. A Google search result shows files in which this legacy domain was referenced as valid – other government agencies communicated with this domain at various points in the past.

  • Maryland’s largest school district’s Android app, also from Blackboard Inc., had already lost its sports domain by the time we discovered the issue, with Magruderathletics (WARNING).org compromised and still hosting malicious redirects to this very day. After the Me2B Alliance alerted Blackboard Inc., they were able to quickly remove this domain from their active mobile app, reducing some of the risks. This is still an active domain, however, and Business Email Compromise risks for emails that originate from it (i.e., “”) remain a real threat.

  • The Quinlan, Texas school district had a domain integrated into its Android app that went up for sale for $30 and was purchased before anyone could take action. After the Me2B Alliance alerted Blackboard Inc., the dangling domain link was removed from the app, and the Android app was subsequently pulled from the Google Play Store.
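
The dangling-domain failure mode in the cases above can be checked for mechanically: any hostname still referenced by a shipped app that no longer resolves is a takeover candidate. The sketch below is an illustrative Python check, not the Me2B Alliance's actual tooling, and the domain names in the demo are invented.

```python
import socket

def find_dangling_domains(domains, resolve=None):
    """Return the subset of `domains` that no longer resolve in DNS.

    A non-resolving domain that is still wired into a published app is a
    candidate "dangling domain": if its registration has lapsed, anyone
    can buy it and serve arbitrary content inside the app's WebView.
    `resolve` maps a hostname to an address and raises OSError on
    failure; it defaults to a live DNS lookup but can be stubbed out.
    """
    if resolve is None:
        resolve = socket.gethostbyname
    dangling = []
    for domain in domains:
        try:
            resolve(domain)          # resolves: still registered
        except OSError:
            dangling.append(domain)  # lookup failure: flag for review
    return dangling

# Demo with a stubbed resolver (no network; hostnames are hypothetical).
def _stub(host):
    if host == "districtathletics.example":
        raise OSError("no such host")
    return "192.0.2.1"

print(find_dangling_domains(
    ["district.example", "districtathletics.example"], resolve=_stub))
# prints ['districtathletics.example']
```

A flagged domain is only a lead: the follow-up is to check whether it is listed for sale and what the app actually does with it.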

The research we are releasing today focuses on an intensive evaluation of 11 school utility apps (from an original pool of 18 apps) made by companies who support thousands of other schools with similar app frameworks. 

In short, the use of WebView in school utility apps, and the operational challenges of maintaining them, creates a significant channel for data sharing and also introduces serious security risks. If people using these mobile apps “can’t choose their own browser,” they can’t make informed choices that empower them to block and stop some of these data transfers, which can be downright dangerous when an app for kids loads dangling domains in WebView interfaces. If Google and Apple merely made a few changes to empower users over developers, these risks for schools, kids, parents, and administrators would nearly completely disappear.

These risks have been compounded by certain companies providing “free software for schools” that purposefully monetize these free tools for apps and websites via data sharing and online advertising, with the new Me2B Alliance research focusing on the CBS/Viacom subsidiary MaxPreps.

Another way to think about this research is that we have attempted to point out a technical framework that numerous school utility apps are using. It relies on a type of Content Management System (CMS) that allows school administrators to “add links” into an app without submitting a new version of that app to the app store; the links added are merely web URLs, with the web content rendering in the app’s in-app browser. The websites rendering in the apps contain advertising pixels and JavaScript code, which collect data within the apps when opened by users and share that access and user data with other companies – hundreds of them, sometimes more.

Rarely do app privacy labels account for these data transfers, and neither the app makers, the schools, nor the vendors collecting data within those apps and WebView URLs are currently taking accountability for making these kids’ data flows safer. We are surprised and alarmed by this “advertising for kids” architecture operating within school utility apps paid for by taxpayers, with some companies seemingly earning sizable revenues from these data pipelines. As a result, serious questions need to be asked of all the organizations participating in these schemes.

This report includes guidance in Appendix A on how to identify a school utility app with potentially unsafe WebView links, which we hope gives investigative journalists, data auditors, school administrators, parents, students, app developers, and everyone in between a way to recognize when an app is opening web links.
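
As a rough companion to that Appendix A guidance, the sketch below shows one way to surface candidate WebView targets: scan an app's decoded resources (for instance, strings pulled out of an APK, or a CMS link manifest) for embedded web URLs and list the unique hosts. This is an illustrative approximation, not the report's methodology, and the sample strings are invented.

```python
import re
from urllib.parse import urlparse

# Match http(s) URLs embedded in decoded app resources.
URL_RE = re.compile(r"https?://[^\s\"'<>]+")

def webview_hosts(decoded_text):
    """List the unique hostnames of web URLs embedded in app resources.

    Each host is a site the app may render in an in-app WebView, and
    every advertising pixel or script on that site then runs inside the
    app, receiving the app's "context" along with user data.
    """
    hosts = set()
    for url in URL_RE.findall(decoded_text):
        host = urlparse(url).netloc.lower()
        if host:
            hosts.add(host)
    return sorted(hosts)

# Hypothetical decoded snippet from a school utility app.
sample = '''
<string name="sports_link">https://athletics.district.example/scores?team=7</string>
<string name="news_link">https://www.district.example/news</string>
loadUrl("https://cdn.adnetwork.example/pixel.js")
'''
print(webview_hosts(sample))
# prints ['athletics.district.example', 'cdn.adnetwork.example', 'www.district.example']
```

Hosts surfaced this way can then be cross-checked against ad-tech blocklists and current DNS records.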

If you’re interested in having the Me2B Alliance take a closer look at your school’s apps, please contact us at


To download full pdf, please visit: 

Scooped by Roxana Marachi, PhD!

The Popular Family Safety App Life360 Is Selling Precise Location Data on Its Tens of Millions of Users // The Markup

By Jon Keegan and Alfred Ng

"Life360, a popular family safety app used by 33 million people worldwide, has been marketed as a great way for parents to track their children’s movements using their cellphones. The Markup has learned, however, that the app is selling data on kids’ and families’ whereabouts to approximately a dozen data brokers who have sold data to virtually anyone who wants to buy it. 


Through interviews with two former employees of the company, along with two individuals who formerly worked at location data brokers Cuebiq and X-Mode, The Markup discovered that the app acts as a firehose of data for a controversial industry that has operated in the shadows with few safeguards to prevent the misuse of this sensitive information. The former employees spoke with The Markup on the condition that we not use their names, as they are all still employed in the data industry. They said they agreed to talk because of concerns with the location data industry’s security and privacy and a desire to shed more light on the opaque location data economy. All of them described Life360 as one of the largest sources of data for the industry. 


“We have no means to confirm or deny the accuracy” of whether Life360 is among the largest sources of data for the industry, Life360 founder and CEO Chris Hulls said in an emailed response to questions from The Markup. “We see data as an important part of our business model that allows us to keep the core Life360 services free for the majority of our users, including features that have improved driver safety and saved numerous lives.”


A former X-Mode engineer said the raw location data the company received from Life360 was among X-Mode’s most valuable offerings due to the sheer volume and precision of the data. A former Cuebiq employee joked that the company wouldn’t be able to run its marketing campaigns without Life360’s constant flow of location data.


The Markup was able to confirm with a former Life360 employee and a former employee of X-Mode that X-Mode—in addition to Cuebiq and Allstate’s Arity, which the company discloses in its privacy policy—is among the companies that Life360 sells data to. The former Life360 employee also told us Safegraph was among the buyers, which was confirmed by an email from a Life360 executive that was viewed by The Markup. There are potentially more companies that benefit from Life360’s data based on those partners’ customers. 


Hulls declined to disclose a full list of Life360’s data customers and declined to confirm that Safegraph is among them, citing confidentiality clauses, which he said are in the majority of its business contracts. Data partners are only publicly disclosed when partners request transparency or there’s “a particular reason to do so,” Hulls said. He did confirm that X-Mode buys data from Life360 and that it is one of “approximately one dozen data partners.” Hulls added that the company would be supportive of legislation that would require public disclosure of such partners.


X-Mode, SafeGraph, and Cuebiq are known location data companies that supply data and insights gleaned from that data to other industry players, as well as customers like hedge funds or firms that deal in targeted advertising. 


Cuebiq spokesperson Bill Daddi said in an email that the company doesn’t sell raw location data but provides access to an aggregated set of data through its “Workbench” tool to customers including the Centers for Disease Control and Prevention. Cuebiq, which receives raw location data from Life360, has publicly disclosed its partnership with the CDC to track “mobility trends” related to the COVID-19 pandemic.


“The CDC only exports aggregate, privacy-safe analytics for research purposes, which completely anonymizes any individual user data,” Daddi said. “Cuebiq does not sell data to law enforcement agencies or provide raw data feeds to government partners (unlike others, such as X-Mode and SafeGraph).”

X-Mode has sold location data to the U.S. Department of Defense, and SafeGraph has sold location data to the CDC, according to public records.


X-Mode and SafeGraph didn’t respond to requests for comment.

The Life360 CEO said that in 2020 the company implemented a policy prohibiting the sale or marketing of Life360’s data to government agencies for law enforcement purposes, though the company has been selling data since at least 2016. 


“From a philosophical standpoint, we do not believe it is appropriate for government agencies to attempt to obtain data in the commercial market as a way to bypass an individual’s right to due process,” Hulls said. 

The policy also applies to any companies that Life360’s customers share data with, he said. Hulls said the company maintains “an open and ongoing dialogue” with its customers to ensure they comply with the policy, though he acknowledged that it was a challenge to monitor partners’ activities. 


Life360 discloses in the fine print of its privacy policy that it sells the data it gleans from app users, but Justin Sherman, a cyber policy fellow at the Duke Tech Policy Lab, said people are probably not aware of how far their data can travel.

The company’s privacy policy notes Life360 “may also share your information with third parties in a form that does not reasonably identify you directly. These third parties may use the de-identified information for any purpose.”


“Families probably would not like the slogan, ‘You can watch where your kids are, and so can anyone who buys this information,’ ” Sherman said.


Two former Life360 employees also told The Markup that the company, while it states it anonymizes the data it sells, fails to take necessary precautions to ensure that location histories cannot be traced back to individuals. They said that while the company removed the most obvious identifying user information, it did not make efforts to “fuzz,” “hash,” aggregate, or reduce the precision of the location data to preserve privacy."... 


For full post, please visit: 

Scooped by Roxana Marachi, PhD!

Meta will continue to use facial recognition technology, actually // Input Magazine

By Matt Wille
"Earlier this week, Facebook made the somewhat shocking announcement that it would be shutting down its facial recognition systems. But now, Facebook’s parent company, Meta, has walked that promise back a bit. A lot, really.

Meta is not planning to hold back its use of facial recognition technology in its forthcoming metaverse products. Facebook’s new parent company told Recode that the social network’s commitment does not in any way apply to the metaverse. The metaverse will abide by its own rules, thank you very much. In fact, Meta spokesperson Jason Grosse says the company is already experimenting with different ways to bring biometrics into the metaverse equation.


“We believe this technology has the potential to enable positive use cases in the future that maintain privacy, control, and transparency, and it’s an approach we’ll continue to explore as we consider how our future computing platforms and devices can serve people’s needs,” Grosse said of the technology.

Sigh. We should’ve seen that one coming. Changing the company’s name did nothing to alter its underhanded business strategies.

LOL JK — Just a week after its rebrand, Meta is making it clear that taking the “Facebook” out of Facebook did nothing to actually change the company. One of Meta’s first actions as a new company is making a big deal out of shutting down its facial recognition tech, only to a few days later say, “Oh, we didn’t mean it like that.”


In announcing the seemingly all-encompassing shutdown, Meta failed to mention a key fact: that it would not be eliminating DeepFace, its house-made facial recognition algorithms, from its servers. We only learned of this pivotal information because Grosse spilled the beans to The New York Times. Grosse did say, at that point, that Meta hadn’t “ruled out” using facial recognition in the future — but he failed to mention that Meta had already begun discussing how it could use biometrics in its future products."...


For full post, please visit: 

Scooped by Roxana Marachi, PhD!

What are the risks of Virtual Reality data? Learning Analytics, Algorithmic Bias and a Fantasy of Perfect Data // Marcus Carter & Ben Egliston (2021). New Media and Society

"Virtual reality (VR) is an emerging technology with the potential to extract significantly more data about learners and the learning process. In this article, we present an analysis of how VR education technology companies frame, use and analyse this data. We found both an expansion and acceleration of what data are being collected about learners and how these data are being mobilized in potentially discriminatory and problematic ways. Beyond providing evidence for how VR represents an intensification of the datafication of education, we discuss three interrelated critical issues that are specific to VR: the fantasy that VR data is ‘perfect’, the datafication of soft-skills training, and the commercialisation and commodification of VR data. In the context of the issues identified, we caution the unregulated and uncritical application of learning analytics to the data that are collected from VR training."


To download, click on link below (author version online) 

Scooped by Roxana Marachi, PhD!

Algorithms and Economic Justice: A Taxonomy of Harms and a Path Forward for the Federal Trade Commission // Slaughter, 2021, Yale Journal of Law & Technology 

To download, click on title or arrow above. 

Scooped by Roxana Marachi, PhD!

Collection and Processing of Data from Wrist Wearable Devices in Heterogeneous and Multiple-User Scenarios // Sensors (Arriba-Pérez, Caeiro-Rodríguez, & Santos-Gago, 2021)


"Over recent years, we have witnessed the development of mobile and wearable technologies to collect data from human vital signs and activities. Nowadays, wrist wearables including sensors (e.g., heart rate, accelerometer, pedometer) that provide valuable data are common in market. We are working on the analytic exploitation of this kind of data towards the support of learners and teachers in educational contexts. More precisely, sleep and stress indicators are defined to assist teachers and learners on the regulation of their activities. During this development, we have identified interoperability challenges related to the collection and processing of data from wearable devices. Different vendors adopt specific approaches about the way data can be collected from wearables into third-party systems. This hinders such developments as the one that we are carrying out. This paper contributes to identifying key interoperability issues in this kind of scenario and proposes guidelines to solve them. Taking into account these topics, this work is situated in the context of the standardization activities being carried out in the Internet of Things and Machine to Machine domains."

Keywords: wearable sensors, wearable computing, data interoperability, internet of things, machine to machine
Scooped by Roxana Marachi, PhD!

Normalizing Surveillance (Selinger & Rhee, 2021) // Northern European Journal of Philosophy (via SSRN)


Definitions of privacy change, as do norms for protecting it. Why, then, are privacy scholars and activists currently worried about “normalization”? This essay explains what normalization means in the context of surveillance concerns and clarifies why normalization has significant governance consequences. We emphasize two things. First, the present is a transitional moment in history. AI-infused surveillance tools offer a window into the unprecedented dangers of automated real-time monitoring and analysis. Second, privacy scholars and activists can better integrate supporting evidence to counter skepticism about their most disturbing and speculative claims about normalization. Empirical results in moral psychology support the assertion that widespread surveillance typically will lead people to become favorably disposed toward it. If this causal dynamic is pervasive, it can diminish autonomy and contribute to a slippery slope trajectory that diminishes privacy and civil liberties.


Keywords: normalization, surveillance, privacy, civil liberties, moral psychology, function creep, surveillance creep, slippery slope arguments

For original post and to download full pdf, please visit: 
Scooped by Roxana Marachi, PhD!

"Education 3.0: The Internet of Education" [Slidedeck]

This link was forwarded by a colleague on October 3rd, 2021 (posted by Greg Nadeau on LinkedIn). The pdf above is a download of the slide deck as of 10/3/21 (click on title or arrow above to download). The live link is posted at the following URL: 


See also: 

Scooped by Roxana Marachi, PhD!

The Impossibility of Automating Ambiguity // Abeba Birhane, MIT Press

"On the one hand, complexity science and enactive and embodied cognitive science approaches emphasize that people, as complex adaptive systems, are ambiguous, indeterminable, and inherently unpredictable. On the other, Machine Learning (ML) systems that claim to predict human behaviour are becoming ubiquitous in all spheres of social life. I contend that ubiquitous Artificial Intelligence (AI) and ML systems are close descendants of the Cartesian and Newtonian worldview in so far as they are tools that fundamentally sort, categorize, and classify the world, and forecast the future. Through the practice of clustering, sorting, and predicting human behaviour and action, these systems impose order, equilibrium, and stability to the active, fluid, messy, and unpredictable nature of human behaviour and the social world at large. Grounded in complexity science and enactive and embodied cognitive science approaches, this article emphasizes why people, embedded in social systems, are indeterminable and unpredictable. When ML systems “pick up” patterns and clusters, this often amounts to identifying historically and socially held norms, conventions, and stereotypes. Machine prediction of social behaviour, I argue, is not only erroneous but also presents real harm to those at the margins of society."


To download full document: 

Scooped by Roxana Marachi, PhD!

Protecting Kids Online: Internet Privacy and Manipulative Marketing - U.S. Senate Subcommittee on Consumer Protection, Product Safety, and Data Security

"WASHINGTON, D.C.— U.S. Senator Richard Blumenthal (D-CT), the Chair of the Subcommittee on Consumer Protection, Product Safety, and Data Security, will convene a hearing titled, “Protecting Kids Online: Internet Privacy and Manipulative Marketing” at 10 a.m. on Tuesday, May 18, 2021. Skyrocketing screen time has deepened parents’ concerns about their children’s online safety, privacy, and wellbeing. Apps such as TikTok, Facebook Messenger, and Instagram draw younger audiences onto their platforms, raising concerns about how their data is being used and how marketers are targeting them. This hearing will examine the issues posed by Big Tech, child-oriented apps, and manipulative influencer marketing. The hearing will also explore needed improvements to our laws and enforcement, such as the Children’s Online Privacy Protection Act, child safety codes, and the Federal Trade Commission’s advertising disclosure guidance."


Witness Panel 1:

  • Ms. Angela Campbell, Professor Emeritus, Georgetown Law
  • Mr. Serge Egelman, Research Director, Usable Security and Privacy, International Computer Science Institute, University of California Berkeley
  • Ms. Beeban Kidron, Founder and Chair, 5Rights

Hearing Details:

Tuesday, May 18, 2021

10:00 a.m. EDT

Subcommittee on Consumer Protection, Product Safety, and Data Security (Hybrid)  



Scooped by Roxana Marachi, PhD!

More than 40 attorneys general ask Facebook to abandon plans to build Instagram for kids //

By Lauren Feiner

"Attorneys general from 44 states and territories urged Facebook to abandon its plans to create an Instagram service for kids under the age of 13, citing detrimental health effects of social media on kids and Facebook’s reportedly checkered past of protecting children on its platform.

Monday’s letter follows questioning from federal lawmakers who have also expressed concern over social media’s impact on children. The topic was a major theme that emerged from lawmakers at a House hearing in March with Facebook CEO Mark Zuckerberg, Google CEO Sundar Pichai and Twitter CEO Jack Dorsey. Republican staff for that committee later highlighted online protection for kids as the main principle lawmakers should consider in their legislation.


BuzzFeed News reported in March that Facebook had been exploring creating an Instagram service for children, based on internal documents it obtained.

Protecting children from harm online appears to be one of the rare motivators both Democrats and Republicans can agree on, which puts additional pressure on any company creating an online service for kids.

In Monday’s letter to Zuckerberg, the bipartisan group of AGs cited news reports and research findings that social media and Instagram, in particular, had a negative effect on kids’ mental well-being, including lower self-esteem and suicidal ideation.

The attorneys general also said young kids “are not equipped to handle the range of challenges that come with having an Instagram account.” Those challenges include online privacy, the permanence of internet posts, and navigating what’s appropriate to view and share. They noted that Facebook and Instagram had reported 20 million child sexual abuse images in 2020.

Officials also based their skepticism on Facebook’s history with products aimed at children, saying it “has a record of failing to protect the safety and privacy of children on its platform, despite claims that its products have strict privacy controls.” Citing news reports from 2019, the AGs said that Facebook’s Messenger Kids app for children between 6 and 12 years old “contained a significant design flaw that allowed children to circumvent restrictions on online interactions and join group chats with strangers that were not previously approved by the children’s parents.” They also referenced a recently reported “mistake” in Instagram’s algorithm that served diet-related content to users with eating disorders.


“It appears that Facebook is not responding to a need, but instead creating one, as this platform appeals primarily to children who otherwise do not or would not have an Instagram account,” the AGs wrote. “In short, an Instagram platform for young children is harmful for myriad reasons. The attorneys general urge Facebook to abandon its plans to launch this new platform.”

In a statement, a Facebook spokesperson said the company has “just started exploring a version of Instagram for kids,” and committed to not show ads “in any Instagram experience we develop for people under the age of 13.”

“We agree that any experience we develop must prioritize their safety and privacy, and we will consult with experts in child development, child safety and mental health, and privacy advocates to inform it. We also look forward to working with legislators and regulators, including the nation’s attorneys general,” the spokesperson said.

After publication, Facebook sent an updated statement acknowledging that since children are already using the internet, “We want to improve this situation by delivering experiences that give parents visibility and control over what their kids are doing. We are developing these experiences in consultation with experts in child development, child safety and mental health, and privacy advocates.”

Facebook isn’t the only social media platform that’s created services for children. Google-owned YouTube has a kids service, for example, though, as with any internet service, there are usually ways for children to lie about their age to access the main site. In 2019, YouTube reached a $170 million settlement with the Federal Trade Commission and New York attorney general over claims it illegally earned money from collecting the personal information of kids without parental consent, allegedly violating the Children’s Online Privacy Protection Act (COPPA).

Following the settlement, YouTube said in a blog post it will limit data collection on videos aimed at children, regardless of the age of the user actually watching. It also said it will stop serving personalized ads on child-focused content and disable comments and notifications on them."... 


For full post, please visit: 

Scooped by Roxana Marachi, PhD!

Silicon Valley, Philanthrocapitalism, and Policy Shifts from Teachers to Tech // Chapter in Strike for the Common Good: Fighting for the Future of Public Education 

Marachi, R., & Carpenter, R. (2020). Silicon Valley, philanthrocapitalism, and policy shifts from teachers to tech. In R. K. Givan & A. S. Lang (Eds.), Strike for the Common Good: Fighting for the Future of Public Education. Ann Arbor: University of Michigan Press.


Author version above (click down arrow to access). For final publication, please visit: 

Scooped by Roxana Marachi, PhD!

Toward a Critical Approach for OER: A Case Study in Removing the 'Big Five' from OER Creation (Joseph, Guy, & McNally, 2019) // Open Praxis 


"This paper examines the role of proprietary software in the production of open educational resources (OER). Using a single case study, the paper explores the implications of removing proprietary software from an OER project, with the aim of examining how complicated such a process is and whether removing such software meaningfully advances a critical approach to OER. The analysis reveals that software from the Big Five technology companies (Apple, Alphabet/Google, Amazon, Facebook and Microsoft) is deeply embedded in OER production and distribution, and that complete elimination of software or services from these companies is not feasible. The paper concludes by positing that simply rejecting Big Five technology introduces too many challenges to be justified on a pragmatic basis; however, it encourages OER creators to remain critical in their use of technology and continue to try to advance a critical approach to OER."


To download, click on title, arrow above, or link below: 

Scooped by Roxana Marachi, PhD!

Borrowed a School Laptop? Mind Your Open Tabs // WIRED


Students—many from lower-income households—were likely to use school-issued devices for remote learning. But the devices often contained monitoring software.


By Sidney Fussell

"When tens of millions of students suddenly had to learn remotely, schools lent laptops and tablets to those without them. But those devices typically came with monitoring software, marketed as a way to protect students and keep them on-task. Now, some privacy advocates, parents, and teachers say that software created a new digital divide, limiting what some students could do and putting them at increased risk of disciplinary action.

One day last fall, Ramsey Hootman’s son, then a fifth grader in the West Contra Costa School District in California, came to her with a problem: He was trying to write a social studies report when the tabs on his browser kept closing. Every time he tried to open a new tab to study, it disappeared.


It wasn’t an accident. When Hootman emailed the teacher, she says she was told, “‘Oh, surprise, we have this new software where we can monitor everything your child is doing throughout the day and can see exactly what they're seeing, and we can close all their tabs if we want.’”


Hootman soon learned that all of the district’s school-issued devices use Securly, student-monitoring software that lets teachers see a student’s screen in real time and even close tabs if they discover a student is off-task. During class time, students were expected to have only two tabs open. After Hootman’s complaint, the district raised the limit to five tabs.


But Hootman says she and other parents wouldn’t have chosen school-issued devices if they knew the extent of the monitoring. (“I’m lucky that’s an option for us,” she says.) She also worried that when monitoring software automatically closes tabs or otherwise penalizes multitasking, it makes it harder for students to cultivate their own ability to focus and build discipline.


“As parents, we spend a lot of time helping our kids figure out how to balance schoolwork and other stuff,” she says. “Obviously, the internet is a big distraction, and we're working with them on being able to manage distractions. You can't do that if everything is already decided for you.”


Ryan Phillips, communications director for the school district, says Securly’s features are designed to protect students’ privacy, are required only for district-issued devices, and that teachers can view a student’s computer only during school hours. Securly did not respond to a request for comment before this article was published. After it was initially published, a Securly spokesperson said district administrators can disable screen viewing, the product notifies students when a class session begins, and schools can limit teachers to starting class sessions only during school hours.


In a report earlier this month, the Center for Democracy and Technology, a Washington, DC-based tech policy nonprofit, said the software installed on school-issued computers essentially created two classes of students. Those from lower-income households were more likely to use school-issued computers, and therefore more likely to be monitored.


“Our hypothesis was there are certain groups of students, more likely those attending lower-income schools, who are going to be more reliant on school-issued devices and therefore be subject to more surveillance and tracking than their peers who can essentially afford to opt out,” explains Elizabeth Laird, one of the report’s authors.


The report found that Black and Hispanic families were more reliant on school devices than their white counterparts and were more likely to voice concern about the potential disciplinary consequences of the monitoring software.


The group said monitoring software, from companies like Securly and GoGuardian, offers a range of capabilities, from blocking access to adult content and flagging certain keywords (slurs, profanity, terms associated with self-harm, violence, etc.) to allowing teachers to see students’ screens in real time and make changes."
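To make the keyword-flagging capability described above concrete, here is a minimal, purely hypothetical sketch in Python. It is not drawn from any vendor's actual code; the watchlist, categories, and function name are invented for illustration, and commercial products use far larger vendor-maintained lists and additional classification techniques.

```python
# Illustrative sketch of keyword-based flagging, of the kind the CDT report
# attributes to student-monitoring software. The watchlist and categories
# below are invented for illustration only.
import re

WATCHLIST = {
    "self-harm": ["hurt myself", "suicide"],
    "violence": ["bring a gun", "kill"],
}


def flag_text(text: str) -> list[tuple[str, str]]:
    """Return (category, phrase) pairs for each watchlist phrase found in text."""
    hits = []
    lowered = text.lower()
    for category, phrases in WATCHLIST.items():
        for phrase in phrases:
            # Word boundaries avoid flagging substrings like "skill" for "kill".
            if re.search(r"\b" + re.escape(phrase) + r"\b", lowered):
                hits.append((category, phrase))
    return hits


print(flag_text("I want to hurt myself"))  # [('self-harm', 'hurt myself')]
```

Even this toy version shows why context is lost: a student researching suicide prevention for a health class would be flagged identically to a student in crisis, which is one reason critics question the disciplinary consequences attached to such flags.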

Scooped by Roxana Marachi, PhD!

The Problems with Immersive Advertising: In AR/VR, Nobody Knows You Are an Ad // Heller & Bar-Zeev, 2021, Journal of Online Trust and Safety


"Imagine five years from now, you’re walking down a street wearing your own mixed reality glasses. They’re sleek, comfortable and increasingly fashionable. A virtual car drives by—it’s no coincidence that it’s the exact model you’ve been saving for. Your level of interest is noted. A hipster passes on the sidewalk wearing some limited-edition sneakers. Given your excitement, a prompt to buy “copy #93/100” shows up nearby. You jump at the chance, despite the hefty price. They’ll be waiting for you when you get home.


Cinema and television have long been imagining what advertising will look like in XR (known alternatively as eXtended Reality and as the entire Mixed Reality continuum from Virtual Reality (VR) to Augmented Reality (AR)). We’re reaching the point where science fiction is rapidly becoming reality.


If you’ve watched professional sports on TV in the past decade, you’ve almost certainly experienced a form of augmented advertising. Your friend watching the same game across the country will likely see different ads—not just the commercials but the actual billboards in the stadium behind the players may be replaced to suit the local advertising market.


VR-based advertising is in its infancy, and it looks very different from traditional advertising because of the way immersive media works. Instead of simple product placement, think about immersive ads as placement within the product. The advertising is experiential, using characteristics of media and entertainment that came before, alongside embodiment (a feeling of physical presence in a virtual space) and full immersion into a digital world. In immersive environments, creators completely determine what is seen, heard, and experienced by the user. This is not just influencing your feelings and impulses, but placing you in a controlled environment that your brain will interpret as real.


As advertising in immersive contexts takes off, we should do more than marvel at the Pokémon we meet in the street. AR and VR experiences have profound effects on cognition that are different from how we take in and process information in other media. For example, our brains interpret an assault in a VR world, even in a cartoonish environment, just as if we are being attacked in our own homes. These implications get even more complex when we consider paid content: how will the unique characteristics of immersive worlds be used to persuade and drive behaviors?"....


For full document, please visit: 



Heller, B., & Bar-Zeev, A. (2021). The Problems with Immersive Advertising: In AR/VR, Nobody Knows You Are an Ad. Journal of Online Trust and Safety, October 2021, 1-14.

Scooped by Roxana Marachi, PhD!

Academic Integrity and Anti-Black Aspects of Educational Surveillance and E-Proctoring (Parnther & Eaton, 2021) // Teachers College Record 


Academic Integrity and Anti-Black Aspects of Educational Surveillance and E-Proctoring

by Ceceilia Parnther & Sarah Elaine Eaton - June 23, 2021


"In this commentary, we address issues of equity, diversity, and inclusion as related to academic integrity. We speak specifically to the ways in which Black and other racialized minorities may be over-represented in those who get reported for academic misconduct, compared to their White peers. We further address the ways in which electronic and remote proctoring software (also known as e-proctoring) discriminates against students of darker skin tones. We conclude with a call to action to educational researchers everywhere to pay close attention to how surveillance technologies are used to propagate systemic racism in our learning institutions.



The rapid pivot to remote teaching during the COVID-19 pandemic resulted in colleges and universities turning to technological services to optimize classroom management with more frequency than ever before. Electronic proctoring technology (also known as e-proctoring or remote invigilation) is one such fast-growing service, with an expected industry valuation estimated to be $10 billion by 2026 (Learning Light, 2016). Students and faculty are increasingly concerned about the role e-proctoring technologies play in college exams.




We come to this work as educators, advocates, and scholars of academic integrity and educational ethics.


Ceceilia’s connection to this work lies in her personal and professional identities: “I am a Black, low socioeconomic status, first-generation college graduate and faculty member. The experiences of students who share my identity deeply resonate with me. While I’ve been fortunate to have support systems that helped me navigate college, I am keenly aware that my experience and opportunities are often the exceptions rather than the norm in a system historically designed to disregard, if not exclude, the experiences of minoritized populations. There were many moments where, in honor of the support I received, my career represents a commitment as an advocate, researcher, and teacher to student success and equitable systems in education.”


Sarah’s commitment to equity, diversity, and inclusion stems from experiences as a (White) first-generation student who grew up under the poverty line: “My formative experiences included living in servants’ quarters while my single mother worked as a full-time servant to a wealthy British family (see Eaton, 2020). Later, we moved to Halifax, Nova Scotia, where we settled in the North End, a section of the city that is home to many Black and Irish Catholic residents. Social and economic disparities propagated by race, social class, and religion impacted my lived experiences from an early age. I now work as a tenured associate professor of education, focusing on ethics and integrity in higher education, taking an advocacy and social justice approach to my research.”




Higher rates of reporting and adjudicated instances of academic misconduct make Black students especially susceptible to cheating accusations. The disproportionality of Black students charged and found responsible for student misconduct is most readily seen in a K–12 context (Fabelo et al., 2011). However, research supports this as a reasonable assertion in the higher education context (Trachtenberg, 2017; Bobrow, 2020), primarily due to implicit bias (Gillo, 2017). In other words, Black and other minoritized students are already starting from a position of disadvantage in terms of being reported for academic misconduct.


The notion of over-representation is important here. Over-representation happens when individuals from a particular sub-group are reported for crimes or misconduct more often than those of the dominant White population. When we extend this notion to academic misconduct, we see evidence that Black students are reported more often than their White peers. This is not indicative that Black students engage in more misconduct behaviors, but rather it is more likely that White students are forgiven or simply not reported for misconduct as often. The group most likely to be forgiven for student conduct issues without ever being reported are White females, leaving non-White males to be among those most frequently reported for misconduct (Fabelo et al., 2011). Assumptions such as these perpetuate a system that views White student behavior as appropriate, unchallenged, normative, and therefore more trustworthy. These issues are of significant concern in an increasingly diverse student environment.




For Black and other students of color, e-proctoring represents a particular threat to equity in academic integrity. Although technology in and of itself is not racist, a disproportionate impact of consequences experienced by Black students is worthy of further investigation. Many educational administrators have subscribed to the idea that outsourcing test proctoring to a neutral third party is an effective solution. The problem is these ideas are often based on sales pitches, rather than actual data. There is a paucity of data about the effectiveness of e-proctoring technologies in general and even less about its impact on Black and other racialized minority students.

However, there are plenty of reports that show that facial recognition software unfairly discriminates against people with darker skin tones. For example, Robert Julian-Borchak Williams, a Black man from Detroit, was wrongfully accused and arrested on charges of larceny on the basis of facial recognition software—which, as it turns out, was incorrect (Hill, 2020). Williams described the experience as “humiliating” (Hill, 2020). This example highlights not only the inequities of surveillance technologies, but also the devastating effects the software can have when the system is faulty.


Algorithms often make Whiteness normative, with Blackness then reduced to a measure of disparity. Facial recognition software that treats White faces as normative fails to distinguish phenotypically Black individuals at higher rates than Whites (Hood, 2020). Surveillance of living spaces for authentication creates uncomfortable requirements that are anxiety-inducing and prohibitive.


E-proctoring companies often provide colleges and universities contracts releasing them from culpability while also allowing them to collect biodata. For Black students, biodata collection for unarticulated purposes represents concerns rooted in a history of having Black biological information used in unethical and inappropriate ways (Williams, 2020).




As educators and researchers specializing in ethics and integrity, we do not view academic integrity research as being objective. Instead, we see academic integrity inquiry as the basis for advocacy and social justice. We conclude with a call to action to educational researchers everywhere to pay close attention to how surveillance technologies are used to propagate systemic racism in our learning institutions. This call should include increased research on the impact of surveillance on student success, examination, and accountability of the consequences of institutional use of and investment in e-proctoring software, and centering of student advocates who challenge e-proctoring."




Bobrow, A. G. (2020). Restoring honor: Ending racial disparities in university honor systems. Virginia Law Review, 106, 47–70.


Eaton, S. E. (2020). Challenging and critiquing notions of servant leadership: Lessons from my mother. In S. E. Eaton & A. Burns (Eds.), Women negotiating life in the academy: A Canadian perspective (pp. 15–23). Springer.


Fabelo, T., Thompson, M. D., Plotkin, M., Carmichael, D., Marchbanks III, M. P., & Booth, E. A. (2011). Breaking schools’ rules: A statewide study of how school discipline relates to students’ success and juvenile justice involvement. Retrieved from


Hood, J. (2020). Making the body electric: The politics of body-worn cameras and facial recognition in the United States. Surveillance & Society, 18(2), 157–169.


Learning Light. (2019, February 19). Online proctoring / remote invigilation – Soon a multibillion dollar market within eLearning & assessment. Retrieved May 23, 2020, from


Trachtenberg, B. (2017). How university Title IX enforcement and other discipline processes (probably) discriminate against minority students. Nevada Law Journal, 18(1), 107–164.


Williams, D. P. (2020). Fitting the description: Historical and sociotechnical elements of facial recognition and anti-black surveillance. Journal of Responsible Innovation, 7(sup1), 1–10."

Cite This Article as: Teachers College Record, Date Published: June 23, 2021 ID Number: 23752, Date Accessed: 6/25/2021 5:07:12 PM


For full post, please visit: 

Scooped by Roxana Marachi, PhD!

Technology and Inequality, Surveillance, and Privacy during COVID-19 // Denise Anthony, University of Michigan


By Denise Anthony, Professor, Sociology and Health Management and Policy, University of Michigan 


"Since the start of the pandemic, we have spent more time in our homes than we ever expected to—working from home (if fortunate to have that possibility); learning from home; being entertained with streaming services; and doing virtual happy hours, birthdays, and holidays. Now we even have doctor’s visits and consultations with our therapists from home.

The increasing availability of so-called Internet of Things (IoT) technology (i.e., internet-enabled devices such as smart TVs, smart speakers, video doorbells, and voice-activated virtual assistants like Amazon Alexa or Google Assistant), along with our smartphones, computers, and Wi-Fi internet access, has comforted, entertained, facilitated work and learning, and safeguarded us at home during the pandemic. Estimates suggest that roughly three-quarters of American adults have broadband internet service at home and about 69 percent of people in the U.S. have an IoT device/system in their home.

But while these computing and “smart” technologies were facilitating our interaction, work, school and health care, they were also becoming embedded into our social worlds in ways that have important sociological implications. The distinct power dynamics, social conditions, and intimate situations of the home create implications for inequality, privacy, and surveillance in the smart home.

Like so much else during COVID-19 (Henderson et al. 2020; Perry et al. 2021), these virtual activities in the home have revealed much about inequality in our society, including the gulf between technology haves and have-nots (Campos-Castillo 2014; Hargittai 2002; Puckett 2020).


Working from Home

It is important to remember that homes were already sites of work even before the pandemic. The impact of the pandemic on domestic workers, for example, who historically have had few protections as employees (Maich 2020)—as well as on many of the elderly and people with disabilities who depend on them—has been devastating.

Those lucky enough to be able to work from home after the start of the pandemic often needed significant resources to do so—computers, high-bandwidth internet access, and cameras (to attend virtual meetings), not to mention a quiet room in which to work. But who is paying for all of that? Some companies made headlines by helping workers equip home offices, but many workers simply had to absorb those costs.

What workers probably didn’t know they were also getting in the bargain was the potential for their employer to monitor them in their home. Technological surveillance of workers is as old as the industrial revolution, and modern tracking of workers (also Atwell 1987; Lyon 1994) via embedded cameras (now sometimes with facial recognition software), location tracking, electronic monitoring, and even individual keystrokes, is increasing.

The relative abilities of workers to manage privacy and resist surveillance are unevenly distributed, with particularly negative consequences for individuals and communities with low status and few resources. The potential that work-from-home may be extended for some workers post-pandemic, coupled with the increasing presence of home IoT, fuels the capacity for surveillance in the home. But surveillance is not merely about technology. It not only amplifies social inequalities (Brayne 2020; Browne 2015; Benjamin 2019; Eubanks 2018), it also has long-term implications for the organization and operation of power in society (Zuboff 2019).


School from Home

Space constraints and required computing resources have been especially relevant for families grappling with virtual education during the pandemic (Calarco 2020; Puckett and Rafalow 2020). The long-term implications of virtual schooling will need to be studied for years to come, but some of the devastating harms, including from the invasive surveillance that technology and government enabled, are already clear, particularly for the most vulnerable students. It is important to recognize that examples of surveillance—like the student incarcerated for not completing her online schoolwork—illustrate that it is not technology alone that produces surveillance. It is technology used in specific ways by specific actors (in this case, teachers, school systems, governments) that produces surveillance (Lyon 2007, 2011, 2018).


Health Care from Home

In the initial period of the pandemic during spring 2020, when much of the world shut down, health care—other than ERs and ICUs treating severely ill and dying COVID patients—nearly ground to a halt as both providers and patients sought to avoid in-person contact. But chronic conditions still needed monitoring, and other illnesses and injuries still happened.

Telehealth visits filled the void for some (Cantor et al. 2021). People who already have broadband, home Wi-Fi, necessary devices (smartphones, tablets, or laptops), and experience using technologies like online patient portals, can more easily engage in telehealth than those without them (Campos-Castillo and Anthony 2021; Reed et al. 2020). However, populations with chronic health needs are generally lower resourced (Phelan et al. 2010) and disproportionately people of color (Williams et al. 2019), but lower resourced patients and some minority racial and ethnic groups are less likely to be offered technologies like patient portals (Anthony et al. 2018).

IoT further increases the potential for health tracking in the home. We track our own health using smart watches and other wearables, like internet-enabled glucometers for diabetics. And now, smart sensors can be installed in the home to detect falls, and virtual assistants keep elders company while also enabling distant family members to check in. The socio-technical landscape of these smart homes creates potential benefits for health but also raises privacy risks. Privacy concerns can influence whether people seek care at all or disclose information to doctors if they do, potentially having consequences for relationships with providers and for family dynamics as well.

But privacy management has become increasingly complex for individuals. In part, this is because privacy management is mediated by technology and the companies that control the technology (and data) in ways that are often invisible, confusing, or uncontrollable. While I can decide whether to wear a fitness tracker or use a virtual assistant, I have very little ability to decide what data about me flows to the company or how the company uses it. But importantly, these data used to evaluate, engage, or exclude me are not exclusively about me. These kinds of data also implicate others—those who live with or near me, are connected to me, or possibly just “like” me in some categorical way decided by the company (and their algorithms). And those others can then also be evaluated, engaged, or excluded based on the data. Thus, privacy has important social implications for surveillance and social control, and also for group boundaries, inequality, cohesion, and collective action.


Societal Implications

The sociological impact and implications of increased technology use in the home extend far beyond these important aspects of inequality, surveillance, and privacy. Such first-order effects are important to understand in order to develop interventions and policies to ameliorate them. But sociologists studying technology also consider what Claude Fischer described as the second-order effects—the ways that social relations, institutions, and systems intersect with technology in ways that change the culture, organization, and structure of society. For example, work-from-home is likely to have long term consequences for employment relations, workplace cultures, and the structures of occupations and professions. Distance-based learning, which was well underway prior to the pandemic, may expand learning and access beyond the constraints of physical schools, but also may alter the training and practice of teachers, as well as the political dynamics of public school districts. Technologies in health care can enhance or limit access, improve or harm health, reduce or exacerbate health disparities, but also alter the doctor-patient relationship, the practice of medicine, and the delivery of health care.


This does not mean that technology drives social change. That kind of simplistic technological determinism has long been critiqued by sociologists. Rather, technology offers an entry point to observe the sociological forces that shape, for example, the production and distribution of new technologies. Think of the political economy of surveillance capitalism, so carefully detailed by Shoshana Zuboff; the institutional and professional practices of those developing technology; and the economic, regulatory, and organizational dynamics driving adoption of new devices and systems. Sociologists also study how social conditions—dynamics of social interaction, existing social norms and status structures, and systemic inequalities—shape how technologies are used, in ways both expected and unexpected. It is these dynamics and sociological forces that drive the social changes we associate with, and sometimes mistakenly attribute to, the technologies themselves.

The ongoing threat of COVID-19 may keep us in our homes a while longer, relying on existing and new technologies for work, school, health care, and more. Sociological research can help to make sense of the drivers and current impact of new technologies that have become widespread. Sociology is also necessary for understanding the deeper and long-term consequences and social shifts that have only just begun.


Any opinions expressed in the articles in this publication are those of the author and not the American Sociological Association.


For full post, please visit: 

Scooped by Roxana Marachi, PhD!

Parents Nationwide File Complaints with U.S. Department of Education; Seek to Address Massive Student Data Privacy Protection Failures // Parents' Coalition of Montgomery County, MD

 "On July 9, 2021, parents of school-age children from Maryland to Alaska, in collaboration with the Student Data Privacy Project (SDPP), will file over a dozen complaints with the U.S. Department of Education (DoE) demanding accountability for the student data that schools share with Educational Technology (EdTech) vendors.
Formed during the pandemic, SDPP comprises parents concerned about how their children’s personally identifiable information (PII) is increasingly being mined by EdTech vendors, with the consent of our schools, and without parental consent or school oversight.
With assistance and support from SDPP, 14 parents from 9 states filed requests with their school districts under the Family Educational Rights and Privacy Act (FERPA) seeking access to the PII collected about their children by EdTech vendors. No SDPP parents were able to obtain all of the requested PII held by EdTech vendors, a clear violation of FERPA.
One parent in Maryland never received a response. A New Jersey parent received a generic reply with no date, school name or district identification. Yet a Minnesota parent received over 2,000 files, none of which contained the metadata requested, but did reveal a disturbing amount of personal information held by an EdTech vendor, including the child’s baby pictures, videos of her in an online yoga class, her artwork and answers to in-class questions.
Lisa Cline, SDPP co-founder and parent in Maryland said, “When I tried to obtain data gathered by one app my child uses in class, the school district said, ‘Talk to the vendor.’ The vendor said, ‘Talk to the school.’ This is classic passing of the buck. And the DoE appears to be looking the other way.”
FERPA, a statute enacted in 1974 — almost two decades before the Internet came into existence, at a time when technology in schools was limited to mimeograph machines and calculators  — affords parents the right to obtain their children’s education records, to seek to have those records amended, and to have control over the disclosure of the PII in those records.
Unfortunately, this law is now outdated. Since the digital revolution, schools have been unaware, unable, or unwilling to apply FERPA to EdTech vendors. Before the pandemic, the average school used 400-1,000 online tools, according to the Student Data Privacy Consortium. Remote learning has increased this number exponentially.
SDPP co-founder, privacy consultant, law professor and parent Joel Schwarz noted that “DoE’s failure to enforce FERPA means that EdTech providers are putting the privacy of millions of children at risk, leaving these vendors free to collect, use and monetize student PII, and share it with third parties at will.”
A research study released by the Me2B Alliance in May 2021 showed that 60% of school apps send student data to potentially high-risk third parties without knowledge or consent. SDPP reached out to Me2B and requested an audit of the apps used by schools in the districts involved in the Project. Almost 70% of the apps reviewed used Software Development Kits (SDKs) that posed a “High Risk” to student data privacy, and almost 40% of the apps were rated “Very High Risk,” meaning the code used is known to be associated with registered Data Brokers. Even more concerning, Google showed up in approximately 80% of the apps that included an SDK, and Facebook ran a close second, showing up in about 60% of the apps.
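The audit methodology described above, rating apps by the SDKs embedded in them, can be sketched in a few lines. The package prefixes and risk tiers below are illustrative placeholders, not the Me2B Alliance's actual rubric.

```python
# Hedged sketch: classify apps by embedded SDK package prefixes.
# The prefix lists and risk tiers are hypothetical illustrations,
# not the Me2B Alliance's actual criteria.

HIGH_RISK_PREFIXES = ("com.example.adnet", "com.example.tracker")
VERY_HIGH_RISK_PREFIXES = ("com.example.databroker",)  # linked to registered data brokers

def rate_app(sdk_packages):
    """Return 'very high', 'high', or 'low' for one app's SDK package list."""
    if any(p.startswith(VERY_HIGH_RISK_PREFIXES) for p in sdk_packages):
        return "very high"
    if any(p.startswith(HIGH_RISK_PREFIXES) for p in sdk_packages):
        return "high"
    return "low"

apps = {
    "district_news": ["com.example.adnet.core"],
    "lunch_menu": ["com.example.databroker.sync"],
    "homework": [],
}
ratings = {name: rate_app(sdks) for name, sdks in apps.items()}
print(ratings)  # -> {'district_news': 'high', 'lunch_menu': 'very high', 'homework': 'low'}
```

A real audit would also have to unpack each app binary to enumerate its SDKs; the classification step itself reduces to prefix matching like this.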
Emily Cherkin, an SDPP co-founder who writes and speaks nationally about screen use as The Screentime Consultant, noted, “because these schools failed to provide the data requested, we don’t know what information is being collected about our children, how long these records are maintained, who has access to them, and with whom they’re being shared.”
“FERPA says that parents have a right to know what information is being collected about their children, and how that data is being used,” according to Andy Liddell, a federal court litigator in Austin, TX and another SDPP co-founder. “But those rights are being trampled because neither the schools nor the DoE are focused on this issue.”

The relief sought from the DoE includes requiring schools to:
•  actively oversee their EdTech vendors, including regular audits of vendors’ access, use and disclosure of student PII and publicly posting the results of those audits so that parents can validate that their children’s data is being adequately protected;

•  provide meaningful access to records held by EdTech in response to a FERPA request, clarifying that merely providing a student’s account log-in credentials, or referring the requester to the Vendor, does not satisfy the school’s obligations under FERPA;

•  ensure that when their EdTech vendors share student PII with third parties, the Vendor and the school maintain oversight of third-party access and use of that PII, and apply all FERPA rights and protections to that data, including honoring FERPA access requests;

•  protect all of a student’s digital footprints — including browsing history, searches performed, websites visited, etc. (i.e., metadata) — under FERPA, and that all of this data be provided in response to a FERPA access request.

# # # 

If you would like more information, please contact Joel Schwarz at

Parents are invited to join the Student Data Privacy Project. A template letter to school districts can be downloaded from the SDPP website:

SDPP is an independent parent-led organization founded by Joel Schwarz, Andy Liddell, Emily Cherkin and Lisa Cline. Research and filing assistance provided pro bono by recent George Washington University Law School graduate Gina McKlaveen.
Scooped by Roxana Marachi, PhD!

Companies are hoarding personal data about you. Here’s how to get them to delete it. // Washington Post

 By Tatum Hunter

"In February, Whitney Merrill, a privacy attorney who lives in San Francisco, asked audio-chat company Clubhouse to disclose how many people had shared her name and phone number with the company as part of its contact-sharing feature, in hopes of getting that data deleted.


As she waited to hear from the company, she tweeted about the process and also reached out to multiple Clubhouse employees for help.


Only after that, and weeks after the California Consumer Privacy Act’s 45-day deadline for companies to respond to data deletion requests, did Clubhouse respond and comply with her request to delete her information, Merrill said.

“They eventually corrected it after a lot of pressure because I was fortunate enough to be able to tweet about it,” she said.


The landmark California Consumer Privacy Act (CCPA), which went into effect in 2020, gave state residents the right to ask companies to not sell their data, to provide a copy of that data or to delete that data. Virginia and Colorado have also passed consumer privacy laws, which go into effect in 2023.


As more states adopt privacy legislation — Massachusetts, New York, Minnesota, North Carolina, Ohio and Pennsylvania may be next — more of us will get the right to ask for our data to be deleted. Some companies — including Spotify, Uber and Twitter — told us they already honor requests from people outside California when it comes to data deletion.


But that doesn’t mean it always goes smoothly. Data is valuable to companies, and some don’t make it easy to scrub, privacy advocates say. Data deletion request forms are often tucked away, processes are cumbersome, and barriers to verifying your identity slow things down. Sometimes personal data is tied up in confusing legal requirements, so companies can’t get rid of it. Other times, the technical and personnel burden of data requests is simply too much for companies to handle.


Exercising CCPA rights can be an uphill battle, Consumer Reports found in a 2020 study involving more than 400 California consumers who submitted “do not sell my personal data” requests to registered data brokers. Sixty-two percent of the time, participants either couldn’t figure out how to submit the request or were left with no idea whether it worked."


It doesn’t bode well for data deletion, Maureen Mahoney, a Consumer Reports analyst, said. People have to verify their identities before companies can delete data, which poses an extra obstacle.

But that doesn’t mean it’s a lost cause. Many data deletion requests are successful, according to company metrics, and things get easier if you know where to look. Most companies are doing their best to figure out how to deal with patchwork privacy laws and an influx of data rights requests, Merrill said."...


For full post, please visit: 

Scooped by Roxana Marachi, PhD!

Rejecting Test Surveillance in Higher Education (Barrett, 2021) // Georgetown University Law Center 

The rise of remote proctoring software during the COVID-19 pandemic illustrates the dangers of surveillance-enabled pedagogy built on the belief that students can’t be trusted. These services, which deploy a range of identification protocols, computer and internet access limitations, and human or automated observation of students as they take tests remotely, are marketed as necessary to prevent cheating. But the success of these services in their stated goal is ill-supported at best and discredited at worst, particularly given their highly over-inclusive criteria for “suspicious” behavior. Meanwhile, the harms they inflict on students are clear: severe anxiety among test-takers, concerning data collection and use practices, and discriminatory flagging of students of color and students with disabilities have provoked widespread outcry from students, professors, privacy advocates, policymakers, and sometimes universities themselves.


To make matters worse, the privacy and civil rights laws most relevant to the use of these services are generally inadequate to protect students from the harms they inflict.

Colleges and universities routinely face difficult decisions that require reconciling conflicting interests, but whether to use remote proctoring software isn’t one of them. Remote proctoring software is not pedagogically beneficial, institutionally necessary, or remotely unavoidable, and its use further entrenches inequities in higher education that schools should be devoted to rooting out. Colleges and universities should abandon remote proctoring software, and apply the lessons from this failed experiment to their other existing or potential future uses of surveillance technologies and automated decision-making systems that threaten students’ privacy, access to important life opportunities, and intellectual freedom.


Keywords: privacy, surveillance, automated decision-making, algorithmic discrimination, COVID-19, higher education, remote proctoring software, FERPA, FTC, ADA


Suggested Citation:

Barrett, Lindsey, Rejecting Test Surveillance in Higher Education (June 21, 2021). Available at SSRN.
For original post on SSRN, please visit 

Scooped by Roxana Marachi, PhD!

The Color of Surveillance: Monitoring of Poor and Working People // Center on Privacy and Technology // Georgetown Law


Scooped by Roxana Marachi, PhD!

UM study finds facial recognition technology in schools presents many problems, recommends ban // University of Michigan 


Contact: Jeff Karoub; Daniel Rivkin


"Facial recognition technology should be banned for use in schools, according to a new study by the University of Michigan’s Ford School of Public Policy that cites the heightened risk of racism and potential for privacy erosion.


The study by the Ford School’s Science, Technology, and Public Policy Program comes at a time when debates over returning to in-person school in the face of the COVID-19 pandemic are consuming administrators and teachers, who are deciding which technologies will best serve public health, educational and privacy requirements.

Among the concerns is facial recognition, which could be used to monitor student attendance and behavior, as well as for contact tracing. But the report argues this technology will “exacerbate racism,” an issue of particular concern as the nation confronts structural inequality and discrimination.

In the pre-COVID-19 debate about the technology, deployment of facial recognition was seen as a potential panacea to assist with security measures in the aftermath of school shootings. Schools also have begun using it to track students and automate attendance records. Globally, facial recognition technology represents a $3.2 billion business.

The study, “Cameras in the Classroom,” led by Shobita Parthasarathy, asserts that not only is the technology not suited to security purposes, but it also creates a web of serious problems beyond racial discrimination, including normalizing surveillance and eroding privacy, institutionalizing inaccuracy and creating false data on school life, commodifying data and marginalizing nonconforming students.




“We have focused on facial recognition in schools because it is not yet widespread and because it will impact particularly vulnerable populations. The research shows that prematurely deploying the technology without understanding its implications would be unethical and dangerous,” said Parthasarathy, STPP director and professor of public policy.


The study is part of STPP’s Technology Assessment Project, which focuses on emerging technologies and seeks to influence public and policy debate with interdisciplinary, evidence-based analysis.


The study used an analogical case comparison method, looking specifically at previous uses of security technology like CCTV cameras and metal detectors, as well as biometric technologies, to anticipate the implications of facial recognition. The research team also included one undergraduate and one graduate student from the Ford School.


Currently, there are no national laws regulating facial recognition technology anywhere in the world.


“Some people say, ‘We can’t regulate a technology until we see what it can do.’ But looking at technology that has already been implemented, we can predict the potential social, economic and political impacts, and surface the unintended consequences,” said Molly Kleinman, STPP’s program manager.


Though the study recommends a complete ban on the technology’s use, it concludes with a set of 15 policy recommendations for those at the national, state and school district levels who may be considering using it, as well as a set of sample questions for stakeholders, such as parents and students, to consider as they evaluate its use."


More information:


For original post, please visit: 

Scooped by Roxana Marachi, PhD!

I Have a Lot to Say About Signal’s Cellebrite Hack // Center for Internet and Society


By Riana Pfefferkorn on May 12, 2021

This blog post is based off of a talk I gave on May 12, 2021 at the Stanford Computer Science Department’s weekly lunch talk series on computer security topics. Full disclosure: I’ve done some consulting work for Signal, albeit not on anything like this issue. (I kinda doubt they’ll hire me again if they read this, though.)

You may have seen a story in the news recently about vulnerabilities discovered in the digital forensics tool made by Israeli firm Cellebrite. Cellebrite's software extracts data from mobile devices and generates a report about the extraction. It's popular with law enforcement agencies as a tool for gathering digital evidence from smartphones in their custody. 

In April, the team behind the popular end-to-end encrypted (E2EE) chat app Signal published a blog post detailing how they had obtained a Cellebrite device, analyzed the software, and found vulnerabilities that would allow for arbitrary code execution by a device that's being scanned with a Cellebrite tool. 

As coverage of the blog post pointed out, the vulnerability draws into question whether Cellebrite's tools are reliable in criminal prosecutions after all. While Cellebrite has since taken steps to mitigate the vulnerability, there's already been a motion for a new trial filed in at least one criminal case on the basis of Signal's blog post. 

Is that motion likely to succeed? What will be the likely ramifications of Signal's discovery in court cases? I think the impact on existing cases will be negligible, but that Signal has made an important point that may help push the mobile device forensics industry towards greater accountability for their often sloppy product security. Nevertheless, I have a raised eyebrow for Signal here too.

Let’s dive in.


What is Cellebrite? 

Cellebrite is an Israeli company that, per Signal’s blog post, “makes software to automate physically extracting and indexing data from mobile devices.” A common use case here in the U.S. is to be used by law enforcement in criminal investigations, typically with a warrant under the Fourth Amendment that allows them to search someone’s phone and seize data from it. 

Cellebrite’s products are part of the industry of “mobile device forensics” tools. “The mobile forensics process aims to recover digital evidence or relevant data from a mobile device in a way that will preserve the evidence in a forensically sound condition,” using accepted methods, so that it can later be presented in court. 

Who are their customers?

Between Cellebrite and the other vendors in the industry of mobile device forensics tools, there are over two thousand law enforcement agencies across the country that have such tools — including 49 of the 50 biggest cities in the U.S. Plus, ICE has contracts with Cellebrite worth tens of millions of dollars. 

But Cellebrite has lots of customers besides U.S. law enforcement agencies. And some of them aren’t so nice. As Signal’s blog post notes, “Their customer list has included authoritarian regimes in Belarus, Russia, Venezuela, and China; death squads in Bangladesh; military juntas in Myanmar; and those seeking to abuse and oppress in Turkey, UAE, and elsewhere.” 

The vendors of these kinds of tools love to get up on their high horse and talk about how they’re the “good guys,” they help keep the world safe from criminals and terrorists. Yes, sure, fine. But a lot of vendors in this industry, the industry of selling surveillance technologies to governments, sell not only to the U.S. and other countries that respect the rule of law, but also to repressive governments that persecute their own people, where the definition of “criminal” might just mean being gay or criticizing the government. The willingness of companies like Cellebrite to sell to unsavory governments is why there have been calls from human rights leaders and groups for a global moratorium on selling these sorts of surveillance tools to governments.

What do Cellebrite’s products do?

Cellebrite has a few different products, but as relevant here, there’s a two-part system in play: the first part, called UFED (which stands for Universal Forensic Extraction Device), extracts the data from a mobile device and backs it up to a Windows PC, and the second part, called Physical Analyzer, parses and indexes the data so it’s searchable. So, take the raw data out, then turn it into something useful for the user, all in a forensically sound manner. 

As Signal’s blog post explains, this two-part system requires physical access to the phone; these aren’t tools for remotely accessing someone’s phone. And the kind of extraction (a “logical extraction”) at issue here requires the device to be unlocked and open. (A logical extraction is quicker and easier, but also more limited, than the deeper but more challenging type of extraction, a “physical extraction,” which can work on locked devices, though not with 100% reliability. Plus, logical extractions won’t recover deleted or hidden files, unlike physical extractions.) As the blog post says, think of it this way: “if someone is physically holding your unlocked device in their hands, they could open whatever apps they would like and take screenshots of everything in them to save and go over later. Cellebrite essentially automates that process for someone holding your device in their hands.”

Plus, unlike some cop taking screenshots, a logical data extraction preserves the recovered data “in its original state with forensically-sound integrity admissible in a court of law.” Why show that the data were extracted and preserved without altering anything? Because that’s what is necessary to satisfy the rules for admitting evidence in court. U.S. courts have rules in place to ensure that the evidence that is presented is reliable — you don’t want to convict or acquit somebody on the basis of, say, a file whose contents or metadata got corrupted. Cellebrite holds itself out as meeting the standards that U.S. courts require for digital forensics.
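The “forensically-sound integrity” described above rests on a concrete mechanism: hashing every extracted file at acquisition time so that any later alteration is detectable. A minimal hedged sketch follows; the file names and manifest layout are hypothetical, not Cellebrite's actual report format.

```python
# Hedged sketch of checksum-based evidence integrity, the mechanism
# forensic soundness relies on. The manifest layout is illustrative,
# not Cellebrite's actual format.
import hashlib

def build_manifest(files):
    """Map each extracted file name to the SHA-256 of its contents."""
    return {name: hashlib.sha256(data).hexdigest() for name, data in files.items()}

def verify(files, manifest):
    """Return the names of files whose current hash no longer matches."""
    return sorted(
        name for name, data in files.items()
        if hashlib.sha256(data).hexdigest() != manifest.get(name)
    )

extracted = {"chat.db": b"hello", "photo.jpg": b"\xff\xd8\xff"}
manifest = build_manifest(extracted)  # computed at acquisition time
extracted["chat.db"] = b"tampered"    # later alteration
print(verify(extracted, manifest))    # -> ['chat.db']
```

The catch, and Signal's central point, is that these checks only work if the machine computing the hashes is trustworthy: a compromised extraction machine can simply recompute the hashes over tampered data, leaving no checksum failures to detect.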

But what Signal showed is that Cellebrite tools actually have really shoddy security that could, unless the problem is fixed, allow alteration of data in the reports the software generates when it analyzes phones. Demonstrating flaws in the Cellebrite system calls into question the integrity and reliability of the data extracted and of the reports generated about the extraction. 

That undermines the entire reason for these tools’ existence: compiling digital evidence that is sound enough to be admitted and relied upon in court cases.


What was the hack?

As background: Late last year, Cellebrite announced that one of their tools (the Physical Analyzer tool) could be used to extract Signal data from unlocked Android phones. Signal wasn’t pleased.


Apparently in retaliation, Signal struck back. As last month’s blog post details, Signal creator Moxie Marlinspike and his team obtained a Cellebrite kit (they’re coy about how they got it), analyzed the software, and found vulnerabilities that would allow for arbitrary code execution by a device that's being scanned with a Cellebrite tool.


According to the blog post:

“Looking at both UFED and Physical Analyzer, ... we were surprised to find that very little care seems to have been given to Cellebrite’s own software security. Industry-standard exploit mitigation defenses are missing, and many opportunities for exploitation are present. ...

“[W]e found that it’s possible to execute arbitrary code on a Cellebrite machine simply by including a specially formatted but otherwise innocuous file in any app on a device that is subsequently plugged into Cellebrite and scanned. There are virtually no limits on the code that can be executed.

“For example, by including a specially formatted but otherwise innocuous file in an app on a device that is then scanned by Cellebrite, it’s possible to execute code that modifies not just the Cellebrite report being created in that scan, but also all previous and future generated Cellebrite reports from all previously scanned devices and all future scanned devices in any arbitrary way (inserting or removing text, email, photos, contacts, files, or any other data), with no detectable timestamp changes or checksum failures. This could even be done at random, and would seriously call the data integrity of Cellebrite’s reports into question.

Signal also created a video demo to show their proof of concept (PoC), which you can watch in the blog post or their tweet about it. They summarized what’s depicted in the video:

[This] is a sample video of an exploit for UFED (similar exploits exist for Physical Analyzer). In the video, UFED hits a file that executes arbitrary code on the Cellebrite machine. This exploit payload uses the MessageBox Windows API to display a dialog with a message in it. This is for demonstration purposes; it’s possible to execute any code, and a real exploit payload would likely seek to undetectably alter previous reports, compromise the integrity of future reports (perhaps at random!), or exfiltrate data from the Cellebrite machine.".... 

Scooped by Roxana Marachi, PhD!

Kahoot acquires Clever, the US-based edtech portal, for up to $500M // TechCrunch


By Ingrid Lunden

"Kahoot, the popular Oslo-based edtech company that has built a big business out of gamifying education and creating a platform for users to build their own learning games, is making an acquisition to double down on K-12 education and its opportunities to grow in the U.S. It is acquiring Clever, a startup that has built a single sign-on portal for educators, students and their families to build and engage in digital learning classrooms, currently used by about 65% of all U.S. K-12 schools. Kahoot said that the deal — coming in a combination of cash and shares — gives Clever an enterprise value of between $435 million and $500 million, dependent on meeting certain performance milestones.

The plan will be to continue growing Clever’s business in the U.S. — which currently employs 175 people — as well as give it a lever for expanding globally alongside Kahoot’s wider stable of edtech software and services.

“Clever and Kahoot are two purpose-led organizations that are equally passionate about education and unleashing the potential within every learner,” said Eilert Hanoa, CEO at Kahoot, in a statement. “Through this acquisition we see considerable potential to collaborate on education innovation to better service all our users — schools, teachers, students, parents and lifelong learners — and leveraging our global scale to offer Clever’s unique platform worldwide. I’m excited to welcome Tyler and his team to the Kahoot family.”

The news came on the same day that Kahoot, which is traded in Oslo with a market cap of $4.3 billion, also announced strong Q1 results in which it also noted it has closed its acquisition of, a provider of whiteboard tools for teachers, for an undisclosed sum.

The same tides that have been lifting Kahoot have also been playing out for Clever and other edtech companies.


The startup was originally incubated in Y Combinator and launched with a vision to be a “Twilio for education”: a unified way to tap into the myriad student sign-on systems and educational databases, making it easier for those building edtech services to scale their products and bring on more customers (schools, teachers, students, families). As with payments, financial services in general, and telecommunications, education turns out to be a pretty fragmented market, and Clever wanted to fix that complexity and put it behind an API that others could tap into.
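The aggregation idea the paragraph above describes, one interface in front of many incompatible student-information systems, boils down to per-vendor adapters that normalize records into a common schema. The record layouts and field names below are invented for illustration and are not Clever's actual API or data model.

```python
# Hedged sketch of the "Twilio for education" aggregation idea:
# normalize rosters from differently shaped student-information
# systems into one schema. All field names are hypothetical,
# not Clever's actual data model.

def normalize(sis_name, record):
    """Convert one vendor-specific student record to a common shape."""
    if sis_name == "vendor_a":   # e.g. flat PascalCase fields
        return {"name": f'{record["FirstName"]} {record["LastName"]}',
                "grade": record["GradeLevel"]}
    if sis_name == "vendor_b":   # e.g. nested snake_case fields
        return {"name": record["student"]["full_name"],
                "grade": record["student"]["grade"]}
    raise ValueError(f"no adapter for {sis_name}")

rosters = [
    ("vendor_a", {"FirstName": "Ada", "LastName": "Lovelace", "GradeLevel": 7}),
    ("vendor_b", {"student": {"full_name": "Alan Turing", "grade": 8}}),
]
print([normalize(sis, rec) for sis, rec in rosters])
```

Each edtech app then codes against the one normalized shape instead of hundreds of vendor formats, which is what makes the single sign-on and roster marketplace model scale.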


Over time it built that out also with a marketplace (application gallery in its terminology) of some 600 software providers and application developers that integrate with its SSO, which in turn becomes a way for a school or district to subsequently expand the number of edtech tools that it can use. This has been especially critical in the last year as schools have been forced to close in-person learning and go entirely virtual to help stave off the spread of the COVID-19 pandemic.


Clever has found a lot of traction for its approach both with schools and investors. With the former, Clever says that it’s used by 89,000 schools and some 65% of K-12 school districts (13,000 overall) in the U.S., with that figure including 95 of the 100 largest school districts in the country. This works out to 20 million students logging in monthly and 5.6 billion learning sessions.

The latter, meanwhile, has seen the company raise money from a pretty impressive range of investors, including YC current and former partners like Paul Graham and Sam Altman, GSV, Founders Fund, Lightspeed and Sequoia. It raised just under $60 million, which may sound modest these days but remember that it’s been around since 2012, when edtech was not so cool and attention-grabbing, and hasn’t raised money since 2016, which in itself is a sign that it’s doing something right as a business.


Indeed, Kahoot noted that Clever projects $44 million in billed revenues for 2021, with an annual revenue growth rate of approximately 25% CAGR in the last three years, and it has been running the business on “a cash flow neutral basis, redeploying all cash into development of its offerings,” Kahoot noted.


Kahoot itself has had a strong year, driven in no small part by the pandemic and the huge boost to remote learning and remote work that resulted. It noted in its results that it had 28 million active accounts in the last twelve months, representing 68% growth on the year before, with the number of hosted games in that period at 279 million (up 28%) with more than 1.6 billion participants of those games (up 24%). Paid subscriptions in Q1 were at 760,000, with 255,000 using the “work” (B2B) tier; 275,000 school accounts; and 230,000 in its “home and study” category. Annual recurring revenue is now at $69 million ($18 million a year ago for the same quarter), while actual revenue for the quarter was $16.2 million (up from $4.2 million a year ago), growing 284%.


The company, which is also backed by the likes of Disney, Microsoft and Softbank, has made a number of acquisitions to expand. Clever is the biggest of these to date."...


For full post, please visit: 

Scooped by Roxana Marachi, PhD!

Privacy and Security in State Postsecondary Data Systems: Strong Foundations 2020 // State Higher Education Executive Officers Association (SHEEO)

State postsecondary data systems contain a wealth of information—including detailed records about individuals—that allow states to analyze and improve their postsecondary education systems. The entities that maintain these systems...


For SHEEO landing page, click here


To download full report, click here: 
