Educational Psychology & Technology: Critical Perspectives and Resources
This curated collection includes news, resources, and research related to the intersections of Educational Psychology and Technology. The page also serves as a research tool to organize online content; the grey funnel-shaped icon at the top allows searching by keyword. For research more specific to tech, screen time, and health/safety concerns, please see: . To learn about the next wave of privatization involving technology intersections with Pay For Success, Social Impact Bonds, and Results-Based Financing (often marketed with language promoting 'public-private partnerships'), see: . For additional Educator Resources, please visit [Links to an external site].
Scooped by Roxana Marachi, PhD!

When "Innovation" is Exploitation: Data Ethics, Data Harms and Why We Need to Demand Data Justice // Marachi, 2019, Summer Institute of A Black Education Network 

To download pdf, please click on title or arrow above.


For more on the data brokers selling personal information from a variety of platforms, including education, please see: 


Please also visit: Parent Coalition for Student Privacy


the Data Justice Lab:


and the Algorithmic Justice League:  



Meta will continue to use facial recognition technology, actually // Input Magazine


By Matt Wille
"Earlier this week, Facebook made the somewhat shocking announcement that it would be shutting down its facial recognition systems. But now, Facebook’s parent company, Meta, has walked that promise back a bit. A lot, really.

Meta is not planning to hold back its use of facial recognition technology in its forthcoming metaverse products. Facebook’s new parent company told Recode that the social network’s commitment does not in any way apply to the metaverse. The metaverse will abide by its own rules, thank you very much. In fact, Meta spokesperson Jason Grosse says the company is already experimenting with different ways to bring biometrics into the metaverse equation.


“We believe this technology has the potential to enable positive use cases in the future that maintain privacy, control, and transparency, and it’s an approach we’ll continue to explore as we consider how our future computing platforms and devices can serve people’s needs,” Grosse said of the technology.

Sigh. We should’ve seen that one coming. Changing the company’s name did nothing to alter its underhanded business strategies.

LOL JK — Just a week after its rebrand, Meta is making it clear that taking the “Facebook” out of Facebook did nothing to actually change the company. One of Meta’s first actions as a new company is making a big deal out of shutting down its facial recognition tech, only to a few days later say, “Oh, we didn’t mean it like that.”


In announcing the seemingly all-encompassing shutdown, Meta failed to mention a key fact: that it would not be eliminating DeepFace, its house-made facial recognition algorithms, from its servers. We only learned of this pivotal information because Grosse spilled the beans to The New York Times. Grosse did say, at that point, that Meta hadn’t “ruled out” using facial recognition in the future — but he failed to mention that Meta had already begun discussing how it could use biometrics in its future products."...


For full post, please visit: 


What are the risks of Virtual Reality data? Learning Analytics, Algorithmic Bias and a Fantasy of Perfect Data // Marcus Carter & Ben Egliston (2021). New Media and Society


"Virtual reality (VR) is an emerging technology with the potential to extract significantly more data about learners and the learning process. In this article, we present an analysis of how VR education technology companies frame, use and analyse this data. We found both an expansion and acceleration of what data are being collected about learners and how these data are being mobilized in potentially discriminatory and problematic ways. Beyond providing evidence for how VR represents an intensification of the datafication of education, we discuss three interrelated critical issues that are specific to VR: the fantasy that VR data is ‘perfect’, the datafication of soft-skills training, and the commercialisation and commodification of VR data. In the context of the issues identified, we caution the unregulated and uncritical application of learning analytics to the data that are collected from VR training."


To download, click on link below (author version online) 


Algorithms and Economic Justice: A Taxonomy of Harms and a Path Forward for the Federal Trade Commission // Slaughter, 2021, Yale Journal of Law & Technology 

To download, click on title or arrow above. 


Collection and Processing of Data from Wrist Wearable Devices in Heterogeneous and Multiple-User Scenarios // Sensors (Arriba-Pérez, Caeiro-Rodríguez, & Santos-Gago, 2021)



"Over recent years, we have witnessed the development of mobile and wearable technologies to collect data from human vital signs and activities. Nowadays, wrist wearables including sensors (e.g., heart rate, accelerometer, pedometer) that provide valuable data are common in market. We are working on the analytic exploitation of this kind of data towards the support of learners and teachers in educational contexts. More precisely, sleep and stress indicators are defined to assist teachers and learners on the regulation of their activities. During this development, we have identified interoperability challenges related to the collection and processing of data from wearable devices. Different vendors adopt specific approaches about the way data can be collected from wearables into third-party systems. This hinders such developments as the one that we are carrying out. This paper contributes to identifying key interoperability issues in this kind of scenario and proposes guidelines to solve them. Taking into account these topics, this work is situated in the context of the standardization activities being carried out in the Internet of Things and Machine to Machine domains."

Keywords: wearable sensors, wearable computing, data interoperability, internet of things, machine to machine

Normalizing Surveillance (Selinger & Rhee, 2021) // Northern European Journal of Philosophy (via SSRN)



Definitions of privacy change, as do norms for protecting it. Why, then, are privacy scholars and activists currently worried about “normalization”? This essay explains what normalization means in the context of surveillance concerns and clarifies why normalization has significant governance consequences. We emphasize two things. First, the present is a transitional moment in history. AI-infused surveillance tools offer a window into the unprecedented dangers of automated real-time monitoring and analysis. Second, privacy scholars and activists can better integrate supporting evidence to counter skepticism about their most disturbing and speculative claims about normalization. Empirical results in moral psychology support the assertion that widespread surveillance typically will lead people to become favorably disposed toward it. If this causal dynamic is pervasive, it can diminish autonomy and contribute to a slippery slope trajectory that diminishes privacy and civil liberties.


Keywords: normalization, surveillance, privacy, civil liberties, moral psychology, function creep, surveillance creep, slippery slope arguments

For original post and to download full pdf, please visit: 

"Education 3.0: The Internet of Education" [Slidedeck]

This link was forwarded by a colleague on October 3rd, 2021 (posted by Greg Nadeau on LinkedIn). The pdf above is a download of the slide deck as of 10/3/21 (click on title or arrow above to download). The live link is posted at the following URL:


See also: 


The Impossibility of Automating Ambiguity // Abeba Birhane, MIT Press


"On the one hand, complexity science and enactive and embodied cognitive science approaches emphasize that people, as complex adaptive systems, are ambiguous, indeterminable, and inherently unpredictable. On the other, Machine Learning (ML) systems that claim to predict human behaviour are becoming ubiquitous in all spheres of social life. I contend that ubiquitous Artificial Intelligence (AI) and ML systems are close descendants of the Cartesian and Newtonian worldview in so far as they are tools that fundamentally sort, categorize, and classify the world, and forecast the future. Through the practice of clustering, sorting, and predicting human behaviour and action, these systems impose order, equilibrium, and stability to the active, fluid, messy, and unpredictable nature of human behaviour and the social world at large. Grounded in complexity science and enactive and embodied cognitive science approaches, this article emphasizes why people, embedded in social systems, are indeterminable and unpredictable. When ML systems “pick up” patterns and clusters, this often amounts to identifying historically and socially held norms, conventions, and stereotypes. Machine prediction of social behaviour, I argue, is not only erroneous but also presents real harm to those at the margins of society."


To download full document: 


UM study finds facial recognition technology in schools presents many problems, recommends ban // University of Michigan 




"Facial recognition technology should be banned for use in schools, according to a new study by the University of Michigan’s Ford School of Public Policy that cites the heightened risk of racism and potential for privacy erosion.


The study by the Ford School’s Science, Technology, and Public Policy Program comes at a time when debates over returning to in-person school in the face of the COVID-19 pandemic are consuming administrators and teachers, who are deciding which technologies will best serve public health, educational and privacy requirements.

Among the concerns is facial recognition, which could be used to monitor student attendance and behavior as well as contact tracing. But the report argues this technology will “exacerbate racism,” an issue of particular concern as the nation confronts structural inequality and discrimination.

In the pre-COVID-19 debate about the technology, deployment of facial recognition was seen as a potential panacea to assist with security measures in the aftermath of school shootings. Schools also have begun using it to track students and automate attendance records. Globally, facial recognition technology represents a $3.2 billion business.

The study, “Cameras in the Classroom,” led by Shobita Parthasarathy, asserts that not only is the technology not suited to security purposes, but it also creates a web of serious problems beyond racial discrimination, including normalizing surveillance and eroding privacy, institutionalizing inaccuracy and creating false data on school life, commodifying data and marginalizing nonconforming students.




“We have focused on facial recognition in schools because it is not yet widespread and because it will impact particularly vulnerable populations. The research shows that prematurely deploying the technology without understanding its implications would be unethical and dangerous,” said Parthasarathy, STPP director and professor of public policy.


The study is part of STPP’s Technology Assessment Project, which focuses on emerging technologies and seeks to influence public and policy debate with interdisciplinary, evidence-based analysis.


The study used an analogical case comparison method, looking specifically at previous uses of security technology like CCTV cameras and metal detectors, as well as biometric technologies, to anticipate the implications of facial recognition. The research team also included one undergraduate and one graduate student from the Ford school.


Currently, there are no national laws regulating facial recognition technology anywhere in the world.


“Some people say, ‘We can’t regulate a technology until we see what it can do.’ But looking at technology that has already been implemented, we can predict the potential social, economic and political impacts, and surface the unintended consequences,” said Molly Kleinman, STPP’s program manager.


Though the study recommends a complete ban on the technology’s use, it concludes with a set of 15 policy recommendations for those at the national, state and school district levels who may be considering using it, as well as a set of sample questions for stakeholders, such as parents and students, to consider as they evaluate its use."


More information:


For original post, please visit: 


I Have a Lot to Say About Signal’s Cellebrite Hack // Center for Internet and Society


By Riana Pfefferkorn on May 12, 2021

This blog post is based off of a talk I gave on May 12, 2021 at the Stanford Computer Science Department’s weekly lunch talk series on computer security topics. Full disclosure: I’ve done some consulting work for Signal, albeit not on anything like this issue. (I kinda doubt they’ll hire me again if they read this, though.)

You may have seen a story in the news recently about vulnerabilities discovered in the digital forensics tool made by Israeli firm Cellebrite. Cellebrite's software extracts data from mobile devices and generates a report about the extraction. It's popular with law enforcement agencies as a tool for gathering digital evidence from smartphones in their custody. 

In April, the team behind the popular end-to-end encrypted (E2EE) chat app Signal published a blog post detailing how they had obtained a Cellebrite device, analyzed the software, and found vulnerabilities that would allow for arbitrary code execution by a device that's being scanned with a Cellebrite tool. 

As coverage of the blog post pointed out, the vulnerability draws into question whether Cellebrite's tools are reliable in criminal prosecutions after all. While Cellebrite has since taken steps to mitigate the vulnerability, there's already been a motion for a new trial filed in at least one criminal case on the basis of Signal's blog post. 

Is that motion likely to succeed? What will be the likely ramifications of Signal's discovery in court cases? I think the impact on existing cases will be negligible, but that Signal has made an important point that may help push the mobile device forensics industry towards greater accountability for their often sloppy product security. Nevertheless, I have a raised eyebrow for Signal here too.

Let’s dive in.


What is Cellebrite? 

Cellebrite is an Israeli company that, per Signal’s blog post, “makes software to automate physically extracting and indexing data from mobile devices.” A common use case here in the U.S. is to be used by law enforcement in criminal investigations, typically with a warrant under the Fourth Amendment that allows them to search someone’s phone and seize data from it. 

Cellebrite’s products are part of the industry of “mobile device forensics” tools. “The mobile forensics process aims to recover digital evidence or relevant data from a mobile device in a way that will preserve the evidence in a forensically sound condition,” using accepted methods, so that it can later be presented in court. 

Who are their customers?

Between Cellebrite and the other vendors in the industry of mobile device forensics tools, there are over two thousand law enforcement agencies across the country that have such tools — including 49 of the 50 biggest cities in the U.S. Plus, ICE has contracts with Cellebrite worth tens of millions of dollars. 

But Cellebrite has lots of customers besides U.S. law enforcement agencies. And some of them aren’t so nice. As Signal’s blog post notes, “Their customer list has included authoritarian regimes in Belarus, Russia, Venezuela, and China; death squads in Bangladesh; military juntas in Myanmar; and those seeking to abuse and oppress in Turkey, UAE, and elsewhere.” 

The vendors of these kinds of tools love to get up on their high horse and talk about how they’re the “good guys,” they help keep the world safe from criminals and terrorists. Yes, sure, fine. But a lot of vendors in this industry, the industry of selling surveillance technologies to governments, sell not only to the U.S. and other countries that respect the rule of law, but also to repressive governments that persecute their own people, where the definition of “criminal” might just mean being gay or criticizing the government. The willingness of companies like Cellebrite to sell to unsavory governments is why there have been calls from human rights leaders and groups for a global moratorium on selling these sorts of surveillance tools to governments.

What do Cellebrite’s products do?

Cellebrite has a few different products, but as relevant here, there’s a two-part system in play: the first part, called UFED (which stands for Universal Forensic Extraction Device), extracts the data from a mobile device and backs it up to a Windows PC, and the second part, called Physical Analyzer, parses and indexes the data so it’s searchable. So, take the raw data out, then turn it into something useful for the user, all in a forensically sound manner. 

As Signal’s blog post explains, this two-part system requires physical access to the phone; these aren’t tools for remotely accessing someone’s phone. And the kind of extraction (a “logical extraction”) at issue here requires the device to be unlocked and open. (A logical extraction is quicker and easier, but also more limited, than the deeper but more challenging type of extraction, a “physical extraction,” which can work on locked devices, though not with 100% reliability. Plus, logical extractions won’t recover deleted or hidden files, unlike physical extractions.) As the blog post says, think of it this way: “if someone is physically holding your unlocked device in their hands, they could open whatever apps they would like and take screenshots of everything in them to save and go over later. Cellebrite essentially automates that process for someone holding your device in their hands.”

Plus, unlike some cop taking screenshots, a logical data extraction preserves the recovered data “in its original state with forensically-sound integrity admissible in a court of law.” Why show that the data were extracted and preserved without altering anything? Because that’s what is necessary to satisfy the rules for admitting evidence in court. U.S. courts have rules in place to ensure that the evidence that is presented is reliable — you don’t want to convict or acquit somebody on the basis of, say, a file whose contents or metadata got corrupted. Cellebrite holds itself out as meeting the standards that U.S. courts require for digital forensics.
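As a minimal sketch of that evidentiary requirement (not Cellebrite's actual implementation), forensic workflows record a cryptographic digest of extracted data at extraction time, so that any later alteration of the evidence is detectable on re-verification:

```python
import hashlib

def digest(evidence: bytes) -> str:
    """Hex SHA-256 of raw evidence bytes, recorded at extraction time."""
    return hashlib.sha256(evidence).hexdigest()

extracted = b"msg 1042: meet at 9pm"   # hypothetical bytes pulled from a device
recorded = digest(extracted)           # stored alongside the forensic report

# Before the report is offered in court, re-verification shows integrity:
assert digest(extracted) == recorded                   # untouched data passes
assert digest(b"msg 1042: meet at 8pm") != recorded    # any edit is detectable
```

What Signal demonstrated, described below, is that code running on the analysis machine itself can rewrite the reports and the integrity records together, which is why the claim of "no detectable timestamp changes or checksum failures" is so damaging.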

But what Signal showed is that Cellebrite tools actually have really shoddy security that could, unless the problem is fixed, allow alteration of data in the reports the software generates when it analyzes phones. Demonstrating flaws in the Cellebrite system calls into question the integrity and reliability of the data extracted and of the reports generated about the extraction. 

That undermines the entire reason for these tools’ existence: compiling digital evidence that is sound enough to be admitted and relied upon in court cases.


What was the hack?

As background: Late last year, Cellebrite announced that one of their tools (the Physical Analyzer tool) could be used to extract Signal data from unlocked Android phones. Signal wasn’t pleased.


Apparently in retaliation, Signal struck back. As last month’s blog post details, Signal creator Moxie Marlinspike and his team obtained a Cellebrite kit (they’re coy about how they got it), analyzed the software, and found vulnerabilities that would allow for arbitrary code execution by a device that's being scanned with a Cellebrite tool.


According to the blog post:

“Looking at both UFED and Physical Analyzer, ... we were surprised to find that very little care seems to have been given to Cellebrite’s own software security. Industry-standard exploit mitigation defenses are missing, and many opportunities for exploitation are present. ...

“[W]e found that it’s possible to execute arbitrary code on a Cellebrite machine simply by including a specially formatted but otherwise innocuous file in any app on a device that is subsequently plugged into Cellebrite and scanned. There are virtually no limits on the code that can be executed.

“For example, by including a specially formatted but otherwise innocuous file in an app on a device that is then scanned by Cellebrite, it’s possible to execute code that modifies not just the Cellebrite report being created in that scan, but also all previous and future generated Cellebrite reports from all previously scanned devices and all future scanned devices in any arbitrary way (inserting or removing text, email, photos, contacts, files, or any other data), with no detectable timestamp changes or checksum failures. This could even be done at random, and would seriously call the data integrity of Cellebrite’s reports into question.”

Signal also created a video demo to show their proof of concept (PoC), which you can watch in the blog post or their tweet about it. They summarized what’s depicted in the video:

[This] is a sample video of an exploit for UFED (similar exploits exist for Physical Analyzer). In the video, UFED hits a file that executes arbitrary code on the Cellebrite machine. This exploit payload uses the MessageBox Windows API to display a dialog with a message in it. This is for demonstration purposes; it’s possible to execute any code, and a real exploit payload would likely seek to undetectably alter previous reports, compromise the integrity of future reports (perhaps at random!), or exfiltrate data from the Cellebrite machine.".... 


Kahoot acquires Clever, the US-based edtech portal, for up to $500M // TechCrunch


By Ingrid Lunden

"Kahoot, the popular Oslo-based edtech company that has built a big business out of gamifiying education and creating a platform for users to build their own learning games, is making an acquisition to double down on K-12 education and its opportunities to grow in the U.S. It is acquiring Clever, a startup that has built a single sign-on portal for educators, students and their families to build and engage in digital learning classrooms, currently used by about 65% of all U.S. K-12 schools. Kahoot said that the deal — coming in a combination of cash and shares — gives Clever an enterprise value of between $435 million and $500 million, dependent on meeting certain performance milestones.

The plan will be to continue growing Clever’s business in the U.S. — which currently employs 175 people — as well as give it a lever for expanding globally alongside Kahoot’s wider stable of edtech software and services.

“Clever and Kahoot are two purpose-led organizations that are equally passionate about education and unleashing the potential within every learner,” said Eilert Hanoa, CEO at Kahoot, in a statement. “Through this acquisition we see considerable potential to collaborate on education innovation to better service all our users — schools, teachers, students, parents and lifelong learners — and leveraging our global scale to offer Clever’s unique platform worldwide. I’m excited to welcome Tyler and his team to the Kahoot family.”

The news came on the same day that Kahoot, which is traded in Oslo with a market cap of $4.3 billion, also announced strong Q1 results in which it also noted it has closed its acquisition of, a provider of whiteboard tools for teachers, for an undisclosed sum.

The same tides that have been lifting Kahoot have also been playing out for Clever and other edtech companies.


The startup was originally incubated in Y Combinator and launched with a vision to be a “Twilio for education”, which in its vision was to create a unified way of being able to tap into the myriad student sign-on systems and educational databases to make it easier for those building edtech services to scale their products and bring on more customers (schools, teachers, students, families) to use them. As with payments, financial services in general, and telecommunications, it turns out that education is also a pretty fragmented market, and Clever wanted to figure out a way to fix the complexity and put it behind an API to make it easier for others to tap into it.
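The fragmentation problem described above can be sketched as a normalization layer; everything here is hypothetical and illustrative, not Clever's actual API or schemas:

```python
def normalize_student(record: dict, source: str) -> dict:
    """Map a student record from a vendor-specific SIS export into one
    canonical schema. Both source formats below are invented for
    illustration; a real integration layer would support many."""
    if source == "sis_a":
        return {"id": record["studentId"],
                "name": record["fullName"],
                "grade": record["gradeLevel"]}
    if source == "sis_b":
        return {"id": record["pupil_number"],
                "name": f"{record['first']} {record['last']}",
                "grade": record["year"]}
    raise ValueError(f"unknown source: {source}")

# An edtech app codes against the canonical schema once, regardless of
# which student-information system a district actually runs:
normalize_student(
    {"studentId": "42", "fullName": "Ada Lovelace", "gradeLevel": 7}, "sis_a"
)
```

This is the sense in which a "Twilio for education" puts a fragmented market behind one API: the per-vendor mess lives in the integration layer rather than in every application.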


Over time it built that out also with a marketplace (application gallery in its terminology) of some 600 software providers and application developers that integrate with its SSO, which in turn becomes a way for a school or district to subsequently expand the number of edtech tools that it can use. This has been especially critical in the last year as schools have been forced to close in-person learning and go entirely virtual to help stave off the spread of the COVID-19 pandemic.


Clever has found a lot of traction for its approach both with schools and investors. With the former, Clever says that it’s used by 89,000 schools and some 65% of K-12 school districts (13,000 overall) in the U.S., with that figure including 95 of the 100 largest school districts in the country. This works out to 20 million students logging in monthly and 5.6 billion learning sessions.

The latter, meanwhile, has seen the company raise from a pretty impressive range of investors, including YC current and former partners like Paul Graham and Sam Altman, GSV, Founders Fund, Lightspeed and Sequoia. It raised just under $60 million, which may sound modest these days but remember that it’s been around since 2012, when edtech was not so cool and attention-grabbing, and hasn’t raised money since 2016, which in itself is a sign that it’s doing something right as a business.


Indeed, Kahoot noted that Clever projects $44 million in billed revenues for 2021, with an annual revenue growth rate of approximately 25% CAGR in the last three years, and it has been running the business on “a cash flow neutral basis, redeploying all cash into development of its offerings,” Kahoot noted.


Kahoot itself has had a strong year driven in no small part by the pandemic and the huge boost that resulted in remote learning and remote work. It noted in its results that it had 28 million active accounts in the last twelve months representing 68% growth on the year before, with the number of hosted games in that period at 279 million (up 28%) with more than 1.6 billion participants of those games (up 24%). Paid subscriptions in Q1 were at 760,000, with 255,000 using the “work” (B2B) tier; 275,000 school accounts; and 230,000 in its “home and study” category. Annual recurring revenue is now at $69 million ($18 million a year ago for the same quarter), while actual revenue for the quarter was $16.2 million (up from $4.2 million a year ago), growing 284%.


The company, which is also backed by the likes of Disney, Microsoft and Softbank, has made a number of acquisitions to expand. Clever is the biggest of these to date."...


For full post, please visit: 


Privacy and Security in State Postsecondary Data Systems: Strong Foundations 2020 // State Higher Education Executive Officers Association (SHEEO)

State postsecondary data systems contain a wealth of information—including detailed records about individuals—that allow states to analyze and improve their postsecondary education systems. The entities that maintain these systems...


For SHEEO landing page, click here


To download full report, click here: 


Resources to Learn More about Algorithmic Bias, Data Harms, and Data Justice 



The Problems with Immersive Advertising: In AR/VR, Nobody Knows You Are an Ad // Heller & Bar-Zeev, 2021, Journal of Online Trust and Safety


"Imagine five years from now, you’re walking down a street wearing your own mixed reality glasses. They’re sleek, comfortable and increasingly fashionable. A virtual car drives by—it’s no coincidence that it’s the exact model you’ve been saving for. Your level of interest is noted. A hipster passes on the sidewalk wearing some limited-edition sneakers. Given your excitement, a prompt to buy “copy #93/100” shows up nearby. You jump at the chance, despite the hefty price. They’ll be waiting for you when you get home.


Cinema and television have long been imagining what advertising will look like in XR (known alternatively as eXtended Reality and as the entire Mixed Reality continuum from Virtual Reality (VR) to Augmented reality (AR)). We’re reaching the point where science fiction is rapidly becoming reality.


If you’ve watched professional sports on TV in the past decade, you’ve almost certainly experienced a form of augmented advertising. Your friend watching the same game across the country will likely see different ads—not just the commercials but the actual billboards in the stadium behind the players may be replaced to suit the local advertising market.


VR-based advertising is in its infancy, and it looks very different from traditional advertising because of the way immersive media works. Instead of simple product placement, think about immersive ads as placement within the product. The advertising is experiential, using characteristics of media and entertainment that came before, alongside embodiment (a feeling of physical presence in a virtual space) and full immersion into a digital world. In immersive environments, creators completely determine what is seen, heard, and experienced by the user. This is not just influencing your feelings and impulses, but placing you in a controlled environment that your brain will interpret as real.


As advertising in immersive contexts takes off, we should do more than marvel at the Pokémon we meet in the street. AR and VR experiences have profound effects on cognition that are different from how we take in and process information in other media. For example, our brains interpret an assault in a VR world, even in a cartoonish environment, just as if we were being attacked in our own homes. These implications get even more complex when we consider paid content: how will the unique characteristics of immersive worlds be used to persuade and drive behaviors?"....


For full document, please visit: 



Heller, B., & Bar-Zeev, A. (2021). The Problems with Immersive Advertising: In AR/VR, Nobody Knows You Are an Ad. Journal of Online Trust and Safety, October 2021, 1-14.

No comment yet.
Scooped by Roxana Marachi, PhD!

Academic Integrity and Anti-Black Aspects of Educational Surveillance and E-Proctoring (Parnther & Eaton, 2021) // Teachers College Record 


Academic Integrity and Anti-Black Aspects of Educational Surveillance and E-Proctoring

by Ceceilia Parnther & Sarah Elaine Eaton - June 23, 2021


"In this commentary, we address issues of equity, diversity, and inclusion as related to academic integrity. We speak specifically to the ways in which Black and other racialized minorities may be over-represented in those who get reported for academic misconduct, compared to their White peers. We further address the ways in which electronic and remote proctoring software (also known as e-proctoring) discriminates against students of darker skin tones. We conclude with a call to action to educational researchers everywhere to pay close attention to how surveillance technologies are used to propagate systemic racism in our learning institutions.



The rapid pivot to remote teaching during the COVID-19 pandemic resulted in colleges and universities turning to technological services to optimize classroom management with more frequency than ever before. Electronic proctoring technology (also known as e-proctoring or remote invigilation) is one such fast-growing service, with an expected industry valuation estimated to be $10 Billion by 2026 (Learning Light, 2016). Students and faculty are increasingly concerned about the role e-proctoring technologies play in college exams.




We come to this work as educators, advocates, and scholars of academic integrity and educational ethics.


Ceceilia’s connection to this work lies in her personal and professional identities: “I am a Black, low socioeconomic status, first-generation college graduate and faculty member. The experiences of students who share my identity deeply resonate with me. While I’ve been fortunate to have support systems that helped me navigate college, I am keenly aware that my experience and opportunities are often the exceptions rather than the norm in a system historically designed to disregard, if not exclude, the experiences of minoritized populations. There were many moments where, in honor of the support I received, my career represents a commitment as an advocate, researcher, and teacher to student success and equitable systems in education.”


Sarah’s commitment to equity, diversity, and inclusion stems from experiences as a (White) first-generation student who grew up under the poverty line: “My formative experiences included living in servants’ quarters while my single mother worked as a full-time servant to a wealthy British family (see Eaton, 2020). Later, we moved to Halifax, Nova Scotia, where we settled in the North End, a section of the city that is home to many Black and Irish Catholic residents. Social and economic disparities propagated by race, social class, and religion impacted my lived experiences from an early age. I now work as a tenured associate professor of education, focusing on ethics and integrity in higher education, taking an advocacy and social justice approach to my research.”




Higher rates of reporting and adjudicated instances of academic misconduct make Black students especially susceptible to cheating accusations. The disproportionality of Black students charged and found responsible for student misconduct is most readily seen in a K–12 context (Fabelo et al., 2011). However, research supports this as a reasonable assertion in the higher education context (Trachtenberg, 2017; Bobrow, 2020), primarily due to implicit bias (Gillo, 2017). In other words, Black and other minoritized students are already starting from a position of disadvantage in terms of being reported for academic misconduct.


The notion of over-representation is important here. Over-representation happens when individuals from a particular sub-group are reported for crimes or misconduct more often than those of the dominant White population. When we extend this notion to academic misconduct, we see evidence that Black students are reported more often than their White peers. This is not indicative that Black students engage in more misconduct behaviors, but rather it is more likely that White students are forgiven or simply not reported for misconduct as often. The group most likely to be forgiven for student conduct issues without ever being reported are White females, leaving non-White males to be among those most frequently reported for misconduct (Fabelo et al., 2011). Assumptions such as these perpetuate a system that views White student behavior as appropriate, unchallenged, normative, and therefore more trustworthy. These issues are of significant concern in an increasingly diverse student environment.




For Black and other students of color, e-proctoring represents a particular threat to equity in academic integrity. Although technology in and of itself is not racist, a disproportionate impact of consequences experienced by Black students is worthy of further investigation. Many educational administrators have subscribed to the idea that outsourcing test proctoring to a neutral third party is an effective solution. The problem is these ideas are often based on sales pitches, rather than actual data. There is a paucity of data about the effectiveness of e-proctoring technologies in general and even less about its impact on Black and other racialized minority students.

However, there are plenty of reports that show that facial recognition software unfairly discriminates against people with darker skin tones. For example, Robert Julian-Borchak Williams, a Black man from Detroit, was wrongfully accused and arrested on charges of larceny on the basis of facial recognition software—which, as it turns out, was incorrect (Hill, 2020). Williams described the experience as “humiliating” (Hill, 2020). This example highlights not only the inequities of surveillance technologies, but also the devastating effects the software can have when the system is faulty.


Algorithms often make Whiteness normative, with Blackness then reduced to a measure of disparity. Facial recognition software viewing White as normative is often unable to distinguish phenotypical Black individuals at higher rates than Whites (Hood, 2020). Surveillance of living spaces for authentication creates uncomfortable requirements that are anxiety-inducing and prohibitive.


E-proctoring companies often provide colleges and universities contracts releasing them of culpability while also allowing them to collect biodata. For Black students, biodata collection for unarticulated purposes represents concerns rooted in a history of having Black biological information used in unethical and inappropriate ways (Williams, 2020).




As educators and researchers specializing in ethics and integrity, we do not view academic integrity research as being objective. Instead, we see academic integrity inquiry as the basis for advocacy and social justice. We conclude with a call to action to educational researchers everywhere to pay close attention to how surveillance technologies are used to propagate systemic racism in our learning institutions. This call should include increased research on the impact of surveillance on student success, examination, and accountability of the consequences of institutional use of and investment in e-proctoring software, and centering of student advocates who challenge e-proctoring."




Bobrow, A. G. (2020). Restoring honor: Ending racial disparities in university honor systems. Virginia Law Review, 106, 47–70.


Eaton, S. E. (2020). Challenging and critiquing notions of servant leadership: Lessons from my mother. In S. E. Eaton & A. Burns (Eds.), Women negotiating life in the academy: A Canadian perspective (pp. 15–23). Springer.


Fabelo, T., Thompson, M. D., Plotkin, M., Carmichael, D., Marchbanks III, M. P., & Booth, E. A. (2011). Breaking schools’ rules: A statewide study of how school discipline relates to students’ success and juvenile justice involvement. Retrieved from


Hood, J. (2020). Making the body electric: The politics of body-worn cameras and facial recognition in the United States. Surveillance & Society, 18(2), 157–169.


Learning Light. (2019, February 19). Online proctoring / remote invigilation – Soon a multibillion dollar market within eLearning & assessment. Retrieved May 23, 2020, from


Trachtenberg, B. (2017). How university Title IX enforcement and other discipline processes (probably) discriminate against minority students. Nevada Law Journal, 18(1), 107–164.


Williams, D. P. (2020). Fitting the description: Historical and sociotechnical elements of facial recognition and anti-Black surveillance. Journal of Responsible Innovation, 7(sup1), 1–10."

Cite This Article as: Teachers College Record, Date Published: June 23, 2021 ID Number: 23752, Date Accessed: 6/25/2021 5:07:12 PM


For full post, please visit: 

Scooped by Roxana Marachi, PhD!

Technology and Inequality, Surveillance, and Privacy during COVID-19 // Denise Anthony, University of Michigan


By Denise Anthony, Professor, Sociology and Health Management and Policy, University of Michigan 


"Since the start of the pandemic, we have spent more time in our homes than we ever expected to—working from home (if fortunate to have that possibility); learning from home; being entertained with streaming services; and doing virtual happy hours, birthdays, and holidays. Now we even have doctor’s visits and consultations with our therapists from home.

The increasing availability of so-called Internet of Things (IoT) technology (i.e., internet-enabled devices such as smart TVs, smart speakers, video doorbells, and voice-activated virtual assistants like Amazon Alexa or Google Assistant), along with our smartphones, computers, and Wi-Fi internet access, has comforted, entertained, facilitated work and learning, and safeguarded us at home during the pandemic. Estimates suggest that roughly three-quarters of American adults have broadband internet service at home and about 69 percent of people in the U.S. have an IoT device/system in their home.

But while these computing and “smart” technologies were facilitating our interaction, work, school and health care, they were also becoming embedded into our social worlds in ways that have important sociological implications. The distinct power dynamics, social conditions, and intimate situations of the home create implications for inequality, privacy, and surveillance in the smart home.

Like so much else during COVID-19 (Henderson et al. 2020; Perry et al. 2021), these virtual activities in the home have revealed much about inequality in our society, including the gulf between technology haves and have-nots (Campos-Castillo 2014; Hargittai 2002; Puckett 2020).


Working from Home

It is important to remember that, even before the pandemic, homes were already sites of work. The impact of the pandemic on domestic workers, for example, who historically have had few protections as employees (Maich 2020), as well as on many of the elderly and people with disabilities who depend on them, has been devastating.

Those lucky enough to be able to work from home after the start of the pandemic often needed significant resources to do so—computers, high-bandwidth internet access, and cameras (to attend virtual meetings), not to mention a quiet room in which to work. But who is paying for all of that? Some companies made headlines by helping workers equip home offices, but many workers simply had to absorb those costs.

What workers probably didn’t know they were also getting in the bargain was the potential for their employer to monitor them in their home. Technological surveillance of workers is as old as the industrial revolution, and modern tracking of workers (also Atwell 1987; Lyon 1994) via embedded cameras (now sometimes with facial recognition software), location tracking, electronic monitoring, and even individual keystrokes, is increasing.

The relative abilities of workers to manage privacy and resist surveillance are unevenly distributed, with particularly negative consequences for individuals and communities with low status and few resources. The potential that work-from-home may be extended for some workers post-pandemic, coupled with the increasing presence of home IoT, fuels the capacity for surveillance in the home. But surveillance is not merely about technology. It not only amplifies social inequalities (Brayne 2020; Browne 2015; Benjamin 2019; Eubanks 2018), it also has long-term implications for the organization and operation of power in society (Zuboff 2019).


School from Home

Space constraints and required computing resources have been especially relevant for families grappling with virtual education during the pandemic (Calarco 2020; Puckett and Rafalow 2020). The long-term implications of virtual schooling will need to be studied for years to come, but some of the devastating harms, including from the invasive surveillance that technology and government enabled, are already clear, particularly for the most vulnerable students. It is important to recognize that examples of surveillance—like the student incarcerated for not completing her online schoolwork—illustrate that it is not technology alone that produces surveillance. It is technology used in specific ways by specific actors (in this case, teachers, school systems, governments) that produces surveillance (Lyon 2007, 2011, 2018).


Health Care from Home

In the initial period of the pandemic during spring 2020 when much of the world shut down, health care—other than ERs and ICUs treating severely ill and dying COVID patients—nearly ground to a halt as both providers and patients sought to avoid in-person contact. But chronic conditions still needed monitoring and other illness and injuries still happened.

Telehealth visits filled the void for some (Cantor et al. 2021). People who already have broadband, home Wi-Fi, necessary devices (smartphones, tablets, or laptops), and experience using technologies like online patient portals, can more easily engage in telehealth than those without them (Campos-Castillo and Anthony 2021; Reed et al. 2020). However, populations with chronic health needs are generally lower resourced (Phelan et al. 2010) and disproportionately people of color (Williams et al. 2019), but lower resourced patients and some minority racial and ethnic groups are less likely to be offered technologies like patient portals (Anthony et al. 2018).

IoT further increases the potential for health tracking in the home. We track our own health using smart watches and other wearables, like internet-enabled glucometers for diabetics. And now, smart sensors can be installed in the home to detect falls, and virtual assistants keep elders company while also enabling distant family members to check in. The socio-technical landscape of these smart homes creates potential benefits for health but also raises privacy risks. Privacy concerns can influence whether people seek care at all or disclose information to doctors if they do, potentially having consequences for relationships with providers and for family dynamics as well.

But privacy management has become increasingly complex for individuals. In part, this is because privacy management is mediated by technology and the companies that control the technology (and data) in ways that are often invisible, confusing, or uncontrollable. While I can decide whether to wear a fitness tracker or use a virtual assistant, I have very little ability to decide what data about me flows to the company or how the company uses it. But importantly, these data used to evaluate, engage, or exclude me are not exclusively about me. These kinds of data also implicate others—those who live with or near me, are connected to me, or possibly just “like” me in some categorical way decided by the company (and their algorithms). And those others can then also be evaluated, engaged, or excluded based on the data. Thus, privacy has important social implications for surveillance and social control, and also for group boundaries, inequality, cohesion, and collective action.


Societal Implications

The sociological impact and implications of increased technology use in the home extend far beyond these important aspects of inequality, surveillance, and privacy. Such first-order effects are important to understand in order to develop interventions and policies to ameliorate them. But sociologists studying technology also consider what Claude Fischer described as the second-order effects—the ways that social relations, institutions, and systems intersect with technology in ways that change the culture, organization, and structure of society. For example, work-from-home is likely to have long term consequences for employment relations, workplace cultures, and the structures of occupations and professions. Distance-based learning, which was well underway prior to the pandemic, may expand learning and access beyond the constraints of physical schools, but also may alter the training and practice of teachers, as well as the political dynamics of public school districts. Technologies in health care can enhance or limit access, improve or harm health, reduce or exacerbate health disparities, but also alter the doctor-patient relationship, the practice of medicine, and the delivery of health care.


This does not mean that technology drives social change. That kind of simplistic technological determinism has long been critiqued by sociologists. Rather, technology offers an entry point to observe the sociological forces that shape, for example, the production and distribution of new technologies. Think of the political economy of surveillance capitalism, so carefully detailed by Shoshana Zuboff; the institutional and professional practices of those developing technology; and the economic, regulatory, and organizational dynamics driving adoption of new devices and systems. Sociologists also study how social conditions—dynamics of social interaction, existing social norms and status structures, and systemic inequalities—shape how technologies are used, in ways both expected and unexpected. It is these dynamics and sociological forces that drive the social changes we associate with, and sometimes mistakenly attribute to, the technologies themselves.

The ongoing threat of COVID-19 may keep us in our homes a while longer, relying on existing and new technologies for work, school, health care, and more. Sociological research can help to make sense of the drivers and current impact of new technologies that have become widespread. Sociology is also necessary for understanding the deeper and long-term consequences and social shifts that have only just begun.


Any opinions expressed in the articles in this publication are those of the author and not the American Sociological Association.


For full post, please visit: 

Scooped by Roxana Marachi, PhD!

Parents Nationwide File Complaints with U.S. Department of Education; Seek to Address Massive Student Data Privacy Protection Failures // Parents' Coalition of Montgomery County, MD

 "On July 9, 2021, parents of school-age children from Maryland to Alaska, in collaboration with the Student Data Privacy Project (SDPP), will file over a dozen complaints with the U.S. Department of Education (DoE) demanding accountability for the student data that schools share with Educational Technology (EdTech) vendors.
Formed during the pandemic, SDPP is comprised of parents concerned about how their children’s personally identifiable information (PII) is increasingly being mined by EdTech vendors, with the consent of our schools, and without parental consent or school oversight.
With assistance and support from SDPP, 14 parents from 9 states filed requests with their school districts under the Family Educational Rights and Privacy Act (FERPA) seeking access to the PII collected about their children by EdTech vendors. No SDPP parents were able to obtain all of the requested PII held by EdTech vendors, a clear violation of FERPA.
One parent in Maryland never received a response. A New Jersey parent received a generic reply with no date, school name or district identification. Yet a Minnesota parent received over 2,000 files, none of which contained the metadata requested, but did reveal a disturbing amount of personal information held by an EdTech vendor, including the child’s baby pictures, videos of her in an online yoga class, her artwork and answers to in-class questions.
Lisa Cline, SDPP co-founder and parent in Maryland said, “When I tried to obtain data gathered by one app my child uses in class, the school district said, ‘Talk to the vendor.’ The vendor said, ‘Talk to the school.’ This is classic passing of the buck. And the DoE appears to be looking the other way.”
FERPA, a statute enacted in 1974 — almost two decades before the Internet came into existence, at a time when technology in schools was limited to mimeograph machines and calculators  — affords parents the right to obtain their children’s education records, to seek to have those records amended, and to have control over the disclosure of the PII in those records.
Unfortunately, this law is now outdated. Since the digital revolution, schools are either unaware, unable or unwilling to apply FERPA to EdTech vendors. Before the pandemic, the average school used 400-1,000 online tools, according to the Student Data Privacy Consortium. Remote learning has increased this number exponentially.
SDPP co-founder, privacy consultant, law professor and parent Joel Schwarz noted that “DOE’s failure to enforce FERPA means that EdTech providers are putting the privacy of millions of children at risk, leaving these vendors free to collect, use and monetize student PII, and share it with third parties at will.”
A research study released by the Me2B Alliance in May 2021, showed that 60% of school apps send student data to potentially high-risk third parties without knowledge or consent. SDPP reached out to Me2B and requested an audit of the apps used by schools in the districts involved in the Project. Almost 70% of the apps reviewed used Software Development Kits (SDKs) that posed a “High Risk” to student data privacy, and almost 40% of the apps were rated “Very High Risk,” meaning the code used is known to be associated with registered Data Brokers. Even more concerning, Google showed up in approximately 80% of the apps that included an SDK, and Facebook ran a close second, showing up in about 60% of the apps.
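The tiered rating described above amounts to a simple classification rule: an app is flagged when its embedded SDKs appear on lists of high-risk third parties, and flagged more severely when an SDK is associated with a registered data broker. A minimal sketch of that kind of audit logic is below; the SDK names, lists, and apps are entirely illustrative and are not Me2B's actual methodology or data.

```python
# Hypothetical sketch of an SDK-based risk audit. An app's bundled SDKs are
# checked against two illustrative lists: known high-risk third-party SDKs,
# and SDKs whose code is associated with registered data brokers.
# All names here are invented for illustration.

KNOWN_THIRD_PARTY_SDKS = {"ads-analytics-kit", "social-share-kit"}
REGISTERED_DATA_BROKER_SDKS = {"broker-tracking-kit"}

def rate_app(sdks):
    """Return a risk tier for an app based on the SDKs it embeds."""
    sdks = set(sdks)
    if sdks & REGISTERED_DATA_BROKER_SDKS:
        return "Very High Risk"  # code associated with a registered data broker
    if sdks & KNOWN_THIRD_PARTY_SDKS:
        return "High Risk"       # sends data to potentially high-risk third parties
    return "Lower Risk"

# Illustrative school apps and their (invented) embedded SDKs
apps = {
    "MathPractice": ["ads-analytics-kit"],
    "ReadingLog": ["broker-tracking-kit", "social-share-kit"],
    "ClassPlanner": [],
}

for name, sdks in apps.items():
    print(name, "->", rate_app(sdks))
```

The point of the sketch is that the severity ordering matters: a data-broker SDK dominates any other finding, which mirrors how the audit's "Very High Risk" category subsumes apps that would otherwise merely be "High Risk."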
Emily Cherkin, an SDPP co-founder who writes and speaks nationally about screen use as The Screentime Consultant, noted, “because these schools failed to provide the data requested, we don’t know what information is being collected about our children, how long these records are maintained, who has access to them, and with whom they’re being shared.”
“FERPA says that parents have a right to know what information is being collected about their children, and how that data is being used,” according to Andy Liddell, a federal court litigator in Austin, TX and another SDPP co-founder. “But those rights are being trampled because neither the schools nor the DoE are focused on this issue.”

The relief sought from the DoE includes requiring schools to:
•  actively oversee their EdTech vendors, including regular audits of vendors’ access, use and disclosure of student PII and publicly posting the results of those audits so that parents can validate that their children’s data is being adequately protected;

•  provide meaningful access to records held by EdTech in response to a FERPA request, clarifying that merely providing a student’s account log-in credentials, or referring the requester to the Vendor, does not satisfy the school’s obligations under FERPA;

•  ensure that when their EdTech vendors share student PII with third parties, the Vendor and the school maintain oversight of third-party access and use of that PII, and apply all FERPA rights and protections to that data, including honoring FERPA access requests;

•  protect all of a student’s digital footprints — including browsing history, searches performed, websites visited, etc. (i.e., metadata) — under FERPA, and ensure that all of this data is provided in response to a FERPA access request.

# # # 

If you would like more information, please contact Joel Schwarz at

Parents are invited to join the Student Data Privacy Project. A template letter to school districts can be downloaded from the SDPP website:

SDPP is an independent parent-led organization founded by Joel Schwarz, Andy Liddell, Emily Cherkin and Lisa Cline. Research and filing assistance provided pro bono by recent George Washington University Law School graduate Gina McKlaveen.
Scooped by Roxana Marachi, PhD!

Companies are hoarding personal data about you. Here’s how to get them to delete it. // Washington Post

 By Tatum Hunter

"In February, Whitney Merrill, a privacy attorney who lives in San Francisco, asked audio-chat company Clubhouse to disclose how many people had shared her name and phone number with the company as part of its contact-sharing feature, in hopes of getting that data deleted.


As she waited to hear from the company, she tweeted about the process and also reached out to multiple Clubhouse employees for help.


Only after that, and weeks after the California Consumer Privacy Act’s 45-day deadline for companies to respond to data deletion requests, did Clubhouse respond and comply with her request to delete her information, Merrill said.

“They eventually corrected it after a lot of pressure because I was fortunate enough to be able to tweet about it,” she said.


The landmark California Consumer Privacy Act (CCPA), which went into effect in 2020, gave state residents the right to ask companies to not sell their data, to provide a copy of that data or to delete that data. Virginia and Colorado have also passed consumer privacy laws, which go into effect in 2023.
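The 45-day response window is mechanical enough to track with simple date arithmetic, which is roughly what advocates do when documenting late responses like the one described above. A minimal sketch follows; the request dates are illustrative, and this ignores the extensions the statute permits in some circumstances.

```python
from datetime import date, timedelta

# CCPA response window for consumer data requests, per the article above
CCPA_RESPONSE_WINDOW = timedelta(days=45)

def response_due(submitted: date) -> date:
    """Date by which a company should respond to a CCPA request."""
    return submitted + CCPA_RESPONSE_WINDOW

def is_overdue(submitted: date, today: date) -> bool:
    """True if the response deadline has passed without counting extensions."""
    return today > response_due(submitted)

# Illustrative request submitted in February, checked in April
submitted = date(2021, 2, 1)
print("Response due by:", response_due(submitted))
print("Overdue as of April 1?", is_overdue(submitted, date(2021, 4, 1)))
```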


As more states adopt privacy legislation — Massachusetts, New York, Minnesota, North Carolina, Ohio and Pennsylvania may be next — more of us will get the right to ask for our data to be deleted. Some companies — including Spotify, Uber and Twitter — told us they already honor requests from people outside California when it comes to data deletion.


But that doesn’t mean it always goes smoothly. Data is valuable to companies, and some don’t make it easy to scrub, privacy advocates say. Data deletion request forms are often tucked away, processes are cumbersome, and barriers to verifying your identity slow things down. Sometimes personal data is tied up in confusing legal requirements, so companies can’t get rid of it. Other times, the technical and personnel burden of data requests is simply too much for companies to handle.


Exercising CCPA rights can be an uphill battle, Consumer Reports found in a 2020 study involving more than 400 California consumers who submitted “do not sell my personal data” requests to registered data brokers. Sixty-two percent of the time, participants either couldn’t figure out how to submit the request or were left with no idea whether it worked."


“It doesn’t bode well for data deletion,” Maureen Mahoney, a Consumer Reports analyst, said. People have to verify their identities before companies can delete data, which poses an extra obstacle.

But that doesn’t mean it’s a lost cause. Many data deletion requests are successful, according to company metrics, and things get easier if you know where to look. Most companies are doing their best to figure out how to deal with patchwork privacy laws and an influx of data rights requests, Merrill said."...


For full post, please visit: 

Scooped by Roxana Marachi, PhD!

Rejecting Test Surveillance in Higher Education (Barrett, 2021) // Georgetown University Law Center 

The rise of remote proctoring software during the COVID-19 pandemic illustrates the dangers of surveillance-enabled pedagogy built on the belief that students can’t be trusted. These services, which deploy a range of identification protocols, computer and internet access limitations, and human or automated observation of students as they take tests remotely, are marketed as necessary to prevent cheating. But the success of these services in their stated goal is ill-supported at best and discredited at worst, particularly given their highly over-inclusive criteria for “suspicious” behavior. Meanwhile, the harms they inflict on students are clear: severe anxiety among test-takers, concerning data collection and use practices, and discriminatory flagging of students of color and students with disabilities have provoked widespread outcry from students, professors, privacy advocates, policymakers, and sometimes universities themselves.


To make matters worse, the privacy and civil rights laws most relevant to the use of these services are generally inadequate to protect students from the harms they inflict.

Colleges and universities routinely face difficult decisions that require reconciling conflicting interests, but whether to use remote proctoring software isn’t one of them. Remote proctoring software is not pedagogically beneficial, institutionally necessary, or remotely unavoidable, and its use further entrenches inequities in higher education that schools should be devoted to rooting out. Colleges and universities should abandon remote proctoring software, and apply the lessons from this failed experiment to their other existing or potential future uses of surveillance technologies and automated decision-making systems that threaten students’ privacy, access to important life opportunities, and intellectual freedom.


Keywords: privacy, surveillance, automated decision-making, algorithmic discrimination, COVID-19, higher education, remote proctoring software, FERPA, FTC, ADA


Suggested Citation:

Barrett, Lindsey, Rejecting Test Surveillance in Higher Education (June 21, 2021). Available at SSRN: or
For original post on SSRN, please visit 

Scooped by Roxana Marachi, PhD!

The Color of Surveillance: Monitoring of Poor and Working People // Center on Privacy and Technology // Georgetown Law



Borrowed a School Laptop? Mind Your Open Tabs // WIRED


Students—many from lower-income households—were likely to use school-issued devices for remote learning. But the devices often contained monitoring software.


By Sidney Fussell

"When tens of millions of students suddenly had to learn remotely, schools lent laptops and tablets to those without them. But those devices typically came with monitoring software, marketed as a way to protect students and keep them on-task. Now, some privacy advocates, parents, and teachers say that software created a new digital divide, limiting what some students could do and putting them at increased risk of disciplinary action.

One day last fall, Ramsey Hootman’s son, then a fifth grader in the West Contra Costa School District in California, came to her with a problem: He was trying to write a social studies report when the tabs on his browser kept closing. Every time he tried to open a new tab to study, it disappeared.


It wasn’t an accident. When Hootman emailed the teacher, she says she was told, “‘Oh, surprise, we have this new software where we can monitor everything your child is doing throughout the day and can see exactly what they're seeing, and we can close all their tabs if we want.’”


Hootman soon learned that all of the district’s school-issued devices use Securly, student-monitoring software that lets teachers see a student’s screen in real time and even close tabs if they discover a student is off-task. During class time, students were expected to have only two tabs open. After Hootman’s complaint, the district raised the limit to five tabs.


But Hootman says she and other parents wouldn’t have chosen school-issued devices if they knew the extent of the monitoring. (“I’m lucky that’s an option for us,” she says.) She also worried that when monitoring software automatically closes tabs or otherwise penalizes multitasking, it makes it harder for students to cultivate their own ability to focus and build discipline.


“As parents, we spend a lot of time helping our kids figure out how to balance schoolwork and other stuff,” she says. “Obviously, the internet is a big distraction, and we're working with them on being able to manage distractions. You can't do that if everything is already decided for you.”


Ryan Phillips, communications director for the school district, says Securly’s features are designed to protect students’ privacy, are only required for district-issued devices, and that teachers can only view a student’s computer during school hours. Securly did not respond to a request for comment before this article was published. After it was initially published, a Securly spokesperson said district administrators can disable screen viewing, the product notifies students when a class session begins, and schools can limit teachers to starting class sessions only during school hours.


In a report earlier this month, the Center for Democracy and Technology, a Washington, DC-based tech policy nonprofit, said the software installed on school-issued computers essentially created two classes of students. Those from lower-income households were more likely to use school-issued computers, and therefore more likely to be monitored.


“Our hypothesis was there are certain groups of students, more likely those attending lower-income schools, who are going to be more reliant on school-issued devices and therefore be subject to more surveillance and tracking than their peers who can essentially afford to opt out,” explains Elizabeth Laird, one of the report’s authors.


The report found that Black and Hispanic families were more reliant on school devices than their white counterparts and were more likely to voice concern about the potential disciplinary consequences of the monitoring software.


The group said monitoring software, from companies like Securly and GoGuardian, offers a range of capabilities, from blocking access to adult content and flagging certain keywords (slurs, profanity, terms associated with self-harm, violence, etc.) to allowing teachers to see students’ screens in real time and make changes." 


Protecting Kids Online: Internet Privacy and Manipulative Marketing - U.S. Senate Subcommittee on Consumer Protection, Product Safety, and Data Security


"WASHINGTON, D.C.— U.S. Senator Richard Blumenthal (D-CT), the Chair of the Subcommittee on Consumer Protection, Product Safety, and Data Security, will convene a hearing titled, “Protecting Kids Online: Internet Privacy and Manipulative Marketing” at 10 a.m. on Tuesday, May 18, 2021. Skyrocketing screen time has deepened parents’ concerns about their children’s online safety, privacy, and wellbeing. Apps such as TikTok, Facebook Messenger, and Instagram draw younger audiences onto their platforms raising concerns about how their data is being used and how marketers are targeting them. This hearing will examine the issues posed by Big Tech, child-oriented apps, and manipulative influencer marketing. The hearing will also explore needed improvements to our laws and enforcement, such as the Children’s Online Privacy Protection Act, child safety codes, and the Federal Trade Commission’s advertising disclosure guidance.


  • Ms. Angela Campbell, Professor Emeritus, Georgetown Law
  • Mr. Serge Egelman, Research Director, Usable Security and Privacy, International Computer Science Institute, University of California Berkeley
  • Ms. Beeban Kidron, Founder and Chair, 5Rights

Hearing Details:

Tuesday, May 18, 2021

10:00 a.m. EDT

Subcommittee on Consumer Protection, Product Safety, and Data Security (Hybrid)  




More than 40 attorneys general ask Facebook to abandon plans to build Instagram for kids //


By Lauren Feiner

"Attorneys general from 44 states and territories urged Facebook to abandon its plans to create an Instagram service for kids under the age of 13, citing detrimental health effects of social media on kids and Facebook’s reportedly checkered past of protecting children on its platform.

Monday’s letter follows questioning from federal lawmakers who have also expressed concern over social media’s impact on children. The topic was a major theme that emerged from lawmakers at a House hearing in March with Facebook CEO Mark Zuckerberg, Google CEO Sundar Pichai and Twitter CEO Jack Dorsey. Republican staff for that committee later highlighted online protection for kids as the main principle lawmakers should consider in their legislation.


BuzzFeed News reported in March that Facebook had been exploring creating an Instagram service for children, based on internal documents it obtained.

Protecting children from harm online appears to be one of the rare motivators both Democrats and Republicans can agree on, which puts additional pressure on any company creating an online service for kids.

In Monday’s letter to Zuckerberg, the bipartisan group of AGs cited news reports and research findings that social media and Instagram, in particular, had a negative effect on kids’ mental well-being, including lower self-esteem and suicidal ideation.

The attorneys general also said young kids “are not equipped to handle the range of challenges that come with having an Instagram account.” Those challenges include online privacy, the permanence of internet posts, and navigating what’s appropriate to view and share. They noted that Facebook and Instagram had reported 20 million child sexual abuse images in 2020.

Officials also based their skepticism on Facebook’s history with products aimed at children, saying it “has a record of failing to protect the safety and privacy of children on its platform, despite claims that its products have strict privacy controls.” Citing news reports from 2019, the AGs said that Facebook’s Messenger Kids app for children between 6 and 12 years old “contained a significant design flaw that allowed children to circumvent restrictions on online interactions and join group chats with strangers that were not previously approved by the children’s parents.” They also referenced a recently reported “mistake” in Instagram’s algorithm that served diet-related content to users with eating disorders.


“It appears that Facebook is not responding to a need, but instead creating one, as this platform appeals primarily to children who otherwise do not or would not have an Instagram account,” the AGs wrote. “In short, an Instagram platform for young children is harmful for myriad reasons. The attorneys general urge Facebook to abandon its plans to launch this new platform.”

In a statement, a Facebook spokesperson said the company has “just started exploring a version of Instagram for kids,” and committed to not show ads “in any Instagram experience we develop for people under the age of 13.”

“We agree that any experience we develop must prioritize their safety and privacy, and we will consult with experts in child development, child safety and mental health, and privacy advocates to inform it. We also look forward to working with legislators and regulators, including the nation’s attorneys general,” the spokesperson said.

After publication, Facebook sent an updated statement acknowledging that since children are already using the internet, “We want to improve this situation by delivering experiences that give parents visibility and control over what their kids are doing. We are developing these experiences in consultation with experts in child development, child safety and mental health, and privacy advocates.”

Facebook isn’t the only social media platform that’s created services for children. Google-owned YouTube has a kids service, for example, though with any internet service, there are usually ways for children to lie about their age to access the main site. In 2019, YouTube reached a $170 million settlement with the Federal Trade Commission and New York attorney general over claims it illegally earned money from collecting the personal information of kids without parental consent, allegedly violating the Children’s Online Privacy Protection Act (COPPA).

Following the settlement, YouTube said in a blog post it will limit data collection on videos aimed at children, regardless of the age of the user actually watching. It also said it will stop serving personalized ads on child-focused content and disable comments and notifications on them."... 


For full post, please visit: 


Silicon Valley, Philanthrocapitalism, and Policy Shifts from Teachers to Tech // Chapter in Strike for the Common Good: Fighting for the Future of Public Education 

Marachi, R., & Carpenter, R. (2020). Silicon Valley, philanthrocapitalism, and policy shifts from teachers to tech. In Givan, R. K. and Lang, A. S. (Eds.). Strike for the Common Good: Fighting for the Future of Public Education. Ann Arbor: University of Michigan Press.


Author version above (click down arrow to access). For final publication, please visit: 


The Rise—and the Recurring Bias—of Risk Assessment Algorithms // Revue

By Julia Angwin (Editor In Chief, The Markup)
"Hello, friends, 
I first learned the term “risk assessments” in 2014 when I read a short paper called “Data & Civil Rights: A Criminal Justice Primer,” written by researchers at Data & Society. I was shocked to learn that software was being used throughout the criminal justice system to predict whether defendants were likely to commit future crimes. It sounded like science fiction.

I didn’t know much about criminal justice at the time, but as a longtime technology reporter, I knew that algorithms for predicting human behavior didn’t seem ready for prime time. After all, Google’s ad targeting algorithm thought I was a man, and most of the ads that followed me around the web were for things I had already bought. 

So I decided I should test a criminal justice risk assessment algorithm to see if it was accurate. Two years and a lot of hard work later, my team at ProPublica published “Machine Bias,” an investigation proving that a popular criminal risk assessment tool was biased against Black defendants, possibly leading them to be unfairly kept longer in pretrial detention. 

Specifically what we found—and detailed in an extensive methodology—was that the risk scores were not particularly accurate (60 percent) at predicting future arrests and that when they were wrong, they were twice as likely to incorrectly predict that Black defendants would be arrested in the future compared with White defendants.

In other words, the algorithm overestimated the likelihood that Black defendants would later be arrested and underestimated the likelihood that White defendants would later be arrested. 
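The disparity described above can be made concrete with a small sketch. The following is a minimal, illustrative computation of the false-positive-rate comparison at the heart of the "Machine Bias" analysis, using made-up cohorts rather than the real COMPAS data; the group sizes and labels here are assumptions for demonstration only.

```python
# Sketch of a false-positive-rate comparison: among people who were
# NOT re-arrested, what fraction did the tool wrongly flag as high risk?

def false_positive_rate(records):
    """records: list of (flagged_high_risk: bool, rearrested: bool) pairs."""
    negatives = [r for r in records if not r[1]]   # people not re-arrested
    flagged = [r for r in negatives if r[0]]       # wrongly flagged anyway
    return len(flagged) / len(negatives)

# Hypothetical cohorts (NOT the real data): 100 non-re-arrested people
# per group, plus some correctly flagged re-arrests.
group_a = [(True, False)] * 45 + [(False, False)] * 55 + [(True, True)] * 30
group_b = [(True, False)] * 23 + [(False, False)] * 77 + [(True, True)] * 30

fpr_a = false_positive_rate(group_a)   # 0.45
fpr_b = false_positive_rate(group_b)   # 0.23
print(f"FPR A: {fpr_a:.2f}, FPR B: {fpr_b:.2f}, ratio: {fpr_a / fpr_b:.1f}x")
```

A tool can have the same overall accuracy for both groups and still, as here, make its errors in opposite directions: overestimating risk for one group and underestimating it for the other.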

But despite those well-known flaws, risk assessment algorithms are still popular in the criminal justice system, where judges use them to help decide everything from whether to grant pretrial release to the length of prison sentences.

And the idea of using software to predict the risk of human behaviors is catching on in other sectors as well. Risk assessments are being used by police to identify future criminals and by social service agencies to predict which children might be abused.

Last year, The Markup investigative reporter Lauren Kirchner and Matthew Goldstein of The New York Times investigated the tenant screening algorithms that landlords use to predict which applicants are likely to be good tenants. They found that the algorithms use sloppy matching techniques that often generate incorrect reports, falsely labeling people as having criminal or eviction records. The problem is particularly acute among minority groups, which tend to have fewer unique last names. For example, more than 12 million Latinos nationwide share just 26 surnames, according to the U.S. Census Bureau.
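The matching failure described above is easy to illustrate. Below is a hypothetical sketch, not the actual logic of any screening vendor, showing how a matcher keyed on surname alone attaches strangers' records to an applicant, while requiring full name and date of birth does not; all names and records here are invented.

```python
# Hypothetical records and applicant, for illustration only.
criminal_records = [
    {"last": "GARCIA", "first": "JOSE",  "dob": "1970-01-15"},
    {"last": "GARCIA", "first": "MARIA", "dob": "1985-06-02"},
]
applicant = {"last": "GARCIA", "first": "ANA", "dob": "1992-03-09"}

def loose_match(person, records):
    # Surname-only matching: every record sharing the last name "hits."
    return [r for r in records if r["last"] == person["last"]]

def strict_match(person, records):
    # Require last name, first name, and date of birth to all agree.
    return [r for r in records
            if (r["last"], r["first"], r["dob"])
            == (person["last"], person["first"], person["dob"])]

print(len(loose_match(applicant, criminal_records)))   # 2 false hits
print(len(strict_match(applicant, criminal_records)))  # 0
```

With only 26 surnames shared by more than 12 million people, surname-keyed matching guarantees collisions at scale, which is why the errors concentrate in the groups with the fewest unique last names.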

And this week, reporter Todd Feathers broke the news for The Markup that hundreds of universities are using risk assessment algorithms to predict which students are likely not to graduate within their chosen major.

Todd obtained documents from four large universities that showed that they were using race as a predictor, and in some cases as a “high impact predictor,” in their risk assessment algorithms. In criminal justice risk algorithms, race has not been included as an input variable since the 1960s. 

At the University of Massachusetts Amherst, the University of Wisconsin–Milwaukee, the University of Houston, and Texas A&M University, the software predicted that Black students were “high risk” at as much as quadruple the rate of their White peers."
Representatives of Texas A&M, UMass Amherst, and UW-Milwaukee noted that they were not aware of exactly how EAB’s proprietary algorithms weighed race and other variables. A spokesperson for the University of Houston did not respond specifically to our request for comment on the use of race as a predictor.

The risk assessment software being used by the universities is called Navigate and is provided by an education research company called EAB. Ed Venit, an EAB executive, told Todd it is up to the universities to decide which variables to use and that the existence of race as an option is meant to “highlight [racial] disparities and prod schools to take action to break the pattern.”
If the risk scores were being used solely to provide additional support to the students labeled as high risk, then perhaps the racial disparity would be less concerning. But faculty members told Todd that the software encourages them to steer high-risk students into “easier” majors—and particularly, away from math and science degrees. 

“This opens the door to even more educational steering,” Ruha Benjamin, a professor of African American studies at Princeton and author of “Race After Technology,” told The Markup. “College advisors tell Black, Latinx, and indigenous students not to aim for certain majors. But now these gatekeepers are armed with ‘complex’ math.”

There are no standards and no accountability for the ‘complex math’ that is being used to steer students, rate tenants, and rank criminal defendants. So we at The Markup are using the tools we have at our disposal to fill this gap. As I wrote in this newsletter last week, we employ all sorts of creative techniques to try to audit the algorithms that are proliferating across society. 

It’s not easy work, and we can’t always obtain data that lets us definitively show how an algorithm works. But we will continue to try to peer inside the black boxes that have been entrusted with making such important decisions about our lives.
As always, thanks for reading.
Julia Angwin
The Markup

As schools reopen with billions in federal aid, surveillance vendors are hawking expensive tools like license plate readers and facial recognition //

By Todd Feathers

"As vaccination rates rise and schools prepare to reopen, surveillance companies have trained their sights on the billions of dollars in federal COVID-19 relief funds being provided to schools across the US, hoping to make a profit by introducing a bevy of new snooping devices.


“$82 BILLION,” reads the huge front-page font on one Motorola Solutions brochure distributed to K-12 schools after the passage of the Coronavirus Response and Relief Supplemental Appropriations Act. “Consider COVID-19 technology from Motorola Solutions for your Education Stabilization Fund dollars.”


Other vendors are using similar language and marketing tactics that attempt to latch on to the amount of money Congress set aside for K-12 schools, colleges, and universities in the COVID-19 stimulus packages.


School administrators are used to receiving constant sales pitches from ed tech vendors. But many of the pricey products now being offered have previously been reserved for cops, or have been spun up over the last year to be marketed as solutions for reopening schools during the pandemic. Privacy experts fear that, if deployed, many of these technologies will remain in schools long after classrooms return to normal.


Motorola Solutions' suite of products in its "safe schools solutions" line includes automated license plate readers, watch lists that send automatic alerts when people enter a building, and anonymous “tip” submission apps for students, according to a copy of the brochure shared with Motherboard. The document also advertises artificial intelligence-powered camera systems that purportedly detect “unusual motion,” track individuals using facial recognition as they move around a school, and allow staff to search through hours of video to find footage of a person simply by typing in their “physical descriptors.”


Verkada, a smart surveillance camera company, and its sales partners have been aggressively pushing AI surveillance tools as a response to COVID-19, according to the company’s blog posts and emails obtained by Motherboard through public records requests.


“Whether leveraging features like Face Search for contact tracing or Crowd Notifications to enforce social distancing, schools can proactively protect their students and staff,” a sales associate offering Verkada facial recognition products wrote in a March 8th email to technology staff at the Morgan-Hill Unified School District in California. 

He added that the products qualify for “ESSER II funding,” a reference to the federal Elementary and Secondary School Emergency Relief Fund created by Congress to help schools cope with the pandemic. It was just one in a long series of emails that district officials received from Verkada and its third-party sellers during the first few months of the year, many of them offering to drop off demonstration products or provide Amazon gift cards and Yeti ramblers in exchange for attending sales webinars.

A day after that email was sent, hackers announced that they had breached Verkada, gaining access to live feeds at hospitals, schools, and company offices.

Motorola Solutions, Verkada, and the other companies mentioned in this article did not respond to multiple requests for comment.


“Unscrupulous vendors are taking every single technology they can think of and offering them to schools as if it’s going to make them safer,” Lia Holland, campaigns and communications director for the privacy group Fight for the Future, told Motherboard. “The push for surveillance of children in every aspect of their lives, especially in schools, just keeps accelerating and it’s an incredible threat to children’s lifetime privacy, their mental health, and their physical safety to deploy these technologies that are often racially biased.”

Neither states nor the U.S. Department of Education have published detailed data on exactly how local districts have spent their relief funding, so it’s unclear just how successful the surveillance vendors’ marketing strategy has been. But the companies have found at least a small number of buyers and convinced them to provide glowing testimonials.


Given the cost of the surveillance equipment being offered, it’s easy to see why the relief funds are so appetizing to the sellers.

The Godley Independent School District in Texas, for example, purchased 51 Verkada cameras and software licenses for a new building in June 2020 at a cost of $82,000, according to records obtained by Motherboard. The original cost would have been more than $100,000, but the district received a discount from the vendor.

While Godley ISD didn’t use relief funds, the purchase demonstrates what a large chunk of the money a single surveillance project can suck up—it was equivalent to 45 percent of the $182,000 in COVID-19 relief funds the district has received so far, according to federal grant records.

The relief money is intended to help districts implement remote learning systems, reopen schools, reduce the risk of virus transmission, and provide extra aid to low-income, minority, and special needs students. Surveillance vendors have interpreted those purposes liberally.

SchoolPass is one of several companies that have taken the opportunity to sell automated license plate reader (ALPR) systems to schools, going so far as to host webinars for district officials during which experts explain how to apply for and access the new federal funds.


The company explains that by tracking cars as they enter and leave school property, schools can ensure that students are physically distanced when they’re dropped off, thus reducing the risk of transmitting the virus.

What’s not clear is what happens to the ALPR data, and who else—including local police and federal agencies—may have access to it. The company and districts that use SchoolPass did not respond to requests for comment.


As Motherboard has previously reported, ALPR data is uploaded into vast databases that are then used by cops, private investigators, and repo companies to track people across the country—in some cases, illegally.

Motorola Solutions owns two of the largest license plate databases through its subsidiaries Vigilant Solutions and Digital Recognition Network. It’s not clear from the company’s marketing material whether the location data scooped up by the ALPR systems it sells to schools are added to those databases.

Despite vendors’ proclamations about student safety and well-being, research shows that the increase in surveillance is likely to have a severely negative effect on students.

A recent study of more than 6,000 high school students conducted by researchers from Johns Hopkins University and Washington University in St. Louis found that students attending “high surveillance” schools were far more likely to be suspended and have lower math achievement rates than students at low-surveillance schools, and they were less likely to go to college. The study controlled for other variables, such as rates of student misbehavior.

It also found that the burden fell particularly hard on Black students, who were four times more likely to attend a high-surveillance high school.

“There’s actually no evidence that it works,” Rory Mir, a grassroots advocacy organizer with the Electronic Frontier Foundation, told Motherboard. “What there is clear proof for is how this technology is biased and disproportionately impacts more at-risk students, and it creates an environment where students are constantly surveilled. It’s treating students like criminals and making money while doing so.” 


For full post, please visit: 

