Educational Psychology & Technology: Critical Perspectives and Resources
This curated collection includes news, resources, and research related to the intersections of Educational Psychology and Technology. The page also serves as a research tool to organize online content; the grey funnel-shaped icon at the top allows searching by keyword. Related collections cover research more specific to tech, screen time, and health/safety concerns; the next wave of privatization involving technology intersections with Pay For Success, Social Impact Bonds, and Results-Based Financing (often marketed with language promoting 'public-private partnerships'); and additional Educator Resources.
Scooped by Roxana Marachi, PhD!

When "Innovation" is Exploitation: Data Ethics, Data Harms and Why We Need to Demand Data Justice // Marachi, 2019, Summer Institute of A Black Education Network 



For more on the data brokers selling personal information from a variety of platforms, including education, please see:

Please also visit the Parent Coalition for Student Privacy, the Data Justice Lab, and the Algorithmic Justice League.



More than 40 attorneys general ask Facebook to abandon plans to build Instagram for kids


By Lauren Feiner

"Attorneys general from 44 states and territories urged Facebook to abandon its plans to create an Instagram service for kids under the age of 13, citing detrimental health effects of social media on kids and Facebook’s reportedly checkered past of protecting children on its platform.

Monday’s letter follows questioning from federal lawmakers who have also expressed concern over social media’s impact on children. The topic was a major theme that emerged from lawmakers at a House hearing in March with Facebook CEO Mark Zuckerberg, Google CEO Sundar Pichai and Twitter CEO Jack Dorsey. Republican staff for that committee later highlighted online protection for kids as the main principle lawmakers should consider in their legislation.


BuzzFeed News reported in March that Facebook had been exploring creating an Instagram service for children, based on internal documents it obtained.

Protecting children from harm online appears to be one of the rare motivators both Democrats and Republicans can agree on, which puts additional pressure on any company creating an online service for kids.

In Monday’s letter to Zuckerberg, the bipartisan group of AGs cited news reports and research findings that social media and Instagram, in particular, had a negative effect on kids’ mental well-being, including lower self-esteem and suicidal ideation.

The attorneys general also said young kids “are not equipped to handle the range of challenges that come with having an Instagram account.” Those challenges include online privacy, the permanence of internet posts, and navigating what’s appropriate to view and share. They noted that Facebook and Instagram had reported 20 million child sexual abuse images in 2020.

Officials also based their skepticism on Facebook’s history with products aimed at children, saying it “has a record of failing to protect the safety and privacy of children on its platform, despite claims that its products have strict privacy controls.” Citing news reports from 2019, the AGs said that Facebook’s Messenger Kids app for children between 6 and 12 years old “contained a significant design flaw that allowed children to circumvent restrictions on online interactions and join group chats with strangers that were not previously approved by the children’s parents.” They also referenced a recently reported “mistake” in Instagram’s algorithm that served diet-related content to users with eating disorders.


“It appears that Facebook is not responding to a need, but instead creating one, as this platform appeals primarily to children who otherwise do not or would not have an Instagram account,” the AGs wrote. “In short, an Instagram platform for young children is harmful for myriad reasons. The attorneys general urge Facebook to abandon its plans to launch this new platform.”

In a statement, a Facebook spokesperson said the company has “just started exploring a version of Instagram for kids,” and committed to not show ads “in any Instagram experience we develop for people under the age of 13.”

“We agree that any experience we develop must prioritize their safety and privacy, and we will consult with experts in child development, child safety and mental health, and privacy advocates to inform it. We also look forward to working with legislators and regulators, including the nation’s attorneys general,” the spokesperson said.

After publication, Facebook sent an updated statement acknowledging that since children are already using the internet, “We want to improve this situation by delivering experiences that give parents visibility and control over what their kids are doing. We are developing these experiences in consultation with experts in child development, child safety and mental health, and privacy advocates.”

Facebook isn’t the only social media platform that’s created services for children. Google-owned YouTube has a kids service, for example, though with any internet service, there are usually ways for children to lie about their age to access the main site. In 2019, YouTube reached a $170 million settlement with the Federal Trade Commission and New York attorney general over claims it illegally earned money from collecting the personal information of kids without parental consent, allegedly violating the Children’s Online Privacy Protection Act (COPPA).

Following the settlement, YouTube said in a blog post it will limit data collection on videos aimed at children, regardless of the age of the user actually watching. It also said it will stop serving personalized ads on child-focused content and disable comments and notifications on them."... 




Silicon Valley, Philanthrocapitalism, and Policy Shifts from Teachers to Tech // Chapter in Strike for the Common Good: Fighting for the Future of Public Education 

Marachi, R., & Carpenter, R. (2020). Silicon Valley, philanthrocapitalism, and policy shifts from teachers to tech. In Givan, R. K. and Lang, A. S. (Eds.). Strike for the Common Good: Fighting for the Future of Public Education. Ann Arbor: University of Michigan Press. 


The Color of Surveillance: Monitoring of Poor and Working People // Center on Privacy and Technology // Georgetown Law



Hackers post 26,000 Broward school files online 


Online learning's toll on kids' privacy // Axios



Algorithmic Racism, Artificial Intelligence, and Emerging Educational Technologies // SJSU


The slides above were presented at a faculty research sharing event sponsored by the Office of Innovation and Research at San José State University on March 5, 2021, with the theme "Artificial Intelligence, Machine Learning, and Ethics." Each presenter had five minutes to describe how their research connected with the themes of AI, ML, and ethics.





The End Of Student Privacy? Remote Proctoring's Invasiveness and Bias // Symposium of the Surveillance Technology Oversight Project (S.T.O.P.)

On Saturday, March 6th, from 1:00 – 4:30 pm ET, the Surveillance Technology Oversight Project (S.T.O.P.) and Privacy Lab (an initiative of the Information Society Project at Yale Law School) convened a symposium on remote proctoring technology. This interdisciplinary discussion examined how remote proctoring software promotes bias, undermines privacy, and creates barriers to accessibility.

1:00 pm: Opening Remarks.
Albert Fox Cahn, Surveillance Technology Oversight Project

1:10 pm – 2:10 pm: Session one will provide an overview of the technology used for remote proctoring, which ranges from keyloggers, to facial recognition, and other forms of artificial intelligence. Panelists will highlight the rapid growth of remote proctoring technology during the COVID-19 pandemic and its potential role in the future.

Expert Panel:
Lindsey Barrett, Institute for Technology Law & Policy at Georgetown Law
Rory Mir, Electronic Frontier Foundation
sava saheli singh, University of Ottawa AI + Society Initiative

2:15 pm – 3:15 pm: Part two will explore the numerous technical, pedagogical, and sociological drivers of racial bias in remote proctoring technology. Speakers will examine sources of bias for existing software, its legal ramifications, and likely changes in future remote proctoring systems.

Expert Panel:
David Brody, Lawyers' Committee for Civil Rights Under Law
Chris Gilliard, Harvard Kennedy School Shorenstein Center
Lia Holland, Fight For The Future

3:20 pm – 4:20 pm: The final session will explore remote proctoring’s impact on accessibility for students with disabilities. Panelists will detail the difficulties students have already experienced using such software, as well as the potential legal ramifications of such discrimination.

Expert Panel:
Chancey Fleet, Data and Society
Marci Miller, Potomac Law Group, PLLC
Tara Roslin, National Disabled Law Students Association

4:20 pm: Closing Remarks.
Sean O'Brien, Information Society Project at Yale Law School


Jane Doe (Class Action Plaintiff) vs. Northwestern University (Defendant) // Cook County, Illinois Circuit Court


Northwestern University faces lawsuit for improperly capturing and storing students’ biometric data // The Daily Northwestern


By Waverly Long 

"Northwestern is facing a lawsuit accusing the University of capturing and storing students’ biometric identifiers — such as their facial features and voices — through online test proctoring tools. 

The complaint, filed on Jan. 27 on behalf of an anonymous NU junior, asserts that NU violated the Illinois Biometric Information Privacy Act. 

BIPA was enacted in 2008 to protect Illinois residents from companies seeking to collect their biometric data. The law requires companies and institutions to get permission from users before collecting and saving their biometrics. It also requires them to inform users about how their information will be stored, used and destroyed.

The lawsuit asserts that NU violated BIPA by failing to properly inform students about the collection and retention of their biometric data through online test proctoring systems such as Respondus and Examity. These tools are designed to prevent cheating by verifying students’ identities and tracking their physical and digital movements during the exam.

The complaint states that NU has been collecting students’ “facial recognition data, facial detection data, recorded patterns of keystrokes, eye monitoring data, gaze monitoring data, and camera and microphone recordings.” According to the lawsuit, NU “owns, has access to, and possesses this data.” 

It also asserts that by requiring students to use online proctoring systems when taking exams remotely, NU is not giving students a “meaningful choice” in whether or not they are comfortable with the data collection and retention.

The complaint emphasizes the lack of information given to students about the University collecting and storing their biometric data.

“Northwestern collects, captures, and stores everything from a student’s facial features to their voice through a web portal accessed through the student’s personal device,” the complaint said. “Using these tools, Northwestern is able to collect and aggregate information on all aspects of a student’s life… All the while, students are left in the dark about the vast amount of information their university collects.”

Students and faculty at universities across the country — including The University of Texas at Dallas, University of Miami and the University of Wisconsin–Madison — are demanding a ban on online test proctoring because of privacy concerns. The lawsuit filed against NU noted petitions at these schools have gained tens of thousands of student and faculty signatures.

According to a Washington Post article referenced in the lawsuit, the Faculty Association at the University of California, Santa Barbara wrote a letter to university administrators demanding they cancel their contracts with online test proctoring companies in order to avoid turning the university into “a surveillance tool.” 

The lawsuit also referenced a Forbes article that said some universities and individual professors are opting to not use proctoring software. An economics professor at Harvard told Forbes the “level of intrusion” of these tools is inappropriate.

The end of the complaint notes that NU’s actions are especially egregious because BIPA clearly outlined the requirements that NU allegedly violated.

“Northwestern’s failure to maintain and comply with such a written policy is negligent and reckless because BIPA has governed the collection and use of biometric identifiers and biometric information since 2008, and Northwestern is presumed to know these legal requirements,” the lawsuit said.

A University spokesperson told The Daily that Northwestern does not comment on pending litigation."



Seven Reasons to Say NO to Prodigy


"Today, almost every school in the U.S. has online learning as a part of its menu. Many parents are incorporating online education into their teaching toolkit at home, too. Unfortunately, some platforms have taken advantage of families during the pandemic, exploiting parents’ desire to provide the best possible education for their children during this difficult time. Prodigy is one of those platforms.

In February 2021, Campaign for Commercial-Free Childhood and 21 advocacy partners took action against Prodigy by filing a complaint with the Federal Trade Commission (FTC). Our complaint made headlines, raising awareness about Prodigy’s unfair practices. Even as we urge the FTC to ensure that Prodigy is held accountable for exploiting families on a national scale, our kids also need you to start the conversation about removing the game from your own school’s curriculum. We're inviting families, educators, and school leaders to say NO to Prodigy.



Prodigy is a math game used by millions of students, parents, and teachers across the globe. The game is designed for 1st through 8th graders to play during the school day and at home. In this online role-playing game, children create customized wizard characters to earn stars and prizes for winning math “battles,” finding treasure, and completing a variety of non-math challenges throughout the game. Children can also shop with Prodigy currency, practice dance moves, chat with other players, and rescue cute pets.

Prodigy is intentionally formulated to keep kids playing for long periods of time, but not for the reasons that we might hope. Instead of being designed to get kids excited about math, Prodigy is designed to make money.

Prodigy claims it is “free forever” for schools. However, as children play one free version in the classroom they are encouraged to play a different free version at home. This home version, though technically “free,” bombards children with advertisements and uses relentless tactics to pressure children to ask parents for “premium” memberships, which cost up to $108 per child per year.






Class action: Northwestern's online test proctoring wrongly collected face scans, other biometric identifiers // Cook County Record


..."A new class action has accused Northwestern University of violating Illinois' biometrics privacy law by using online proctoring to collect students' face scans and collect other biometric data without students' consent."


By Jonathan Bilyk

Northwestern University has become one of the latest Illinois institutions targeted under the state’s biometrics privacy law, as a new class action accuses the university of using facial recognition and other tools used in its online testing and instruction programs to improperly capture and store students’ biometric data.


The lawsuit was filed Jan. 27 in Cook County Circuit Court by attorneys Brian K. Murphy, Mary Turke, and others with the firms of Murray Murphy Moul + Basil LLP, of Columbus, Ohio; Turke & Strauss LLP, of Madison, Wis.; and Paronich Law P.C., of Hingham, Mass.


The lawsuit was filed on behalf of a woman, identified only as Jane Doe, a junior at Northwestern.


The complaint seeks to expand the action to include any Northwestern students who in the past five years took a test or other assessment that required them to use Northwestern’s so-called online proctoring tools.


Such online proctoring systems are designed to prevent cheating on tests conducted remotely. They work by using the student’s device to verify their identity and monitor their physical and digital movements.


However, the complaint asserts Northwestern’s use of its online proctoring system has violated the Illinois Biometric Information Privacy Act (BIPA).


Enacted in 2008, the law was intended to protect Illinois residents against data breaches that may occur when companies that collect their so-called biometric identifiers – fingerprints, facial geometry, retinal scans and other unique physical characteristics – go out of business. The law came in the wake of a bankruptcy involving a company that specialized in online payments and had captured a trove of its users’ fingerprints, which had been used to verify their identities when making transactions.

The law gave plaintiffs the ability to sue, and seek damages of $1,000-$5,000 for each violation of the law’s provisions, which can be defined as each time a user’s biometric identifier is scanned.


 The BIPA law included a number of technical notice and consent provisions, including those requiring companies to secure authorization from users before scanning and storing their biometrics, and those requiring notices be given to users concerning how their information would be stored, used, shared and ultimately destroyed.


The law has been used since 2015 by plaintiffs’ lawyers to target a range of companies and institutions, from tech giants, like Facebook, to employers of all sizes, and even charities, like the Salvation Army, in thousands of class action lawsuits.


In their complaint against Northwestern, the plaintiffs assert Northwestern allegedly failed to comply with those provisions before requiring students to use the online proctoring system when taking remote tests and assessments.


They assert, for instance, that students like Jane Doe were not aware of the collection and retention of their biometric data by the online proctoring system.


“The context in which Plaintiff Jane Doe was asked to accept an online proctoring service to take her exam – as a requirement to successfully complete a college course – did not give her a meaningful choice,” the complaint said.

The complaint identified two online proctoring services used by Northwestern, Respondus and Examity.


The complaint noted that dozens of Illinois colleges and universities use online proctoring services, and the use of such services has increased as colleges and universities shut down many in-person classes in response to the COVID-19 pandemic.


The complaint, however, said “petitions have sprung up” at colleges and universities across the country from faculty and students seeking to ban the use of such services, concerned they would be used to surveil students.

“Using these tools, Northwestern is able to collect and aggregate information on all aspects of a student’s life,” the complaint said. “… All the while students are left in the dark about the vast amount of information their university collects through remote proctoring tools.”


Northwestern has yet to reply to the new lawsuit."




The Dangers of Risk Prediction in the Criminal Justice System


"Courts across the United States are using computer software to predict whether a person will commit a crime, the results of which are incorporated into bail and sentencing decisions. It is imperative that such tools be accurate and fair, but critics have charged that the software can be racially biased, favoring white defendants over Black defendants. We evaluate the claim that computer software is more accurate and fairer than people tasked with making similar decisions. We also evaluate, and explain, the presence of racial bias in these predictive algorithms.


We are the frequent subjects of predictive algorithms that determine music recommendations, product advertising, university admission, job placement, and bank loan qualification. In the criminal justice system, predictive algorithms have been used to predict where crimes will most likely occur, who is most likely to commit a violent crime, who is likely to fail to appear at their court hearing, and who is likely to reoffend at some point in the future.

Certain types of algorithmic tools known as “risk assessments” have become particularly prevalent in the criminal justice system within the United States. The majority of risk assessments are built to predict recidivism: asking whether someone with a criminal offense will reoffend at some point in the future. These tools rely on an individual’s criminal history, personal background, and demographic information to make these risk predictions.

Various risk assessments are in use across the country to inform decisions at almost every stage in the criminal justice system. One widely used criminal risk assessment tool, Correctional Offender Management Profiling for Alternative Sanctions (COMPAS, Northpointe), has been used to assess over one million individuals in the criminal justice system since it was developed in 1998. The recidivism prediction component of COMPAS—the Recidivism Risk Scale—has been in use since 2000. This software predicts a person’s risk of committing a misdemeanor or felony within two years of assessment from an individual’s demographics and criminal record.


In the past few years, algorithmic risk assessments like COMPAS have become increasingly prevalent in pretrial decision making. In these contexts, an individual who has been arrested and booked in jail is assessed by the algorithmic tool in use by the given jurisdiction. Judges then consider the risk scores calculated by the tool in their decision to either release or detain a criminal defendant before their trial.

In May of 2016, writing for ProPublica, Julia Angwin and colleagues analyzed the efficacy of COMPAS in the pretrial context on over seven thousand individuals arrested in Broward County, Florida, between 2013 and 2014. The analysis indicated that the predictions were unreliable and racially biased. The authors found that COMPAS’s overall accuracy for white defendants is 67.0%, only slightly higher than its accuracy of 63.8% for Black defendants. The mistakes made by COMPAS, however, affected Black and white defendants differently: Black defendants who did not recidivate were incorrectly predicted to reoffend at a rate of 44.9%, nearly twice as high as their white counterparts at 23.5%; and white defendants who did recidivate were incorrectly predicted to not reoffend at a rate of 47.7%, nearly twice as high as their Black counterparts at 28.0%. In other words, COMPAS scores appeared to favor white defendants over Black defendants by underpredicting recidivism for white and overpredicting recidivism for Black defendants. Unsurprisingly, this caused an uproar and significant concern that technology was being used to further entrench racism in our criminal justice system.
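The asymmetry described above can be made concrete with a short sketch. The two groups below are synthetic illustrations (not the Broward data), constructed so that overall accuracy is identical while the error *types* differ sharply, which is the crux of the ProPublica finding:

```python
# Sketch of the ProPublica-style error-rate comparison.
# The records below are synthetic, not the actual Broward County data.

def error_rates(records):
    """Return (accuracy, false_positive_rate, false_negative_rate).

    Each record is (predicted_reoffend, actually_reoffended), both booleans.
    """
    tp = sum(1 for p, a in records if p and a)
    tn = sum(1 for p, a in records if not p and not a)
    fp = sum(1 for p, a in records if p and not a)
    fn = sum(1 for p, a in records if not p and a)
    accuracy = (tp + tn) / len(records)
    fpr = fp / (fp + tn) if (fp + tn) else 0.0   # non-recidivists wrongly flagged
    fnr = fn / (fn + tp) if (fn + tp) else 0.0   # recidivists wrongly cleared
    return accuracy, fpr, fnr

# Equal overall accuracy, but group A suffers far more false positives
# and group B far more false negatives.
group_a = [(True, False)] * 45 + [(True, True)] * 30 + \
          [(False, False)] * 55 + [(False, True)] * 20
group_b = [(True, False)] * 20 + [(True, True)] * 25 + \
          [(False, False)] * 60 + [(False, True)] * 45

acc_a, fpr_a, fnr_a = error_rates(group_a)
acc_b, fpr_b, fnr_b = error_rates(group_b)
```

This is why a single "accuracy" number can hide exactly the disparity the ProPublica analysis reported.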

Since the publication of the ProPublica analysis, there has been significant research and debate regarding the measurement of algorithmic fairness. Complicating this discussion is the fact that the research community does not necessarily agree on the definition of what makes an algorithm fair, and some studies have revealed that certain definitions of fairness are mathematically incompatible. To date, the debate around the mathematical measurement of fairness is both complicated and unresolved.

Algorithmic predictions have become common in the criminal justice system because they maintain a reputation of being objective and unbiased, whereas human decision making is considered inherently more biased and flawed. Northpointe describes COMPAS as “an objective method of estimating the likelihood of reoffending.” The Public Safety Assessment (PSA), another common pretrial risk assessment tool, advertises itself as a tool to “provide judges with objective, data-driven, consistent information that can inform the decisions they make.” In general, people often assume that algorithms using “big data techniques” are unbiased simply because of the amount of data used to build them.

After reading the ProPublica analysis in May of 2016, we started thinking about recidivism prediction algorithms and their use in the criminal justice system. To our surprise, we could not find any research proving that recidivism prediction algorithms are superior to human predictions. Due to the serious implications this type of software can have on a person’s life, we felt that we should start by confirming that COMPAS is, in fact, outperforming human predictions. We also felt that it was critical to get beyond the debate of how to measure fairness and understand why COMPAS’s predictive algorithm exhibited such troubling racial bias.


In our study, published in Science Advances in January 2018, we began by asking a fundamental question regarding the use of algorithmic risk predictions: are these tools more accurate than the human decision making they aim to replace? The goal of the study was to evaluate the baseline for human performance on recidivism prediction, and assess whether COMPAS was actually outperforming this baseline. We found that people from a popular online crowd-sourcing marketplace—who, it can reasonably be assumed, have little to no expertise in criminal justice—are as accurate and fair as COMPAS at predicting recidivism. This somewhat surprising result then led us to ask: how is it possible that the average person on the internet, being paid $1 to respond to a survey, is as accurate as commercial software used in the criminal justice system? To answer this, we effectively reverse engineered the COMPAS prediction algorithm and discovered that the software is equivalent to a simple classifier based on only two pieces of data, and it is this simple predictor that leads to the algorithm reproducing historical racial inequities in the criminal justice system.

Comparing Human and Algorithmic Recidivism Prediction


Our study is based on a data set of 2013–2014 pretrial defendants from Broward County, Florida. This data set of 7,214 defendants contains individual demographic information, criminal history, the COMPAS recidivism risk score, and each defendant’s arrest record within a two-year period following the COMPAS scoring, excluding any time spent detained in a jail or a prison. COMPAS scores—ranging from 1 to 10—classify the risk of recidivism as low-risk (1–4), medium-risk (5–7), or high-risk (8–10). For the purpose of binary classification, following the methodology used in the ProPublica analysis and the guidance of the COMPAS practitioner’s guide, scores of 5 or above were classified as a prediction of recidivism.
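The score binning described above is simple enough to sketch directly; the function names here are illustrative, not from the study's code:

```python
def risk_category(score):
    """Map a COMPAS decile score (1-10) to its named risk band."""
    if not 1 <= score <= 10:
        raise ValueError("COMPAS scores range from 1 to 10")
    if score <= 4:
        return "low"
    if score <= 7:
        return "medium"
    return "high"

def predicts_recidivism(score):
    """Binary prediction per the ProPublica methodology: 5 or above = predicted to reoffend."""
    return score >= 5
```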

Of the 7,214 defendants in the data set, 1,000 were randomly selected for use in our study that evaluated the human performance of recidivism prediction. This subset yields similar overall COMPAS accuracy, false positive rate, and false negative rate as on the complete data set. (A positive prediction is one in which a defendant is predicted to recidivate; a negative prediction is one in which they are predicted to not recidivate.) The COMPAS accuracy for this subset of 1,000 defendants is 65.2%. The average COMPAS accuracy on 10,000 random subsets of size 1,000 each is 65.4% (with a 95% confidence interval of [62.6, 68.1]).
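The subset-accuracy check above can be sketched with repeated random subsampling and an empirical percentile interval. The population below is synthetic, sized like the study's 7,214 defendants with roughly 65% of COMPAS predictions marked correct; the reported [62.6, 68.1] interval is not reproduced exactly:

```python
import random

def subsample_accuracies(outcomes, n_subsets, size=1_000, seed=0):
    """Accuracy over many random subsets.

    `outcomes` is a list of booleans: True means the COMPAS prediction
    was correct for that defendant.
    """
    rng = random.Random(seed)
    accs = []
    for _ in range(n_subsets):
        subset = rng.sample(outcomes, size)  # sample without replacement
        accs.append(sum(subset) / size)
    return accs

def percentile_ci(values, level=0.95):
    """Empirical central interval from the sorted subset accuracies."""
    s = sorted(values)
    lo = s[int(len(s) * (1 - level) / 2)]
    hi = s[int(len(s) * (1 + level) / 2) - 1]
    return lo, hi

# Synthetic stand-in: ~65% correct predictions among 7,214 defendants.
population = [True] * 4690 + [False] * 2524
accs = subsample_accuracies(population, n_subsets=2_000)
lo, hi = percentile_ci(accs)
```

The study used 10,000 subsets; 2,000 are drawn here only to keep the sketch fast.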

A descriptive paragraph for each of 1,000 defendants was generated:

The defendant is a [SEX] aged [AGE]. They have been charged with: [CRIME CHARGE]. This crime is classified as a [CRIMINAL DEGREE]. They have been convicted of [NON-JUVENILE PRIOR COUNT] prior crimes. They have [JUVENILE-FELONY COUNT] juvenile felony charges and [JUVENILE-MISDEMEANOR COUNT] juvenile misdemeanor charges on their record.

Perhaps most notably, we did not specify the defendant’s race in this “no race” condition. In a follow-up “race” condition, the defendant’s race was included so that the first line of the above paragraph read, “The defendant is a [RACE] [SEX] aged [AGE].”
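Generating the two vignette variants amounts to filling a template; the field names below follow the bracketed placeholders in the text, and the example record is invented:

```python
# Sketch of vignette generation for the two conditions; the defendant
# record is hypothetical, not drawn from the Broward data set.

TEMPLATE = (
    "The defendant is a {race}{sex} aged {age}. They have been charged with: "
    "{charge}. This crime is classified as a {degree}. They have been convicted "
    "of {priors} prior crimes. They have {juv_felonies} juvenile felony charges "
    "and {juv_misdemeanors} juvenile misdemeanor charges on their record."
)

def describe(defendant, include_race=False):
    """Render one vignette; race is prepended only in the 'race' condition."""
    race = f"{defendant['race']} " if include_race else ""
    fields = {k: v for k, v in defendant.items() if k != "race"}
    return TEMPLATE.format(race=race, **fields)

defendant = {
    "race": "white", "sex": "male", "age": 34, "charge": "burglary",
    "degree": "felony", "priors": 2, "juv_felonies": 0, "juv_misdemeanors": 1,
}
```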

There was a total of sixty-three unique criminal charges, including armed robbery, burglary, grand theft, prostitution, robbery, and sexual assault. The crime degree is either “misdemeanor” or “felony.” To ensure that our participants understood the nature of each crime, the above paragraph was followed by a short description of each criminal charge:


After reading the defendant description, participants were then asked to respond either “Yes” or “No” to the question “Do you think this person will commit another crime within two years?” Participants were required to answer each question and could not change their response once it was made. After each answer, participants were given two forms of feedback: whether their response was correct and their average accuracy.

The 1,000 defendants were randomly divided into 20 subsets of 50 each. Each participant was randomly assigned to see one of these 20 subsets. Participants saw the 50 defendants—one at a time—in random order. Participants were only allowed to complete a single subset of 50 defendants.
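The random partitioning above is a shuffle-and-chunk; a minimal sketch, with an illustrative function name:

```python
import random

def partition_into_subsets(defendant_ids, n_subsets=20, size=50, seed=0):
    """Shuffle the defendants and split them into disjoint subsets,
    mirroring the study's 20 subsets of 50 from 1,000 defendants."""
    ids = list(defendant_ids)
    assert len(ids) == n_subsets * size
    random.Random(seed).shuffle(ids)
    return [ids[i * size:(i + 1) * size] for i in range(n_subsets)]

subsets = partition_into_subsets(range(1000))
```

Each participant would then be assigned one subset at random and shown its 50 defendants in shuffled order.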

Participants were recruited through Amazon’s Mechanical Turk, an online crowd-sourcing marketplace where people are paid to perform a wide variety of tasks. (Institutional review board [IRB] guidelines were followed for all participants.) Our task was titled “Predicting Crime” with the description “Read a few sentences about an actual person and predict if they will commit a crime in the future.” The keywords for the task were “survey, research, criminal justice.” Participants were paid one dollar for completing the task and an additional five-dollar bonus if their overall accuracy on the task was greater than 65%. This bonus was intended to provide an incentive for participants to pay close attention to the task. To filter out participants who were not paying close attention, three catch trials were randomly added to the subset of 50 questions. These questions were formatted to look like all other questions but had easily identifiable correct answers. A participant’s response was eliminated from our analysis if any of these questions were answered incorrectly.

Responses for the first (no-race) condition were collected from 462 participants, 62 of whom were removed due to an incorrect response on a catch trial. Responses for the second (race) condition were collected from 449 participants, 49 of whom were removed due to an incorrect response on a catch trial. In each condition, this yielded 20 participant responses for each of 20 subsets of 50 questions. Because of the random pairing of participants to a subset of 50 questions, we occasionally oversampled the required number of 20 participants. In these cases, we selected a random 20 participants and discarded any excess responses.


We compare the overall accuracy and bias in human assessment with the algorithmic assessment of COMPAS. Throughout, a positive prediction is one in which a defendant is predicted to recidivate while a negative prediction is one in which they are predicted to not recidivate. We measure overall accuracy as the rate at which a defendant is correctly predicted to recidivate or not (i.e., the combined true positive and true negative rates). We also report on false positives (a defendant is predicted to recidivate but they don’t) and false negatives (a defendant is predicted to not recidivate but they do). Throughout, we use both paired and unpaired t-tests (with 19 degrees of freedom) to analyze the performance of our participants and COMPAS.

The mean and median accuracy in the no-race condition—computed by analyzing the average accuracy of the 400 human predictions—are 62.1% and 64.0%, respectively. We compare these results with the performance of COMPAS on this subset of 1,000 defendants. Because groups of 20 participants judged the same subset of 50 defendants, the individual judgments are not independent. However, because each participant judged only one subset of the defendants, the median accuracies of each subset can reasonably be assumed to be independent. The participant performance, therefore, on the 20 subsets can be directly compared to the COMPAS performance on the same 20 subsets. A one-sided t-test reveals that the average of the 20 median participant accuracies of 62.8% is, just barely, lower than the COMPAS accuracy of 65.2% (p = 0.045).
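The per-subset comparison lends itself to a paired t-test. A library-free sketch of the statistic (the study presumably used a standard statistics package; names here are illustrative):

```python
import math

def paired_t(xs, ys):
    """One-sided paired t statistic for testing mean(xs) < mean(ys).

    xs and ys are paired per-subset accuracies, e.g. the 20 median
    participant accuracies vs. COMPAS accuracy on the same 20 subsets.
    The statistic has len(xs) - 1 degrees of freedom.
    """
    diffs = [x - y for x, y in zip(xs, ys)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    se = math.sqrt(var / n)                              # standard error
    return mean / se
```

With 20 subsets the statistic is compared against the Student-t distribution with 19 degrees of freedom (for a one-sided test at alpha = 0.05, the critical value is about -1.729).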

To determine if there is “wisdom in the crowd” (in our case, a small crowd of 20 people per subset), participant responses were pooled within each subset using a majority rules criterion. This crowd-based approach yields a prediction accuracy of 67.0%. A one-sided t-test reveals that COMPAS is not significantly better than the crowd (p = 0.85). This demonstrates that the commercial COMPAS prediction algorithm does not outperform small crowds of non-experts at predicting recidivism.
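The majority-rules pooling described above reduces to counting votes per defendant. A minimal sketch (function names are invented for illustration):

```python
from collections import Counter

def crowd_prediction(votes):
    """Majority-rules pooling of yes/no recidivism votes for one defendant."""
    counts = Counter(votes)
    return counts[True] > counts[False]

def crowd_accuracy(all_votes, outcomes):
    """Accuracy of the pooled crowd prediction across defendants.

    `all_votes` is a list of vote lists (one per defendant); `outcomes`
    is the parallel list of actual recidivism outcomes.
    """
    correct = sum(crowd_prediction(v) == o for v, o in zip(all_votes, outcomes))
    return correct / len(outcomes)
```

With 20 participants per subset, an odd majority is not guaranteed; the sketch treats an exact tie as a “no” prediction, which is one of several reasonable conventions.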

As we noted earlier, there exists significant debate regarding the measurement of algorithmic fairness. For the purpose of this study, we evaluate the human predictions with the same fairness criteria used in the ProPublica analysis for ease of comparability. We acknowledge that this may not be the ideal measure of fairness, and also acknowledge that there is debate in the literature on the appropriate measure of fairness. Regardless, we consider fairness in terms of disparate false positive rates (incorrectly classifying a defendant as high risk when they are not) and false negative rates (incorrectly classifying a defendant as low risk when they are not). We believe that, while perhaps not perfect, this measure of fairness shines a light on real-world consequences of incorrect predictions by quantifying the number of defendants that are improperly incarcerated or released.
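The two error rates just defined can be computed directly from predictions and outcomes for any group of defendants. A small sketch (names are illustrative):

```python
def error_rates(preds, outcomes):
    """False positive and false negative rates for one group of defendants.

    `preds` and `outcomes` are parallel booleans; True means the
    defendant is (predicted to be) a recidivist.
    """
    fp = sum(p and not o for p, o in zip(preds, outcomes))      # predicted yes, didn't
    fn = sum((not p) and o for p, o in zip(preds, outcomes))    # predicted no, did
    negatives = sum(not o for o in outcomes)  # defendants who did not recidivate
    positives = sum(o for o in outcomes)      # defendants who did recidivate
    return fp / negatives, fn / positives
```

Running this separately for Black and white defendants, and comparing the two pairs of rates, is the disparate-impact comparison used throughout the discussion that follows.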

We measure the fairness of our participants with respect to a defendant’s race based on the crowd predictions. Our participants’ accuracy on Black defendants is 68.2% compared to 67.6% for white defendants. An unpaired t-test reveals no significant difference across race (p = .87). This is similar to COMPAS, which has a statistically insignificant difference in accuracy: 64.9% for Black defendants and 65.7% for white defendants. By this measure of fairness, our participants and COMPAS are fair to Black and white defendants.

Despite this fairness in overall accuracy, our participants had a significant difference in the false positive and false negative rates for Black and white defendants. Specifically, our participants’ false positive rate for Black defendants is 37.1% compared to 27.2% for white defendants, and our participants’ false negative rate for Black defendants is 29.2% compared to 40.3% for white defendants.

These discrepancies are similar to that of COMPAS, which has a false positive rate of 40.4% for Black defendants and 25.4% for white defendants, and a false negative rate for Black defendants of 30.9% compared to 47.9% for white defendants. See table 1(a) and (c) and figure 1 for a summary of these results. By this measure of fairness, our participants and COMPAS are similarly unfair to Black defendants, despite—bizarrely—the fact that race is not explicitly specified.

[See full article for tables cited] 



The results of this study led us to question how human participants produced racially disparate predictions despite not knowing the race of the defendant. We recruited a new set of 400 participants to repeat the same exercise but this time with the defendant’s race included. We wondered if including a defendant’s race would reduce or exaggerate the effect of any implicit, explicit, or institutional racial bias.

In this race condition, the mean and median accuracy on predicting whether a defendant would recidivate is 62.3% and 64.0%, nearly identical to the condition where race is not specified, see table 1(a) and (b). The crowd-based accuracy is 66.5%, slightly lower than the condition in which race is not specified, but not significantly so. With respect to fairness, participant accuracy is not significantly different for Black defendants, 66.2%, compared to white defendants, 67.6%. The false positive rate for Black defendants is 40.0% compared to 26.2% for white defendants. The false negative rate for Black defendants is 30.1% compared to 42.1% for white defendants. See table 1(b) for a summary of these results.


Somewhat surprisingly, including race does not have a significant impact on overall accuracy or fairness. Most interestingly, the exclusion of race does not necessarily lead to the elimination of racial disparities in human recidivism prediction.

At this point in our study, we have seen clearly that the COMPAS predictive software is not superior to nonexpert human predictions. However, we are left with two perplexing questions:

  1. How is it that nonexperts are as accurate as a widely used commercial software? and

  2. How is it that nonexperts appear to be racially biased even when they don’t know the race of the defendant?

We next set out to answer these questions.

Replicating COMPAS’s Algorithmic Recidivism Prediction

With an overall accuracy of around 65%, COMPAS and nonexpert predictions are not as accurate as we might want, particularly from the point of view of a defendant whose future lies in the balance. Since nonexperts are as accurate as the COMPAS software, we wondered about the sophistication of the underlying COMPAS predictive algorithm. This algorithm, however, is not publicized, so we built our own predictive algorithm in an attempt to understand and effectively reverse engineer the COMPAS software.


Our algorithmic analysis used the same seven features as described in the previous section, extracted from the records in the Broward County data set. Unlike the human assessment that analyzed a subset of these defendants, the following algorithmic assessment is performed over the entire data set.

Logistic regression is a linear classifier that, in a two-class classification (as in our case), computes a separating hyperplane to distinguish between recidivists and nonrecidivists. A nonlinear support vector machine employs a kernel function—in our case, a radial basis kernel—to project the initial seven-dimensional feature space to a higher dimensional space in which a linear hyperplane is used to distinguish between recidivists and nonrecidivists. The use of a kernel function amounts to computing a nonlinear separating surface in the original seven-dimensional feature space, allowing the classifier to capture more complex patterns between recidivists and nonrecidivists than is possible with linear classifiers.


We employed two different classifiers: logistic regression (a simple, general-purpose, linear classifier) and a support vector machine (a more complex, general-purpose, nonlinear classifier). The input to each classifier was seven features from 7,214 defendants: age, sex, number of juvenile misdemeanors, number of juvenile felonies, number of prior (nonjuvenile) crimes, crime degree, and crime charge (see previous section). Each classifier was trained to predict recidivism from these seven features. Each classifier was trained 1,000 times on a random 80% training and 20% testing split. We report the average testing accuracy.
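The train/test protocol can be sketched with a deliberately minimal, library-free logistic regression (the authors would more plausibly have used a standard ML library; this sketch uses invented names, per-sample gradient descent, and far fewer repeats than the 1,000 reported):

```python
import math
import random

def train_logreg(X, y, lr=0.1, epochs=200):
    """Minimal logistic regression fit by per-sample gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid
            err = p - yi                     # gradient of log-loss w.r.t. z
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1 if z > 0 else 0

def split_accuracy(X, y, repeats=10, seed=0):
    """Average held-out accuracy over random 80/20 train/test splits."""
    rng = random.Random(seed)
    idx = list(range(len(X)))
    accs = []
    for _ in range(repeats):
        rng.shuffle(idx)
        cut = int(0.8 * len(idx))
        train, test = idx[:cut], idx[cut:]
        w, b = train_logreg([X[i] for i in train], [y[i] for i in train])
        correct = sum(predict(w, b, X[i]) == y[i] for i in test)
        accs.append(correct / len(test))
    return sum(accs) / repeats
```

The same loop applies unchanged to any classifier; substituting a kernel SVM for `train_logreg` would mirror the study's second model.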


We found that a simple linear predictor—logistic regression (LR)—provided with the same seven features as our participants (in the no-race condition), yields similar prediction accuracy as COMPAS’s predictive algorithm. As compared to COMPAS’s overall accuracy of 65.4%, our LR classifier yields an overall testing accuracy of 66.6%. Our predictor also yields similar results to COMPAS in terms of predictive fairness, see table 2(a) and (d).

Despite using only seven features as input, a standard linear predictor yields similar results to COMPAS’s software. We can reasonably conclude, therefore, that COMPAS is employing nothing more sophisticated than a linear predictor, or its equivalent.


To test whether performance was limited by the classifier or by the nature of the data, we trained a more powerful nonlinear support vector machine (SVM) on the same data. Somewhat surprisingly, the SVM yields nearly identical results to the linear classifier, see table 2(c). If the relatively low accuracy of the linear classifier was because the data is not linearly separable, then we would have expected the nonlinear SVM to perform better. The failure to do so suggests the data is not separable, linearly or otherwise.

Lastly, we wondered if using an even smaller subset of the seven features would be as accurate as COMPAS. We trained and tested an LR-classifier on all possible subsets of the seven features. In agreement with the research done by Angelino et al., we show that a classifier based on only two features—age and total number of prior convictions—performs as well as COMPAS, see table 2(b). The importance of these two criteria is consistent with the conclusions of two meta-analysis studies that set out to determine, in part, which criteria are most predictive of recidivism.
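With only seven features, an exhaustive sweep over all 127 non-empty subsets is cheap. A sketch of the search scaffolding (the `evaluate` callback is a hypothetical wrapper around a train/test loop like the one above; it is supplied by the caller):

```python
from itertools import combinations

def best_feature_subsets(feature_names, evaluate):
    """Score every non-empty subset of features, best first.

    `evaluate` maps a tuple of feature names to a held-out accuracy
    (e.g., by retraining a logistic-regression classifier on just
    those columns).
    """
    results = []
    for k in range(1, len(feature_names) + 1):
        for subset in combinations(feature_names, k):
            results.append((evaluate(subset), subset))
    results.sort(reverse=True)  # highest accuracy first
    return results
```

In the study's case, such a sweep surfaces the two-feature subset of age and total prior convictions as performing on par with the full feature set.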

Note: Predictions are for (a) logistic regression with seven features; (b) logistic regression with two features; (c) a nonlinear support vector machine with seven features; and (d) the commercial COMPAS software with 137 features. The results in columns (a)–(c) correspond to the average testing accuracy over 1,000 random 80/20 training/testing splits. The values in the square brackets correspond to the 95% bootstrapped (a)–(c) and binomial (d) confidence intervals.


In addition to further elucidating the inner workings of these predictive algorithms, the behavior of this two-feature linear classifier helps us understand how the nonexperts were able to match COMPAS’s predictive ability. When making predictions about an individual’s likelihood of future recidivism, the nonexperts saw the following seven criteria: age, sex, number of juvenile misdemeanors, number of juvenile felonies, number of prior (nonjuvenile) crimes, current crime degree, and current crime charge. If the algorithmic classifier can rely only on a person’s age and number of prior crimes to make this prediction, it is plausible that the nonexperts implicitly or explicitly focused on these criteria as well. (Recall that participants were provided with feedback on their correct and incorrect responses, so it is likely that some learning occurred.)


The two-feature classifier effectively learned that if a person is young and has already been convicted multiple times, they are at a higher risk of reoffending, but if a person is older and has not previously been convicted of a crime, then they are at a lower risk of reoffending. This certainly seems like a sensible strategy, if not a terribly accurate one.
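The qualitative rule just described is a linear trade-off between age and prior convictions. As a caricature only (the weights below are invented for illustration and are not the fitted values from the study):

```python
def two_feature_score(age, priors, w_age=-0.05, w_priors=0.25, b=0.8):
    """Toy linear risk score: younger age and more priors raise risk.

    A positive score corresponds to a "will recidivate" prediction.
    All coefficients are hypothetical.
    """
    return w_age * age + w_priors * priors + b
```

Under these invented weights, a young defendant with several priors scores positive (higher risk) while an older defendant with no priors scores negative (lower risk), matching the qualitative behavior described above.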


The predictive strength of a person’s age and number of prior convictions in this context also helps explain the racially disparate predictions seen in both of our human studies and in COMPAS’s predictions overall. On a national scale, Black people are more likely to have prior convictions on their record than white people are: for example, Black people in the United States are incarcerated in state prisons at a rate that is 5.1 times that of white Americans. Within the data set used in the study, white defendants have an average of 2.59 prior convictions, whereas Black defendants have an average of 4.95 prior convictions. In Florida, the state in which COMPAS was validated for use in Broward County, the incarceration rate of Black people is 3.6 times higher than that of white people. These racially disparate incarceration rates are not fully explained by different rates of offense by race. Racial disparities against Black people in the United States also exist in policing, arrests, and sentencing. The racial bias that appears in both the algorithmic and human predictions is a result of these discrepancies.


While the total number of prior convictions is one of the most predictive variables of recidivism, its predictive power is not very strong. Because COMPAS and the human participants are only moderately accurate (both achieve an accuracy of around 65%), they both make significant, and racially biased, mistakes. Black defendants are more likely to be classified as medium or high risk by COMPAS, because Black defendants are more likely to have prior convictions due to the fact that Black people are more likely to be arrested, charged, and convicted. On the other hand, white defendants are more likely to be classified as low risk by COMPAS, because white defendants are less likely to have prior convictions. Black defendants, therefore, who don’t reoffend are predicted to be riskier than white defendants who don’t reoffend. Conversely, white defendants who do reoffend are predicted to be less risky than Black defendants who do reoffend. As a result, the false positive rate is higher for Black defendants than white defendants, and the false negative rate for white defendants is higher than for Black defendants. This, in short, is the racial bias that ProPublica first exposed.


This same type of disparate outcome appeared in the human predictions as well. Because the human participants saw only a few facts about each defendant, it is safe to assume that the total number of prior convictions was heavily considered in one’s predictions. Therefore, the bias of the human predictions was likely also a result of the difference in conviction history, which itself is linked to inequities in our criminal justice system.


The participant and COMPAS’s predictions were in agreement for 692 of the 1,000 defendants, indicating that perhaps there could be predictive power in the “combined wisdom” of the risk tool and the human-generated risk scores. However, a classifier that combined the same seven data per defendant along with the COMPAS risk score and the average human-generated risk score performed no better than any of the individual predictions. This suggests that the mistakes made by humans and COMPAS are not independent.


We have shown that a commercial software that is widely used to predict recidivism is no more accurate or fair than the predictions of people with little to no criminal justice expertise who responded to an online survey. We have shown that these predictions are functionally equivalent. When discussing the use of COMPAS in the courtroom to make these life-altering decisions, we should therefore ask whether we would place these same decisions in the equally accurate and biased hands of random people responding to an online survey.


In response to our study, equivant, the makers of COMPAS, responded both that our study was “highly misleading” and that it “confirmed that COMPAS achieves good predictability.” Despite this contradictory statement and a promise to analyze our data and results, equivant has not demonstrated any flaws with our study.


Algorithmic predictions—whether in the courts, in university admissions, or employment, financial, and health decisions—can have a profound impact on someone’s life. It is essential, therefore, that the underlying data and algorithms that fuel these predictions are well understood, validated, and transparent to those who are the subject of their use.


In beginning to question the predictive validity of an algorithmic tool, it is essential to also interrogate the ethical implications of the use of the tool. Recidivism prediction tools are used in decisions about a person’s civil liberties. They are, for example, used to answer questions such as “Will this person commit a crime if they are released from jail before their trial? Should this person instead be detained in jail before their trial?” and “How strictly should this person be supervised while they are on parole? What is their risk of recidivism while they are out on parole?” Even if technologists could build a perfect and fair recidivism prediction tool, we should still ask if the use of this tool is just. In each of these contexts, a person is punished (either detained or surveilled) for a crime they have not yet committed. Is punishing a person for something they have not yet done ethical and just?


It is also crucial to discuss the possibility of building any recidivism prediction tool in the United States that is free from racial bias. Recidivism prediction algorithms are necessarily trained on decades of historical criminal justice data, learning the patterns of which kinds of people are incarcerated again and again. The United States suffers from racial discrimination at every stage in the criminal justice system. Machine learning technologies rely on the core assumption that the future will look like the past, and it is imperative that the future of our justice system looks nothing like its racist past. If any criminal risk prediction tool in the United States will inherently reinforce these racially disparate patterns, perhaps they should be avoided altogether."

For full publication, please visit: 

Scooped by Roxana Marachi, PhD!

I Have a Lot to Say About Signal’s Cellebrite Hack // Center for Internet and Society


By Riana Pfefferkorn on May 12, 2021

This blog post is based on a talk I gave on May 12, 2021 at the Stanford Computer Science Department’s weekly lunch talk series on computer security topics. Full disclosure: I’ve done some consulting work for Signal, albeit not on anything like this issue. (I kinda doubt they’ll hire me again if they read this, though.)

You may have seen a story in the news recently about vulnerabilities discovered in the digital forensics tool made by Israeli firm Cellebrite. Cellebrite's software extracts data from mobile devices and generates a report about the extraction. It's popular with law enforcement agencies as a tool for gathering digital evidence from smartphones in their custody. 

In April, the team behind the popular end-to-end encrypted (E2EE) chat app Signal published a blog post detailing how they had obtained a Cellebrite device, analyzed the software, and found vulnerabilities that would allow for arbitrary code execution by a device that's being scanned with a Cellebrite tool. 

As coverage of the blog post pointed out, the vulnerability calls into question whether Cellebrite's tools are reliable enough for use in criminal prosecutions. While Cellebrite has since taken steps to mitigate the vulnerability, a motion for a new trial has already been filed in at least one criminal case on the basis of Signal's blog post. 

Is that motion likely to succeed? What will be the likely ramifications of Signal's discovery in court cases? I think the impact on existing cases will be negligible, but that Signal has made an important point that may help push the mobile device forensics industry towards greater accountability for their often sloppy product security. Nevertheless, I have a raised eyebrow for Signal here too.

Let’s dive in.


What is Cellebrite? 

Cellebrite is an Israeli company that, per Signal’s blog post, “makes software to automate physically extracting and indexing data from mobile devices.” A common use case here in the U.S. is law enforcement use in criminal investigations, typically under a Fourth Amendment warrant that allows them to search someone’s phone and seize data from it. 

Cellebrite’s products are part of the industry of “mobile device forensics” tools. “The mobile forensics process aims to recover digital evidence or relevant data from a mobile device in a way that will preserve the evidence in a forensically sound condition,” using accepted methods, so that it can later be presented in court. 

Who are their customers?

Counting Cellebrite and the other vendors of mobile device forensics tools, more than two thousand law enforcement agencies across the country have such tools — including agencies in 49 of the 50 biggest cities in the U.S. Plus, ICE has contracts with Cellebrite worth tens of millions of dollars. 

But Cellebrite has lots of customers besides U.S. law enforcement agencies. And some of them aren’t so nice. As Signal’s blog post notes, “Their customer list has included authoritarian regimes in Belarus, Russia, Venezuela, and China; death squads in Bangladesh; military juntas in Myanmar; and those seeking to abuse and oppress in Turkey, UAE, and elsewhere.” 

The vendors of these kinds of tools love to get up on their high horse and talk about how they’re the “good guys,” they help keep the world safe from criminals and terrorists. Yes, sure, fine. But a lot of vendors in this industry, the industry of selling surveillance technologies to governments, sell not only to the U.S. and other countries that respect the rule of law, but also to repressive governments that persecute their own people, where the definition of “criminal” might just mean being gay or criticizing the government. The willingness of companies like Cellebrite to sell to unsavory governments is why there have been calls from human rights leaders and groups for a global moratorium on selling these sorts of surveillance tools to governments.

What do Cellebrite’s products do?

Cellebrite has a few different products, but as relevant here, there’s a two-part system in play: the first part, called UFED (which stands for Universal Forensic Extraction Device), extracts the data from a mobile device and backs it up to a Windows PC, and the second part, called Physical Analyzer, parses and indexes the data so it’s searchable. So, take the raw data out, then turn it into something useful for the user, all in a forensically sound manner. 

As Signal’s blog post explains, this two-part system requires physical access to the phone; these aren’t tools for remotely accessing someone’s phone. And the kind of extraction (a “logical extraction”) at issue here requires the device to be unlocked and open. (A logical extraction is quicker and easier, but also more limited, than the deeper but more challenging type of extraction, a “physical extraction,” which can work on locked devices, though not with 100% reliability. Plus, logical extractions won’t recover deleted or hidden files, unlike physical extractions.) As the blog post says, think of it this way: “if someone is physically holding your unlocked device in their hands, they could open whatever apps they would like and take screenshots of everything in them to save and go over later. Cellebrite essentially automates that process for someone holding your device in their hands.”

Plus, unlike some cop taking screenshots, a logical data extraction preserves the recovered data “in its original state with forensically-sound integrity admissible in a court of law.” Why show that the data were extracted and preserved without altering anything? Because that’s what is necessary to satisfy the rules for admitting evidence in court. U.S. courts have rules in place to ensure that the evidence that is presented is reliable — you don’t want to convict or acquit somebody on the basis of, say, a file whose contents or metadata got corrupted. Cellebrite holds itself out as meeting the standards that U.S. courts require for digital forensics.

But what Signal showed is that Cellebrite tools actually have really shoddy security that could, unless the problem is fixed, allow alteration of data in the reports the software generates when it analyzes phones. Demonstrating flaws in the Cellebrite system calls into question the integrity and reliability of the data extracted and of the reports generated about the extraction. 

That undermines the entire reason for these tools’ existence: compiling digital evidence that is sound enough to be admitted and relied upon in court cases.


What was the hack?

As background: Late last year, Cellebrite announced that one of their tools (the Physical Analyzer tool) could be used to extract Signal data from unlocked Android phones. Signal wasn’t pleased.


Apparently in retaliation, Signal struck back. As last month’s blog post details, Signal creator Moxie Marlinspike and his team obtained a Cellebrite kit (they’re coy about how they got it), analyzed the software, and found vulnerabilities that would allow for arbitrary code execution by a device that's being scanned with a Cellebrite tool.


According to the blog post:

“Looking at both UFED and Physical Analyzer, ... we were surprised to find that very little care seems to have been given to Cellebrite’s own software security. Industry-standard exploit mitigation defenses are missing, and many opportunities for exploitation are present. ...

“[W]e found that it’s possible to execute arbitrary code on a Cellebrite machine simply by including a specially formatted but otherwise innocuous file in any app on a device that is subsequently plugged into Cellebrite and scanned. There are virtually no limits on the code that can be executed.

“For example, by including a specially formatted but otherwise innocuous file in an app on a device that is then scanned by Cellebrite, it’s possible to execute code that modifies not just the Cellebrite report being created in that scan, but also all previous and future generated Cellebrite reports from all previously scanned devices and all future scanned devices in any arbitrary way (inserting or removing text, email, photos, contacts, files, or any other data), with no detectable timestamp changes or checksum failures. This could even be done at random, and would seriously call the data integrity of Cellebrite’s reports into question.”

Signal also created a video demo to show their proof of concept (PoC), which you can watch in the blog post or their tweet about it. They summarized what’s depicted in the video:

[This] is a sample video of an exploit for UFED (similar exploits exist for Physical Analyzer). In the video, UFED hits a file that executes arbitrary code on the Cellebrite machine. This exploit payload uses the MessageBox Windows API to display a dialog with a message in it. This is for demonstration purposes; it’s possible to execute any code, and a real exploit payload would likely seek to undetectably alter previous reports, compromise the integrity of future reports (perhaps at random!), or exfiltrate data from the Cellebrite machine.".... 

Scooped by Roxana Marachi, PhD!

Kahoot acquires Clever, the US-based edtech portal, for up to $500M // TechCrunch


By Ingrid Lunden

"Kahoot, the popular Oslo-based edtech company that has built a big business out of gamifying education and creating a platform for users to build their own learning games, is making an acquisition to double down on K-12 education and its opportunities to grow in the U.S. It is acquiring Clever, a startup that has built a single sign-on portal for educators, students and their families to build and engage in digital learning classrooms, currently used by about 65% of all U.S. K-12 schools. Kahoot said that the deal — coming in a combination of cash and shares — gives Clever an enterprise value of between $435 million and $500 million, dependent on meeting certain performance milestones.

The plan will be to continue growing Clever’s business in the U.S. — which currently employs 175 people — as well as give it a lever for expanding globally alongside Kahoot’s wider stable of edtech software and services.

“Clever and Kahoot are two purpose-led organizations that are equally passionate about education and unleashing the potential within every learner,” said Eilert Hanoa, CEO at Kahoot, in a statement. “Through this acquisition we see considerable potential to collaborate on education innovation to better service all our users — schools, teachers, students, parents and lifelong learners — and leveraging our global scale to offer Clever’s unique platform worldwide. I’m excited to welcome Tyler and his team to the Kahoot family.”

The news came on the same day that Kahoot, which is traded in Oslo with a market cap of $4.3 billion, also announced strong Q1 results in which it also noted it has closed its acquisition of, a provider of whiteboard tools for teachers, for an undisclosed sum.

The same tides that have been lifting Kahoot have also been playing out for Clever and other edtech companies.


The startup was originally incubated in Y Combinator and launched with a vision to be a “Twilio for education”: a unified way to tap into the myriad student sign-on systems and educational databases, making it easier for those building edtech services to scale their products and bring on more customers (schools, teachers, students, families). As with payments, financial services in general, and telecommunications, it turns out that education is a pretty fragmented market, and Clever wanted to fix that complexity and put it behind an API to make it easier for others to tap into.


Over time it also built out a marketplace (an “application gallery,” in its terminology) of some 600 software providers and application developers that integrate with its SSO, which in turn becomes a way for a school or district to expand the number of edtech tools it can use. This has been especially critical in the last year, as schools have been forced to close in-person learning and go entirely virtual to help stave off the spread of COVID-19.


Clever has found a lot of traction for its approach both with schools and investors. With the former, Clever says that it’s used by 89,000 schools and some 65% of K-12 school districts (13,000 overall) in the U.S., with that figure including 95 of the 100 largest school districts in the country. This works out to 20 million students logging in monthly and 5.6 billion learning sessions.

The latter, meanwhile, has seen the company raise funding from a pretty impressive range of investors, including current and former YC partners like Paul Graham and Sam Altman, GSV, Founders Fund, Lightspeed and Sequoia. It has raised just under $60 million, which may sound modest these days, but remember that it has been around since 2012, when edtech was not so cool and attention-grabbing, and it hasn't raised money since 2016, which in itself is a sign that it's doing something right as a business.


Indeed, Kahoot noted that Clever projects $44 million in billed revenue for 2021, with annual revenue growth of approximately 25% CAGR over the last three years, and that it has been running the business on “a cash flow neutral basis, redeploying all cash into development of its offerings.”


Kahoot itself has had a strong year, driven in no small part by the pandemic and the huge boost to remote learning and remote work that resulted. It noted in its results that it had 28 million active accounts in the last twelve months, representing 68% growth on the year before, with the number of hosted games in that period at 279 million (up 28%) and more than 1.6 billion participants in those games (up 24%). Paid subscriptions in Q1 stood at 760,000, with 255,000 using the “work” (B2B) tier; 275,000 school accounts; and 230,000 in its “home and study” category. Annual recurring revenue is now at $69 million ($18 million a year ago for the same quarter), while actual revenue for the quarter was $16.2 million (up from $4.2 million a year ago), growing 284%.


The company, which is also backed by the likes of Disney, Microsoft and Softbank, has made a number of acquisitions to expand. Clever is the biggest of these to date."...


For full post, please visit: 

Scooped by Roxana Marachi, PhD!

The Rise—and the Recurring Bias—of Risk Assessment Algorithms // Revue

By Julia Angwin (Editor In Chief, The Markup)
"Hello, friends, 
I first learned the term “risk assessments” in 2014 when I read a short paper called “Data & Civil Rights: A Criminal Justice Primer,” written by researchers at Data & Society. I was shocked to learn that software was being used throughout the criminal justice system to predict whether defendants were likely to commit future crimes. It sounded like science fiction.

I didn’t know much about criminal justice at the time, but as a longtime technology reporter, I knew that algorithms for predicting human behavior didn’t seem ready for prime time. After all, Google’s ad targeting algorithm thought I was a man, and most of the ads that followed me around the web were for things I had already bought. 

So I decided I should test a criminal justice risk assessment algorithm to see if it was accurate. Two years and a lot of hard work later, my team at ProPublica published “Machine Bias,” an investigation proving that a popular criminal risk assessment tool was biased against Black defendants, possibly leading them to be unfairly kept longer in pretrial detention. 

Specifically, what we found—and detailed in an extensive methodology—was that the risk scores were not particularly accurate (60 percent) at predicting future arrests, and that when they were wrong, they were twice as likely to incorrectly predict that Black defendants would be arrested in the future compared with White defendants.

In other words, the algorithm overestimated the likelihood that Black defendants would later be arrested and underestimated the likelihood that White defendants would later be arrested. 
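The kind of group-wise error analysis described above can be made concrete with a small sketch. The records, labels, and numbers below are invented purely for illustration; this is not ProPublica's dataset or the actual scoring tool's method.

```python
# Minimal sketch of a group-wise false positive rate comparison.
# All records here are invented for illustration only.

def false_positive_rate(records):
    """Share of people who were NOT rearrested but were labeled high risk."""
    not_rearrested = [r for r in records if not r["rearrested"]]
    if not not_rearrested:
        return 0.0
    flagged = sum(1 for r in not_rearrested if r["high_risk"])
    return flagged / len(not_rearrested)

# Each record pairs the tool's label with the observed outcome.
records = [
    # Group A: four people not rearrested, three of them flagged high risk
    {"group": "A", "high_risk": True,  "rearrested": False},
    {"group": "A", "high_risk": True,  "rearrested": False},
    {"group": "A", "high_risk": True,  "rearrested": False},
    {"group": "A", "high_risk": False, "rearrested": False},
    {"group": "A", "high_risk": True,  "rearrested": True},
    # Group B: four people not rearrested, one flagged high risk
    {"group": "B", "high_risk": True,  "rearrested": False},
    {"group": "B", "high_risk": False, "rearrested": False},
    {"group": "B", "high_risk": False, "rearrested": False},
    {"group": "B", "high_risk": False, "rearrested": False},
    {"group": "B", "high_risk": False, "rearrested": True},
]

for group in ("A", "B"):
    subset = [r for r in records if r["group"] == group]
    print(group, false_positive_rate(subset))  # A -> 0.75, B -> 0.25
```

In this toy data the overall accuracy of the two groups can look similar even while Group A's false positive rate is three times Group B's, which is the shape of the disparity the investigation reported.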

But despite those well-known flaws, risk assessment algorithms are still popular in the criminal justice system, where judges use them to help decide everything from whether to grant pretrial release to the length of prison sentences.

And the idea of using software to predict the risk of human behaviors is catching on in other sectors as well. Risk assessments are being used by police to identify future criminals and by social service agencies to predict which children might be abused.

Last year, The Markup investigative reporter Lauren Kirchner and Matthew Goldstein of The New York Times investigated the tenant screening algorithms that landlords use to predict which applicants are likely to be good tenants. They found that the algorithms use sloppy matching techniques that often generate incorrect reports, falsely labeling people as having criminal or eviction records. The problem is particularly acute among minority groups, which tend to have fewer unique last names. For example, more than 12 million Latinos nationwide share just 26 surnames, according to the U.S. Census Bureau.

And this week, reporter Todd Feathers broke the news for The Markup that hundreds of universities are using risk assessment algorithms to predict which students are likely not to graduate within their chosen major.

Todd obtained documents from four large universities showing that they were using race as a predictor, and in some cases a “high impact predictor,” in their risk assessment algorithms. In criminal justice risk algorithms, race has not been included as an input variable since the 1960s. 

At the University of Massachusetts Amherst, the University of Wisconsin–Milwaukee, the University of Houston, and Texas A&M University, the software predicted that Black students were “high risk” at as much as quadruple the rate of their White peers."
Representatives of Texas A&M, UMass Amherst, and UW-Milwaukee noted that they were not aware of exactly how EAB’s proprietary algorithms weighed race and other variables. A spokesperson for the University of Houston did not respond specifically to our request for comment on the use of race as a predictor.

The risk assessment software being used by the universities is called Navigate and is provided by an education research company called EAB. Ed Venit, an EAB executive, told Todd it is up to the universities to decide which variables to use and that the existence of race as an option is meant to “highlight [racial] disparities and prod schools to take action to break the pattern.”
If the risk scores were being used solely to provide additional support to the students labeled as high risk, then perhaps the racial disparity would be less concerning. But faculty members told Todd that the software encourages them to steer high-risk students into “easier” majors—and particularly, away from math and science degrees. 

“This opens the door to even more educational steering,” Ruha Benjamin, a professor of African American studies at Princeton and author of “Race After Technology,” told The Markup. “College advisors tell Black, Latinx, and indigenous students not to aim for certain majors. But now these gatekeepers are armed with ‘complex’ math.”

There are no standards and no accountability for the ‘complex math’ that is being used to steer students, rate tenants, and rank criminal defendants. So we at The Markup are using the tools we have at our disposal to fill this gap. As I wrote in this newsletter last week, we employ all sorts of creative techniques to try to audit the algorithms that are proliferating across society. 

It’s not easy work, and we can’t always obtain data that lets us definitively show how an algorithm works. But we will continue to try to peer inside the black boxes that have been entrusted with making such important decisions about our lives.
As always, thanks for reading.
Julia Angwin
The Markup
Scooped by Roxana Marachi, PhD!

Artificial intelligence is infiltrating higher ed, from admissions to grading: As colleges' use of the technology grows, so do questions about bias and accuracy


By Derek Newton

Students newly accepted by colleges and universities this spring are being deluged by emails and texts in the hope that they will put down their deposits and enroll. If they have questions about deadlines, financial aid and even where to eat on campus, they can get instant answers.


The messages are friendly and informative. But many of them aren’t from humans.

Artificial intelligence, or AI, is being used to shoot off these seemingly personal appeals and deliver pre-written information through chatbots and text personas meant to mimic human banter.  It can help a university or college by boosting early deposit rates while cutting down on expensive and time-consuming calls to stretched admissions staffs.

AI has long been quietly embedding itself into higher education in ways like these, often to save money — a need that’s been heightened by pandemic-related budget squeezes.

Now, simple AI-driven tools like these chatbots, plagiarism-detecting software and apps to check spelling and grammar are being joined by new, more powerful – and controversial – applications that answer academic questions, grade assignments, recommend classes and even teach.


The newest can evaluate and score applicants’ personality traits and perceived motivation, and colleges are increasingly using these tools to make admissions and financial aid decisions.

As the presence of this technology on campus grows, so do concerns about it. In at least one case, a seemingly promising use of AI in admissions decisions was halted because, by using algorithms to score applicants based on historical precedent, it perpetuated bias."...


For full post, please visit: 

Scooped by Roxana Marachi, PhD!

Protecting Youth from Data Exploitation by Online Technologies and Applications // Proposed by CA-HI State NAACP // Passed at National NAACP Conference 9/26/20

The following resolution was proposed and passed in April 2020 at the CA-HI State NAACP Resolutions Conference, and passed by the national delegates at the NAACP National Conference on September 26th, 2020. 


"WHEREAS the COVID19 pandemic has resulted in widespread school closures that are disproportionately disadvantaging families in under-resourced communities; and


WHEREAS the resulting emergency learning tools have primarily consisted of untested online technology apps and software programs; and


WHEREAS, the National Education Policy Center has documented evidence of widespread adoptions of apps and online programs that fail to meet basic safety and privacy protections; and


WHEREAS privately managed cyber/online schools, many of which have been involved in wide-reaching scandals involving fraud, false marketing, and unethical practices, have seized the COVID crisis to increase marketing of their programs that have resulted in negative outcomes for students most in need of resources and supports; and


WHEREAS, parents and students have a right to be free from intrusive monitoring of their children’s online behaviors, and have a right to know what data are being collected, what entities have access, how long data will be held, in what ways data would be combined, and how data could be protected against exploitation; and


WHEREAS increased monitoring and use of algorithmic risk assessments on students’ behavioral data are likely to disproportionately affect students of color and other underrepresented or underserved groups, such as immigrant families, students with previous disciplinary issues or interactions with the criminal justice system, and students with disabilities; and


WHEREAS serious harms resulting from the use of big data and predictive analytics have been documented to include targeting based on vulnerability, misuse of personal information, discrimination, data breaches, political manipulation and social harm, data and system errors, and limiting or creating barriers to access for services, insurance, employment, and other basic life necessities;


BE IT THEREFORE RESOLVED that the NAACP will advocate for strict enforcement of the Family Education Rights and Privacy Act to protect youth from exploitative data practices that violate their privacy rights or lead to predictive harms; and


BE IT FURTHER RESOLVED that the NAACP will advocate for federal, state, and local policy to ensure that schools, districts, and technology companies contracting with schools will neither collect, use, share, nor sell student information unless given explicit permission by parents in plain language and only after being given full informed consent from parents about what kinds of data would be collected and how it would be utilized; and


BE IT FURTHER RESOLVED that the NAACP will work independently and in coalition with like-minded civil rights, civil liberties, social justice, education and privacy groups to collectively advocate for stronger protection of data and privacy rights; and



BE IT FURTHER RESOLVED that the NAACP will oppose state and federal policies that would promote longitudinal data systems that track and/or share data from infancy/early childhood in exploitative, negatively impactful, discriminatory, or racially profiling ways through their education path and into adulthood; and


BE IT FINALLY RESOLVED that the NAACP will urge Congress and state legislatures to enact legislation that would prevent technology companies engaged in big data and predictive analytics from collecting, sharing, using, and/or selling children’s educational or behavioral data."


Scooped by Roxana Marachi, PhD!

"How Do You Feel Today?": The Invasion of SEL Software in K-12 // by Shelley Buchanan // 

"How Do You Feel Today?": The Invasion of SEL Software in K-12 // by Shelley Buchanan //  | Educational Psychology & Technology: Critical Perspectives and Resources |

By Shelley Buchanan
"The recent appeal for more mental health services has caused school districts to adopt software touting SEL (social-emotional learning) capabilities. Such programs as GoGuardian, Panorama, and Harmony SEL are now in thousands of schools across the nation. While the need for more mental health supports in schools is evident, the rapid adoption of technology has occurred without adequate scrutiny and parental awareness. Even teachers and district administrators blindly accept these companies’ claims to improve behavior and dramatically drop suicide rates. But such businesses base their product’s effectiveness on few research studies of little value.¹ The valid studies cited may be focusing on the SEL lessons delivered by humans without using the digital program.

One such program called PBIS Rewards touts the benefits of daily student “check-ins.” Students log into the program on their devices and click on an emoji reflecting their current emotional state. This information is then automatically sent to a central database that allows the teacher to track students’ emotions on their computer. The program makers tout the benefits by emphasizing how easy it is to collect and track such student data. Teachers and schools can set goals for students using this data and assign points to desired behaviors. The PBIS Rewards website states, “Students love to see their point totals grow, and to think about which rewards they’ll get with their points.” Parents are encouraged to download the associated app onto their phones to reinforce the program at home. The company assures schools that “Parents enjoy seeing their student’s progress, and are alerted when a referral is given.” ²


Within PBIS Rewards and other SEL software, teachers and administrators can use data collected online from students to create reports.³ Schools can break these reports down by gender and race. Let’s say a school compiles a database showing that its Black male students were angry 70% of the time. It is not difficult to imagine how schools could inadvertently use this information to reinforce pre-existing bias and racial stereotyping. Just because we have data doesn’t mean it leads to equity.⁴ What matters is what people do with the data.⁵
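It is worth seeing how little effort this kind of demographic slicing takes. The sketch below uses entirely invented check-in records; the field names and moods are assumptions, not any vendor's actual schema.

```python
from collections import Counter, defaultdict

# Invented daily check-in records of the kind an SEL dashboard might store.
checkins = [
    {"student": "s1", "group": "X", "mood": "angry"},
    {"student": "s2", "group": "X", "mood": "angry"},
    {"student": "s3", "group": "X", "mood": "happy"},
    {"student": "s4", "group": "Y", "mood": "happy"},
    {"student": "s5", "group": "Y", "mood": "angry"},
    {"student": "s6", "group": "Y", "mood": "happy"},
    {"student": "s7", "group": "Y", "mood": "happy"},
]

# Tally moods per demographic group: the one-step aggregation that turns
# individual check-ins into a report sliced by group.
by_group = defaultdict(Counter)
for c in checkins:
    by_group[c["group"]][c["mood"]] += 1

for group, moods in sorted(by_group.items()):
    share_angry = moods["angry"] / sum(moods.values())
    print(group, f"{share_angry:.0%} angry")
```

A few lines of aggregation are all that separates raw check-ins from a "Group X is angry two-thirds of the time" report, which is precisely why the downstream use of such reports deserves scrutiny.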

The school also keeps this information about students throughout the year. If it is not deleted, there is a potential for future teachers to develop a bias toward a student before they even meet them.⁶ Some will say this knowledge is helpful, but are we not denying kids the chance to start over with a new school year? What if a student had a parent who went to prison that year and was depressed or angry because of it? Yet a teacher merely sees that this particular student was angry 70% of the time. Now consider what happens if the school shares this information with law enforcement.⁶

According to FERPA, school resource officers and other law enforcement cannot access student information without a specified exception, but districts can creatively interpret these limits.⁷

SEL tech providers will often claim their products promote mental health awareness and can be used to reduce the number of suicidal or dangerous students. Even before the pandemic, the Guardian reported that with such technology, “privacy experts — and students — said they are concerned that surveillance at school might actually be undermining students’ wellbeing.” ⁸ Over-reliance upon potentially invasive technology can erode students’ trust.

Reliance on mental health digital applications during distance learning can also raise ethical concerns rarely discussed among staff untrained in mental health issues.⁹ Use of programs such as GoGuardian to monitor students’ screens for concerning websites can lead to legal problems for unaware educators.¹⁰

This district website sends parents directly to the company’s site to encourage the download of the app.

In addition to requiring children to use these programs in school, ed-tech companies are now encouraging schools to have students and parents download apps. Such actions create several privacy concerns. The student is downloading an app on a personal device, and will therefore be using it outside of school networks and their security protections. Personal information in these apps could thus be accessed by outside parties. While companies may claim they have ensured their software is safe, online apps installed on phones are routinely insecure.¹¹ COPPA guidelines are often not followed.¹² School districts have even been known to put direct links to these apps on their websites, encouraging parents and students to use apps with privacy issues.¹³

The integration of digital SEL programs with other software platforms like Clever adds another layer of privacy concerns. What if another student hacks into Clever or Google Classroom? What if the SEL screen on a teacher’s computer became visible? Teachers often will display their laptop screen to the class. What if they accidentally had a student’s SEL screen open and projected this? Technical issues occur all the time, and it is easy to see how such an incident could happen.

The potential privacy issues surrounding digital SEL programs abound. For example, a popular app called Thrively shares information with third-party users (despite the company's privacy statement).¹⁴ Many widely used applications in schools are too new for privacy specialists to know to what extent they violate individual privacy.¹⁵ Therefore, schools using these programs often act as experimental laboratories for the legal limits of data collection and usage. We must keep in mind that just because there are no reported incidents of privacy violations doesn’t mean they don’t occur.

Frequently, companies that produce such online programs will offer their product to districts for free. Let us be clear: no one simply gives away software with no compensation in return. Educational technology companies have routinely taken data as payment for the use of their products.¹⁶ Sales of data to third-party digital operators are big money. Information is the most valuable commodity there is today.¹⁷

Educational technology companies can trade influence for payment.¹⁸ The student usage of Google or Microsoft products can lead to parents purchasing such products for home use. As adults, former students will also be more likely to buy these brand name products. The free license for school districts ends up paying off in such cases. And it’s not only the big guys like Google that are making such an investment. Organizations like Harmony SEL have a whole line of products for home use geared towards parents. Harmony is also associated with a private university, an online charter school, a professional development company, and a company that sells fundraising training for schools. These programs all rely heavily upon funding by billionaire T. Denny Sanford.¹⁹ Of course, consumers of the Harmony SEL system are encouraged to use these other businesses and organizations’ products.


Online educational software does sometimes disregard privacy laws regarding children. In 2020, New Mexico’s attorney general sued Google claiming the tech giant used its educational products to spy on the state’s children and families despite Google’s privacy statement ensuring schools and families that children’s data wouldn’t be tracked.²⁰ The lack of comprehensive and sufficient laws protecting children’s online information makes the ubiquitous use of educational technology all the more troubling.²¹ If schools are not aware of the potential violations, how can parents be? Even more concerning, The State Student Privacy Report Card states, “FERPA contains no specific protections against data breaches and hacking, nor does it require families be notified when inadvertent disclosures occur.” ²²

Educational technology providers can adhere to COPPA guidelines by claiming they require parental consent before children use their products.²³ But frequently, school districts will merely have parents sign a universal consent form covering all digital tools. Although they can, and should, require additional consent for specific applications, they often do not. Besides, if the parental consent form includes all necessary tools such as Google Suite, a student could be denied any devices until a parent signs the form. Such conditions place tremendous pressure on parents to consent.

Equally insidious are the tech marketing claims that feed into school accountability mandates. Makers of SEL software craft their messaging to reflect the mission statements and goals of school districts. For example, Panorama claims that their SEL tracking program can predict “college and career readiness.” Popular terms like “grit” and “growth mindset” are generously sprinkled throughout marketing literature. Other programs claim their products produce a rise in standardized test scores.²⁴ Some even have convinced school districts to do marketing for them, promoting their products for free.²⁵

Underlying many such behavioral programs is the reliance on extrinsic motivators. Yet, the use of rewards for learning is highly problematic.²⁶ Dan Pink found that extrinsic rewards such as gold stars and gift certificates were harmful in the school environment.²⁷ Teachers themselves are even speaking out against the damaging effects of such programs.²⁸


These concerns lead us to the larger question: who decides what feelings are acceptable? How does SEL technology discourage the expression of certain feelings? If we reward students for a “positive mindset,” does that mean we should actively try to stifle negative emotions? Evan Selinger, the author of Reengineering Humanity, warns that “technology, by taking over what were once fundamental functions…has begun to dissociate us from our own humanity.” ²⁹

School SEL programs with objectives to produce more positive feelings may have the unintended effect of telling children that their emotional reactions are something they entirely create, not a reflection of their environment. Suppose a child is frustrated because they don’t understand the curriculum. In that case, the school may emphasize the child controlling their feelings rather than adapting the material to the student’s needs. Students rarely have the ability or courage to tell teachers why they are feeling what they are feeling. In a system where adults advise students that they alone are responsible for their feelings, a child can easily take the blame for adult behaviors. Districts can then use such data to explain away low standardized test scores, asserting that “students with higher social-emotional competencies tend to have higher scores on Smarter Balanced ELA and math assessments.” It then becomes easy to assume that student academic failure has little to do with the quality of instruction in the school and everything to do with the student’s emotional competencies.

“Technology, by taking over what were once fundamental functions, has begun to dissociate us from our own humanity.” — Evan Selinger, author of Reengineering Humanity

In modern Western culture, society encourages parents to minimize negative emotions in their children.³⁰ Child psychologists stress that children need to be allowed to express negative feelings. Not only does this tell the child that fear, anger, frustration, etc., are normal, but it also lets the child practice dealing with negative feelings. It is not sufficient or helpful to encourage positive emotions while censoring negative ones. Expression of negative feelings is necessary for mental health.³¹ (Take a look at the millions of adults stuffing their anger and sadness away with alcohol, food, and drugs.) Parental discouragement of negative feelings is one thing, though. It is another to allow a school, and worse yet a technology company, to regulate a child’s emotions. One can only envision a dystopian future where we are not allowed to feel anything but happiness.

“And that,” put in the Director sententiously, “that is the secret of happiness and virtue — liking what you’ve got to do. All conditioning aims at that: making people like their unescapable social destiny.” — Brave New World

If we take Huxley’s writings seriously, the intention of societal enforced happiness is the control of the individual. One cannot help but think of this when reading about behavioral programs that reward “good” feelings with happy face emojis, stars, or even pizza parties.

Instead of relying on software to monitor and shape children’s behaviors, teachers should be focusing on improving relationships built on trust. Even if a school uses software to identify a child’s feelings, no change will occur because of mere identification. The difference is in the steps schools take to address student anger, frustration, apathy, and the conditions that create them. Over and over again, the one thing that improves student mental health is teachers’ and counselors’ support. Without such beneficial relationships, destructive behavior occurs. Research consistently finds that poor relationships between teachers and pupils can cause bad behavior.³²

When SEL software is adopted, and there are limited counselors and social workers, the teacher decides the meaning of a student’s emotions and mental health. What does depression look like, and how many days of “sad” is symptomatic of a mental health issue? Teachers are not trained mental health providers. But the reliance on and assumed efficacy of such programs may give teachers the false feeling that they can rely on their perspective without contacting a counselor. Broad adoption of such software could be a money-saving measure to cash-strapped districts pressured to deal with a rising level of child mental health issues. The annual cost of a software license is far less than the salaries of certified school counselors and social workers.


Parents and teachers need to be aware of SEL software, its use, and its purpose. The simple addition of a list of licensed applications on a district website is not enough to ensure parental awareness. Often SEL technology is adopted without parent review and feedback. While districts allow parents to review and opt their child out of sex education programs, SEL programs do not have such a requirement in place. This lack of clarity has led to parents (and teachers) voicing their concerns over SEL curriculums and lessons.³³ ³⁴ Rapid adoption without critical voices could lead to school encroachment into families’ values and norms. Whether or not one agrees with the beliefs of individual families, as a society, we need to be aware of how specific policies may negatively impact the civil liberties of individuals.³⁵


Technology is changing at a rapid pace never previously experienced. If we are to harness its benefits, we must first take stock of its harmful impact on our institutions. Quick adoption of SEL programs needs reassessment given the risks associated with their misuse. We must first insist upon the humanity from which all good teaching emanates. Only within this framework can we create environments in which children can develop and flourish."...


  1. García Mathewson, Tara, and Sarah Butrymowicz. Ed Tech Companies Promise Results, but Their Claims Are Often Based on Shoddy Research. The Hechinger Report, 20 May 2020
  2. PBIS Rewards also has a teacher behavior reward system. The PBIS Rewards website states that principals can give reward points just as they do for students. Teachers can earn reward points for Bath and Body Works baskets, a dress-down pass, or even a gift card for groceries. (Not making enough money teaching to buy dinner? If you earn enough points, you too can buy food for your family!) Ironically, principals can even give teachers points for “buying into” the PBIS system. No mention of how such systems can negatively contribute to our teacher attrition problem. Source: “Introducing the SEL Check-In Feature with PBIS Rewards.” PBIS Rewards, Motivating Systems, LLC., 4 Sept. 2020
  3. For example, a school district in Nevada used data collected through the Panorama application to create reports of behavioral trends based on gender and race. Source: Davidson, Laura. How Washoe County School District Uses SEL Data to Advance Equity and Excellence, Panorama Education, October 2020
  4. Bump, Philip. Cops Tend to See Black Kids as Less Innocent Than White Kids. The Atlantic, 27 Nov. 2014
  5. Skibba, Ramin. The Disturbing Resilience of Scientific Racism. Smithsonian Magazine, 20 May 2019
  6. An EFF report found few school privacy policies address deletion of data after periods of inactivity, which would allow applications to retain information even after students graduate. Source: Alim, F., Cardoza, N., Gebhart, G., Gullo, K., & Kalia, A. Spying on Students: School-Issued Devices and Student Privacy. Electronic Frontier Foundation, 13 April 2017
  7. Education, Privacy, Disability Rights, and Civil Rights Groups Send Letter to Florida Governor About Discriminatory Student Database. Future of Privacy Forum, 14 Dec. 2020
  8. It is estimated that as many as a third of America’s school districts may already be using technology that monitors students’ emails and documents for phrases that might flag suicidal thoughts. Source: Beckett, Lois. Clear Backpacks, Monitored Emails: Life for US Students under Constant Surveillance. The Guardian, 2 Dec. 2019
  9. Florell, D., et al. “Legal and Ethical Considerations for Remote School Psychological Services.” National Association of School Psychologists (NASP), Accessed 12 February 2021.
  10. Buchanan, Shelley. “The Abuses and Misuses of GoGuardian in Schools.” Medium, Teachers on Fire Magazine, 23 Jan. 2021
  11. COPPA requires that websites and online services directed to children obtain parental consent before collecting personal information from anyone younger than 13; however, many popular apps do not comply. A University of Texas at Dallas study of 100 mobile apps for kids found that 72 violated a federal law aimed at protecting children’s online privacy. Source: University of Texas at Dallas. Tool to protect children’s online privacy: Tracking instrument nabs apps that violate federal law with 99% accuracy. Science Daily, 23 June 2020.
  12. Ibid.
  13. For example, Second Step, a program used in many school districts, has a link to a children’s app that collects personally identifiable information that is sold to third parties.
  14. “Common Sense Privacy Evaluation for Thrively.” The Common Sense Privacy Program, Common Sense Media. Accessed 12 February 2021.
  15. Tate, Emily. Is School Surveillance Going Too Far? Privacy Leaders Urge a Slow Down. EdSurge News, 10 June 2019
  16. Educator Toolkit for Teacher and Student Privacy: A Practical Guide for Protecting Personal Data. Parent Coalition for Student Privacy & Badass Teachers Association. October 2018.
  17. Jossen, Sam. The World’s Most Valuable Resource Is No Longer Oil, but Data. The Economist, 6 May 2017.
  18. Klein, Alyson. What Does Big Tech Want From Schools? (Spoiler Alert: It’s Not Money). Education Week, 29 Dec. 2020.
  19. T. Denny Sanford has also heavily funded and lent his name to a number of other organizations. In late 2020, however, Sanford Health decided to drop the founder’s name from its title after reported child pornography investigations of its benefactor. National University (home of the college associated with the Harmony SEL program) also adopted the philanthropist’s name, yet recently reconsidered the change.
  20. Singer, N. and Wakabayashi, D. New Mexico Sues Google Over Children's Privacy Violations. New York Times, 20 February 2020
  21. The State Student Privacy Report Card: Grading the States on Protecting Student Data Privacy. Parent Coalition for Student Privacy & The Network for Public Education, January 2019.
  22. Ibid.
  23. COPPA protects children under the age of 13 who use commercial websites, online games, and mobile apps. While schools must ensure the services their students use treat the data they collect responsibly, COPPA ultimately places the responsibility on the online service operator. At the same time, COPPA generally does not apply when a school has hired a website operator to collect information from students for educational purposes for use by the school. In those instances, the school (not an individual teacher) can provide consent on behalf of the students when required, as long as the data is used only for educational purposes.
  24. Such correlation assumes that standardized assessments such as the SBAC are accurate measurements of students’ academic abilities. There are multiple reasons why this is not the case. To attribute a student’s success or failure to their emotional state is harmful, considering the tests themselves have serious flaws. If a school decides to use data collected about SEL competencies and sort it according to socio-economic status, it would be too easy to assume that poor SEL skills, rather than ineffective schools or poverty, cause low test scores. It is not difficult to imagine how this flawed logic could then be used to substantiate a claim that low social-emotional skills cause poverty rather than any societal attributes.
  25. Current WCSD Superintendent Kristen McNeill stated in 2017, “I can’t think of better data to help our 64,000 students on their path to graduation.” Source: Serving 5 Million Students, Panorama Education Raises $16M to Expand Reach of Social-Emotional Learning and Increase College Readiness in Schools, Panorama Education, 26 June 2018
  26. Kohn, Alfie. The Risks of Rewards. ERIC Digest, 17 Nov. 2014
  27. Truby, Dana. “Motivation: The Secret Behind Student Success.” Accessed 12 January 2021.
  28. “To monitor students like items on a conveyer belt does more for District PR machine than how to assist real students with real complex emotional and social issues.” Source: Rubin, Lynda. “Action Item 16 Contracts with Motivating Systems LLC and Kickboard, Inc.” Alliance for Philadelphia Public Schools, 20 Jan. 2021.
  29. Furness, Dylan. “Technology Makes Our Lives Easier, but is it at the Cost of Our Humanity?” Digital Trends, Digital Trends, 28 Apr. 2018.
  30. Denham, S. A. “Emotional Competence During Childhood and Adolescence.” Handbook of Emotional Development, edited by Vanessa LoBue et al., 2019, pp. 493–541.
  31. Rodriguez, Tori. Negative Emotions Are Key to Well-Being. Scientific American, 1 May 2013.
  32. Cadima, J., Leal, T., and Burchinal, M. “The Quality of Teacher-Student Interactions: Associations with First Graders’ Academic and Behavioral Outcomes.” Journal of School Psychology, 2010;48:457–82.
  33. Callahan, Joe. Marion School Board Shelves Sanford Harmony Curriculum Over Gender Norm Themes. Ocala Star-Banner, 24 Oct. 2020.
  34. Bailey, Nancy. “Social-Emotional Learning: The Dark Side.” Nancy Bailey’s Education Website, 6 Nov. 2020.
  35. “Problems with Social-Emotional Learning in K-12 Education: New Study.” Pioneer Institute, 10 Dec. 2020.

Okta to Acquire Identity Tech Startup Auth0 for $6.5B // Subscription Insider


"On Wednesday, Okta announced it will acquire identity management company Auth0 for $6.5 billion in an all-stock



The End Of Student Privacy? Remote Proctoring's Invasiveness and Bias Symposium // Saturday March 6th, 10am-1:30pm PST// Mobilizon


"On Saturday, March 6th, from 1:00 – 4:30 pm ET, please join the Surveillance Technology Oversight Project (S.T.O.P.) and Privacy Lab (an initiative of the Information Society Project at Yale Law School) for a symposium on remote proctoring technology. This interdisciplinary discussion will examine how remote proctoring software promotes bias, undermines privacy, and creates barriers to accessibility.

Please join with your web browser at on the day of the event.

We are using privacy-respecting web conferencing software BigBlueButton (provided by PrivacySafe) which only requires a web browser. Best viewed via Firefox or Chrome/Chromium. You may test your setup at

Sessions (all times Eastern):

1:00 pm: Opening Remarks.

1:10 pm – 2:10 pm: Session one will provide an overview of the technology used for remote proctoring, which ranges from keyloggers to facial recognition and other forms of artificial intelligence. Panelists will highlight the rapid growth of remote proctoring technology during the COVID-19 pandemic and its potential role in the future.

Expert Panel:

2:15 pm – 3:15 pm: Part two will explore the numerous technical, pedagogical, and sociological drivers of racial bias in remote proctoring technology. Speakers will examine sources of bias for existing software, its legal ramifications, and likely changes in future remote proctoring systems.

Expert Panel:

  • David Brody, Lawyers' Committee for Civil Rights Under Law

3:20 pm – 4:20 pm: Lastly, our final session will explore remote proctoring’s impact on accessibility for students with disabilities. Panelists will detail the difficulties students have already experienced using such software, as well as the potential legal ramifications of such discrimination.

Expert Panel:


4:20 pm: Closing Remarks.

  • Sean O'Brien, Information Society Project at Yale Law School


To register, please visit: 


The student and the algorithm: How the exam results fiasco threatened one pupil’s future // Education // The Guardian



AI, algorithmic & automation incident & controversy [AIAAIC] repository //


"You may use, copy, redistribute and adapt the contents of the AIAAIC repository in line with its CC BY 4.0 licence.


When doing so, ensure you attribute 'AIAAIC' and provide a clear, prominent link back to this resource:


The AIAAIC repository is devised and managed by Charlie Pownall. Questions, suggestions, complaints via"


Letter to State Bar of California and ExamSoft Worldwide Inc. // Lawyers' Committee for Civil Rights Under Law  


The Platform University: A New Data Driven Business Model for Profiting from Higher Education  


By Ben Williamson

Digital software platforms have changed everyday life for billions around the world. Platforms for higher education are now shaking up the way universities operate too. The “platform university” is being built on top of the campus, and its main motive is to profit from the HE market through the lucrative currency of student data.


Platforms have become the dominant technologies of the current moment. They act as intermediaries between different groups, for example by enabling real-time online chat or exchanging user information with advertisers. They also expand rapidly, usually because they’re free to use. This provides platform owners with unprecedented quantities of data for monitoring, improving and further monetising their services. As a result, platform capitalism has become the dominant business model of our time.

Unbundling Higher Education

A market in HE platforms is now expanding fast. The platform university is the result of multiple trends—demands for enhanced performance measurement; the opening up of HE to alternative providers; increased marketization, internationalisation, competition, and innovation; and the “unbundling” of HE into discrete services for outsourcing to platform providers, which then repackage those services for sale back to the sector.


Platforms for matching students to degrees and graduates to jobs have become especially successful, and are prototypical of what the platform university portends. One of the most widely used, Studyportals, markets its service as an “International Study Choice Platform”. In 2018 alone it “helped 36 million students around the world to explore study programmes and make an informed choice, throughout over 190,000 courses at 3,200+ educational institutes across 110 countries”.

Likewise, Debut uses data provided by student users—including a psychometric competencies test—to match graduates to employer internships and graduate schemes. It was recently awarded £5 million in venture capital to train its AI-based recommendation algorithm to make better automated matches. And its entrepreneurial founder claims Debut’s “cognitive psychometric intelligence” profiles are more useful to employers for matching graduates to jobs than “irrelevant” degrees or academic success.


The US platform Handshake also recently received US$40 million in venture capital from an investment team that included Facebook founder Mark Zuckerberg. Handshake already claims 14 million student users, with uptake by over 700 university career centres and 300,000 employers. Its 2018 Campus to Career report details employment trends of over 9 million students who have used the platform—positioning itself as an authority on employability.


Meanwhile, the platform for professionals, LinkedIn, continues to expand into HE. Research by Janja Komljenovic shows how LinkedIn is creating an alternative credentialing platform to link graduates to employers through “qualification altmetrics”, thereby “building a global marketplace for skills to run in parallel to, or instead of university degrees”. Like Handshake and Studyportals, LinkedIn has access to huge quantities of student data to deepen its penetration into HE employability services.

Higher Education on demand

As well as reshaping the university around new employability priorities and demands, advocates of the platform university see more and more aspects of HE moving to software. One is Woolf University, a “borderless university” proposed to run on a “software platform” and “the latest technology to support teachers and students anywhere in the world”.

The global education business Pearson also recently announced plans to offer on-demand services through a “global digital platform”. Bypassing the university to market “pay-to-view” education streaming services direct to the consumer, it plans to become the Netflix of education. Simultaneously, it is selling online learning platforms to universities to improve internationalisation performance.

But Pearson is not just making new platform products. Its report Demand Driven Education details a vision for merging work and learning around employability skills. “Demand driven education” would “focus more strongly than ever on ensuring graduates are job-ready and have access to rewarding careers”—and could utilise artificial intelligence-based talent analytics to “identify potential matches between learners and specific career paths”.

Pearson is therefore specifying what the future of HE should be, and building software platforms to accomplish that vision.

Students as raw material for monetisation

By transforming HE to be more demand-driven and on-demand, platform companies are making the sector into a profitable market. Venture capital investment in HE platforms can be eye-watering. The plagiarism detection platform Turnitin was acquired recently for US$1.74 billion by the media conglomerate Advance Publications. Turnitin uses student assignments as data to train its plagiarism detection algorithm, in order to sell ever-more sophisticated services back to HE institutions.

Despite studies repeatedly showing Turnitin’s high error rate, and considerable concern over the mistrust it creates while monetising students’ intellectual property, its acquisition clearly demonstrates huge market demand for data-driven HE platforms. It changes how students are valued—not for their independent intellectual development but as raw material to be mined for market advantage.

Platform capitalism in the academy

The university is being transformed by platform technologies and companies. Although software platforms have been inside HE for years, HE is now moving inside the software platform. In the emerging platform university, more and more HE services are being delegated to software, algorithms, and automation. The business logic of platform capitalism has captured the academy, and the public role of the university has become a source of private financial gain.

At a time of budgetary austerity and myriad other pressures on HE, the platform university is a perversely pricey technical solution to complex structural, institutional, and political problems. Significant HE spending is now flowing from universities to platform providers, along with data they can use to their own advantage as market actors in an emerging sub-sector of platform capitalism. Unless universities act collectively in the sector’s own interests, they may find themselves positioned as educational product providers and data collection partners for the new HE platform industry."


For full post, please visit: 


University will stop using Proctorio remote testing after student outcry // The Verge


By Monica Chin

"The University of Illinois Urbana-Champaign announced that it will discontinue its use of remote-proctoring software Proctorio after its summer 2021 term. The decision follows almost a year of outcry over the service, both on UIUC’s campus and around the US, citing concerns with privacy, discrimination, and accessibility.

Proctorio is one of the most prominent software platforms that colleges and universities use to watch for cheating on remote tests. It uses what its website describes as “machine learning and advanced facial detection technologies” to record students through their webcams while they work on their exams and monitor the position of their heads. The software flags “suspicious signs” to professors, who can review its recordings. The platform also enables professors to track the websites students visit during their exams, and bar them from functions like copy/pasting and printing.

Though Proctorio and similar services have been around for years, their use exploded in early 2020 when COVID-19 drove schools around the US to move the bulk of their instruction online. So, too, has scrutiny of their practices. Students and instructors at universities around the country have spoken out against the widespread use of the software, claiming that it causes unnecessary anxiety, violates privacy, and has the potential to discriminate against marginalized students.

In an email to instructors, ADA coordinator Allison Kushner and Vice Provost Kevin Pitts wrote that professors who continue to use Proctorio through the summer 2021 term are “expected to accommodate students that raise accessibility issues,” and that the campus is “investigating longer-term remote proctoring options.”


Proctorio has been controversial on UIUC’s campus since the service was introduced last spring, and concerned students only grew more vocal through the fall 2020 semester. (Due to COVID-19, the school now operates with a hybrid of online and in-person instruction.) Over 1,000 people signed a petition calling on the university to stop using the service. “Proctorio is not only inefficient, it is also unsafe and a complete violation of a student’s privacy,” reads the petition.

UIUC is one of many campuses where remote proctoring has faced backlash. A Miami University petition, which gathered over 500 signatures, declared that “Proctorio’s design invades student rights, is inherently ableist and discriminatory, and is inconsistent with peer reviewed research.” Over 3,500 signatories have called on the University of Regina to end its use of ProctorTrack, another automated proctoring service. A 1,200-signature petition urges the University of Central Florida to dump Honorlock, another similar software, declaring that “students should not be forced to sign away their privacy and rights in order to take a test.”

Professors and staff have criticized the service as well. The Electronic Privacy Information Center (EPIC) filed a complaint against Proctorio (alongside four other test-proctoring services), claiming that the services’ collection of personal information amounts to “unfair and deceptive trade practices.” Even US senators have gotten involved; a coalition including Sen. Richard Blumenthal (D-CT), Sen. Elizabeth Warren (D-MA), and Sen. Cory Booker (D-NJ) sent an open letter to Proctorio and two similar services in December citing a number of concerns about their business practices. “Students have run head on into the shortcomings of these technologies—shortcomings that fall heavily on vulnerable communities and perpetuate discriminatory biases,” wrote the senators.


The complaints largely revolve around security and privacy — Proctorio’s recordings give instructors and the service access to some of test-takers’ browsing data, and a glimpse of their private homes in some cases. (Proctorio stated in its response to the Senators’ letter that “test-taker data is secured and processed through multiple layers of encryption” and that Proctorio retains its recordings “for the minimum amount of time required by either our customer or by applicable law.”)

Accessibility is another common concern. Students have reported not having access to a webcam at home, or enough bandwidth to accommodate the service; one test-taker told The Verge that she had to take her first chemistry test in a Starbucks parking lot last semester.

Students have also reported that services like Proctorio have difficulty identifying test-takers with darker skin tones, and may disproportionately flag students with certain disabilities. Research has found that even the best facial-recognition algorithms make more errors in identifying Black faces than they do identifying white ones. Proctorio stated in its response that “We believe all of these cases were due to issues relating to lighting, webcam position, or webcam quality, not race.”


“We take these concerns seriously,” reads UIUC’s email, citing student complaints related to “accessibility, privacy, data security and equity” as factors in its decision. It recommends that students for whom Proctorio presents a barrier to test-taking “make alternative arrangements” as the program is phased out, and indicates that accessibility will be considered in the selection of the next remote-proctoring solution.

We’ve reached out to UIUC and Proctorio for comment, and will update this story if we hear back."...


For full post: 

