Educational Psychology & Emerging Technologies: Critical Perspectives and Updates
This curated collection includes updates, resources, and research offering critical perspectives on the intersections of educational psychology and emerging technologies in education. The page also serves as a research tool for organizing online content (the funnel-shaped icon allows keyword search). Related collections cover the privatization and technologization of education and critiques of social impact finance and related technologies, screen time risks to health and development, and updates on AI and data concerns. [Note: Views presented on this page are re-shared from external websites. The content does not necessarily represent the views or official position of the curator or the curator's employer.]
Scooped by Roxana Marachi, PhD!

Breaking Free from Blockchaining Babies & "Cradle-to-Career" Data Traps // #CivicsOfTech23 Closing Keynote

The 2nd Annual Civics of Technology conference held on August 3rd and 4th, 2023 closed with a keynote address provided by Dr. Roxana Marachi, Professor of Education at San José State University.

The video for this talk is available here, and the accompanying slides and links to additional resources are also available.


A blogpost introducing the Civics of Technology community to some of the research and trends discussed in the keynote can be found here.

Thanks to the entire Civics of Technology team for their efforts in organizing the conference, to all the presenters and participants, and to the University of North Texas College of Education and Loyola University Maryland School of Education for their generous support. For more information about Civics of Technology events and discussions, please visit the Civics of Technology website.


Pennsylvania Advisory Committee to the U.S. Commission on Civil Rights // Public Briefings and call for public comment


The Pennsylvania Advisory Committee to the U.S. Commission on Civil Rights is conducting a study to examine the civil rights impact of the rising use of Artificial Intelligence (AI) in Education. In this ongoing study, the Committee will consider how AI algorithms are developed, and the impact they can have on either reducing or exacerbating existing disparities (or creating new disparities) in the classroom based on federally protected classes. The Committee will also examine potential solutions and recommendations to remediate identified concerns.

The Committee will hear testimony through a series of public briefings, as scheduled below. All meetings are free and open to the public. Members of the public will be invited to speak during an open-comment period near the end of each meeting. The Committee also invites written testimony from others who wish to contribute. All written testimony must be submitted to the Committee via email by Wednesday, May 1, 2024.


Closed captions will be provided. Individuals requiring other accommodations should contact the regional program unit at (202) 618-4158 five business days prior to the meeting to make their request. Committee Chair Steve Irwin said, "Pennsylvania is at the center as our country begins to harness the power of AI. Our Committee has chosen to focus on identifying the risks and rewards for students from Kindergarten through High School. At the same time, its benefits must be felt equitably and its dangers must not be at the expense of the most vulnerable."


The Committee will issue its findings and recommendations in a report to the Commission after all testimony has been received, anticipated Fall 2024.

Panel 1

Monday March 25, 2024

11:00 am - 1:00 pm ET


Roxana Marachi, San José State University

Angela Stewart, University of Pittsburgh School of Computing and Information

Hoda Heidari, Carnegie Mellon University School of Computer Science

Clarence Okoh, The Center for Law and Policy


Panel 2

Wednesday March 27, 2024

11:00 am - 1:00 pm ET


Joseph T. Yun, Swanson School of Engineering and Office of the CIO, University of Pittsburgh

Nicol Turner Lee, The Brookings Institution

Kristin Woelfel, Center for Democracy and Technology

John Browning, Faulkner Law School


Panel 3

Friday March 29, 2024

11:00 am - 1:00 pm ET


Beatrice Dias & Tinukwa Boulder, University of Pittsburgh School of Education

Andrew Buher, Opportunity Labs and Princeton University

Chad Dion Lassiter, Pennsylvania Human Relations Commission

Panel 4

Thursday April 25, 2024

11:00 am - 1:00 pm ET




For original announcement, please see: 


For questions, please contact:
Melissa Wojnaroski, Designated Federal Officer

(202) 681-4158



How Surveillance Capitalism Ate Education For Lunch // Marachi, 2022 // Univ. of Pittsburgh Data & Society Speaker Series Presentation [Slidedeck]


Presentation prepared for the University of Pittsburgh's Year of Data & Society Speaker Series. Slidedeck accessible by clicking title above or here:


Beware the ‘botshit’: why generative AI is such a real and imminent threat to the way we live // André Spicer via The Guardian


"Unless checks are put in place, citizens and voters may soon face AI-generated content that bears no relation to reality"


By André Spicer
"During 2023, the shape of politics to come appeared in a video. In it, Hillary Clinton – the former Democratic party presidential candidate and secretary of state – says: “You know, people might be surprised to hear me saying this, but I actually like Ron DeSantis a lot. Yeah, I know. I’d say he’s just the kind of guy this country needs.”


It seems odd that Clinton would warmly endorse a Republican presidential hopeful. And it is. Further investigations found the video was produced using generative artificial intelligence (AI).


The Clinton video is only one small example of how generative AI could profoundly reshape politics in the near future. Experts have pointed out the consequences for elections. These include the possibility of false information being created at little or no cost and highly personalised advertising being produced to manipulate voters. The results could be so-called “October surprises” – ie a piece of news that breaks just before the US elections in November, where misinformation is circulated and there is insufficient time to refute it – and the generation of misleading information about electoral administration, such as where polling stations are.


Concerns about the impact of generative AI on elections have become urgent as we enter a year in which billions of people across the planet will vote. During 2024, it is projected that there will be elections in Taiwan, India, Russia, South Africa, Mexico, Iran, Pakistan, Indonesia, the European Union, the US and the UK. Many of these elections will not determine just the future of nation states; they will also shape how we tackle global challenges such as geopolitical tensions and the climate crisis. It is likely that each of these elections will be influenced by new generative AI technologies in the same way the elections of the 2010s were shaped by social media.


While politicians spent millions harnessing the power of social media to shape elections during the 2010s, generative AI effectively reduces the cost of producing empty and misleading information to zero. This is particularly concerning because during the past decade, we have witnessed the role that so-called “bullshit” can play in politics. In a short book on the topic, the late Princeton philosopher Harry Frankfurt defined bullshit specifically as speech intended to persuade without regard to the truth. Throughout the 2010s this appeared to become an increasingly common practice among political leaders. With the rise of generative AI and technologies such as ChatGPT, we could see the rise of a phenomenon my colleagues and I label “botshit”.

In a recent paper, Tim Hannigan, Ian McCarthy and I sought to understand what exactly botshit is and how it works. It is well known that generative AI technologies such as ChatGPT can produce what are called “hallucinations”. This is because generative AI answers questions by making statistically informed guesses. Often these guesses are correct, but sometimes they are wildly off. The result can be artificially generated “hallucinations” that bear little relationship to reality, such as explanations or images that seem superficially plausible, but aren’t actually the correct answer to whatever the question was.

Humans might use untrue material created by generative AI in an uncritical and thoughtless way. And that could make it harder for people to know what is true and false in the world. In some cases, these risks might be relatively low, for example if generative AI were used for a task that was not very important (such as to come up with some ideas for a birthday party speech), or if the truth of the output were easily verifiable using another source (such as when did the battle of Waterloo happen). The real problems arise when the outputs of generative AI have important consequences and the outputs can’t easily be verified....


For full post, please visit: 


Could a court really order the destruction of ChatGPT? The New York Times thinks so, and it may be right // The Conversation


By João Marinotti

"On Dec. 27, 2023, The New York Times filed a lawsuit against OpenAI alleging that the company committed willful copyright infringement through its generative AI tool ChatGPT. The Times claimed both that ChatGPT was unlawfully trained on vast amounts of text from its articles and that ChatGPT’s output contained language directly taken from its articles.


To remedy this, the Times asked for more than just money: It asked a federal court to order the “destruction” of ChatGPT.

If granted, this request would force OpenAI to delete its trained large language models, such as GPT-4, as well as its training data, which would prevent the company from rebuilding its technology.

This prospect is alarming to the 100 million people who use ChatGPT every week. And it raises two questions that interest me as a law professor. First, can a federal court actually order the destruction of ChatGPT? And second, if it can, will it?


Destruction in the court

The answer to the first question is yes. Under copyright law, courts do have the power to issue destruction orders.

To understand why, consider vinyl records. Their resurging popularity has attracted counterfeiters who sell pirated records.


If a record label sues a counterfeiter for copyright infringement and wins, what happens to the counterfeiter’s inventory? What happens to the master and stamper disks used to mass-produce the counterfeits, and the machinery used to create those disks in the first place?


To address these questions, copyright law grants courts the power to destroy infringing goods and the equipment used to create them. From the law’s perspective, there’s no legal use for a pirated vinyl record. There’s also no legitimate reason for a counterfeiter to keep a pirated master disk. Letting them keep these items would only enable more lawbreaking.


So in some cases, destruction is the only logical legal solution. And if a court decides ChatGPT is like an infringing good or pirating equipment, it could order that it be destroyed. In its complaint, the Times offered arguments that ChatGPT fits both analogies.

Copyright law has never been used to destroy AI models, but OpenAI shouldn’t take solace in this fact. The law has been increasingly open to the idea of targeting AI.

Consider the Federal Trade Commission’s recent use of algorithmic disgorgement as an example. The FTC has forced companies such as WeightWatchers to delete not only unlawfully collected data but also the algorithms and AI models trained on such data.

Why ChatGPT will likely live another day

It seems to be only a matter of time before copyright law is used to order the destruction of AI models and datasets. But I don’t think that’s going to happen in this case. Instead, I see three more likely outcomes.

The first and most straightforward is that the two parties could settle. In the case of a successful settlement, which may be likely, the lawsuit would be dismissed and no destruction would be ordered.

The second is that the court might side with OpenAI, agreeing that ChatGPT is protected by the copyright doctrine of “fair use.” If OpenAI can argue that ChatGPT is transformative and that its service does not provide a substitute for The New York Times’ content, it just might win.

The third possibility is that OpenAI loses but the law saves ChatGPT anyway. Courts can order destruction only if two requirements are met: First, destruction must not prevent lawful activities, and second, it must be “the only remedy” that could prevent infringement.

That means OpenAI could save ChatGPT by proving either that ChatGPT has legitimate, noninfringing uses or that destroying it isn’t necessary to prevent further copyright violations.

Both outcomes seem possible, but for the sake of argument, imagine that the first requirement for destruction is met. The court could conclude that, because of the articles in ChatGPT’s training data, all uses infringe on the Times’ copyrights – an argument put forth in various other lawsuits against generative AI companies.


In this scenario, the court would issue an injunction ordering OpenAI to stop infringing on copyrights. Would OpenAI violate this order? Probably not. A single counterfeiter in a shady warehouse might try to get away with that, but that’s less likely with a US$100 billion company.

Instead, it might try to retrain its AI models without using articles from the Times, or it might develop other software guardrails to prevent further problems. With these possibilities in mind, OpenAI would likely succeed on the second requirement, and the court wouldn’t order the destruction of ChatGPT.

Given all of these hurdles, I think it’s extremely unlikely that any court would order OpenAI to destroy ChatGPT and its training data. But developers should know that courts do have the power to destroy unlawful AI, and they seem increasingly willing to use it."


For original post, please visit: 


Resources to Learn about AI Hype, AI Harms, BigData, Blockchain Harms, and Data Justice 


Resources to Learn about AI, BigData, Blockchain, Algorithmic Harms, and Data Justice

Shortlink to share this page: 


Genetics group slams company for using its data to screen embryos’ genomes: Orchid Health’s new in vitro fertilization embryo test aims to predict future health and mental issues //


By Carrie Arnold

"On 5 December, a U.S. company called Orchid Health announced that it would begin to offer fertility clinics and their hopeful customers the unprecedented option to sequence the whole genomes of embryos conceived by in vitro fertilization (IVF). “Find the embryo at lowest risk for a disease that runs in your family,” touts the company’s website. The cost: $2500 per embryo.

Although Orchid and at least two other companies have already been conducting more limited genetic screening of IVF embryos, the new test offers something more: Orchid will look not just for single-gene mutations that cause disorders such as cystic fibrosis, but also more extensively for medleys of common and rare gene variants known to predispose people to neurodevelopmental disorders, severe obesity, and certain psychiatric conditions such as schizophrenia.

That new offering drew swift backlash from genomics researchers who claim the company inappropriately uses their data to generate some of its risk estimates. The Psychiatric Genomics Consortium (PGC), an international group of more than 800 researchers working to decode the genetic and molecular underpinnings of mental health conditions, says Orchid’s new test relies on data it produced over the past decade, and that the company has violated restrictions against the data’s use for embryo screening.

PGC objects to such uses because its goal is to improve the lives of people with mental illness, not stop them from being born, says University of North Carolina at Chapel Hill psychiatrist Patrick Sullivan, PGC’s founder and lead principal investigator. The group signaled its dismay on the social media platform X (formerly Twitter) last week, writing, “Any use for commercial testing or for testing of unborn individuals violates the terms by which PGC & our collaborators’ data are made available.” PGC’s response marks the first time an academic team has publicly taken on companies using its data to offer polygenic risk scores.

Orchid didn’t respond to Science’s request for a comment and PGC leaders say they don’t have any obvious recourse to stop the company. “We are just a confederation of academics in a common purpose. We are not a legal entity,” Sullivan notes in an email to Science. “We can publicize the issue (as we are doing!) in the vain hope that they might do the right thing or be shamed into it.”


Andrew McQuillin, a molecular psychiatry researcher at University College London, has previously worked with PGC and he echoes the group’s concern. It’s difficult for researchers to control the use of their data post publication, especially when so many funders and journals require the publication of these data, he notes. “We can come up with guidance on how these things should be used. The difficulty is that official guidance like that doesn’t feature anywhere in the marketing from these companies,” he says.

For more than a decade, IVF clinics have been able to pluck a handful of cells out of lab-made embryos and check whether a baby would inherit a disease-causing gene. Even this more limited screening has generated debate about whether it can truly reassure potential parents an embryo will become a healthy baby—and whether it could lead to a modern-day form of eugenics.


But with recent advances in sequencing, researchers can read more and more of an embryo’s genome from those few cells, which can help create so-called polygenic risk scores for a broad range of chronic diseases. Such scores calculate an individual’s odds of developing common adult-onset conditions that result from a complex interaction between multiple genes as well as environmental factors.
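The arithmetic behind a basic polygenic risk score is additive: each variant's risk-allele count is weighted by an effect size estimated from association studies, and the weighted counts are summed. A minimal sketch, with entirely hypothetical variant IDs, genotypes, and effect sizes:

```python
# Minimal sketch of an additive polygenic risk score (PRS): a weighted sum
# of risk-allele counts, with per-allele weights (e.g., log odds ratios)
# taken from association-study results. All numbers below are hypothetical.

def polygenic_risk_score(allele_counts, effect_sizes):
    """allele_counts: variant id -> 0, 1, or 2 copies of the risk allele.
    effect_sizes: variant id -> per-allele effect size."""
    return sum(effect_sizes[v] * allele_counts.get(v, 0) for v in effect_sizes)

# Hypothetical genotype and weights for three illustrative variants
genotype = {"rs0001": 2, "rs0002": 0, "rs0003": 1}
weights = {"rs0001": 0.12, "rs0002": -0.05, "rs0003": 0.30}

score = polygenic_risk_score(genotype, weights)
print(round(score, 2))  # 0.54
```

As the article notes, such a raw sum is only a relative ranking; the predictive value of these scores at the individual level is contested.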

Orchid’s new screening report sequences more than 99% of an embryo’s DNA, and estimates the risk of conditions including the irregular heart rhythm known as atrial fibrillation, inflammatory bowel diseases, types 1 and 2 diabetes, breast cancer—and some of the psychiatric conditions that PGC studies. In an August preprint on bioRxiv, Orchid scientists concluded that their whole genome sequencing methods using a five-cell embryo biopsy were accurate and effective at reading the depth and breadth of the genome.

But even with the most accurate sequencing, polygenic risk scores don’t predict disease risk very well, McQuillin says. “They’re useful in the research context, but at the individual level, they’re not actually terribly useful to predict who’s going to develop schizophrenia or not.”

In an undated online white paper on how its screening calculates a polygenic risk score for schizophrenia, Orchid cites studies conducted by PGC, including one that identified specific genetic variants linked to schizophrenia. But James Walters, co-leader of the Schizophrenia Working Group at PGC and a psychiatrist at Cardiff University’s Centre for Neuropsychiatric Genetics and Genomics, says the data use policy on the PGC website specifically indicates that results from PGC-authored studies are not to be used to develop a test like Orchid’s.

Some groups studying controversial topics such as the genetics of educational attainment (treated as a proxy for intelligence) and of homosexuality have tried to ensure responsible use of their data by placing them in password-protected sites. Others like PGC, however, have made their genotyping results freely accessible to anyone and relied on data policies to discourage improper use.

Orchid’s type of embryo screening may be “ethically repugnant,” says geneticist Mark Daly of the Broad Institute, but he believes data like PGC’s must remain freely available to scientists in academia and industry. “The point of studying the genetics of disease is to provide insights into the mechanisms of disease that can lead to new therapies,” he tells Science.

Society, McQuillin says, must soon have a broader discussion about the implications of this type of embryo screening. “We need to take a look at whether this is really something we should be doing. It’s the type of thing that, if it becomes widespread, in 40 years’ time, we will ask, ‘What on earth have we done?’”


For original post, please visit: 


For related critical posts regarding the "Genomics of Education", please see: 


Research at a Glance: Data Privacy and Children // Children and Screens: Institute of Digital Media and Child Development 


The Color of Surveillance: Monitoring of Poor and Working People // Center on Privacy and Technology // Georgetown Law



The Dangers of Risk Prediction in the Criminal Justice System


"Courts across the United States are using computer software to predict whether a person will commit a crime, the results of which are incorporated into bail and sentencing decisions. It is imperative that such tools be accurate and fair, but critics have charged that the software can be racially biased, favoring white defendants over Black defendants. We evaluate the claim that computer software is more accurate and fairer than people tasked with making similar decisions. We also evaluate, and explain, the presence of racial bias in these predictive algorithms.


We are the frequent subjects of predictive algorithms that determine music recommendations, product advertising, university admission, job placement, and bank loan qualification. In the criminal justice system, predictive algorithms have been used to predict where crimes will most likely occur, who is most likely to commit a violent crime, who is likely to fail to appear at their court hearing, and who is likely to reoffend at some point in the future.

Certain types of algorithmic tools known as “risk assessments” have become particularly prevalent in the criminal justice system within the United States. The majority of risk assessments are built to predict recidivism: asking whether someone with a criminal offense will reoffend at some point in the future. These tools rely on an individual’s criminal history, personal background, and demographic information to make these risk predictions.

Various risk assessments are in use across the country to inform decisions at almost every stage in the criminal justice system. One widely used criminal risk assessment tool, Correctional Offender Management Profiling for Alternative Sanctions (COMPAS, Northpointe), has been used to assess over one million individuals in the criminal justice system since it was developed in 1998. The recidivism prediction component of COMPAS—the Recidivism Risk Scale—has been in use since 2000. This software predicts a person’s risk of committing a misdemeanor or felony within two years of assessment from an individual’s demographics and criminal record.


In the past few years, algorithmic risk assessments like COMPAS have become increasingly prevalent in pretrial decision making. In these contexts, an individual who has been arrested and booked in jail is assessed by the algorithmic tool in use by the given jurisdiction. Judges then consider the risk scores calculated by the tool in their decision to either release or detain a criminal defendant before their trial.

In May of 2016, writing for ProPublica, Julia Angwin and colleagues analyzed the efficacy of COMPAS in the pretrial context on over seven thousand individuals arrested in Broward County, Florida, between 2013 and 2014. The analysis indicated that the predictions were unreliable and racially biased. The authors found that COMPAS’s overall accuracy for white defendants is 67.0%, only slightly higher than its accuracy of 63.8% for Black defendants. The mistakes made by COMPAS, however, affected Black and white defendants differently: Black defendants who did not recidivate were incorrectly predicted to reoffend at a rate of 44.9%, nearly twice as high as their white counterparts at 23.5%; and white defendants who did recidivate were incorrectly predicted to not reoffend at a rate of 47.7%, nearly twice as high as their Black counterparts at 28.0%. In other words, COMPAS scores appeared to favor white defendants over Black defendants by underpredicting recidivism for white and overpredicting recidivism for Black defendants. Unsurprisingly, this caused an uproar and significant concern that technology was being used to further entrench racism in our criminal justice system.
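The group-wise error rates cited above can be made concrete. In this setting, a false positive is a defendant predicted to recidivate who did not, and a false negative is one predicted not to recidivate who did. A minimal sketch of the computation, using made-up predictions rather than the Broward County data:

```python
# Sketch of per-group false positive and false negative rates, the
# quantities behind the 44.9% vs. 23.5% and 47.7% vs. 28.0% figures
# in the ProPublica analysis. The records below are illustrative only.

def error_rates(records):
    """records: list of (predicted_recidivism, actually_recidivated) booleans.
    Returns (false positive rate, false negative rate)."""
    fp = sum(1 for pred, actual in records if pred and not actual)
    fn = sum(1 for pred, actual in records if not pred and actual)
    negatives = sum(1 for _, actual in records if not actual)  # did not reoffend
    positives = sum(1 for _, actual in records if actual)      # did reoffend
    return fp / negatives, fn / positives

# Hypothetical predictions for one group of four defendants
group_a = [(True, False), (True, True), (False, False), (False, True)]
fpr, fnr = error_rates(group_a)
print(fpr, fnr)  # 0.5 0.5
```

Computing these rates separately for Black and white defendants, as ProPublica did, is what reveals the asymmetry that overall accuracy alone hides.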

Since the publication of the ProPublica analysis, there has been significant research and debate regarding the measurement of algorithmic fairness. Complicating this discussion is the fact that the research community does not necessarily agree on the definition of what makes an algorithm fair. And some studies have revealed that certain definitions of fairness are mathematically incompatible. To date, the debate around mathematical measurement of fairness is both complicated and unresolved.

Algorithmic predictions have become common in the criminal justice system because they maintain a reputation of being objective and unbiased, whereas human decision making is considered inherently more biased and flawed. Northpointe describes COMPAS as “an objective method of estimating the likelihood of reoffending.” The Public Safety Assessment (PSA), another common pretrial risk assessment tool, advertises itself as a tool to “provide judges with objective, data-driven, consistent information that can inform the decisions they make.” In general, people often assume that algorithms using “big data techniques” are unbiased simply because of the amount of data used to build them.

After reading the ProPublica analysis in May of 2016, we started thinking about recidivism prediction algorithms and their use in the criminal justice system. To our surprise, we could not find any research proving that recidivism prediction algorithms are superior to human predictions. Due to the serious implications this type of software can have on a person’s life, we felt that we should start by confirming that COMPAS is, in fact, outperforming human predictions. We also felt that it was critical to get beyond the debate of how to measure fairness and understand why COMPAS’s predictive algorithm exhibited such troubling racial bias.


In our study, published in Science Advances in January 2018, we began by asking a fundamental question regarding the use of algorithmic risk predictions: are these tools more accurate than the human decision making they aim to replace? The goal of the study was to evaluate the baseline for human performance on recidivism prediction, and assess whether COMPAS was actually outperforming this baseline. We found that people from a popular online crowd-sourcing marketplace—who, it can reasonably be assumed, have little to no expertise in criminal justice—are as accurate and fair as COMPAS at predicting recidivism. This somewhat surprising result then led us to ask: how is it possible that the average person on the internet, being paid $1 to respond to a survey, is as accurate as commercial software used in the criminal justice system? To answer this, we effectively reverse engineered the COMPAS prediction algorithm and discovered that the software is equivalent to a simple classifier based on only two pieces of data, and it is this simple predictor that leads to the algorithm reproducing historical racial inequities in the criminal justice system.

Comparing Human and Algorithmic Recidivism Prediction


Our study is based on a data set of 2013–2014 pretrial defendants from Broward County, Florida. This data set of 7,214 defendants contains individual demographic information, criminal history, the COMPAS recidivism risk score, and each defendant’s arrest record within a two-year period following the COMPAS scoring, excluding any time spent detained in a jail or a prison. COMPAS scores—ranging from 1 to 10—classify the risk of recidivism as low-risk (1–4), medium-risk (5–7), or high-risk (8–10). For the purpose of binary classification, following the methodology used in the ProPublica analysis and the guidance of the COMPAS practitioner’s guide, scores of 5 or above were classified as a prediction of recidivism.

Of the 7,214 defendants in the data set, 1,000 were randomly selected for use in our study that evaluated the human performance of recidivism prediction. This subset yields similar overall COMPAS accuracy, false positive rate, and false negative rate as on the complete data set. (A positive prediction is one in which a defendant is predicted to recidivate; a negative prediction is one in which they are predicted to not recidivate.) The COMPAS accuracy for this subset of 1,000 defendants is 65.2%. The average COMPAS accuracy on 10,000 random subsets of size 1,000 each is 65.4% (with a 95% confidence interval of [62.6, 68.1]).
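The evaluation steps described above (binarizing decile scores at 5 or above, scoring accuracy against observed outcomes, and repeating the measurement on random subsets of 1,000) can be sketched as follows. The scores and outcomes here are synthetic stand-ins, not the actual Broward County data:

```python
import random

# Sketch of the evaluation pipeline described in the study: COMPAS decile
# scores (1-10) are binarized at 5-or-above, accuracy is computed against
# observed recidivism, and repeated random subsets of 1,000 give a spread
# of accuracy estimates. All data below are synthetic.

def binarize(score):
    return score >= 5  # medium/high risk -> predicted to recidivate

def accuracy(scores, outcomes):
    correct = sum(binarize(s) == o for s, o in zip(scores, outcomes))
    return correct / len(scores)

random.seed(0)
n = 7214  # size of the full data set in the study
scores = [random.randint(1, 10) for _ in range(n)]          # synthetic deciles
outcomes = [random.random() < 0.45 for _ in range(n)]       # synthetic labels

# Accuracy over repeated random subsets of 1,000 defendants
subset_accs = []
for _ in range(100):
    idx = random.sample(range(n), 1000)
    subset_accs.append(accuracy([scores[i] for i in idx],
                                [outcomes[i] for i in idx]))

mean_acc = sum(subset_accs) / len(subset_accs)
print(round(mean_acc, 3))
```

With the real data, this style of resampling is what produces the reported mean subset accuracy of 65.4% with a 95% confidence interval of [62.6, 68.1].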

A descriptive paragraph for each of 1,000 defendants was generated:

The defendant is a [SEX] aged [AGE]. They have been charged with: [CRIME CHARGE]. This crime is classified as a [CRIMINAL DEGREE]. They have been convicted of [NON-JUVENILE PRIOR COUNT] prior crimes. They have [JUVENILE-FELONY COUNT] juvenile felony charges and [JUVENILE-MISDEMEANOR COUNT] juvenile misdemeanor charges on their record.

Perhaps most notably, we did not specify the defendant's race in this “no race” condition. In a follow-up “race” condition, the defendant’s race was included so that the first line of the above paragraph read, “The defendant is a [RACE] [SEX] aged [AGE].”
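The two templates can be rendered from a defendant record with a small helper. This is an illustration only; the field names below are our own and may differ from the study materials:

```python
def describe(d: dict, include_race: bool = False) -> str:
    """Render the defendant-description paragraph shown to participants.

    With include_race=False this produces the "no race" condition;
    with include_race=True the race is inserted before the sex, as in
    the follow-up "race" condition."""
    race = f"{d['race']} " if include_race else ""
    return (
        f"The defendant is a {race}{d['sex']} aged {d['age']}. "
        f"They have been charged with: {d['charge']}. "
        f"This crime is classified as a {d['degree']}. "
        f"They have been convicted of {d['priors']} prior crimes. "
        f"They have {d['juv_felonies']} juvenile felony charges and "
        f"{d['juv_misdemeanors']} juvenile misdemeanor charges on their record."
    )
```

The same record thus produces both experimental conditions, differing only in the first sentence.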

There was a total of sixty-three unique criminal charges, including armed robbery, burglary, grand theft, prostitution, robbery, and sexual assault. The crime degree is either “misdemeanor” or “felony.” To ensure that our participants understood the nature of each crime, the above paragraph was followed by a short description of each criminal charge:


After reading the defendant description, participants were then asked to respond either “Yes” or “No” to the question “Do you think this person will commit another crime within two years?” Participants were required to answer each question and could not change their response once it was made. After each answer, participants were given two forms of feedback: whether their response was correct and their average accuracy.

The 1,000 defendants were randomly divided into 20 subsets of 50 each. Each participant was randomly assigned to see one of these 20 subsets. Participants saw the 50 defendants—one at a time—in random order. Participants were only allowed to complete a single subset of 50 defendants.

Participants were recruited through Amazon’s Mechanical Turk, an online crowd-sourcing marketplace where people are paid to perform a wide variety of tasks. (Institutional review board [IRB] guidelines were followed for all participants.) Our task was titled “Predicting Crime” with the description “Read a few sentences about an actual person and predict if they will commit a crime in the future.” The keywords for the task were “survey, research, criminal justice.” Participants were paid one dollar for completing the task and an additional five-dollar bonus if their overall accuracy on the task was greater than 65%. This bonus was intended to provide an incentive for participants to pay close attention to the task. To filter out participants who were not paying close attention, three catch trials were randomly added to the subset of 50 questions. These questions were formatted to look like all other questions but had easily identifiable correct answers. A participant’s response was eliminated from our analysis if any of these questions were answered incorrectly.

Responses for the first (no-race) condition were collected from 462 participants, 62 of whom were removed due to an incorrect response on a catch trial. Responses for the second (race) condition were collected from 449 participants, 49 of whom were removed due to an incorrect response on a catch trial. In each condition, this yielded 20 participant responses for each of 20 subsets of 50 questions. Because of the random pairing of participants to a subset of 50 questions, we occasionally oversampled the required number of 20 participants. In these cases, we selected a random 20 participants and discarded any excess responses.


We compare the overall accuracy and bias in human assessment with the algorithmic assessment of COMPAS. Throughout, a positive prediction is one in which a defendant is predicted to recidivate while a negative prediction is one in which they are predicted to not recidivate. We measure overall accuracy as the rate at which a defendant is correctly predicted to recidivate or not (i.e., the combined true positive and true negative rates). We also report on false positives (a defendant is predicted to recidivate but they don’t) and false negatives (a defendant is predicted to not recidivate but they do). Throughout, we use both paired and unpaired t-tests (with 19 degrees of freedom) to analyze the performance of our participants and COMPAS.
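These three quantities can be computed directly from paired outcome/prediction lists. A minimal sketch (our own helper, not the paper's code):

```python
def confusion_rates(y_true, y_pred):
    """Overall accuracy, false positive rate, and false negative rate.

    y_true[i] is True if defendant i actually recidivated;
    y_pred[i] is True if they were predicted to recidivate."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(1 for t, p in pairs if t and p)          # correctly predicted to recidivate
    tn = sum(1 for t, p in pairs if not t and not p)  # correctly predicted not to
    fp = sum(1 for t, p in pairs if not t and p)      # predicted to recidivate, didn't
    fn = sum(1 for t, p in pairs if t and not p)      # predicted not to, but did
    return {
        "accuracy": (tp + tn) / len(pairs),
        "false_positive_rate": fp / (fp + tn),
        "false_negative_rate": fn / (fn + tp),
    }
```

Computing these rates separately for Black and white defendants is exactly how the disparate-error-rate comparison later in the article is carried out.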

The mean and median accuracy in the no-race condition—computed by analyzing the average accuracy of the 400 human predictions—are 62.1% and 64.0%, respectively. We compare these results with the performance of COMPAS on this subset of 1,000 defendants. Because groups of 20 participants judged the same subset of 50 defendants, the individual judgments are not independent. However, because each participant judged only one subset of the defendants, the median accuracies of each subset can reasonably be assumed to be independent. The participant performance on the 20 subsets can therefore be directly compared to the COMPAS performance on the same 20 subsets. A one-sided t-test reveals that the average of the 20 median participant accuracies of 62.8% is, just barely, lower than the COMPAS accuracy of 65.2% (p = 0.045).

To determine if there is “wisdom in the crowd” (in our case, a small crowd of 20 people per subset), participant responses were pooled within each subset using a majority rules criterion. This crowd-based approach yields a prediction accuracy of 67.0%. A one-sided t-test reveals that COMPAS is not significantly better than the crowd (p = 0.85). This demonstrates that the commercial COMPAS prediction algorithm does not outperform small crowds of non-experts at predicting recidivism.
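The majority-rules pooling is simple to state in code: for each defendant, the crowd predicts recidivism when more than half of its members do (a sketch under our own naming):

```python
def majority_vote(votes):
    """votes: iterable of booleans, one per participant, for one defendant.
    Returns True when a strict majority predicts recidivism (ties -> False)."""
    votes = list(votes)
    return sum(votes) * 2 > len(votes)

def crowd_predictions(response_matrix):
    """response_matrix[i][j]: participant j's vote on defendant i.
    Returns the pooled crowd prediction for each defendant."""
    return [majority_vote(row) for row in response_matrix]
```

The pooled predictions can then be scored against the recorded two-year outcomes exactly as the individual predictions were.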

As we noted earlier, there exists significant debate regarding the measurement of algorithmic fairness. For the purpose of this study, we evaluate the human predictions with the same fairness criteria used in the ProPublica analysis for ease of comparability. We acknowledge that this may not be the ideal measure of fairness, and that there is ongoing debate in the literature on the appropriate measure. Regardless, we consider fairness in terms of disparate false positive rates (incorrectly classifying a defendant as high risk when they are not) and false negative rates (incorrectly classifying a defendant as low risk when they are not). We believe that, while perhaps not perfect, this measure of fairness shines a light on real-world consequences of incorrect predictions by quantifying the number of defendants that are improperly incarcerated or released.

We measure the fairness of our participants with respect to a defendant’s race based on the crowd predictions. Our participants’ accuracy on Black defendants is 68.2% compared to 67.6% for white defendants. An unpaired t-test reveals no significant difference across race (p = .87). This is similar to COMPAS, which shows a statistically insignificant difference in accuracy between Black defendants (64.9%) and white defendants (65.7%). By this measure of fairness, our participants and COMPAS are fair to Black and white defendants.

Despite this fairness in overall accuracy, our participants had a significant difference in the false positive and false negative rates for Black and white defendants. Specifically, our participants’ false positive rate for Black defendants is 37.1% compared to 27.2% for white defendants, and our participants’ false negative rate for Black defendants is 29.2% compared to 40.3% for white defendants.

These discrepancies are similar to that of COMPAS, which has a false positive rate of 40.4% for Black defendants and 25.4% for white defendants, and a false negative rate for Black defendants of 30.9% compared to 47.9% for white defendants. See table 1(a) and (c) and figure 1 for a summary of these results. By this measure of fairness, our participants and COMPAS are similarly unfair to Black defendants, despite—bizarrely—the fact that race is not explicitly specified.

[See full article for tables cited] 



The results of this study led us to question how human participants produced racially disparate predictions despite not knowing the race of the defendant. We recruited a new set of 400 participants to repeat the same exercise but this time with the defendant’s race included. We wondered if including a defendant’s race would reduce or exaggerate the effect of any implicit, explicit, or institutional racial bias.

In this race condition, the mean and median accuracy on predicting whether a defendant would recidivate is 62.3% and 64.0%, nearly identical to the condition where race is not specified, see table 1(a) and (b). The crowd-based accuracy is 66.5%, slightly lower than the condition in which race is not specified, but not significantly so. With respect to fairness, participant accuracy is not significantly different for Black defendants, 66.2%, compared to white defendants, 67.6%. The false positive rate for Black defendants is 40.0% compared to 26.2% for white defendants. The false negative rate for Black defendants is 30.1% compared to 42.1% for white defendants. See table 1(b) for a summary of these results.


Somewhat surprisingly, including race does not have a significant impact on overall accuracy or fairness. Most interestingly, the exclusion of race does not necessarily lead to the elimination of racial disparities in human recidivism prediction.

At this point in our study, we have seen clear evidence that the COMPAS predictive software is not superior to nonexpert human predictions. However, we are left with two perplexing questions:

  1. How is it that nonexperts are as accurate as a widely used commercial software? and

  2. How is it that nonexperts appear to be racially biased even when they don’t know the race of the defendant?

We next set out to answer these questions.

Replicating COMPAS’s Algorithmic Recidivism Prediction

With an overall accuracy of around 65%, COMPAS and nonexpert predictions are not as accurate as we might want, particularly from the point of view of a defendant whose future lies in the balance. Since nonexperts are as accurate as the COMPAS software, we wondered about the sophistication of the underlying COMPAS predictive algorithm. This algorithm, however, is not publicly disclosed, so we built our own predictive algorithm in an attempt to understand and effectively reverse engineer the COMPAS software.


Our algorithmic analysis used the same seven features as described in the previous section, extracted from the records in the Broward County data set. Unlike the human assessment that analyzed a subset of these defendants, the following algorithmic assessment is performed over the entire data set.

We employed two different classifiers: logistic regression (a simple, general-purpose, linear classifier) and a support vector machine (a more complex, general-purpose, nonlinear classifier). The input to each classifier was seven features from 7,214 defendants: age, sex, number of juvenile misdemeanors, number of juvenile felonies, number of prior (nonjuvenile) crimes, crime degree, and crime charge (see previous section). Each classifier was trained to predict recidivism from these seven features. Each classifier was trained 1,000 times on a random 80% training and 20% testing split. We report the average testing accuracy.

Logistic regression is a linear classifier that, in a two-class classification (as in our case), computes a separating hyperplane to distinguish between recidivists and nonrecidivists. A nonlinear support vector machine employs a kernel function—in our case, a radial basis kernel—to project the initial seven-dimensional feature space into a higher-dimensional space in which a linear hyperplane is used to distinguish between recidivists and nonrecidivists. The use of a kernel function amounts to computing a nonlinear separating surface in the original seven-dimensional feature space, allowing the classifier to capture more complex patterns between recidivists and nonrecidivists than is possible with linear classifiers.
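The evaluation protocol (repeated random 80/20 splits, averaged test accuracy) can be sketched model-agnostically. The `NearestCentroid` class below is a toy stand-in we introduce purely for illustration; in practice a library classifier such as scikit-learn's `LogisticRegression` or `SVC(kernel="rbf")` would be plugged into the same harness:

```python
import numpy as np

def average_test_accuracy(X, y, make_model, n_splits=1000, test_frac=0.2):
    """Average accuracy of a classifier over repeated random train/test splits."""
    rng = np.random.default_rng(0)
    n_test = int(len(y) * test_frac)
    accs = []
    for _ in range(n_splits):
        idx = rng.permutation(len(y))
        test, train = idx[:n_test], idx[n_test:]
        model = make_model().fit(X[train], y[train])
        accs.append(np.mean(model.predict(X[test]) == y[test]))
    return float(np.mean(accs))

class NearestCentroid:
    """Toy stand-in classifier: predicts the class whose feature mean is closer."""
    def fit(self, X, y):
        self.c0 = X[y == 0].mean(axis=0)
        self.c1 = X[y == 1].mean(axis=0)
        return self
    def predict(self, X):
        d0 = np.linalg.norm(X - self.c0, axis=1)
        d1 = np.linalg.norm(X - self.c1, axis=1)
        return (d1 < d0).astype(int)
```

Averaging over many random splits, as the authors do, reduces the variance that any single 80/20 split would introduce into the reported accuracy.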


We found that a simple linear predictor—logistic regression (LR)—provided with the same seven features as our participants (in the no-race condition) yields prediction accuracy similar to that of COMPAS’s algorithm. As compared to COMPAS’s overall accuracy of 65.4%, our LR classifier yields an overall testing accuracy of 66.6%. Our predictor also yields similar results to COMPAS in terms of predictive fairness, see table 2(a) and (d).

Despite using only seven features as input, a standard linear predictor yields similar results to COMPAS’s software. We can reasonably conclude, therefore, that COMPAS is employing nothing more sophisticated than a linear predictor, or its equivalent.


To test whether performance was limited by the classifier or by the nature of the data, we trained a more powerful nonlinear support vector machine (SVM) on the same data. Somewhat surprisingly, the SVM yields nearly identical results to the linear classifier, see table 2(c). If the relatively low accuracy of the linear classifier was because the data is not linearly separable, then we would have expected the nonlinear SVM to perform better. The failure to do so suggests the data is not separable, linearly or otherwise.

Lastly, we wondered if using an even smaller subset of the seven features would be as accurate as COMPAS. We trained and tested an LR-classifier on all possible subsets of the seven features. In agreement with the research done by Angelino et al., we show that a classifier based on only two features—age and total number of prior convictions—performs as well as COMPAS, see table 2(b). The importance of these two criteria is consistent with the conclusions of two meta-analysis studies that set out to determine, in part, which criteria are most predictive of recidivism.
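To make the two-feature result concrete, here is a from-scratch logistic regression trained by gradient descent on two features (age and number of prior convictions). This is a teaching sketch, not the authors' code; in practice features should be standardized before fitting, and a library implementation would normally be used:

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, epochs=5000):
    """Logistic regression via batch gradient descent.
    X: (n, 2) array of features, e.g. [age, n_priors] (standardized); y: 0/1 outcomes."""
    Xb = np.column_stack([np.ones(len(X)), X])  # prepend a bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))       # predicted probabilities
        w -= lr * Xb.T @ (p - y) / len(y)       # gradient of the log-loss
    return w

def predict_logistic(w, X):
    """Threshold the fitted model's probabilities at 0.5."""
    Xb = np.column_stack([np.ones(len(X)), X])
    return (1.0 / (1.0 + np.exp(-Xb @ w)) >= 0.5).astype(int)
```

A classifier this small has only three learned numbers (a bias and two weights), which underscores the article's point: COMPAS's predictive behavior can be matched by an extremely simple model.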

Note: Predictions are for (a) logistic regression with seven features; (b) logistic regression with two features; (c) a nonlinear support vector machine with seven features; and (d) the commercial COMPAS software with 137 features. The results in columns (a)–(c) correspond to the average testing accuracy over 1,000 random 80/20 training/testing splits. The values in the square brackets correspond to the 95% bootstrapped (a)–(c) and binomial (d) confidence intervals.


In addition to further elucidating the inner workings of these predictive algorithms, the behavior of this two-feature linear classifier helps us understand how the nonexperts were able to match COMPAS’s predictive ability. When making predictions about an individual’s likelihood of future recidivism, the nonexperts saw the following seven criteria: age, sex, number of juvenile misdemeanors, number of juvenile felonies, number of prior (nonjuvenile) crimes, current crime degree, and current crime charge. If the algorithmic classifier can rely only on a person’s age and number of prior crimes to make this prediction, it is plausible that the nonexperts implicitly or explicitly focused on these criteria as well. (Recall that participants were provided with feedback on their correct and incorrect responses, so it is likely that some learning occurred.)


The two-feature classifier effectively learned that if a person is young and has already been convicted multiple times, they are at a higher risk of reoffending, but if a person is older and has not previously been convicted of a crime, then they are at a lower risk of reoffending. This certainly seems like a sensible strategy, if not a terribly accurate one.


The predictive strength of a person’s age and number of prior convictions in this context also helps explain the racially disparate predictions seen in both of our human studies and in COMPAS’s predictions overall. On a national scale, Black people are more likely to have prior convictions on their record than white people are: for example, Black people in the United States are incarcerated in state prisons at a rate that is 5.1 times that of white Americans. Within the data set used in the study, white defendants have an average of 2.59 prior convictions, whereas Black defendants have an average of 4.95 prior convictions. In Florida, the state in which COMPAS was validated for use in Broward County, the incarceration rate of Black people is 3.6 times higher than that of white people. These racially disparate incarceration rates are not fully explained by different rates of offense by race. Racial disparities against Black people in the United States also exist in policing, arrests, and sentencing. The racial bias that appears in both the algorithmic and human predictions is a result of these discrepancies.


While the total number of prior convictions is one of the most predictive variables of recidivism, its predictive power is not very strong. Because COMPAS and the human participants are only moderately accurate (both achieve an accuracy of around 65%), they both make significant, and racially biased, mistakes. Black defendants are more likely to be classified as medium or high risk by COMPAS, because Black defendants are more likely to have prior convictions due to the fact that Black people are more likely to be arrested, charged, and convicted. On the other hand, white defendants are more likely to be classified as low risk by COMPAS, because white defendants are less likely to have prior convictions. Black defendants, therefore, who don’t reoffend are predicted to be riskier than white defendants who don’t reoffend. Conversely, white defendants who do reoffend are predicted to be less risky than Black defendants who do reoffend. As a result, the false positive rate is higher for Black defendants than white defendants, and the false negative rate for white defendants is higher than for Black defendants. This, in short, is the racial bias that ProPublica first exposed.


This same type of disparate outcome appeared in the human predictions as well. Because the human participants saw only a few facts about each defendant, it is safe to assume that the total number of prior convictions was heavily weighted in their predictions. Therefore, the bias of the human predictions was likely also a result of the difference in conviction history, which itself is linked to inequities in our criminal justice system.


The participant and COMPAS’s predictions were in agreement for 692 of the 1,000 defendants, indicating that perhaps there could be predictive power in the “combined wisdom” of the risk tool and the human-generated risk scores. However, a classifier that combined the same seven features per defendant along with the COMPAS risk score and the average human-generated risk score performed no better than any of the individual predictions. This suggests that the mistakes made by humans and COMPAS are not independent.


We have shown that commercial software that is widely used to predict recidivism is no more accurate or fair than the predictions of people with little to no criminal justice expertise who responded to an online survey. We have shown that these predictions are functionally equivalent. When discussing the use of COMPAS in the courtroom to make these life-altering decisions, we should therefore ask whether we would place these same decisions in the equally accurate and biased hands of random people responding to an online survey.


In response to our study, equivant, the makers of COMPAS, stated both that our study was “highly misleading” and that it “confirmed that COMPAS achieves good predictability.” Despite this contradictory statement and a promise to analyze our data and results, equivant has not demonstrated any flaws with our study.


Algorithmic predictions—whether in the courts, in university admissions, or employment, financial, and health decisions—can have a profound impact on someone’s life. It is essential, therefore, that the underlying data and algorithms that fuel these predictions are well understood, validated, and transparent to those who are the subject of their use.


In beginning to question the predictive validity of an algorithmic tool, it is essential to also interrogate the ethical implications of the use of the tool. Recidivism prediction tools are used in decisions about a person’s civil liberties. They are, for example, used to answer questions such as “Will this person commit a crime if they are released from jail before their trial? Should this person instead be detained in jail before their trial?” and “How strictly should this person be supervised while they are on parole? What is their risk of recidivism while they are out on parole?” Even if technologists could build a perfect and fair recidivism prediction tool, we should still ask if the use of this tool is just. In each of these contexts, a person is punished (either detained or surveilled) for a crime they have not yet committed. Is punishing a person for something they have not yet done ethical and just?


It is also crucial to discuss the possibility of building any recidivism prediction tool in the United States that is free from racial bias. Recidivism prediction algorithms are necessarily trained on decades of historical criminal justice data, learning the patterns of which kinds of people are incarcerated again and again. The United States suffers from racial discrimination at every stage in the criminal justice system. Machine learning technologies rely on the core assumption that the future will look like the past, and it is imperative that the future of our justice system looks nothing like its racist past. If any criminal risk prediction tool in the United States will inherently reinforce these racially disparate patterns, perhaps they should be avoided altogether.

For full publication, please visit: 

Scooped by Roxana Marachi, PhD!

Digital Dystopia: The Danger in Buying What the EdTech Surveillance Industry is Selling - ACLU Research Report

"Digital Dystopia: The Danger in Buying What the EdTech Surveillance Industry is Selling, an ACLU research report, examines the EdTech Surveillance (educational technologies used for surveillance) industry in U.S. K-12 schools. Using in-depth investigation into industry products, an incident audit, student focus groups, and national polling, this report scrutinizes industry claims, assesses the efficacy of the products, and explores the impacts EdTech Surveillance has on students and schools. The report concludes by offering concrete actions school districts, elected officials, and community members can take to ensure decisions about using surveillance products are consistent and well-informed. This includes model legislation and decision-making tools, which will often result in the rejection of student surveillance technologies."

Accompanying Resources:

School Board Model Policy - School Surveillance Technology

Model Bill - Student Surveillance Technology Acquisition Standards Act

School Leadership Checklist - School Surveillance Technology

10 Questions for School Board Meetings - School Surveillance Technology

Public Records Request Template- School Surveillance Technology


To download full report, click on title or arrow above. 

Scooped by Roxana Marachi, PhD!

The Complications of Regulating AI // Interview with Elizabeth Renieris

Podcast produced by Lydia Morell
"The idea that advancing technology outpaces regulation serves the industry's interests, says Elizabeth Renieris of Oxford. Current oversight methods apply to specific issues, but "general purpose" AI is harder to keep in check."... 


Scooped by Roxana Marachi, PhD!

Algorithmic personalization is disrupting a healthy teaching environment // LSE

By Velislava Hillman and Molly Esquivel

"The UK government has given no sign of when it plans to regulate digital technology companies. In contrast, the US Federal Trade Commission will tomorrow consider whether to make changes to the Children’s Online Privacy Protection Act to address the risks emanating from the growing power of digital technology companies, many of which already play substantial roles in children’s lives and schooling. The free rein offered thus far has led many businesses to infiltrate education, slowly degrading the teaching profession and spying on children, argue LSE Visiting fellow Dr Velislava Hillman and junior high school teacher and Doctor of Education candidate Molly Esquivel. They take a look here at what they describe as the mess that digitalized classrooms have become, due to the lack of regulation and absence of support if businesses cause harm."


"Any teacher would attest to the years of specialized schooling, teaching practice, code of ethics and standards they face to obtain a license to teach; those in higher education also need a high-level degree, published scholarship, postgraduate certificates such as PGCE and more. In contrast, businesses offering education technologies enter the classroom with virtually no demonstration of any licensing or standards.

The teaching profession has now become an ironic joke of sorts. If teachers in their college years once dreamed of inspiring their future students, today these dreamers are facing a different reality: one in which they are required to curate and operate with all kinds of applications and platforms; collect edtech badges of competency (fig1); monitor data; navigate students through yet more edtech products.

Unlicensed and unregulated, without years in college and special teaching credentials, edtech products not only override teachers’ competencies and roles; they now dictate them.


Figure 1: Teachers race to collect edtech badges

[See original article for image]

Wellbeing indexes and Karma Points

“Your efforts are being noticed” is how Thrively, an application that monitors students and claims to be used by over 120,000 educators across the US, greets its user. In the UK, Symanto, an AI-based software that analyses texts to infer the psychological state of an individual, is used for a similar purpose.


The Thrively software gathers metrics on attendance, library use, grades, online learning activities and makes inferences about students – how engaged they are or how they feel. Solutionpath, offering support for struggling students, is used in several universities in the UK. ClassDojo claims to be used by 85% of UK primary schools and a global community of over 50 million teachers and families.


Classroom management software Impero offers teachers remote control of children’s devices. The company claims to provide direct access to over 2 million devices in more than 90 countries. Among other things, the software has a ‘wellbeing keyword library index’ which seeks to identify students who may need emotional support. A form of policing: “with ‘who, what, when and why’ information staff members can build a full picture of the capture and intervene early if necessary”.

These products and others adopt the methodology of algorithm-based monitoring and profiling of students’ mental health. Such products steer not only student behavior but that of teachers too. One reviewer says of Impero: “My teachers always watch our screens with this instead of teaching”.


When working in Thrively, each interaction with a student earns “Karma Points”. The application lists teacher goals – immediately playing on an educator’s deep-seated passion to be their best for their students (fig2). Failure to obtain such points becomes internalized as failure in the teaching profession. Thrively’s algorithms could also trigger an all-out battle over who on the teaching staff can earn the most points. Similarly, ClassDojo offers a ‘mentor’ program to teachers and awards them ‘mentor badges’.

Figure 2: Thrively nudges teachers to engage with it to earn badges and “Karma points”; its tutorial states: “It’s OK to brag when you are elevating humanity.” [See original article for image]


The teacher becomes a ‘line operator’ on a conveyor belt run by algorithms. The amassed data triggers algorithmic diagnostics from each application, carving up the curriculum and controlling students and teachers. Inferential software like Thrively throws teachers into rabbit holes by asking them not only to assess students’ personal interests, but their mental state, too. Its Wellbeing Index takes “pulse checks” to tell how students feel, as though teachers were incapable of direct connection with their students. In the UK, lax legislation with regard to biometric data collection can further enable advancing technologies to exploit such data for mental health prediction and psychometric analytics. Such practices not only increase the risks of harm towards children and students in general; they dehumanize the whole educational process.

Many other technology-infused, surveillance-based applications are thrust into the classroom. Thrively captures data on 12–14-year-olds and, besides inferring how they feel, suggests career pathways. It shares the captured data with third parties such as YouTube Kids and game-based and coding apps – outside vendors that Thrively curates. Impero enables integration with platforms like Clever, used by over 20 million teachers and students, and Microsoft, thus expanding the tech giant’s own reach by millions of individuals. As technology intersects with education, teachers become an afterthought in curriculum design and classroom leadership.

Teachers must remain central in children’s education, not businesses

The digitalization of education has swiftly moved towards an algorithmic hegemony that is degrading the teaching profession. Edtech companies are judging how students learn, how teachers work – and how they both feel. Public-private partnerships are handing experimental, untested beta software built on arbitrary algorithms the unwarranted title of “school official”, undermining teachers. Ironically, teachers still carry the responsibility for what happens in class.

Parents should ask what software is used to judge how their children feel or do in class and why. At universities, students should enquire what inferences algorithms make about their work or their mental health. Alas, this means heaping yet more responsibility on individuals – parents, children, students, teachers – to fend for themselves. Therefore, at least two things must also happen. First, edtech products and companies must be licensed to operate, the way banks, hospitals or teachers are. And second, educational institutions should be transparent about how mental health or academic profiling in general is assessed. If and when software analytics play a part, educators (through enquiry) as well as policymakers (through law) should insist on transparency and be critical about the data points collected and the algorithms that process them.

This article represents the views of the author and not the position of the Media@LSE blog, nor of the London School of Economics and Political Science.


For original post, please visit: 

Scooped by Roxana Marachi, PhD!

What's in a Name? Auditing Large Language Models for Race and Gender Bias // (Haim, Salinas, & Nyarko, 2024) // Stanford Law School via  

What's in a Name? Auditing Large Language Models for Race and Gender Bias.
By Amit Haim, Alejandro Salinas, Julian Nyarko


"We employ an audit design to investigate biases in state-of-the-art large language models, including GPT-4. In our study, we prompt the models for advice involving a named individual across a variety of scenarios, such as during car purchase negotiations or election outcome predictions. We find that the advice systematically disadvantages names that are commonly associated with racial minorities and women. Names associated with Black women receive the least advantageous outcomes. The biases are consistent across 42 prompt templates and several models, indicating a systemic issue rather than isolated incidents. While providing numerical, decision-relevant anchors in the prompt can successfully counteract the biases, qualitative details have inconsistent effects and may even increase disparities. Our findings underscore the importance of conducting audits at the point of LLM deployment and implementation to mitigate their potential for harm against marginalized communities."


Please visit the following for the abstract and link to download: 


Scooped by Roxana Marachi, PhD!

Time for a Pause: Without Effective Public Oversight, AI in Schools Will Do More Harm Than Good // National Education Policy Center 

Key Takeaway: The current wholesale adoption of unregulated Artificial Intelligence applications in schools poses a grave danger to democratic civil society and to individual freedom and liberty.

Find Documents:

NEPC Publication:

Publication Announcement:

Michelle Renée Valladares: (720) 505-1958,
Ben Williamson: 011-44-0131-651-6176,


Disregarding their own widely publicized appeals for regulating and slowing implementation of artificial intelligence (AI), leading tech giants like Google, Microsoft, and Meta are instead racing to evade regulation and incorporate AI into their platforms. 


A new NEPC policy brief, Time for a Pause: Without Effective Public Oversight, AI in Schools Will Do More Harm Than Good, warns of the dangers of unregulated AI in schools, highlighting democracy and privacy concerns. Authors Ben Williamson of the University of Edinburgh and Alex Molnar and Faith Boninger of the University of Colorado Boulder examine the evidence and conclude that the proliferation of AI in schools jeopardizes democratic values and personal freedoms.

Public education is a public good that is essential to democratic civic life. The public must, therefore, be able to provide meaningful direction over schools through transparent democratic governance structures. Yet important discussions about AI’s potentially negative impacts on education are being overwhelmed by relentless rhetoric promoting its purported ability to positively transform teaching and learning. The result is that AI, with little public oversight, is on the verge of becoming a routine and overriding presence in schools.

Years of warnings and precedents have highlighted the risks posed by the widespread use of pre-AI digital technologies in education, which have obscured decision-making and enabled student data exploitation. Without effective public oversight, the introduction of opaque and unproven AI systems and applications will likely exacerbate these problems.

The authors explore the harms likely if lawmakers and others do not step in with carefully considered regulations. Integration of AI can degrade teacher-student relationships, corrupt curriculum with misinformation, encourage student performance bias, and lock schools into a system of expensive corporate technology. Further, they contend, AI is likely to exacerbate violations of student privacy, increase surveillance, and further reduce the transparency and accountability of educational decision-making.


The authors advise that without responsible development and regulation, these opaque AI models and applications will become enmeshed in routine school processes. This will force students and teachers to become involuntary test subjects in a giant experiment in automated instruction and administration that is sure to be rife with unintended consequences and potentially negative effects. Once enmeshed, the only way to disentangle from AI would be to completely dismantle those systems.

The policy brief concludes by suggesting measures to prevent these extensive risks. Perhaps most importantly, the authors urge school leaders to pause the adoption of AI applications until policymakers have had sufficient time to thoroughly educate themselves and develop legislation and policies ensuring effective public oversight and control of its school applications.


Find Time for a Pause: Without Effective Public Oversight, AI in Schools Will Do More Harm Than Good, by Ben Williamson, Alex Molnar, and Faith Boninger, at:


Suggested Citation: Williamson, B., Molnar, A., & Boninger, F. (2024). Time for a pause: Without effective public oversight, AI in schools will do more harm than good. Boulder, CO: National Education Policy Center. Retrieved [date] from


For original link to announcement, please see: 


Scooped by Roxana Marachi, PhD!

The case of Canvas: Longitudinal datafication through learning management systems // Marachi & Quill, 2020 Teaching in Higher Education: Critical Perspectives  

"The Canvas Learning Management System (LMS) is used in thousands of universities across the United States and internationally, with a strong and growing presence in K-12 and higher education markets. Analyzing the development of the Canvas LMS, we examine 1) ‘frictionless’ data transitions that bridge K-12, higher education, and workforce data, 2) integration of third-party applications and interoperability or data-sharing across platforms, 3) privacy and security vulnerabilities, and 4) predictive analytics and dataveillance. We conclude that institutions of higher education are currently ill-equipped to protect students and faculty required to use the Canvas Instructure LMS from data harvesting or exploitation. We challenge inevitability narratives and call for greater public awareness concerning the use of predictive analytics, impacts of algorithmic bias, need for algorithmic transparency, and enactment of ethical and legal protections for users who are required to use such software platforms."

KEYWORDS: Data ethics, data privacy, predictive analytics, higher education, dataveillance
Author email contact: 
Rescooped by Roxana Marachi, PhD from Social Impact Bonds, "Pay For Success," Results-Based Contracting, and Blockchain Digital Identity Systems!

Ableism And Disability Discrimination In New Surveillance Technologies: How new surveillance technologies in education, policing, health care, and the workplace disproportionately harm disabled people


Full report – PDF 

Plain language version – PDF

By Lydia X. Z. Brown, Ridhi Shetty, Matt Scherer, and Andrew Crawford

"Algorithmic technologies are everywhere. At this very moment, you can be sure students around the world are complaining about homework, sharing gossip, and talking about politics — all while computer programs observe every web search they make and every social media post they create, sending information about their activities to school officials who might punish them for what they look at. Other things happening right now likely include:

  • Delivery workers are trawling up and down streets near you while computer programs monitor their location and speed to optimize schedules and routes and to evaluate their performance;
  • People working from home are looking at their computers while their computers are staring back at them, timing their bathroom breaks, recording their computer screens, and potentially listening to them through their microphones;
  • Your neighbors – in your community or the next one over – are being tracked and designated by algorithms targeting police attention and resources to some neighborhoods but not others;
  • Your own phone may be tracking data about your heart rate, blood oxygen level, steps walked, menstrual cycle, and diet, and that information might be going to for-profit companies or your employer. Your social media content might even be mined and used to diagnose a mental health disability.

This ubiquity of algorithmic technologies has pervaded every aspect of modern life, and the algorithms are improving. But while algorithmic technologies may become better at predicting which restaurants someone might like or which music a person might enjoy listening to, not all of their possible applications are benign, helpful, or just.

Scholars and advocates have demonstrated myriad harms that can arise from the types of encoded prejudices and self-perpetuating cycles of discrimination, bias, and oppression that may result from automated decision-makers. These potentially harmful technologies are routinely deployed by government entities, private enterprises, and individuals to make assessments and recommendations about everything from rental applications to hiring, allocation of medical resources, and whom to target with specific ads. They have been deployed in a variety of settings including education and the workplace, often with the goal of surveilling activities, habits, and efficiency.

Disabled people comprise one such community that experiences discrimination, bias, and oppression resulting from automated decision-making technology. Disabled people continually experience marginalization in society, especially those who belong to other marginalized communities such as disabled women of color. Yet, not enough scholars or researchers have addressed the specific harms and disproportionate negative impacts that surveillance and algorithmic tools can have on disabled people. This is in part because algorithmic technologies that are trained on data that already embeds ableist (or relatedly racist or sexist) outcomes will entrench and replicate the same ableist (and racial or gendered) bias in the computer system. For example, a tenant screening tool that considers rental applicants’ credit scores, past evictions, and criminal history may prevent poor people, survivors of domestic violence, and people of color from getting an apartment because they are disproportionately likely to have lower credit scores, past evictions, and criminal records due to biases in the credit and housing systems and in policing disparities.

This report examines four areas where algorithmic and/or surveillance technologies are used to surveil, control, discipline, and punish people, with particularly harmful impacts on disabled people. They include: (1) education; (2) the criminal legal system; (3) health care; and (4) the workplace. In each section, we describe several examples of technologies that can violate people’s privacy, contribute to or accelerate existing harm and discrimination, and undermine broader public policy objectives (such as public safety or academic integrity)."

Full report – PDF 

Plain language version – PDF 



Scooped by Roxana Marachi, PhD!

Silicon Valley, Philanthrocapitalism, and Policy Shifts from Teachers to Tech // Marachi & Carpenter (2020) 

To order book, please visit: 


Marachi, R., & Carpenter, R. (2020). Silicon Valley, philanthrocapitalism, and policy shifts from teachers to tech. In Givan, R. K. and Lang, A. S. (Eds.). Strike for the Common Good: Fighting for the Future of Public Education (pp.217-233). Ann Arbor: University of Michigan Press.


To download pdf of final chapter manuscript, click here. 

Scooped by Roxana Marachi, PhD!

The Big Business of Tracking and Profiling Students [Interview] // The Markup

View original post published January 15, 2022
"Hello, friends,

The United States is one of the few countries that does not have a federal baseline privacy law that lays out minimum standards for data use. Instead, it has tailored laws that are supposed to protect data in different sectors—including health, children’s and student data. 

But despite the existence of a law—the Family Educational Rights and Privacy Act—that is specifically designed to protect the privacy of student educational records, there are loopholes in the law that still allow data to be exploited. The Markup reporter Todd Feathers has uncovered a booming business in monetizing student data gathered by classroom software. 

In two articles published this week as part of our Machine Learning series, Todd identified a private equity firm, Vista Equity Partners, that has been buying up educational software companies that have collectively amassed a trove of data about children all the way from their first school days through college.

Vista Equity Partners, which declined to comment for Todd’s story, has acquired controlling ownership stakes in EAB, which provides college counseling and recruitment products to thousands of schools, and PowerSchool, which provides software for K-12 schools and says it holds data on more than 45 million children. 

Some of this data is used to create risk-assessment scores that claim to predict students’ future success. Todd filed public records requests for schools across the nation, and using those documents, he was able to discover that PowerSchool’s algorithm, in at least one district, considered a student who was eligible for free or reduced lunch to be at a higher risk of dropping out. 


Experts told us that using a proxy for wealth as a predictor for success is unfair because students can’t change that status and could be steered into less challenging opportunities as a result.


“I think that having [free and reduced lunch status] as a predictor in the model is indefensible in 2021,” said Ryan Baker, the director of the University of Pennsylvania’s Center for Learning Analytics. PowerSchool defended the use of the factor as a way to help educators provide additional services to students who are at risk.


Todd also found public records showing how student data is used by colleges to target potential applicants through PowerSchool’s Naviance software using controversial criteria such as the race of the applicant. For example, Todd uncovered a 2015 contract between Naviance and the University of Kansas revealing that the school paid for a year-long advertising campaign targeting only White students in three states.

The University of Kansas did not respond to requests for comment. PowerSchool’s chief privacy officer Darron Flagg said Naviance has since stopped colleges from using targeting “criteria that excludes under-represented groups.” He also said that PowerSchool complies with the student privacy law and “does not sell student or school data.”

But, as we have written at The Markup many times, not selling data does not mean not profiting from that data. To understand the perils of the booming educational data market, I spoke this week with Roxana Marachi, a professor of education at San José State University, who researches school violence prevention, high-stakes testing, privatization, and the technologization of teaching and learning. Marachi served as education chair of the CA/HI State NAACP from 2019 to 2021 and has been active in local, state, and national efforts to strengthen and protect public education. Her views do not necessarily reflect the policy or position of her employer.

Her written responses to my questions are below, edited for brevity.


Angwin: You have written that ed tech companies are engaged in a “structural hijacking of education.” What do you mean by this?

Marachi: There has been a slow and steady capture of our educational systems by ed tech firms over the past two decades. The companies have attempted to replace many different practices that we have in education. So, initially, it might have been with curriculum, say a reading or math program, but it has grown over the years into wider attempts to extract social, emotional, behavioral, health, and assessment data from students. 

What I find troubling is that there hasn’t been more scrutiny of many of the ed tech companies and their data practices. What we have right now can be called “pinky promise” privacy policies that are not going to protect us. We’re getting into dangerous areas where many of the tech firms are being afforded increased access to the merging of different kinds of data and are actively engaged in the use of “predictive analytics” to try to gauge children’s futures.   

Angwin: Can you talk more about the harmful consequences this type of data exploitation could have?

Marachi: Yes, researchers at the Data Justice Lab at Cardiff University have documented numerous data harms with the emergence of big data systems and related analytics—some of these include targeting based on vulnerability (algorithmic profiling), misuse of personal information, discrimination, data breaches, political manipulation and social harms, and data and system errors.

As an example in education, several data platforms market their products as providing “early warning systems” to support students in need, yet these same systems can also set students up for hyper-surveillance and racial profiling.

One of the catalysts of my inquiry into data harms happened a few years ago when I was using my university’s learning management system. When reviewing my roster, I hovered the cursor over the name of one of my doctoral students and saw that the platform had marked her with one out of three stars, in effect labeling her as in the “lowest third” of students in the course in engagement. This was both puzzling and disturbing as it was such a false depiction—she was consistently highly engaged and active both in class and in correspondence. But the platform’s metric of page views as engagement made her appear otherwise.
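A rank-based metric like the one described — bucketing students into thirds by page views — can be sketched as follows. This is a hypothetical reconstruction for illustration, not the vendor's actual algorithm.

```python
def star_ratings(page_views: dict) -> dict:
    """Rank students by LMS page views and bucket them into thirds
    (1 = bottom third, 3 = top third). A hypothetical reconstruction
    of the rank-based 'engagement' label described above."""
    ranked = sorted(page_views, key=page_views.get)
    n = len(ranked)
    # A student who participates actively in person but rarely clicks
    # through pages still lands in the bottom bucket.
    return {student: 1 + (3 * i) // n for i, student in enumerate(ranked)}
```

With inputs like `{'ana': 4, 'ben': 60, 'eva': 200}`, 'ana' receives one star regardless of how engaged she is in the classroom — the metric counts clicks, not engagement.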

Many tech platforms don’t allow instructors or students to delete such labels or to untether at all from algorithms set to compare students with these rank-based metrics. We need to consider what consequences will result when digital labels follow students throughout their educational paths, what longitudinal data capture will mean for the next generation, and how best to systemically prevent emerging, invisible data harms.

One of the key principles of data privacy is the “right to be forgotten”—for data to be able to be deleted. Among the most troubling of emerging technologies I’ve seen in education are blockchain digital ID systems that do not allow for data on an individual’s digital ledger to ever be deleted.

Angwin: There is a law that is supposed to protect student privacy, the Family Educational Rights Protection Act (FERPA). Is it providing any protection?

Marachi: FERPA is intended to protect student data, but unfortunately it’s toothless. While schools that refuse to address FERPA violations may have federal funding withheld by the Department of Education, in practice this has never happened.

One of the ways that companies can bypass FERPA is to have educational institutions designate them as an educational employee or partner. That way they have full access to the data in the name of supporting student success.

The other problem is that with tech platforms as the current backbone of the education system, in order for students to participate in formal education, they are in effect required to relinquish many aspects of their privacy rights. The current situation appears designed to allow ed tech programs to be in “technical compliance” with FERPA by effectively bypassing its intended protections and allowing vast access to student data.

Angwin: What do you think should be done to mitigate existing risks?

Marachi: There needs to be greater awareness that these data vulnerabilities exist, and we should work collectively to prevent data harms. What might this look like? Algorithmic audits and stronger legislative protections. Beyond these strategies, we also need greater scrutiny of the programs that come knocking on education’s door. One of the challenges is that many of these companies have excellent marketing teams that pitch their products with promises to close achievement gaps, support students’ mental health, improve school climate, strengthen social and emotional learning, support workforce readiness, and more. They’ll use the language of equity, access, and student success, issues that as educational leaders, we care about. 


Many of these pitches in the end turn out to be what I call equity doublespeak, or the Theranos-ing of education, meaning there’s a lot of hype without the corresponding delivery on promises. The Hechinger Report has documented numerous examples of high-profile ed tech programs making dubious claims about the efficacy of their products in the K-12 system. We need to engage in ongoing and independent audits of efficacy, data privacy, and analytic practices of these programs to better serve students in our care.

Angwin: You’ve argued that, at the very least, companies implementing new technologies should follow IRB guidelines for working with human subjects. Could you expand on that?

Marachi: Yes, Institutional Review Boards (IRBs) review research to ensure ethical protections of human subjects. Academic researchers are required to provide participants with full informed consent about the risks and benefits of research they’d be involved in and to offer the opportunity to opt out at any time without negative consequences.


Corporate researchers, it appears, are allowed free rein to conduct behavioral research without any formal disclosure to students or guardians of the potential risks or harms of their interventions, what data they may be collecting, or how they would be using students’ data. We know of numerous risks and harms documented with the use of online remote proctoring systems, virtual reality, facial recognition, and other emerging technologies, but rarely if ever do we see disclosure of these risks in the implementation of these systems.

If corporate researchers in ed tech firms were to be contractually required by partnering public institutions to adhere to basic ethical protections of the human participants involved in their research, it would be a step in the right direction toward data justice." 


Scooped by Roxana Marachi, PhD!

AI Technology Threatens Educational Equity for Marginalized Students // The Progressive


By Tiera Tanksley

"The fall semester is well underway, and schools across the United States are rushing to implement artificial intelligence (AI) in ways that bring about equity, access, and efficiency for all members of the school community. Take, for instance, Los Angeles Unified School District’s (LAUSD) recent decision to implement “Ed.” 


Ed is an AI chatbot meant to replace school advisors for students with Individualized Education Programs (IEPs), who are disproportionately Black. Announced on the heels of a national uproar about teachers being unable to read IEPs due to lack of time, energy, and structural support, Ed might seem to many like a sliver of hope—the silver bullet needed to address the chronic mismanagement of IEPs and ongoing disenfranchisement of Black students in the district. But for Black students with IEPs, AI technologies like Ed might be more akin to a nightmare.

Since the pandemic, public schools have seen a proliferation of AI technologies that promise to remediate educational inequality for historically marginalized students. These technologies claim to predict behavior and academic performance, manage classroom engagement, detect and deter cheating, and proactively stop campus-based crimes before they happen. Unfortunately, because anti-Blackness is often baked into the design and implementation of these technologies, they often do more harm than good.

Proctorio, for example, is a popular remote proctoring platform that uses AI to detect perceived behavior abnormalities by test takers in real time. Because the platform employs facial detection systems that fail to recognize Black faces more than half of the time, Black students have an exceedingly hard time completing their exams without triggering the faulty detection systems, which results in locked exams, failing grades, and disciplinary action.


While being falsely flagged by Proctorio might induce test-taking anxiety or result in failed courses, the consequences for inadvertently triggering school safety technologies are much more devastating. Some of the most popular school safety platforms, like Gaggle and GoGuardian, have been known to falsely identify discussions about LGBTQ+ identity, race-related content, and language used by Black youth as dangerous or in violation of school disciplinary policies. Because many of these platforms are directly connected to law enforcement, students who are falsely identified are contacted by police both on campus and in their homes. Considering that Black youth endure the highest rates of discipline, assault, and carceral contact on school grounds and are six times more likely than their white peers to have fatal encounters with police, the risk of experiencing algorithmic bias can be life threatening.

These examples speak to the dangers of educational technologies designed specifically for safety, conduct, and discipline. But what about education technology (EdTech) intended for learning? Are the threats to student safety, privacy, and academic wellbeing the same?

Unfortunately, the use of educational technologies for purposes other than discipline seems to be the exception, not the rule. A national study examining the use of EdTech found an overall decrease in the use of the tools for teaching and learning, with over 60 percent of teachers reporting that the software is used to identify disciplinary infractions. 

What’s more, Black students and students with IEPs endure significantly higher rates of discipline not only from being disproportionately surveilled by educational technologies, but also from using tools like ChatGPT to make their learning experience more accommodating and accessible. This could include using AI technologies to support executive functioning, access translated or simplified language, or provide alternative learning strategies.




To be sure, the stated goals and intentions of educational technologies are laudable, and speak to our collective hopes and dreams for the future of schools—places that are safe, engaging, and equitable for all students regardless of their background. But many of these technologies are more likely to exacerbate educational inequities like racialized gaps in opportunity, school punishment, and surveillance, dashing many of these idealistic hopes.  

To confront the disparities wrought by racially-biased AI, schools need a comprehensive approach to EdTech that addresses the harms of algorithmic racism for vulnerable groups. There are several ways to do this. 

One possibility is recognizing that EdTech is not neutral. Despite popular belief, educational technologies are not unbiased, objective, or race-neutral, and they do not inherently support the educational success of all students. Oftentimes, racism becomes encoded from the onset of the design process, and can manifest in the data set, the code, the decision making algorithms, and the system outputs.

Another option is fostering critical algorithmic literacy. Incorporating critical AI curriculum into K-12 coursework, offering professional development opportunities for educators, or hosting community events to raise awareness of algorithmic bias are just a few of the ways schools can support bringing students and staff up to speed. 

A third avenue is conducting algorithmic equity audits. Each year, the United States spends nearly $13 billion on educational technologies, with the LAUSD spending upwards of $227 million on EdTech in the 2020-2021 academic year alone. To avoid a costly mistake, educational stakeholders can work with third-party auditors to identify biases in EdTech programs before launching them. 
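One concrete check such a third-party audit might run is comparing a tool's flag rates across student groups. The function below is a minimal sketch of that kind of disparity measurement, not a full audit methodology.

```python
from collections import defaultdict

def flag_rate_disparity(flags, groups):
    """Compute per-group flag rates and the largest gap between groups.
    `flags` marks whether a tool flagged each student; `groups` gives
    each student's demographic group. Large gaps warrant closer review
    before (or after) a district adopts the tool."""
    counts = defaultdict(lambda: [0, 0])  # group -> [flagged, total]
    for flagged, group in zip(flags, groups):
        counts[group][0] += int(flagged)
        counts[group][1] += 1
    rates = {group: flagged / total for group, (flagged, total) in counts.items()}
    return {"rates": rates, "max_gap": max(rates.values()) - min(rates.values())}
```

Run over a tool's historical outputs joined with demographic records, a large `max_gap` is exactly the kind of disparity an equity audit would surface before a district signs a contract.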

Regardless of the imagined future that Big Tech companies try to sell us, the current reality of EdTech for marginalized students is troubling and must be reckoned with. For LAUSD—the second largest district in the country and the home of the fourteenth largest school police force in California—the time to tackle the potential harms of AI systems like Ed the IEP Chatbot is now."


For full post, please visit: 

Samantha Alanís's curator insight, February 1, 10:19 PM
Certainly!! The potential threat of AI technology to educational equity for students raises concerns... It is crucial to consider how technological implementations may inadvertently exacerbate existing disparities, emphasizing the need for thoughtful, inclusive approaches in education technology to ensure equitable access and opportunities for all!!! :D
Scooped by Roxana Marachi, PhD!

Tokenizing Toddlers: Cradle-to-Career Behavioral Tracking on Blockchain, Web3, and the "Internet of Education" [Slidedeck] // Marachi, 2022


Scooped by Roxana Marachi, PhD!

Behind the AI boom, an army of overseas workers in ‘digital sweatshops’ // The Washington Post


By Rebecca Tan and Regine Cabato

CAGAYAN DE ORO, Philippines — In a coastal city in the southern Philippines, thousands of young workers log online every day to support the booming business of artificial intelligence.

In dingy internet cafes, jampacked office spaces or at home, they annotate the masses of data that American companies need to train their artificial intelligence models. The workers differentiate pedestrians from palm trees in videos used to develop the algorithms for automated driving; they label images so AI can generate representations of politicians and celebrities; they edit chunks of text to ensure language models like ChatGPT don’t churn out gibberish.


More than 2 million people in the Philippines perform this type of “crowdwork,” according to informal government estimates, as part of AI’s vast underbelly. While AI is often thought of as human-free machine learning, the technology actually relies on the labor-intensive efforts of a workforce spread across much of the Global South and often subject to exploitation.

The mathematical models underpinning AI tools get smarter by analyzing large data sets, which need to be accurate, precise and legible to be useful. Low-quality data yields low-quality AI. So click by click, a largely unregulated army of humans is transforming the raw data into AI feedstock.

In the Philippines, one of the world’s biggest destinations for outsourced digital work, former employees say that at least 10,000 of these workers do this labor on a platform called Remotasks, which is owned by the $7 billion San Francisco start-up Scale AI.

Scale AI has paid workers at extremely low rates, routinely delayed or withheld payments and provided few channels for workers to seek recourse, according to interviews with workers, internal company messages and payment records, and financial statements. Rights groups and labor researchers say Scale AI is among a number of American AI companies that have not abided by basic labor standards for their workers abroad.


Of 36 current and former freelance workers interviewed, all but two said they’ve had payments from the platform delayed, reduced or canceled after completing tasks. The workers, known as “taskers,” said they often earn far below the minimum wage — which in the Philippines ranges from $6 to $10 a day depending on region — though at times they do make more than the minimum.

Scale AI, which does work for firms like Meta, Microsoft and generative AI companies like OpenAI, the creator of ChatGPT, says on its website that it is “proud to pay rates at a living wage.” In a statement, Anna Franko, a Scale AI spokesperson, said the pay system on Remotasks “is continually improving” based on worker feedback and that “delays or interruptions to payments are exceedingly rare.”

But on an internal messaging platform for Remotasks, which The Washington Post accessed in July, notices of late or missing payments from supervisors were commonplace. On some projects, there were multiple notices in a single month. Sometimes, supervisors told workers payments were withheld because work was inaccurate or late. Other times, supervisors gave no explanation. Attempts to track down lost payments often went nowhere, workers said — or worse, led to their accounts being deactivated.


Charisse, 23, said she spent four hours on a task that was meant to earn her $2, and Remotasks paid her 30 cents.

Jackie, 26, said he worked three days on a project that he thought would earn him $50, and he got $12.

Benz, 36, said he’d racked up more than $150 in payments when he was suddenly booted from the platform. He never got the money, he said.

Paul, 25, said he’s lost count of how much money he’s been owed over three years of working on Remotasks. Like other current Remotasks freelancers, Paul spoke on the condition of being identified only by first name to avoid being expelled from the platform. He started “tasking” full time in 2020 after graduating from university. He was once excited to help build AI, he said, but these days, he mostly feels embarrassed by how little he earns.

“The budget for all this, I know it’s big,” Paul said, staring into his hands at a coffee shop in Cagayan de Oro. “None of that is trickling down to us.”


Much of the ethical and regulatory debate over AI has focused so far on its propensity for bias and potential to go rogue or be abused, such as for disinformation. But companies producing AI technology are also charting a new frontier in labor exploitation, researchers say.

In enlisting people in the Global South as freelance contractors, micro-tasking platforms like Remotasks sidestep labor regulations — such as a minimum wage and a fair contract — in favor of terms and conditions they set independently, said Cheryll Soriano, a professor at De La Salle University in Manila who studies digital labor in the Philippines. “What it comes down to,” she said, “is a total absence of standards.”

Dominic Ligot, a Filipino AI ethicist, called these new workplaces “digital sweatshops.”

Overseas outposts

Founded in 2016 by young college dropouts and backed by some $600 million in venture capital, Scale AI has cast itself as a champion of American efforts in the race for AI supremacy. In addition to working with large technology companies, Scale AI has been awarded hundreds of millions of dollars to label data for the U.S. Department of Defense. To work on such sensitive, specialized data sets, the company has begun seeking out more contractors in the United States, though the vast majority of the workforce is still located in Asia, Africa and Latin America...."


For full article, please visit: 

Adela Ruiz's comment, January 24, 12:25 PM
I find this kind of article very interesting because it shows the less friendly face of advances in technology. As the article describes, labor exploitation in AI manifests itself in precarious working conditions for those involved in collecting, labeling, and processing the data needed to train AI models. The workers who carry out these tasks often face challenging working conditions, long workdays, and insufficient wages. Moreover, the lack of clear regulations in this area can exacerbate exploitation and enable unfair labor practices.
Scooped by Roxana Marachi, PhD!

AI: Two reports reveal a massive enterprise pause over security and ethics // Diginomica


by Chris Middleton 

"No one doubts that artificial intelligence is a strategic boardroom issue, though diginomica revealed last year that much of the initial buzz was individuals using free cloud tools as shadow IT, while many business leaders talked up AI in their earnings calls just to keep investors happy. 

In 2024, those caveats remain amidst the hype. As one of my stories from KubeCon + CloudNativeCon last week showed, the reality for many software engineering teams is the C-suite demanding an AI ‘hammer’ with little idea of what business nail they want to hit with it. 

Or, as Intel Vice President and General Manager for Open Ecosystem Arun Gupta put it: 

"When we go into a CIO discussion, it’s ‘How can I use Gen-AI?’ And I’m like, ‘I don’t know. What do you want to do with it?’ And the answer is, ‘I don’t know, you figure it out!’"

So, now that AI Spring is in full bloom, what is the reality of enterprise adoption? Two reports this week unveil some surprising new findings, many of which show that the hype cycle is ending more quickly than the industry would like.

First up is a white paper from $2 billion cloud incident-response provider, PagerDuty. According to its survey of 100 IT leaders at Fortune 1000 companies, 100% are concerned about the security risks of the technology, and 98% have paused Gen-AI projects as a result. 


Those are extraordinary figures. However, the perceived threats are not solely about cybersecurity (with phishing, deep fakes, complex fraud, and automated attacks on the rise), but are rooted in what PagerDuty calls the “moral implications”. These include worries over copyright theft in training data and any legal exposure that may arise from that. 

As previously reported (see diginomica, passim), multiple IP infringement lawsuits are ongoing in the US, while in the UK, the House of Lords’ Communications and Digital Committee was clear, in its inquiry into Large Language Models, that copyright theft had taken place – a conclusion that peers arrived at after interviewing expert witnesses from all sides of the debate, including vendors and lawyers.

According to PagerDuty, unease over these issues keeps more than half of respondents (51%) awake at night, with nearly as many concerned about the disclosure of sensitive information (48%), data privacy violations (47%), and social engineering attacks (46%). They are right to be cautious: last year, diginomica reported that source code is the most common form of privileged data disclosed to cloud-based AI tools.

The white paper adds:
"Any of these security risks could damage the company’s public image, which explains why Gen-AI’s risk to the organization’s reputation tops the list of concerns for 50% of respondents. More than two in five also worry about the ethics of the technology (42%). Among the executives with these moral concerns, inherent societal biases of training data (26%) and lack of regulation (26%) top the list."

Despite this, only 25% of IT leaders actively mistrust the technology, adds the white paper – cold comfort for vendors, perhaps. Even so, it is hard to avoid the implication that, while some providers might have first- or big-mover advantage in generative AI, any that trained their systems unethically may have stored up a world of problems for themselves.

However, with nearly all Fortune 1000 companies pausing their AI programmes until clear guidelines can be put in place – though the figure of 98% seems implausibly high – the white paper adds:


"Executives value these policies, so much so that a majority (51%) believe they should adopt Gen-AI only after they have the right guidelines in place. [But] others believe they risk falling behind if they don’t adopt Gen-AI as quickly as possible, regardless of parameters (46%)."


Those figures suggest a familiar pattern in enterprise tech adoption: early movers stepping back from their decisions, while the pack of followers is just getting started. 


Yet the report continues:
"Despite the emphasis and clear need, only 29% of companies have established formal guidelines. Instead, 66% are currently setting up these policies, which means leaders may need to keep pausing Gen-AI until they roll out a course of action."


That said, the white paper’s findings are inconsistent in some respects, and thus present a confusing picture – conceivably, one of customers confirming a security researcher’s line of questioning. Imagine that: confirmation bias in a Gen-AI report!


For example, if 98% of IT leaders say they have paused enterprise AI programmes until organizational guidelines are put in place, how are 64% of the same survey base able to report that Gen-AI is still being used in “some or all” of their departments? 


One answer may be that, as diginomica found last year, ‘departmental’ use may in fact be individuals experimenting with cloud-based tools as shadow IT. That aside, the white paper confirms that early enterprise adopters may be reconsidering their incautious rush."...


For full post, please visit:  

Scooped by Roxana Marachi, PhD!

Roblox facilitates “illegal gambling” for minors, according to new lawsuit // Ars Technica


By Kyle Orland

"A new proposed class-action lawsuit (as noticed by Bloomberg Law) accuses user-generated "metaverse" company Roblox of profiting from and helping to power third-party websites that use the platform's Robux currency for unregulated gambling activities. In doing so, the lawsuit says Roblox is effectively "work[ing] with and facilitat[ing] the Gambling Website Defendants... to offer illegal gambling opportunities to minor users."


The three gambling website companies named in the lawsuit—Satozuki, Studs Entertainment, and RBLXWild Entertainment—allow users to connect a Roblox account and convert an existing balance of Robux virtual currency into credits on the gambling site. Those credits act like virtual casino chips that can be used for simple wagers on those sites, ranging from Blackjack to "coin flip" games.

If a player wins, they can transfer their winnings back to the Roblox platform in the form of Robux. The gambling sites use fake purchases of worthless "dummy items" to facilitate these Robux transfers, according to the lawsuit, and Roblox takes a 30 percent transaction fee both when players "cash in" and "cash out" from the gambling sites. If the player loses, the transferred Robux are retained by the gambling website through a "stock" account on the Roblox platform.


In either case, the Robux can be converted back to actual money through the Developer Exchange Program. For individuals, this requires a player to be at least 13 years old, to file tax paperwork (in the US), and to have a balance of at least 30,000 Robux (currently worth $105, or $0.0035 per Robux).
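The conversion figures above lend themselves to a quick back-of-the-envelope check. The sketch below (Python, with hypothetical helper names; the $0.0035-per-Robux rate and the alleged 30% transaction fee are taken from the article's reporting, not from any official Roblox API) models the round-trip the lawsuit describes:

```python
# Back-of-the-envelope model of the Robux flows described in the lawsuit.
# Values come from the article: a Developer Exchange rate of $0.0035 per
# Robux, and an alleged 30% Roblox fee on both "cash in" and "cash out".
DEVEX_RATE_USD_PER_ROBUX = 0.0035  # 30,000 Robux minimum ≈ $105
TRANSACTION_FEE = 0.30             # alleged fee on each transfer


def robux_to_usd(robux: int) -> float:
    """Convert a Robux balance to USD at the Developer Exchange rate."""
    return round(robux * DEVEX_RATE_USD_PER_ROBUX, 2)


def robux_after_round_trip(robux: int) -> float:
    """Robux left after cashing in and back out, each leg minus the 30% fee."""
    return round(robux * (1 - TRANSACTION_FEE) ** 2, 2)


print(robux_to_usd(30_000))         # → 105.0 (the minimum DevEx payout)
print(robux_after_round_trip(1_000))  # → 490.0 (less than half survives)
```

As the round-trip figure suggests, the fee structure alone would take more than half of any Robux balance that merely passes through a gambling site and back, regardless of whether the player wins or loses.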

The gambling websites also use the Developer Exchange Program to convert their Robux balances to real money, according to the lawsuit. And the real money involved isn't chump change, either; the lawsuit cites a claim from RBXFlip's owners that 7 billion Robux (worth over $70 million) was wagered on the site in 2021 and that the site's revenues increased 10 times in 2022. The sites are also frequently promoted by Roblox-focused social media influencers to drum up business, according to the lawsuit.


Who’s really responsible?

Roblox's terms of service explicitly bar "experiences that include simulated gambling, including playing with virtual chips, simulated betting, or exchanging real money, Robux, or in-experience items of value." But the gambling sites get around this prohibition by hosting their games away from Roblox's platform of user-created "experiences" while still using Robux transfers to take advantage of players' virtual currency balances from the platform.


This can be a problem for parents who buy Robux for their children thinking they're simply being used for in-game cosmetics and other gameplay items (over half of Roblox players were 12 or under as of 2020). Two parents cited in the lawsuit say their children have lost "thousands of Robux" to the gambling sites, which allegedly have nonexistent or ineffective age-verification controls.

Through its maintenance of the Robux currency platform that powers these sites, the lawsuit alleges that Roblox "monitors and records each of these illegal transactions, yet does nothing to prevent them from happening." Allowing these sites to profit from minors gambling with Robux amounts to "tacitly approv[ing] the Illegal Gambling Websites’ use of [Robux] that Roblox’s minor users can utilize to place bets on the Illegal Gambling Websites." This amounts to a violation of the federal RICO act, as well as California's Unfair Competition Law and New York's General Business Law, among other alleged violations.

In a statement provided to Bloomberg Law, Roblox said that "these are third-party sites and have no legal affiliation to Roblox whatsoever. Bad actors make illegal use of Roblox’s intellectual property and branding to operate such sites in violation of our standards.”

This isn't the first time a game platform has run into problems with its virtual currency powering gambling. In 2016, Valve faced a lawsuit and government attention from Washington state over third-party sites that use Counter-Strike skins as currency for gambling games. The lawsuit against Steam was eventually dismissed last year."


For original post, please visit: 

Scooped by Roxana Marachi, PhD!

A patchwork of platforms: mapping data infrastructures in schools // Pangrazio, Selwyn, & Cumbo (2022)


"This paper explores the significance of schools’ data infrastructures as a site of institutional power and (re)configuration. Using ‘infrastructure studies’ as a theoretical framework and drawing on in-depth studies of three contrasting Australian secondary schools, the paper takes a holistic look at schools’ data infrastructures. In contrast to the notion of the ‘platformatised’ school, the paper details the ad hoc and compromised ways that these school data infrastructures have developed – highlighting a number of underlying sociotechnical conditions that lead to an ongoing process of data infrastructuring. These include issues of limited technical interoperability and differences between educational requirements and commercially-led designs. Also apparent is the disjuncture between the imagined benefits of institutional data use and the ongoing maintenance and repair required to make the infrastructures function. Taking an institutional perspective, the paper explores why digital technologies continue to complicate (rather than simplify) school processes and practices."


For journal access, please visit: 
