Educational Psychology & Emerging Technologies: Critical Perspectives and Updates
This curated collection includes updates, resources, and research offering critical perspectives on the intersections of educational psychology and emerging technologies in education. The page also serves as a research tool for organizing online content (the funnel-shaped icon allows keyword search). For more on the intersections of the privatization and technologization of education, with critiques of social impact finance and related technologies, please visit http://bit.ly/sibgamble and http://bit.ly/chart_look. For posts regarding screen time risks to health and development, see http://bit.ly/screen_time, and for updates related to AI and data concerns, please visit http://bit.ly/DataJusticeLinks. [Note: Views presented on this page are re-shared from external websites. The content may not necessarily represent the views or official position of the curator or of the curator's employer.]
Scooped by Roxana Marachi, PhD
August 20, 2024 1:27 PM

AI in Education: The Hype, the Harms, Policy Gaps, and "Solutions" Traps [Slidedeck] March 25, 2024 // PA Advisory Committee to the U.S. Commission on Civil Rights, panel hearings on Civil Rights a...


Marachi, R. (March 25, 2024). AI in Education: The Hype, the Harms, Policy Gaps, and “Solutions” Traps [Slidedeck]. Invited presentation for the Pennsylvania Advisory Committee to the U.S. Commission on Civil Rights, panel hearings on Civil Rights and the Rising Use of AI in Education.

 

Shortlink to slidedeck: http://bit.ly/Marachi_PA_CivilRights

 

 

Scooped by Roxana Marachi, PhD
December 5, 2024 1:01 PM

AllHere [AI] Founder Faked Financial Consultant Emails, Defrauded Investors, Federal Prosecutors Allege // EdWeek


Federal prosecutors have charged Joanna Smith-Griffin with defrauding investors out of nearly $10 million.

 

By Emma Kate Fittes

"After a sharp rise in the K-12 space, the artificial intelligence startup AllHere is ending the year with a bankruptcy case and its founder and former CEO under arrest.

Federal prosecutors have charged Joanna Smith-Griffin with defrauding investors out of nearly $10 million, according to an indictment unsealed in Manhattan federal court last week.

Smith-Griffin faces securities and wire fraud and aggravated identity theft charges for allegedly misleading investors, employees, and customers by inflating the company’s revenue, cash position, and customer base, the document says.


Prosecutors allege that Smith-Griffin went as far as to create a fake email address for a real AllHere financial consultant, allowing her to send false financial and client information to investors.

They also say Smith-Griffin claimed to have provided services to multiple districts that never had a contractual relationship with the company, which was a Harvard Innovation Lab venture.

 
 

AllHere’s most prominent — and legitimate — customer was the Los Angeles Unified Schools, which had a $6 million contract with AllHere to build an ambitious AI tool for the major district.

“The indictment and the allegations represent, if true, a disturbing and disappointing house of cards that deceived and victimized many across the country,” an LAUSD spokesperson told EdWeek Market Brief in an email. “We will continue to assert and protect our rights.”

 

LAUSD has since paused its use of the AI assistant, known as “Ed,” which was introduced in March as a “learning acceleration platform.” The tool belongs to LAUSD, the district said in a statement emailed to EdWeek Market Brief in June.

Smith-Griffin’s indictment comes two months after federal prosecutors became involved in AllHere’s ongoing bankruptcy case and subpoenaed documents. AllHere furloughed the majority of its staff in June and changed its leadership — an action which came to light days after LAUSD’s superintendent was touting its work with the company at a major education conference.

 

It filed for Chapter 7 bankruptcy in Delaware in August.

Since then, questions have swirled about the cause of the downfall, including from a former employee who told The 74 in July that he had warned LAUSD that AllHere’s data privacy practices violated district policies against sharing students’ personally identifiable information as well as data-protection best practices.

 

Smith-Griffin’s indictment does not address those questions about data privacy.

 

The new indictment also accuses Smith-Griffin of using AllHere’s inflated success to raise her public profile. She was a Forbes 30 Under 30 award recipient and was named to Time’s World’s Top EdTech Companies list earlier this year.

For original article, please visit:

https://marketbrief.edweek.org/regulation-policy/allhere-founder-faked-financial-consultant-emails-defrauded-investors-federal-prosecutors-allege/2024/11 

Scooped by Roxana Marachi, PhD
May 21, 2023 3:24 PM

Silicon Valley, Philanthrocapitalism, and Policy Shifts from Teachers to Tech // Marachi & Carpenter (2020) 


To order book, please visit: https://www.press.umich.edu/11621094/strike_for_the_common_good 

 

Marachi, R., & Carpenter, R. (2020). Silicon Valley, philanthrocapitalism, and policy shifts from teachers to tech. In Givan, R. K., & Lang, A. S. (Eds.), Strike for the Common Good: Fighting for the Future of Public Education (pp. 217-233). Ann Arbor: University of Michigan Press.

 

To download a PDF of the final chapter manuscript, click here.

Scooped by Roxana Marachi, PhD
June 27, 2020 10:45 AM

The case of Canvas: Longitudinal datafication through learning management systems // Marachi & Quill, 2020 Teaching in Higher Education: Critical Perspectives  

Abstract
The Canvas Learning Management System (LMS) is used in thousands of universities across the United States and internationally, with a strong and growing presence in K-12 and higher education markets. Analyzing the development of the Canvas LMS, we examine 1) ‘frictionless’ data transitions that bridge K-12, higher education, and workforce data; 2) integration of third-party applications and interoperability or data-sharing across platforms; 3) privacy and security vulnerabilities; and 4) predictive analytics and dataveillance. We conclude that institutions of higher education are currently ill-equipped to protect students and faculty required to use the Canvas Instructure LMS from data harvesting or exploitation. We challenge inevitability narratives and call for greater public awareness concerning the use of predictive analytics, impacts of algorithmic bias, the need for algorithmic transparency, and enactment of ethical and legal protections for users who are required to use such software platforms.

KEYWORDS: Data ethics, data privacy, predictive analytics, higher education, dataveillance

 https://doi.org/10.1080/13562517.2020.1739641
 
Author email contact: roxana.marachi@sjsu.edu
Scooped by Roxana Marachi, PhD
July 4, 2023 9:15 PM

The Big Business of Tracking and Profiling Students [Interview] // The Markup

View original post published January 15, 2022
_______________
"Hello, friends,

The United States is one of the few countries that does not have a federal baseline privacy law that lays out minimum standards for data use. Instead, it has tailored laws that are supposed to protect data in different sectors—including health, children’s and student data. 

But despite the existence of a law—the Family Educational Rights and Privacy Act—that is specifically designed to protect the privacy of student educational records, there are loopholes in the law that still allow data to be exploited. The Markup reporter Todd Feathers has uncovered a booming business in monetizing student data gathered by classroom software. 

In two articles published this week as part of our Machine Learning series, Todd identified a private equity firm, Vista Equity Partners, that has been buying up educational software companies that have collectively amassed a trove of data about children all the way from their first school days through college.

Vista Equity Partners, which declined to comment for Todd’s story, has acquired controlling ownership stakes in EAB, which provides college counseling and recruitment products to thousands of schools, and PowerSchool, which provides software for K-12 schools and says it holds data on more than 45 million children. 

Some of this data is used to create risk-assessment scores that claim to predict students’ future success. Todd filed public records requests for schools across the nation, and using those documents, he was able to discover that PowerSchool’s algorithm, in at least one district, considered a student who was eligible for free or reduced lunch to be at a higher risk of dropping out. 

 

Experts told us that using a proxy for wealth as a predictor for success is unfair because students can’t change that status and could be steered into less challenging opportunities as a result.

 

“I think that having [free and reduced lunch status] as a predictor in the model is indefensible in 2021,” said Ryan Baker, the director of the University of Pennsylvania’s Center for Learning Analytics. PowerSchool defended the use of the factor as a way to help educators provide additional services to students who are at risk.

 

Todd also found public records showing how student data is used by colleges to target potential applicants through PowerSchool’s Naviance software using controversial criteria such as the race of the applicant. For example, Todd uncovered a 2015 contract between Naviance and the University of Kansas revealing that the school paid for a year-long advertising campaign targeting only White students in three states.

The University of Kansas did not respond to requests for comment. PowerSchool’s chief privacy officer Darron Flagg said Naviance has since stopped colleges from using targeting “criteria that excludes under-represented groups.” He also said that PowerSchool complies with the student privacy law and “does not sell student or school data.”

But, as we have written at The Markup many times, not selling data does not mean not profiting from that data. To understand the perils of the booming educational data market, I spoke this week with Roxana Marachi, a professor of education at San José State University, who researches school violence prevention, high-stakes testing, privatization, and the technologization of teaching and learning. Marachi served as education chair of the CA/HI State NAACP from 2019 to 2021 and has been active in local, state, and national efforts to strengthen and protect public education. Her views do not necessarily reflect the policy or position of her employer.

Her written responses to my questions are below, edited for brevity.

_______________________________

Angwin: You have written that ed tech companies are engaged in a “structural hijacking of education.” What do you mean by this?

Marachi: There has been a slow and steady capture of our educational systems by ed tech firms over the past two decades. The companies have attempted to replace many different practices that we have in education. So, initially, it might have been with curriculum, say a reading or math program, but has grown over the years into wider attempts to extract social, emotional, behavioral, health, and assessment data from students. 

What I find troubling is that there hasn’t been more scrutiny of many of the ed tech companies and their data practices. What we have right now can be called “pinky promise” privacy policies that are not going to protect us. We’re getting into dangerous areas where many of the tech firms are being afforded increased access to the merging of different kinds of data and are actively engaged in the use of “predictive analytics” to try to gauge children’s futures.   

Angwin: Can you talk more about the harmful consequences this type of data exploitation could have?


Marachi: Yes, researchers at the Data Justice Lab at Cardiff University have documented numerous data harms with the emergence of big data systems and related analytics—some of these include targeting based on vulnerability (algorithmic profiling), misuse of personal information, discrimination, data breaches, political manipulation and social harms, and data and system errors.

As an example in education, several data platforms market their products as providing “early warning systems” to support students in need, yet these same systems can also set students up for hyper-surveillance and racial profiling.

One of the catalysts of my inquiry into data harms happened a few years ago when I was using my university’s learning management system. When reviewing my roster, I hovered the cursor over the name of one of my doctoral students and saw that the platform had marked her with one out of three stars, in effect labeling her as in the “lowest third” of students in the course in engagement. This was both puzzling and disturbing as it was such a false depiction—she was consistently highly engaged and active both in class and in correspondence. But the platform’s metric of page views as engagement made her appear otherwise.

Many tech platforms don’t allow instructors or students to delete such labels or to untether at all from algorithms set to compare students with these rank-based metrics. We need to consider what consequences will result when digital labels follow students throughout their educational paths, what longitudinal data capture will mean for the next generation, and how best to systemically prevent emerging, invisible data harms.

One of the key principles of data privacy is the “right to be forgotten”—for data to be able to be deleted. Among the most troubling of emerging technologies I’ve seen in education are blockchain digital ID systems that do not allow for data on an individual’s digital ledger to ever be deleted.

Angwin: There is a law that is supposed to protect student privacy, the Family Educational Rights and Privacy Act (FERPA). Is it providing any protection?

Marachi: FERPA is intended to protect student data, but unfortunately it’s toothless. While schools that refuse to address FERPA violations may have their federal funding withheld by the Department of Education, in practice this has never happened.

One of the ways that companies can bypass FERPA is to have educational institutions designate them as an educational employee or partner. That way they have full access to the data in the name of supporting student success.

The other problem is that with tech platforms as the current backbone of the education system, in order for students to participate in formal education, they are in effect required to relinquish many aspects of their privacy rights. The current situation appears designed to allow ed tech programs to be in “technical compliance” with FERPA by effectively bypassing its intended protections and allowing vast access to student data.

Angwin: What do you think should be done to mitigate existing risks?

Marachi: There needs to be greater awareness that these data vulnerabilities exist, and we should work collectively to prevent data harms. What might this look like? Algorithmic audits and stronger legislative protections. Beyond these strategies, we also need greater scrutiny of the programs that come knocking on education’s door. One of the challenges is that many of these companies have excellent marketing teams that pitch their products with promises to close achievement gaps, support students’ mental health, improve school climate, strengthen social and emotional learning, support workforce readiness, and more. They’ll use the language of equity, access, and student success, issues that as educational leaders, we care about. 

 

Many of these pitches in the end turn out to be what I call equity doublespeak, or the Theranos-ing of education, meaning there’s a lot of hype without the corresponding delivery on promises. The Hechinger Report has documented numerous examples of high-profile ed tech programs making dubious claims of the efficacy of their products in the K-12 system. We need to engage in ongoing and independent audits of efficacy, data privacy, and analytic practices of these programs to better serve students in our care.

Angwin: You’ve argued that, at the very least, companies implementing new technologies should follow IRB guidelines for working with human subjects. Could you expand on that?

Marachi: Yes, Institutional Review Boards (IRBs) review research to ensure ethical protections of human subjects. Academic researchers are required to provide participants with full informed consent about the risks and benefits of research they’d be involved in and to offer the opportunity to opt out at any time without negative consequences.

 

Corporate researchers, it appears, are allowed free rein to conduct behavioral research without any formal disclosure to students or guardians of the potential risks or harms of their interventions, what data they may be collecting, or how they would be using students’ data. We know of numerous risks and harms documented with the use of online remote proctoring systems, virtual reality, facial recognition, and other emerging technologies, but rarely if ever do we see disclosure of these risks in the implementation of these systems.

If corporate researchers in ed tech firms were to be contractually required by partnering public institutions to adhere to basic ethical protections of the human participants involved in their research, it would be a step in the right direction toward data justice." 

__________

https://themarkup.org/newsletter/hello-world/the-big-business-of-tracking-and-profiling-students 

Scooped by Roxana Marachi, PhD
February 11, 2021 2:59 PM

The Dangers of Risk Prediction in the Criminal Justice System


Abstract
"Courts across the United States are using computer software to predict whether a person will commit a crime, the results of which are incorporated into bail and sentencing decisions. It is imperative that such tools be accurate and fair, but critics have charged that the software can be racially biased, favoring white defendants over Black defendants. We evaluate the claim that computer software is more accurate and fairer than people tasked with making similar decisions. We also evaluate, and explain, the presence of racial bias in these predictive algorithms.

Introduction

We are the frequent subjects of predictive algorithms that determine music recommendations, product advertising, university admission, job placement, and bank loan qualification. In the criminal justice system, predictive algorithms have been used to predict where crimes will most likely occur, who is most likely to commit a violent crime, who is likely to fail to appear at their court hearing, and who is likely to reoffend at some point in the future.

Certain types of algorithmic tools known as “risk assessments” have become particularly prevalent in the criminal justice system within the United States. The majority of risk assessments are built to predict recidivism: asking whether someone with a criminal offense will reoffend at some point in the future. These tools rely on an individual’s criminal history, personal background, and demographic information to make these risk predictions.

Various risk assessments are in use across the country to inform decisions at almost every stage in the criminal justice system. One widely used criminal risk assessment tool, Correctional Offender Management Profiling for Alternative Sanctions (COMPAS, Northpointe), has been used to assess over one million individuals in the criminal justice system since it was developed in 1998. The recidivism prediction component of COMPAS—the Recidivism Risk Scale—has been in use since 2000. This software predicts a person’s risk of committing a misdemeanor or felony within two years of assessment from an individual’s demographics and criminal record.

 

In the past few years, algorithmic risk assessments like COMPAS have become increasingly prevalent in pretrial decision making. In these contexts, an individual who has been arrested and booked in jail is assessed by the algorithmic tool in use by the given jurisdiction. Judges then consider the risk scores calculated by the tool in their decision to either release or detain a criminal defendant before their trial.

In May of 2016, writing for ProPublica, Julia Angwin and colleagues analyzed the efficacy of COMPAS in the pretrial context on over seven thousand individuals arrested in Broward County, Florida, between 2013 and 2014. The analysis indicated that the predictions were unreliable and racially biased. The authors found that COMPAS’s overall accuracy for white defendants is 67.0%, only slightly higher than its accuracy of 63.8% for Black defendants. The mistakes made by COMPAS, however, affected Black and white defendants differently: Black defendants who did not recidivate were incorrectly predicted to reoffend at a rate of 44.9%, nearly twice as high as their white counterparts at 23.5%; and white defendants who did recidivate were incorrectly predicted to not reoffend at a rate of 47.7%, nearly twice as high as their Black counterparts at 28.0%. In other words, COMPAS scores appeared to favor white defendants over Black defendants by underpredicting recidivism for white and overpredicting recidivism for Black defendants. Unsurprisingly, this caused an uproar and significant concern that technology was being used to further entrench racism in our criminal justice system.

Since the publication of the ProPublica analysis, there has been significant research and debate regarding the measurement of algorithmic fairness. Complicating this discussion is the fact that the research community does not necessarily agree on the definition of what makes an algorithm fair. And some studies have revealed that certain definitions of fairness are mathematically incompatible. To this date, the debate around mathematical measurement of fairness is both complicated and unresolved.

Algorithmic predictions have become common in the criminal justice system because they maintain a reputation of being objective and unbiased, whereas human decision making is considered inherently more biased and flawed. Northpointe describes COMPAS as “an objective method of estimating the likelihood of reoffending.” The Public Safety Assessment (PSA), another common pretrial risk assessment tool, advertises itself as a tool to “provide judges with objective, data-driven, consistent information that can inform the decisions they make.” In general, people often assume that algorithms using “big data techniques” are unbiased simply because of the amount of data used to build them.

After reading the ProPublica analysis in May of 2016, we started thinking about recidivism prediction algorithms and their use in the criminal justice system. To our surprise, we could not find any research proving that recidivism prediction algorithms are superior to human predictions. Due to the serious implications this type of software can have on a person’s life, we felt that we should start by confirming that COMPAS is, in fact, outperforming human predictions. We also felt that it was critical to get beyond the debate of how to measure fairness and understand why COMPAS’s predictive algorithm exhibited such troubling racial bias.

 

In our study, published in Science Advances in January 2018, we began by asking a fundamental question regarding the use of algorithmic risk predictions: are these tools more accurate than the human decision making they aim to replace? The goal of the study was to evaluate the baseline for human performance on recidivism prediction, and assess whether COMPAS was actually outperforming this baseline. We found that people from a popular online crowd-sourcing marketplace—who, it can reasonably be assumed, have little to no expertise in criminal justice—are as accurate and fair as COMPAS at predicting recidivism. This somewhat surprising result then led us to ask: how is it possible that the average person on the internet, being paid $1 to respond to a survey, is as accurate as commercial software used in the criminal justice system? To answer this, we effectively reverse engineered the COMPAS prediction algorithm and discovered that the software is equivalent to a simple classifier based on only two pieces of data, and it is this simple predictor that leads to the algorithm reproducing historical racial inequities in the criminal justice system.

Comparing Human and Algorithmic Recidivism Prediction

Methodology

Our study is based on a data set of 2013–2014 pretrial defendants from Broward County, Florida. This data set of 7,214 defendants contains individual demographic information, criminal history, the COMPAS recidivism risk score, and each defendant’s arrest record within a two-year period following the COMPAS scoring, excluding any time spent detained in a jail or a prison. COMPAS scores—ranging from 1 to 10—classify the risk of recidivism as low-risk (1–4), medium-risk (5–7), or high-risk (8–10). For the purpose of binary classification, following the methodology used in the ProPublica analysis and the guidance of the COMPAS practitioner’s guide, scores of 5 or above were classified as a prediction of recidivism.

Of the 7,214 defendants in the data set, 1,000 were randomly selected for use in our study that evaluated the human performance of recidivism prediction. This subset yields similar overall COMPAS accuracy, false positive rate, and false negative rate as on the complete data set. (A positive prediction is one in which a defendant is predicted to recidivate; a negative prediction is one in which they are predicted to not recidivate.) The COMPAS accuracy for this subset of 1,000 defendants is 65.2%. The average COMPAS accuracy on 10,000 random subsets of size 1,000 each is 65.4% (with a 95% confidence interval of [62.6, 68.1]).
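As a rough illustration of the scoring convention and subset analysis described above, the following Python sketch binarizes decile scores at the 5-or-above threshold and estimates accuracy over random subsets of 1,000 defendants; the file name and column names are assumptions for illustration, not the study's actual code.

```python
import numpy as np
import pandas as pd

# Hypothetical file and column names standing in for the Broward County data set.
df = pd.read_csv("broward_defendants.csv")  # assumed columns: decile_score, two_year_recid, ...

# Per the text: COMPAS scores of 5 or above count as a prediction of recidivism.
df["compas_pred"] = (df["decile_score"] >= 5).astype(int)
correct = (df["compas_pred"] == df["two_year_recid"]).to_numpy()
print(f"Full-sample COMPAS accuracy: {correct.mean():.3f}")

# Average accuracy over 10,000 random subsets of 1,000 defendants, with a 95% interval
# taken from the empirical distribution of subset accuracies.
rng = np.random.default_rng(0)
subset_acc = np.array([rng.choice(correct, size=1000, replace=False).mean()
                       for _ in range(10_000)])
lo, hi = np.percentile(subset_acc, [2.5, 97.5])
print(f"Mean subset accuracy: {subset_acc.mean():.3f}  95% interval: [{lo:.3f}, {hi:.3f}]")
```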

A descriptive paragraph for each of 1,000 defendants was generated:

The defendant is a [SEX] aged [AGE]. They have been charged with: [CRIME CHARGE]. This crime is classified as a [CRIMINAL DEGREE]. They have been convicted of [NON-JUVENILE PRIOR COUNT] prior crimes. They have [JUVENILE- FELONY COUNT] juvenile felony charges and [JUVENILE-MISDEMEANOR COUNT] juvenile misdemeanor charges on their record.

Perhaps most notably, we did not specify the defendant's race in this “no race” condition. In a follow-up “race” condition, the defendant’s race was included so that the first line of the above paragraph read, “The defendant is a [RACE] [SEX] aged [AGE].”
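For concreteness, the vignette template above could be generated with a small helper such as this sketch; the dictionary keys are illustrative and not taken from the study's materials.

```python
def describe_defendant(d: dict, include_race: bool = False) -> str:
    """Render the short vignette shown to participants; keys are illustrative."""
    race = f"{d['race']} " if include_race else ""
    return (
        f"The defendant is a {race}{d['sex']} aged {d['age']}. "
        f"They have been charged with: {d['charge']}. "
        f"This crime is classified as a {d['degree']}. "
        f"They have been convicted of {d['priors']} prior crimes. "
        f"They have {d['juv_felonies']} juvenile felony charges and "
        f"{d['juv_misdemeanors']} juvenile misdemeanor charges on their record."
    )

example = {"race": "Black", "sex": "male", "age": 24, "charge": "Grand Theft",
           "degree": "felony", "priors": 3, "juv_felonies": 0, "juv_misdemeanors": 1}
print(describe_defendant(example, include_race=False))
```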

There was a total of sixty-three unique criminal charges, including armed robbery, burglary, grand theft, prostitution, robbery, and sexual assault. The crime degree is either “misdemeanor” or “felony.” To ensure that our participants understood the nature of each crime, the above paragraph was followed by a short description of each criminal charge:

[CRIME CHARGE]: [CRIME DESCRIPTION]

After reading the defendant description, participants were then asked to respond either “Yes” or “No” to the question “Do you think this person will commit another crime within two years?” Participants were required to answer each question and could not change their response once it was made. After each answer, participants were given two forms of feedback: whether their response was correct and their average accuracy.

The 1,000 defendants were randomly divided into 20 subsets of 50 each. Each participant was randomly assigned to see one of these 20 subsets. Participants saw the 50 defendants—one at a time—in random order. Participants were only allowed to complete a single subset of 50 defendants.

Participants were recruited through Amazon’s Mechanical Turk, an online crowd-sourcing marketplace where people are paid to perform a wide variety of tasks. (Institutional review board [IRB] guidelines were followed for all participants.) Our task was titled “Predicting Crime” with the description “Read a few sentences about an actual person and predict if they will commit a crime in the future.” The keywords for the task were “survey, research, criminal justice.” Participants were paid one dollar for completing the task and an additional five-dollar bonus if their overall accuracy on the task was greater than 65%. This bonus was intended to provide an incentive for participants to pay close attention to the task. To filter out participants who were not paying close attention, three catch trials were randomly added to the subset of 50 questions. These questions were formatted to look like all other questions but had easily identifiable correct answers. A participant’s response was eliminated from our analysis if any of these questions were answered incorrectly.

Responses for the first (no-race) condition were collected from 462 participants, 62 of whom were removed due to an incorrect response on a catch trial. Responses for the second (race) condition were collected from 449 participants, 49 of whom were removed due to an incorrect response on a catch trial. In each condition, this yielded 20 participant responses for each of 20 subsets of 50 questions. Because of the random pairing of participants to a subset of 50 questions, we occasionally oversampled the required number of 20 participants. In these cases, we selected a random 20 participants and discarded any excess responses.

Results

We compare the overall accuracy and bias in human assessment with the algorithmic assessment of COMPAS. Throughout, a positive prediction is one in which a defendant is predicted to recidivate while a negative prediction is one in which they are predicted to not recidivate. We measure overall accuracy as the rate at which a defendant is correctly predicted to recidivate or not (i.e., the combined true positive and true negative rates). We also report on false positives (a defendant is predicted to recidivate but they don’t) and false negatives (a defendant is predicted to not recidivate but they do). Throughout, we use both paired and unpaired t-tests (with 19 degrees of freedom) to analyze the performance of our participants and COMPAS.

The mean and median accuracy in the no-race condition—computed by analyzing the average accuracy of the 400 human predictions—is 62.1% and 64.0%. We compare these results with the performance of COMPAS on this subset of 1,000 defendants. Because groups of 20 participants judged the same subset of 50 defendants, the individual judgments are not independent. However, because each participant judged only one subset of the defendants, the median accuracies of each subset can reasonably be assumed to be independent. The participant performance, therefore, on the 20 subsets can be directly compared to the COMPAS performance on the same 20 subsets. A one-sided t-test reveals that the average of the 20 median participant accuracies of 62.8% is, just barely, lower than the COMPAS accuracy of 65.2% (p = 0.045).
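A minimal sketch of the paired, one-sided comparison described here, using synthetic stand-in accuracies rather than the study's data; scipy's ttest_rel with alternative='less' tests whether participant accuracy is lower than COMPAS's on the same subsets.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Stand-ins: median participant accuracy and COMPAS accuracy on the same 20 subsets of 50.
median_human_acc = rng.normal(loc=0.628, scale=0.05, size=20)
compas_acc = rng.normal(loc=0.652, scale=0.05, size=20)

# Paired t-test with 19 degrees of freedom, one-sided (human accuracy < COMPAS accuracy).
result = stats.ttest_rel(median_human_acc, compas_acc, alternative="less")
print(f"t = {result.statistic:.2f}, one-sided p = {result.pvalue:.3f}")
```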

To determine if there is “wisdom in the crowd” (in our case, a small crowd of 20 people per subset), participant responses were pooled within each subset using a majority rules criterion. This crowd-based approach yields a prediction accuracy of 67.0%. A one-sided t-test reveals that COMPAS is not significantly better than the crowd (p = 0.85). This demonstrates that the commercial COMPAS prediction algorithm does not outperform small crowds of non-experts at predicting recidivism.
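The majority-rules pooling can be sketched as follows, using synthetic responses in place of the actual participant data; ties among the 20 participants default to a prediction of no recidivism here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins: 20 participants x 50 defendants (1 = predicted to recidivate),
# plus the observed two-year outcome for the same 50 defendants.
responses = rng.integers(0, 2, size=(20, 50))
outcomes = rng.integers(0, 2, size=50)

# Majority rules: predict recidivism only when more than half of the crowd does.
crowd_pred = (responses.mean(axis=0) > 0.5).astype(int)
print(f"Crowd accuracy: {(crowd_pred == outcomes).mean():.3f}")
```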

As we noted earlier, there exists significant debate regarding the measurement of algorithmic fairness. For the purpose of this study, we evaluate the human predictions with the same fairness criteria used in the ProPublica analysis for ease of comparability. We acknowledge that this may not be the ideal measure of fairness, and also acknowledge that there is debate in the literature on the appropriate measure of fairness. Regardless, we consider fairness in terms of disparate false positive rates (incorrectly classifying a defendant as high risk when they are not) and false negative rates (incorrectly classifying a defendant as low risk when they are not). We believe that, while perhaps not perfect, this measure of fairness shines a light on real-world consequences of incorrect predictions by quantifying the number of defendants that are improperly incarcerated or released.

We measure the fairness of our participants with respect to a defendant’s race based on the crowd predictions. Our participants’ accuracy on Black defendants is 68.2% compared to 67.6% for white defendants. An unpaired t-test reveals no significant difference across race (p = .87). This is similar to that of COMPAS, having a statistically insignificant difference in accuracy of 64.9% for Black defendants and 65.7% for white defendants. By this measure of fairness, our participants and COMPAS are fair to Black and white defendants.

Despite this fairness in overall accuracy, our participants had a significant difference in the false positive and false negative rates for Black and white defendants. Specifically, our participants’ false positive rate for Black defendants is 37.1% compared to 27.2% for white defendants, and our participants’ false negative rate for Black defendants is 29.2% compared to 40.3% for white defendants.

These discrepancies are similar to that of COMPAS, which has a false positive rate of 40.4% for Black defendants and 25.4% for white defendants, and a false negative rate for Black defendants of 30.9% compared to 47.9% for white defendants. See table 1(a) and (c) and figure 1 for a summary of these results. By this measure of fairness, our participants and COMPAS are similarly unfair to Black defendants, despite—bizarrely—the fact that race is not explicitly specified.
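The fairness comparison above reduces to false positive and false negative rates computed separately by race. A minimal sketch, with illustrative column names rather than the study's own:

```python
import pandas as pd

def error_rates_by_group(df: pd.DataFrame, pred: str, truth: str, group: str) -> pd.DataFrame:
    """False positive and false negative rates per group; column names are illustrative."""
    rows = {}
    for g, sub in df.groupby(group):
        fp = ((sub[pred] == 1) & (sub[truth] == 0)).sum()
        fn = ((sub[pred] == 0) & (sub[truth] == 1)).sum()
        rows[g] = {
            "FPR": fp / (sub[truth] == 0).sum(),  # predicted to recidivate but did not
            "FNR": fn / (sub[truth] == 1).sum(),  # predicted not to recidivate but did
        }
    return pd.DataFrame(rows).T

# Example usage (assuming a data frame with these columns exists):
# error_rates_by_group(df, pred="crowd_pred", truth="two_year_recid", group="race")
```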

[See full article for tables cited] 

 

 

The results of this study led us to question how human participants produced racially disparate predictions despite not knowing the race of the defendant. We recruited a new set of 400 participants to repeat the same exercise but this time with the defendant’s race included. We wondered if including a defendant’s race would reduce or exaggerate the effect of any implicit, explicit, or institutional racial bias.


In this race condition, the mean and median accuracy on predicting whether a defendant would recidivate is 62.3% and 64.0%, nearly identical to the condition where race is not specified, see table 1(a) and (b). The crowd-based accuracy is 66.5%, slightly lower than the condition in which race is not specified, but not significantly so. With respect to fairness, participant accuracy is not significantly different for Black defendants, 66.2%, compared to white defendants, 67.6%. The false positive rate for Black defendants is 40.0% compared to 26.2% for white defendants. The false negative rate for Black defendants is 30.1% compared to 42.1% for white defendants. See table 1(b) for a summary of these results.

 

Somewhat surprisingly, including race does not have a significant impact on overall accuracy or fairness. Most interestingly, the exclusion of race does not necessarily lead to the elimination of racial disparities in human recidivism prediction.

At this point in our study, we have confidently seen that the COMPAS predictive software is not superior to nonexpert human predictions. However, we are left with two perplexing questions:

  1. How is it that nonexperts are as accurate as a widely used commercial software? and

  2. How is it that nonexperts appear to be racially biased even when they don’t know the race of the defendant?


We next set out to answer these questions.

Replicating COMPAS’s Algorithmic Recidivism Prediction

With an overall accuracy of around 65%, COMPAS and nonexpert predictions are not as accurate as we might want, particularly from the point of view of a defendant whose future lies in the balance. Since nonexperts are as accurate as the COMPAS software, we wondered about the sophistication of the underlying COMPAS predictive algorithm. This algorithm, however, is not publicized, so we built our own predictive algorithm in an attempt to understand and effectively reverse engineer the COMPAS software.

Methodology

Our algorithmic analysis used the same seven features as described in the previous section, extracted from the records in the Broward County data set. Unlike the human assessment that analyzed a subset of these defendants, the following algorithmic assessment is performed over the entire data set.

Logistic regression is a linear classifier that, in a two-class classification (as in our case), computes a separating hyperplane to distinguish between recidivists and nonrecidivists. A nonlinear support vector machine employs a kernel function—in our case, a radial basis kernel—to project the initial seven-dimensional feature space to a higher dimensional space in which a linear hyperplane is used to distinguish between recidivists and nonrecidivists. The use of a kernel function amounts to computing a nonlinear separating surface in the original seven-dimensional feature space, allowing the classifier to capture more complex patterns between recidivists and nonrecidivists than is possible with linear classifiers.

 

We employed two different classifiers: logistic regression (a simple, general-purpose, linear classifier) and a support vector machine (a more complex, general-purpose, nonlinear classifier). The input to each classifier was seven features from 7,214 defendants: age, sex, number of juvenile misdemeanors, number of juvenile felonies, number of prior (nonjuvenile) crimes, crime degree, and crime charge (see previous section). Each classifier was trained to predict recidivism from these seven features. Each classifier was trained 1,000 times on a random 80% training and 20% testing split. We report the average testing accuracy.
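A sketch of this training protocol using scikit-learn; the feature matrix and labels below are synthetic stand-ins for the encoded Broward County records, and the number of splits is reduced from the 1,000 used in the study to keep the example fast.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 7))      # stand-in for the seven encoded features (study: 7,214 rows)
y = rng.integers(0, 2, size=2000)   # stand-in for two-year recidivism labels

def mean_test_accuracy(model, X, y, n_splits=20, seed=0):
    """Average held-out accuracy over repeated random 80/20 train/test splits."""
    accs = []
    for i in range(n_splits):
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=seed + i)
        accs.append(model.fit(X_tr, y_tr).score(X_te, y_te))
    return float(np.mean(accs))

logreg = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
rbf_svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print("Logistic regression:", mean_test_accuracy(logreg, X, y))
print("RBF SVM           :", mean_test_accuracy(rbf_svm, X, y))
```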

Results

We found that a simple linear predictor—logistic regression (LR)—provided with the same seven features as our participants (in the no-race condition), yields similar prediction accuracy as COMPAS’s predictive algorithm. As compared to COMPAS’s overall accuracy of 65.4%, our LR classifier yields an overall testing accuracy of 66.6%. Our predictor also yields similar results to COMPAS in terms of predictive fairness, see table 2(a) and (d).

Despite using only seven features as input, a standard linear predictor yields similar results to COMPAS’s software. We can reasonably conclude, therefore, that COMPAS is employing nothing more sophisticated than a linear predictor, or its equivalent.

 

To test whether performance was limited by the classifier or by the nature of the data, we trained a more powerful nonlinear support vector machine (SVM) on the same data. Somewhat surprisingly, the SVM yields nearly identical results to the linear classifier, see table 2(c). If the relatively low accuracy of the linear classifier was because the data is not linearly separable, then we would have expected the nonlinear SVM to perform better. The failure to do so suggests the data is not separable, linearly or otherwise.

Lastly, we wondered if using an even smaller subset of the seven features would be as accurate as COMPAS. We trained and tested an LR-classifier on all possible subsets of the seven features. In agreement with the research done by Angelino et al., we show that a classifier based on only two features—age and total number of prior convictions—performs as well as COMPAS, see table 2(b). The importance of these two criteria is consistent with the conclusions of two meta-analysis studies that set out to determine, in part, which criteria are most predictive of recidivism.

Note: Predictions are for (a) logistic regression with seven features; (b) logistic regression with two features; (c) a nonlinear support vector machine with seven features; and (d) the commercial COMPAS software with 137 features. The results in columns (a)–(c) correspond to the average testing accuracy over 1,000 random 80/20 training/testing splits. The values in the square brackets correspond to the 95% bootstrapped (a)–(c) and binomial (d) confidence intervals.
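The exhaustive feature-subset search described above can be sketched as follows; the column names and data are synthetic stand-ins, and 5-fold cross-validation stands in for the study's repeated 80/20 splits.

```python
from itertools import combinations

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

FEATURES = ["age", "sex", "juv_misdemeanors", "juv_felonies",
            "priors", "crime_degree", "crime_charge"]  # illustrative names

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(1000, 7)), columns=FEATURES)
df["two_year_recid"] = rng.integers(0, 2, size=1000)

results = {}
for k in range(1, len(FEATURES) + 1):
    for subset in combinations(FEATURES, k):
        clf = LogisticRegression(max_iter=1000)
        results[subset] = cross_val_score(clf, df[list(subset)],
                                          df["two_year_recid"], cv=5).mean()

# The study reports that ("age", "priors") alone matches COMPAS's accuracy.
print(sorted(results.items(), key=lambda kv: kv[1], reverse=True)[:5])
```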

 

In addition to further elucidating the inner workings of these predictive algorithms, the behavior of this two-feature linear classifier helps us understand how the nonexperts were able to match COMPAS’s predictive ability. When making predictions about an individual’s likelihood of future recidivism, the nonexperts saw the following seven criteria: age, sex, number of juvenile misdemeanors, number of juvenile felonies, number of prior (nonjuvenile) crimes, current crime degree, and current crime charge. If the algorithmic classifier can rely only on a person’s age and number of prior crimes to make this prediction, it is plausible that the nonexperts implicitly or explicitly focused on these criteria as well. (Recall that participants were provided with feedback on their correct and incorrect responses, so it is likely that some learning occurred.)

 

The two-feature classifier effectively learned that if a person is young and has already been convicted multiple times, they are at a higher risk of reoffending, but if a person is older and has not previously been convicted of a crime, then they are at a lower risk of reoffending. This certainly seems like a sensible strategy, if not a terribly accurate one.

 

The predictive strength of a person’s age and number of prior convictions in this context also helps explain the racially disparate predictions seen in both of our human studies and in COMPAS’s predictions overall. On a national scale, Black people are more likely to have prior convictions on their record than white people are: for example, Black people in the United States are incarcerated in state prisons at a rate that is 5.1 times that of white Americans. Within the data set used in the study, white defendants have an average of 2.59 prior convictions, whereas Black defendants have an average of 4.95 prior convictions. In Florida, the state in which COMPAS was validated for use in Broward County, the incarceration rate of Black people is 3.6 times higher than that of white people. These racially disparate incarceration rates are not fully explained by different rates of offense by race. Racial disparities against Black people in the United States also exist in policing, arrests, and sentencing. The racial bias that appears in both the algorithmic and human predictions is a result of these discrepancies.

 

While the total number of prior convictions is one of the most predictive variables of recidivism, its predictive power is not very strong. Because COMPAS and the human participants are only moderately accurate (both achieve an accuracy of around 65%), they both make significant, and racially biased, mistakes. Black defendants are more likely to be classified as medium or high risk by COMPAS, because Black defendants are more likely to have prior convictions due to the fact that Black people are more likely to be arrested, charged, and convicted. On the other hand, white defendants are more likely to be classified as low risk by COMPAS, because white defendants are less likely to have prior convictions. Black defendants, therefore, who don’t reoffend are predicted to be riskier than white defendants who don’t reoffend. Conversely, white defendants who do reoffend are predicted to be less risky than Black defendants who do reoffend. As a result, the false positive rate is higher for Black defendants than white defendants, and the false negative rate for white defendants is higher than for Black defendants. This, in short, is the racial bias that ProPublica first exposed.

 

This same type of disparate outcome appeared in the human predictions as well. Because the human participants saw only a few facts about each defendant, it is safe to assume that the total number of prior convictions was heavily considered in one’s predictions. Therefore, the bias of the human predictions was likely also a result of the difference in conviction history, which itself is linked to inequities in our criminal justice system.

 

The participant and COMPAS’s predictions were in agreement for 692 of the 1,000 defendants, indicating that perhaps there could be predictive power in the “combined wisdom” of the risk tool and the human-generated risk scores. However, a classifier that combined the same seven features per defendant along with the COMPAS risk score and the average human-generated risk score performed no better than any of the individual predictions. This suggests that the mistakes made by humans and COMPAS are not independent.
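The "combined wisdom" check described here amounts to appending the COMPAS score and the average human judgment as extra features and asking whether accuracy improves. A synthetic sketch, not the study's code:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000
X_seven = rng.normal(size=(n, 7))                # the seven defendant features (stand-in)
compas_score = rng.integers(1, 11, size=(n, 1))  # COMPAS decile score (stand-in)
human_score = rng.random(size=(n, 1))            # average participant judgment (stand-in)
y = rng.integers(0, 2, size=n)                   # two-year recidivism label (stand-in)

X_combined = np.hstack([X_seven, compas_score, human_score])

base = cross_val_score(LogisticRegression(max_iter=1000), X_seven, y, cv=5).mean()
combined = cross_val_score(LogisticRegression(max_iter=1000), X_combined, y, cv=5).mean()
print(f"Seven features: {base:.3f}   + COMPAS and human scores: {combined:.3f}")
```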

Conclusions

We have shown that a commercial software that is widely used to predict recidivism is no more accurate or fair than the predictions of people with little to no criminal justice expertise who responded to an online survey. We have shown that these predictions are functionally equivalent. When discussing the use of COMPAS in the courtroom to make these life-altering decisions, we should therefore ask whether we would place these same decisions in the equally accurate and biased hands of random people responding to an online survey.

 

In response to our study, equivant, the makers of COMPAS, responded that our study was both “highly misleading,” and “confirmed that COMPAS achieves good predictability.” Despite this contradictory statement and a promise to analyze our data and results, equivant has not demonstrated any flaws with our study.

 

Algorithmic predictions—whether in the courts, in university admissions, or employment, financial, and health decisions—can have a profound impact on someone’s life. It is essential, therefore, that the underlying data and algorithms that fuel these predictions are well understood, validated, and transparent to those who are the subject of their use.

 

In beginning to question the predictive validity of an algorithmic tool, it is essential to also interrogate the ethical implications of the use of the tool. Recidivism prediction tools are used in decisions about a person’s civil liberties. They are, for example, used to answer questions such as “Will this person commit a crime if they are released from jail before their trial? Should this person instead be detained in jail before their trial?” and “How strictly should this person be supervised while they are on parole? What is their risk of recidivism while they are out on parole?” Even if technologists could build a perfect and fair recidivism prediction tool, we should still ask if the use of this tool is just. In each of these contexts, a person is punished (either detained or surveilled) for a crime they have not yet committed. Is punishing a person for something they have not yet done ethical and just?

 

It is also crucial to discuss the possibility of building any recidivism prediction tool in the United States that is free from racial bias. Recidivism prediction algorithms are necessarily trained on decades of historical criminal justice data, learning the patterns of which kinds of people are incarcerated again and again. The United States suffers from racial discrimination at every stage in the criminal justice system. Machine learning technologies rely on the core assumption that the future will look like the past, and it is imperative that the future of our justice system looks nothing like its racist past. If any criminal risk prediction tool in the United States will inherently reinforce these racially disparate patterns, perhaps they should be avoided altogether."

For full publication, please visit:
 https://mit-serc.pubpub.org/pub/risk-prediction-in-cj/release/2 

Scooped by Roxana Marachi, PhD
May 21, 2023 3:47 PM

How Surveillance Capitalism Ate Education For Lunch // Marachi, 2022 // Univ. of Pittsburgh Data & Society Speaker Series Presentation [Slidedeck]


Presentation prepared for the University of Pittsburgh's Year of Data & Society Speaker Series. Slidedeck accessible by clicking title above or here: http://bit.ly/Surveillance_Edtech

Rescooped by Roxana Marachi, PhD from Social Impact Bonds, "Pay For Success," Results-Based Contracting, and Blockchain Digital Identity Systems
May 24, 2022 7:14 PM

Ableism And Disability Discrimination In New Surveillance Technologies: How new surveillance technologies in education, policing, health care, and the workplace disproportionately harm disabled peo...


Full report – PDF 

Plain language version – PDF

By Lydia X. Z. Brown, Ridhi Shetty, Matt Scherer, and Andrew Crawford

"Algorithmic technologies are everywhere. At this very moment, you can be sure students around the world are complaining about homework, sharing gossip, and talking about politics — all while computer programs observe every web search they make and every social media post they create, sending information about their activities to school officials who might punish them for what they look at. Other things happening right now likely include:

  • Delivery workers are trawling up and down streets near you while computer programs monitor their location and speed to optimize schedules, routes, and evaluate their performance;
  • People working from home are looking at their computers while their computers are staring back at them, timing their bathroom breaks, recording their computer screens, and potentially listening to them through their microphones;
  • Your neighbors – in your community or the next one over – are being tracked and designated by algorithms targeting police attention and resources to some neighborhoods but not others;
  • Your own phone may be tracking data about your heart rate, blood oxygen level, steps walked, menstrual cycle, and diet, and that information might be going to for-profit companies or your employer. Your social media content might even be mined and used to diagnose a mental health disability.

This ubiquity of algorithmic technologies has pervaded every aspect of modern life, and the algorithms are improving. But while algorithmic technologies may become better at predicting which restaurants someone might like or which music a person might enjoy listening to, not all of their possible applications are benign, helpful, or just.

Scholars and advocates have demonstrated myriad harms that can arise from the types of encoded prejudices and self-perpetuating cycles of discrimination, bias, and oppression that may result from automated decision-makers. These potentially harmful technologies are routinely deployed by government entities, private enterprises, and individuals to make assessments and recommendations about everything from rental applications to hiring, allocation of medical resources, and whom to target with specific ads. They have been deployed in a variety of settings including education and the workplace, often with the goal of surveilling activities, habits, and efficiency.

Disabled people comprise one such community that experiences discrimination, bias, and oppression resulting from automated decision-making technology. Disabled people continually experience marginalization in society, especially those who belong to other marginalized communities such as disabled women of color. Yet, not enough scholars or researchers have addressed the specific harms and disproportionate negative impacts that surveillance and algorithmic tools can have on disabled people. This is in part because algorithmic technologies that are trained on data that already embeds ableist (or relatedly racist or sexist) outcomes will entrench and replicate the same ableist (and racial or gendered) bias in the computer system. For example, a tenant screening tool that considers rental applicants’ credit scores, past evictions, and criminal history may prevent poor people, survivors of domestic violence, and people of color from getting an apartment because they are disproportionately likely to have lower credit scores, past evictions, and criminal records due to biases in the credit and housing systems and in policing disparities.

This report examines four areas where algorithmic and/or surveillance technologies are used to surveil, control, discipline, and punish people, with particularly harmful impacts on disabled people. They include: (1) education; (2) the criminal legal system; (3) health care; and (4) the workplace. In each section, we describe several examples of technologies that can violate people’s privacy, contribute to or accelerate existing harm and discrimination, and undermine broader public policy objectives (such as public safety or academic integrity)."



https://cdt.org/insights/ableism-and-disability-discrimination-in-new-surveillance-technologies-how-new-surveillance-technologies-in-education-policing-health-care-and-the-workplace-disproportionately-harm-disabled-people/ 

 

 

Scooped by Roxana Marachi, PhD
July 1, 2024 12:08 PM

Whistleblower: L.A. Schools’ Chatbot Misused Student Data as Tech Co. Crumbled // The 74


"Allhere, ed tech startup hired to build LAUSD's lauded AI chatbot "Ed", played fast and loose with sensitive records, ex-software engineer alleges." 

 

 

By Mark Keierleber, July 1, 2024

Just weeks before the implosion of AllHere, an education technology company that had been showered with cash from venture capitalists and featured in glowing profiles by the business press, America’s second-largest school district was warned about problems with AllHere’s product.

As the eight-year-old startup rolled out Los Angeles Unified School District’s flashy new AI-driven chatbot — an animated sun named “Ed” that AllHere was hired to build for $6 million — a former company executive was sending emails to the district and others warning that Ed’s workings violated bedrock student data privacy principles.

 

Those emails were sent shortly before The 74 first reported last week that AllHere, with $12 million in investor capital, was in serious straits. A June 14 statement on the company’s website revealed a majority of its employees had been furloughed due to its “current financial position.” Company founder and CEO Joanna Smith-Griffin, a spokesperson for the Los Angeles district said, was no longer on the job. 

Smith-Griffin and L.A. Superintendent Alberto Carvalho went on the road together this spring to unveil Ed at a series of high-profile ed tech conferences, with the schools chief dubbing it the nation’s first “personal assistant” for students and leaning hard into LAUSD’s place in the K-12 AI vanguard. He called Ed’s ability to know students “unprecedented in American public education” at the ASU+GSV conference in April. 

Through an algorithm that analyzes troves of student information from multiple sources, the chatbot was designed to offer tailored responses to questions like “what grade does my child have in math?” The tool relies on vast amounts of students’ data, including their academic performance and special education accommodations, to function.

Meanwhile, Chris Whiteley, a former senior director of software engineering at AllHere who was laid off in April, had become a whistleblower. He told district officials, its independent inspector general’s office and state education officials that the tool processed student records in ways that likely ran afoul of L.A. Unified’s own data privacy rules and put sensitive information at risk of getting hacked. None of the agencies ever responded, Whiteley told The 74. 

“When AllHere started doing the work for LAUSD, that’s when, to me, all of the data privacy issues started popping up,” Whiteley said in an interview last week. The problem, he said, came down to a company in over its head and one that “was almost always on fire” in terms of its operations and management. LAUSD’s chatbot was unlike anything it had ever built before and — given the company’s precarious state — could be its last. 

If AllHere was in chaos and its bespoke chatbot beset by porous data practices, Carvalho was portraying the opposite. One day before The 74 broke the news of the company turmoil and Smith-Griffin’s departure, EdWeek Marketbrief spotlighted the schools chief at a Denver conference talking about how adroitly LAUSD managed its ed tech vendor relationships — “We force them to all play in the same sandbox” — while ensuring that “protecting data privacy is a top priority.”

In a statement on Friday, a district spokesperson said the school system “takes these concerns seriously and will continue to take any steps necessary to ensure that appropriate privacy and security protections are in place in the Ed platform.” 

“Pursuant to contract and applicable law, AllHere is not authorized to store student data outside the United States without prior written consent from the District,” the statement continued. “Any student data belonging to the District and residing in the Ed platform will continue to be subject to the same privacy and data security protections, regardless of what happens to AllHere as a company.” 

 

A district spokesperson, in response to earlier questioning from The 74 last week, said it was informed that Smith-Griffin was no longer with the company and that several businesses “are interested in acquiring AllHere.” Meanwhile Ed, the spokesperson said, “belongs to Los Angeles Unified and is for Los Angeles Unified.”

Officials in the inspector general’s office didn’t respond to requests for comment. The state education department “does not directly oversee the use of AI programs in schools or have the authority to decide which programs a district can utilize,” a spokesperson said in a statement.

It’s a radical turn of events for AllHere and the AI tool it markets as a “learning acceleration platform,” which were all the buzz just a few months ago. In April, Time Magazine named AllHere among the world’s top education technology companies. That same month, Inc. Magazine dubbed Smith-Griffin a global K-12 education leader in artificial intelligence in its Female Founders 250 list. 

 

 

Ed has been similarly blessed with celebrity treatment. 

“He’s going to talk to you in 100 different languages, he’s going to connect with you, he’s going to fall in love with you,” Carvalho said at ASU+GSV. “Hopefully you’ll love it, and in the process we are transforming a school system of 540,000 students into 540,000 ‘schools of one’ through absolute personalization and individualization.”

Smith-Griffin, who graduated from the Miami school district that Carvalho once led before going on to Harvard, couldn't be reached for comment. Smith-Griffin's LinkedIn page was recently deactivated and parts of the company website have gone dark. Attempts to reach AllHere were also unsuccessful.

‘The product worked, right, but it worked by cheating’

Smith-Griffin, a former Boston charter school teacher and family engagement director, founded AllHere in 2016. Since then, the company has primarily provided schools with a text messaging system that facilitates communication between parents and educators. Designed to reduce chronic student absences, the tool relies on attendance data and other information to deliver customized, text-based “nudges.” 

 

The work that AllHere provided the Los Angeles school district, Whiteley said, was on a whole different level — and the company wasn’t prepared to meet the demand and lacked expertise in data security. In L.A., AllHere operated as a consultant rather than a tech firm that was building its own product, according to its contract with LAUSD obtained by The 74. Ultimately, the district retained rights to the chatbot, according to the agreement, but AllHere was contractually obligated to “comply with the district information security policies.” 

 The contract notes that the chatbot would be “trained to detect any confidential or sensitive information” and to discourage parents and students from sharing with it any personal details. But the chatbot’s decision to share and process students’ individual information, Whiteley said, was outside of families’ control. 

In order to provide individualized prompts on details like student attendance and demographics, the tool connects to several data sources, according to the contract, including Welligent, an online tool used to track students’ special education services. The document notes that Ed also interfaces with the Whole Child Integrated Data stored on Snowflake, a cloud storage company. Launched in 2019, the Whole Child platform serves as a central repository for LAUSD student data designed to streamline data analysis to help educators monitor students’ progress and personalize instruction. 

Whiteley told officials the app included students’ personally identifiable information in all chatbot prompts, even in those where the data weren’t relevant. Prompts containing students’ personal information were also shared with other third-party companies unnecessarily, Whiteley alleges, and were processed on offshore servers. Seven out of eight Ed chatbot requests, he said, are sent to places like Japan, Sweden, the United Kingdom, France, Switzerland, Australia and Canada. 

Taken together, he argued the company’s practices ran afoul of data minimization principles, a standard cybersecurity practice that maintains that apps should collect and process the least amount of personal information necessary to accomplish a specific task. Playing fast and loose with the data, he said, unnecessarily exposed students’ information to potential cyberattacks and data breaches and, in cases where the data were processed overseas, could subject it to foreign governments’ data access and surveillance rules. 
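Data minimization is simple to express in code. The sketch below is a minimal, hypothetical illustration (the field names, query types, and logic are assumptions for this post, not AllHere's or LAUSD's actual implementation): before a student record is folded into a chatbot prompt, a filter keeps only the fields a given type of question actually needs, so an attendance query never carries special education or demographic details to a third-party model.

```python
# Hypothetical sketch of prompt-side data minimization.
# Field names and query types are illustrative, not drawn from any real system.

REQUIRED_FIELDS = {
    "attendance": {"student_first_name", "attendance_rate"},
    "grades": {"student_first_name", "current_grades"},
    "general": {"student_first_name"},  # default: share as little as possible
}

def minimize_record(student_record: dict, query_type: str) -> dict:
    """Return only the fields needed to answer this type of question."""
    allowed = REQUIRED_FIELDS.get(query_type, REQUIRED_FIELDS["general"])
    return {k: v for k, v in student_record.items() if k in allowed}

def build_prompt(student_record: dict, query_type: str, question: str) -> str:
    """Compose the chatbot prompt from the minimized record only."""
    context = minimize_record(student_record, query_type)
    return f"Context: {context}\nQuestion: {question}"

# Example: an attendance question never sees IEP status or home address.
record = {
    "student_first_name": "Alex",
    "attendance_rate": 0.93,
    "current_grades": {"math": "B"},
    "iep_status": True,          # sensitive; excluded unless strictly needed
    "home_address": "REDACTED",  # never needed by a chatbot prompt
}
print(build_prompt(record, "attendance", "How is my child's attendance?"))
```

The practice described in the reporting, attaching personally identifiable information to every prompt regardless of relevance, is the opposite of this pattern.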

Chatbot source code that Whiteley shared with The 74 outlines how prompts are processed on foreign servers by a Microsoft AI service that integrates with ChatGPT. The LAUSD chatbot is directed to serve as a “friendly, concise customer support agent” that replies “using simple language a third grader could understand.” When querying the simple prompt “Hello,” the chatbot provided the student’s grades, progress toward graduation and other personal information. 

AllHere’s critical flaw, Whiteley said, is that senior executives “didn’t understand how to protect data.” 

“The issue is we’re sending data overseas, we’re sending too much data, and then the data were being logged by third parties,” he said, in violation of the district’s data use agreement. “The product worked, right, but it worked by cheating. It cheated by not doing things right the first time.”

In a 2017 policy bulletin, the district notes that all sensitive information “needs to be handled in a secure way that protects privacy,” and that contractors cannot disclose information to other parties without parental consent. A second policy bulletin, from April, outlines the district’s authorized use guidelines for artificial intelligence, which notes that officials, “Shall not share any confidential, sensitive, privileged or private information when using, prompting or communicating with any tools.” It’s important to refrain from using sensitive information in prompts, the policy notes, because AI tools “take whatever users enter into a prompt and incorporate it into their systems/knowledge base for other users.” 

“Well, that’s what AllHere was doing,” Whiteley said. 

‘Acid is dangerous’

Whiteley’s revelations present LAUSD with its third student data security debacle in the last month. In mid-June, a threat actor known as “Sp1d3r” began to sell for $150,000 a trove of data it claimed to have stolen from the Los Angeles district on Breach Forums, a dark web marketplace. LAUSD told Bloomberg that the compromised data had been stored by one of its third-party vendors on the cloud storage company Snowflake, the repository for the district’s Whole Child Integrated Data. The Snowflake data breach may be one of the largest in history. The threat actor claims that the L.A. schools data in its possession include student medical records, disability information, disciplinary details and parent login credentials. 

The chatbot interacted with data stored by Snowflake, according to the district’s contract with AllHere, though any connection between AllHere and the Snowflake data breach is unknown. 

In its statement Friday, the district spokesperson said an ongoing investigation has “revealed no connection between AllHere or the Ed platform and the Snowflake incident.” The spokesperson said there was no “direct integration” between Whole Child and AllHere and that Whole Child data was processed internally before being directed to AllHere.

The contract between AllHere and the district, however, notes that the tool should “seamlessly integrate” with the Whole Child Integrated Data “to receive updated student data regarding attendance, student grades, student testing data, parent contact information and demographics.”

 

Earlier in the month, a second threat actor known as Satanic Cloud claimed it had access to tens of thousands of L.A. students' sensitive information and had posted it for sale on Breach Forums for $1,000. In 2022, the district was the victim of a massive ransomware attack that exposed reams of sensitive data, including thousands of students' psychological evaluations, to the dark web.

With AllHere’s fate uncertain, Whiteley blasted the company’s leadership and protocols.

“Personally identifiable information should be considered acid in a company and you should only touch it if you have to because acid is dangerous,” he told The 74. “The errors that were made were so egregious around PII, you should not be in education if you don’t think PII is acid.” 

For original post, please visit:

https://www.the74million.org/article/whistleblower-l-a-schools-chatbot-misused-student-data-as-tech-co-crumbled/ 

 

 

No comment yet.
Scooped by Roxana Marachi, PhD
October 3, 2023 11:19 AM
Scoop.it!

Digital Dystopia: The Danger in Buying What the EdTech Surveillance Industry is Selling - ACLU Research Report

"Digital Dystopia: The Danger in Buying What the EdTech Surveillance Industry is Selling, an ACLU research report, examines the EdTech Surveillance (educational technologies used for surveillance) industry in U.S. K-12 schools. Using in-depth investigation into industry products, an incident audit, student focus groups, and national polling, this report scrutinizes industry claims, assesses the efficacy of the products, and explores the impacts EdTech Surveillance has on students and schools. The report concludes by offering concrete actions school districts, elected officials, and community members can take to ensure decisions about using surveillance products are consistent and well-informed. This includes model legislation and decision-making tools, which will often result in the rejection of student surveillance technologies."

Accompanying Resources:

School Board Model Policy - School Surveillance Technology

Model Bill - Student Surveillance Technology Acquisition Standards Act


School Leadership Checklist - School Surveillance Technology


10 Questions for School Board Meetings - School Surveillance Technology


Public Records Request Template- School Surveillance Technology

 

To download full report, click on title or arrow above. 

https://www.aclu.org/report/digital-dystopia-the-danger-in-buying-what-the-edtech-surveillance-industry-is-selling 

No comment yet.
Scooped by Roxana Marachi, PhD
April 8, 2024 9:00 PM
Scoop.it!

What's in a Name? Auditing Large Language Models for Race and Gender Bias // (Haim, Salinas, & Nyarko, 2024) // Stanford Law School via arxiv.org  

What's in a Name? Auditing Large Language Models for Race and Gender Bias.
By Amit Haim, Alejandro Salinas, Julian Nyarko

ABSTRACT

"We employ an audit design to investigate biases in state-of-the-art large language models, including GPT-4. In our study, we prompt the models for advice involving a named individual across a variety of scenarios, such as during car purchase negotiations or election outcome predictions. We find that the advice systematically disadvantages names that are commonly associated with racial minorities and women. Names associated with Black women receive the least advantageous outcomes. The biases are consistent across 42 prompt templates and several models, indicating a systemic issue rather than isolated incidents. While providing numerical, decision-relevant anchors in the prompt can successfully counteract the biases, qualitative details have inconsistent effects and may even increase disparities. Our findings underscore the importance of conducting audits at the point of LLM deployment and implementation to mitigate their potential for harm against marginalized communities."

 

Please visit following for abstract on arxiv.org and link to download:

https://arxiv.org/abs/2402.14875 

 

No comment yet.
Scooped by Roxana Marachi, PhD
January 29, 2024 2:16 PM
Scoop.it!

Could a court really order the destruction of ChatGPT? The New York Times thinks so, and it may be right // The Conversation

Could a court really order the destruction of ChatGPT? The New York Times thinks so, and it may be right // The Conversation | Educational Psychology & Emerging Technologies: Critical Perspectives and Updates | Scoop.it

By João Marinotti

"On Dec. 27, 2023, The New York Times filed a lawsuit against OpenAI alleging that the company committed willful copyright infringement through its generative AI tool ChatGPT. The Times claimed both that ChatGPT was unlawfully trained on vast amounts of text from its articles and that ChatGPT’s output contained language directly taken from its articles.

 

To remedy this, the Times asked for more than just money: It asked a federal court to order the “destruction” of ChatGPT.

If granted, this request would force OpenAI to delete its trained large language models, such as GPT-4, as well as its training data, which would prevent the company from rebuilding its technology.

This prospect is alarming to the 100 million people who use ChatGPT every week. And it raises two questions that interest me as a law professor. First, can a federal court actually order the destruction of ChatGPT? And second, if it can, will it?

 

Destruction in the court

The answer to the first question is yes. Under copyright law, courts do have the power to issue destruction orders.

To understand why, consider vinyl records. Their resurging popularity has attracted counterfeiters who sell pirated records.

 

If a record label sues a counterfeiter for copyright infringement and wins, what happens to the counterfeiter’s inventory? What happens to the master and stamper disks used to mass-produce the counterfeits, and the machinery used to create those disks in the first place?

 

To address these questions, copyright law grants courts the power to destroy infringing goods and the equipment used to create them. From the law’s perspective, there’s no legal use for a pirated vinyl record. There’s also no legitimate reason for a counterfeiter to keep a pirated master disk. Letting them keep these items would only enable more lawbreaking.

 

So in some cases, destruction is the only logical legal solution. And if a court decides ChatGPT is like an infringing good or pirating equipment, it could order that it be destroyed. In its complaint, the Times offered arguments that ChatGPT fits both analogies.

Copyright law has never been used to destroy AI models, but OpenAI shouldn’t take solace in this fact. The law has been increasingly open to the idea of targeting AI.

Consider the Federal Trade Commission’s recent use of algorithmic disgorgement as an example. The FTC has forced companies such as WeightWatchers to delete not only unlawfully collected data but also the algorithms and AI models trained on such data.

Why ChatGPT will likely live another day

It seems to be only a matter of time before copyright law is used to order the destruction of AI models and datasets. But I don’t think that’s going to happen in this case. Instead, I see three more likely outcomes.

The first and most straightforward is that the two parties could settle. In the case of a successful settlement, which may be likely, the lawsuit would be dismissed and no destruction would be ordered.

The second is that the court might side with OpenAI, agreeing that ChatGPT is protected by the copyright doctrine of “fair use.” If OpenAI can argue that ChatGPT is transformative and that its service does not provide a substitute for The New York Times’ content, it just might win.

The third possibility is that OpenAI loses but the law saves ChatGPT anyway. Courts can order destruction only if two requirements are met: First, destruction must not prevent lawful activities, and second, it must be “the only remedy” that could prevent infringement.

That means OpenAI could save ChatGPT by proving either that ChatGPT has legitimate, noninfringing uses or that destroying it isn’t necessary to prevent further copyright violations.

Both outcomes seem possible, but for the sake of argument, imagine that the first requirement for destruction is met. The court could conclude that, because of the articles in ChatGPT’s training data, all uses infringe on the Times’ copyrights – an argument put forth in various other lawsuits against generative AI companies.

 

In this scenario, the court would issue an injunction ordering OpenAI to stop infringing on copyrights. Would OpenAI violate this order? Probably not. A single counterfeiter in a shady warehouse might try to get away with that, but that’s less likely with a US$100 billion company.

Instead, it might try to retrain its AI models without using articles from the Times, or it might develop other software guardrails to prevent further problems. With these possibilities in mind, OpenAI would likely succeed on the second requirement, and the court wouldn’t order the destruction of ChatGPT.

Given all of these hurdles, I think it’s extremely unlikely that any court would order OpenAI to destroy ChatGPT and its training data. But developers should know that courts do have the power to destroy unlawful AI, and they seem increasingly willing to use it."

 

For original post, please visit: 
https://theconversation.com/could-a-court-really-order-the-destruction-of-chatgpt-the-new-york-times-thinks-so-and-it-may-be-right-221717 

No comment yet.
Scooped by Roxana Marachi, PhD
October 30, 2023 11:36 AM
Scoop.it!

Research at a Glance: Data Privacy and Children // Children and Screens: Institute of Digital Media and Child Development

https://www.childrenandscreens.com/data-privacy-and-children/ 

No comment yet.
Scooped by Roxana Marachi, PhD
January 10, 1:11 PM
Scoop.it!

Edtech giant PowerSchool says hackers accessed personal data of students and teachers // TechCrunch

Edtech giant PowerSchool says hackers accessed personal data of students and teachers // TechCrunch | Educational Psychology & Emerging Technologies: Critical Perspectives and Updates | Scoop.it

By Carly Page
"Education technology giant PowerSchool has told customers that it experienced a “cybersecurity incident” that allowed hackers to compromise the personal data of students and teachers in K-12 school districts across the United States.

The California-based PowerSchool, which was acquired by Bain Capital for $5.6 billion in 2024, is the largest provider of cloud-based education software for K-12 education in the U.S., serving more than 75% of students in North America, according to the company’s website. PowerSchool says its software is used by over 16,000 customers to support more than 50 million students in the United States.

In a letter sent to affected customers on Tuesday and published in a local news report, PowerSchool said it identified on December 28 that hackers successfully breached its PowerSource customer support portal, allowing further access to the company’s school information system, PowerSchool SIS, which schools use to manage student records, grades, attendance, and enrollment. The letter said the company’s investigation found the hackers gained access “using a compromised credential.” 

PowerSchool has not said what types of data were accessed during the incident or how many individuals are affected by the breach.

PowerSchool spokesperson Beth Keebler confirmed the incident in an email to TechCrunch but declined to answer specific questions. Bain Capital did not respond to TechCrunch's questions.


“We have taken all appropriate steps to prevent the data involved from further unauthorized access or misuse,” Keebler said. “The incident is contained and we do not anticipate the data being shared or made public. PowerSchool is not experiencing, nor expects to experience, any operational disruption and continues to provide services as normal to our customers.”

The nature of the cyberattack remains unknown. Bleeping Computer reports that in an FAQ sent to affected users, PowerSchool said it did not experience a ransomware attack, but that the company was extorted into paying a financial sum to prevent the hackers from leaking the stolen data.

PowerSchool told the publication that names and addresses were exposed in the breach, but that the information may also include Social Security numbers, medical information, grades, and other personally identifiable information. PowerSchool did not say how much the company paid. 

PowerSchool was sued in a class action lawsuit in November 2024 that alleges the company illegally sells student data without consent for commercial gain. According to the lawsuit, the company's trove of student data totals some "345 terabytes of data collected from 440 school districts."

“PowerSchool collects this highly sensitive information under the guise of educational support, but in fact collects it for its own commercial gain,” while hiding behind “opaque terms of service such that no one can understand,” the lawsuit alleges. 

Updated with comment from PowerSchool.

Do you have more information about the PowerSchool data breach? We’d love to hear from you. From a non-work device, you can contact Carly Page securely on Signal at +44 1536 853968 or via email at carly.page@techcrunch.com.

 

For original article, please visit: 

https://techcrunch.com/2025/01/08/edtech-giant-powerschool-says-hackers-accessed-personal-data-of-students-and-teachers/ 

No comment yet.
Scooped by Roxana Marachi, PhD
March 11, 2024 8:44 PM
Scoop.it!

Time for a Pause: Without Effective Public Oversight, AI in Schools Will Do More Harm Than Good // National Education Policy Center 

Key Takeaway: The current wholesale adoption of unregulated Artificial Intelligence applications in schools poses a grave danger to democratic civil society and to individual freedom and liberty.

Find Documents:

NEPC Publication: https://nepc.colorado.edu/publication/ai

Publication Announcement: https://nepc.colorado.edu/publication-announcement/2024/03/ai

Contact:
Michelle Renée Valladares: (720) 505-1958, michelle.valladares@colorado.edu
Ben Williamson: 011-44-0131-651-6176, ben.williamson@ed.ac.uk

 

BOULDER, CO (MARCH 5, 2024)
Disregarding their own widely publicized appeals for regulating and slowing implementation of artificial intelligence (AI), leading tech giants like Google, Microsoft, and Meta are instead racing to evade regulation and incorporate AI into their platforms. 

 

A new NEPC policy brief, Time for a Pause: Without Effective Public Oversight, AI in Schools Will Do More Harm Than Good, warns of the dangers of unregulated AI in schools, highlighting democracy and privacy concerns. Authors Ben Williamson of the University of Edinburgh, and Alex Molnar and Faith Boninger of the University of Colorado Boulder, examine the evidence and conclude that the proliferation of AI in schools jeopardizes democratic values and personal freedoms.

Public education is a public and private good that’s essential to democratic civic life. The public must, therefore, be able to provide meaningful direction over schools through transparent democratic governance structures. Yet important discussions about AI’s potentially negative impacts on education are being overwhelmed by relentless rhetoric promoting its purported ability to positively transform teaching and learning. The result is that AI, with little public oversight, is on the verge of becoming a routine and overriding presence in schools.

Years of warnings and precedents have highlighted the risks posed by the widespread use of pre-AI digital technologies in education, which have obscured decision-making and enabled student data exploitation. Without effective public oversight, the introduction of opaque and unproven AI systems and applications will likely exacerbate these problems.

The authors explore the harms likely if lawmakers and others do not step in with carefully considered regulations. Integration of AI can degrade teacher-student relationships, corrupt curriculum with misinformation, encourage student performance bias, and lock schools into a system of expensive corporate technology. Further, they contend, AI is likely to exacerbate violations of student privacy, increase surveillance, and further reduce the transparency and accountability of educational decision-making.

 

The authors advise that without responsible development and regulation, these opaque AI models and applications will become enmeshed in routine school processes. This will force students and teachers to become involuntary test subjects in a giant experiment in automated instruction and administration that is sure to be rife with unintended consequences and potentially negative effects. Once enmeshed, the only way to disentangle from AI would be to completely dismantle those systems.

The policy brief concludes by suggesting measures to prevent these extensive risks. Perhaps most importantly, the authors urge school leaders to pause the adoption of AI applications until policymakers have had sufficient time to thoroughly educate themselves and develop legislation and policies ensuring effective public oversight and control of its school applications.

 

Find Time for a Pause: Without Effective Public Oversight, AI in Schools Will Do More Harm Than Good, by Ben Williamson, Alex Molnar, and Faith Boninger, at:
http://nepc.colorado.edu/publication/ai

_______

Suggested Citation: Williamson, B., Molnar, A., & Boninger, F. (2024). Time for a pause: Without effective public oversight, AI in schools will do more harm than good. Boulder, CO: National Education Policy Center. Retrieved [date] from http://nepc.colorado.edu/publication/ai

 

For original link to announcement, please see: 
https://nepc.colorado.edu/publication-announcement/2024/03/ai 

 

No comment yet.
Scooped by Roxana Marachi, PhD
December 18, 2023 9:49 PM
Scoop.it!

Genetics group slams company for using its data to screen embryos’ genomes: Orchid Health’s new in vitro fertilization embryo test aims to predict future health and mental issues // Science.org

Genetics group slams company for using its data to screen embryos’ genomes: Orchid Health’s new in vitro fertilization embryo test aims to predict future health and mental issues // Science.org | Educational Psychology & Emerging Technologies: Critical Perspectives and Updates | Scoop.it

By Carrie Arnold

"On 5 December, a U.S. company called Orchid Health announced that it would begin to offer fertility clinics and their hopeful customers the unprecedented option to sequence the whole genomes of embryos conceived by in vitro fertilization (IVF). “Find the embryo at lowest risk for a disease that runs in your family,” touts the company’s website. The cost: $2500 per embryo.

Although Orchid and at least two other companies have already been conducting more limited genetic screening of IVF embryos, the new test offers something more: Orchid will look not just for single-gene mutations that cause disorders such as cystic fibrosis, but also more extensively for medleys of common and rare gene variants known to predispose people to neurodevelopmental disorders, severe obesity, and certain psychiatric conditions such as schizophrenia.

That new offering drew swift backlash from genomics researchers who claim the company inappropriately uses their data to generate some of its risk estimates. The Psychiatric Genomics Consortium (PGC), an international group of more than 800 researchers working to decode the genetic and molecular underpinnings of mental health conditions, says Orchid’s new test relies on data it produced over the past decade, and that the company has violated restrictions against the data’s use for embryo screening.

PGC objects to such uses because its goal is to improve the lives of people with mental illness, not stop them from being born, says University of North Carolina at Chapel Hill psychiatrist Patrick Sullivan, PGC’s founder and lead principal investigator. The group signaled its dismay on the social media platform X (formerly Twitter) last week, writing, “Any use for commercial testing or for testing of unborn individuals violates the terms by which PGC & our collaborators’ data are made available.” PGC’s response marks the first time an academic team has publicly taken on companies using its data to offer polygenic risk scores.

Orchid didn’t respond to Science’s request for a comment and PGC leaders say they don’t have any obvious recourse to stop the company. “We are just a confederation of academics in a common purpose. We are not a legal entity,” Sullivan notes in an email to Science. “We can publicize the issue (as we are doing!) in the vain hope that they might do the right thing or be shamed into it.”

 

Andrew McQuillin, a molecular psychiatry researcher at University College London, has previously worked with PGC and he echoes the group’s concern. It’s difficult for researchers to control the use of their data post publication, especially when so many funders and journals require the publication of these data, he notes. “We can come up with guidance on how these things should be used. The difficulty is that official guidance like that doesn’t feature anywhere in the marketing from these companies,” he says.

For more than a decade, IVF clinics have been able to pluck a handful of cells out of labmade embryos and check whether a baby would inherit a disease-causing gene. Even this more limited screening has generated debate about whether it can truly reassure potential parents an embryo will become a healthy baby—and whether it could lead to a modern-day form of eugenics.

 

But with recent advances in sequencing, researchers can read more and more of an embryo’s genome from those few cells, which can help create so-called polygenic risk scores for a broad range of chronic diseases. Such scores calculate an individual’s odds of developing common adult-onset conditions that result from a complex interaction between multiple genes as well as environmental factors.
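For readers unfamiliar with the mechanics, a polygenic risk score is at its core a weighted sum: each genetic variant contributes its estimated effect size (typically taken from genome-wide association studies such as those PGC publishes) multiplied by the number of risk alleles the individual carries. The sketch below is a generic illustration of that arithmetic with made-up numbers, not Orchid's proprietary method.

```python
# Minimal, generic polygenic risk score: a weighted sum of risk-allele counts.
# Effect sizes and variant IDs are made-up placeholders, not real GWAS values.

def polygenic_risk_score(genotypes: dict, effect_sizes: dict) -> float:
    """Sum effect_size * risk-allele count over variants present in both inputs."""
    return sum(
        effect_sizes[variant] * allele_count
        for variant, allele_count in genotypes.items()
        if variant in effect_sizes
    )

effect_sizes = {"rs0001": 0.12, "rs0002": -0.05, "rs0003": 0.08}  # illustrative
genotypes = {"rs0001": 2, "rs0002": 1, "rs0003": 0}  # risk-allele counts (0-2)

print(polygenic_risk_score(genotypes, effect_sizes))  # 0.19
```

Real scores are computed over thousands to millions of variants and standardized against a reference population; even then, as McQuillin notes below, they predict individual-level risk for conditions like schizophrenia poorly.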

Orchid’s new screening report sequences more than 99% of an embryo’s DNA, and estimates the risk of conditions including the irregular heart rhythm known as atrial fibrillation, inflammatory bowel diseases, types 1 and 2 diabetes, breast cancer—and some of the psychiatric conditions that PGC studies. In an August preprint on bioRxiv, Orchid scientists concluded that their whole genome sequencing methods using a five-cell embryo biopsy were accurate and effective at reading the depth and breadth of the genome.

But even with the most accurate sequencing, polygenic risk scores don’t predict disease risk very well, McQuillin says. “They’re useful in the research context, but at the individual level, they’re not actually terribly useful to predict who’s going to develop schizophrenia or not.”

In an undated online white paper on how its screening calculates a polygenic risk score for schizophrenia, Orchid cites studies conducted by PGC, including one that identified specific genetic variants linked to schizophrenia. But James Walters, co-leader of the Schizophrenia Working Group at PGC and a psychiatrist at Cardiff University’s Centre for Neuropsychiatric Genetics and Genomics, says the data use policy on the PGC website specifically indicates that results from PGC-authored studies are not to be used to develop a test like Orchid’s.

Some groups studying controversial topics such as the genetics of educational attainment (treated as a proxy for intelligence) and of homosexuality have tried to ensure responsible use of their data by placing them in password-protected sites. Others like PGC, however, have made their genotyping results freely accessible to anyone and relied on data policies to discourage improper use.

Orchid’s type of embryo screening may be “ethically repugnant,” says geneticist Mark Daly of the Broad Institute, but he believes data like PGC’s must remain freely available to scientists in academia and industry. “The point of studying the genetics of disease is to provide insights into the mechanisms of disease that can lead to new therapies,” he tells Science.

Society, McQuillin says, must soon have a broader discussion about the implications of this type of embryo screening. “We need to take a look at whether this is really something we should be doing. It’s the type of thing that, if it becomes widespread, in 40 years’ time, we will ask, ‘What on earth have we done?’”

 

For original post, please visit:

https://www.science.org/content/article/genetics-group-slams-company-using-its-data-screen-embryos-genomes 

 

For related critical posts regarding the "Genomics of Education", please see:

https://elsihub.org/collection/genomics-education-education-elsi-concerns-about-genomic-prediction-educational-settings 

 

https://nepc.colorado.edu/blog/biological 

No comment yet.
Scooped by Roxana Marachi, PhD
September 28, 2024 3:45 PM
Scoop.it!

FTC Announces Crackdown on Deceptive AI Claims and Schemes // Federal Trade Commission

FTC Announces Crackdown on Deceptive AI Claims and Schemes // Federal Trade Commission | Educational Psychology & Emerging Technologies: Critical Perspectives and Updates | Scoop.it

"The Federal Trade Commission is taking action against multiple companies that have relied on artificial intelligence as a way to supercharge deceptive or unfair conduct that harms consumers, as part of its new law enforcement sweep called Operation AI Comply.

 

The cases being announced today include actions against a company promoting an AI tool that enabled its customers to create fake reviews, a company claiming to sell “AI Lawyer” services, and multiple companies claiming that they could use AI to help consumers make money through online storefronts.

“Using AI tools to trick, mislead, or defraud people is illegal,” said FTC Chair Lina M. Khan. “The FTC’s enforcement actions make clear that there is no AI exemption from the laws on the books. By cracking down on unfair or deceptive practices in these markets, FTC is ensuring that honest businesses and innovators can get a fair shot and consumers are being protected.”


Claims around artificial intelligence have become more prevalent in the marketplace, including frequent promises about the ways it could potentially enhance people's lives through automation and problem solving. The cases included in this sweep show that firms have seized on the hype surrounding AI and are using it to lure consumers into bogus schemes, and are also providing AI-powered tools that can turbocharge deception.

DoNotPay

The FTC is taking action against DoNotPay, a company that claimed to offer an AI service that was “the world’s first robot lawyer,” but the product failed to live up to its lofty claims that the service could substitute for the expertise of a human lawyer.

According to the FTC’s complaint, DoNotPay promised that its service would allow consumers to “sue for assault without a lawyer” and “generate perfectly valid legal documents in no time,” and that the company would “replace the $200-billion-dollar legal industry with artificial intelligence.” DoNotPay, however, could not deliver on these promises. The complaint alleges that the company did not conduct testing to determine whether its AI chatbot’s output was equal to the level of a human lawyer, and that the company itself did not hire or retain any attorneys.

The complaint also alleges that DoNotPay offered a service that would check a small business website for hundreds of federal and state law violations based solely on the consumer’s email address. This feature purportedly would detect legal violations that, if unaddressed, would potentially cost a small business $125,000 in legal fees, but according to the complaint, this service was also not effective.

DoNotPay has agreed to a proposed Commission order settling the charges against it. The settlement would require it to pay $193,000 and provide a notice to consumers who subscribed to the service between 2021 and 2023, warning them about the limitations of law-related features on the service. The proposed order also will prohibit the company from making claims about its ability to substitute for any professional service without evidence to back it up.

The Commission vote authorizing the staff to issue the complaint and proposed administrative order was 5-0. Commissioner Holyoak issued a concurring statement joined by Chair Lina M. Khan. Commissioner Ferguson also issued a concurring statement. The FTC will publish a description of the consent agreement package in the Federal Register soon. The agreement will be subject to public comment for 30 days, after which the Commission will decide whether to make the proposed consent order final. Instructions for filing comments appear in the published notice. Once processed, comments will be posted on Regulations.gov.

Ascend Ecom

The FTC has filed a lawsuit against an online business opportunity scheme that it alleges has falsely claimed its “cutting edge” AI-powered tools would help consumers quickly earn thousands of dollars a month in passive income by opening online storefronts. According to the complaint, the scheme has defrauded consumers of at least $25 million.

The scheme is run by William Basta and Kenneth Leung, and it has operated under a number of different names since 2021, including Ascend Ecom, Ascend Ecommerce, Ascend CapVentures, ACV Partners, ACV, Accelerated eCom Ventures, Ethix Capital by Ascend, and ACV Nexus.

According to the FTC’s complaint, the operators of the scheme charge consumers tens of thousands of dollars to start online stores on ecommerce platforms such as Amazon, Walmart, Etsy, and TikTok, while also requiring them to spend tens of thousands more on inventory. Ascend’s advertising content claimed the company was a leader in ecommerce, using proprietary software and artificial intelligence to maximize clients’ business success.

The complaint notes that, while Ascend promises consumers it will create stores producing five-figure monthly income by the second year, for nearly all consumers, the promised gains never materialize, and consumers are left with depleted bank accounts and hefty credit card bills. The complaint alleges that Ascend received numerous complaints from consumers, pressured consumers to modify or delete negative reviews of Ascend, frequently failed to honor their “guaranteed buyback,” and unlawfully threatened to withhold the supposed “guaranteed buyback” for those who left negative reviews of the company online.

As a result of the FTC’s complaint, a federal court issued an order temporarily halting the scheme and putting it under the control of a receiver. The FTC’s case against the scheme is ongoing and will be decided by a federal court.

The Commission vote authorizing the staff to file the complaint was 5-0. The complaint was filed in the U.S. District Court for the Central District of California.

Ecommerce Empire Builders

The FTC has charged a business opportunity scheme with falsely claiming to help consumers build an “AI-powered Ecommerce Empire” by participating in its training programs that can cost almost $2,000 or by buying a “done for you” online storefront for tens of thousands of dollars. The scheme, known as Ecommerce Empire Builders (EEB), claims consumers can potentially make millions of dollars, but the FTC’s complaint alleges that those profits fail to materialize.

The complaint alleges that EEB’s CEO, Peter Prusinowski, has used consumers’ money – as much as $35,000 from consumers who purchase stores – to enrich himself while failing to deliver on the scheme’s promises of big income by selling goods online. In its marketing, EEB encourages consumers to “Skip the guesswork and start a million-dollar business today” by harnessing the “power of artificial intelligence” and the scheme’s supposed strategies.

In social media ads, EEB claims that its clients can make $10,000 monthly, but the FTC’s complaint alleges that the company has no evidence to back up those claims. Numerous consumers have complained that stores they purchased from EEB made little or no money, and that the company has resisted providing refunds to consumers, either denying refunds or only providing partial refunds.

As a result of the FTC’s complaint, a federal court issued an order temporarily halting the scheme and putting it under the control of a receiver. The FTC’s case against the scheme is ongoing and will be decided by a federal court.

The Commission vote authorizing the staff to file the complaint against Prusinowski and his company was 5-0. The complaint was filed in the U.S. District Court for the Eastern District of Pennsylvania.

Rytr

Since April 2021, Rytr has marketed and sold an AI “writing assistant” service for a number of uses, one of which was specifically “Testimonial & Review” generation. Paid subscribers could generate an unlimited number of detailed consumer reviews based on very limited and generic input.

According to the FTC’s complaint, Rytr’s service generated detailed reviews that contained specific, often material details that had no relation to the user’s input, and these reviews almost certainly would be false for the users who copied them and published them online. In many cases, subscribers’ AI-generated reviews featured information that would deceive potential consumers who were using the reviews to make purchasing decisions. The complaint further alleges that at least some of Rytr’s subscribers used the service to produce hundreds, and in some cases tens of thousands, of reviews potentially containing false information.

The complaint charges Rytr with violating the FTC Act by providing subscribers with the means to generate false and deceptive written content for consumer reviews. The complaint also alleges that Rytr engaged in an unfair business practice by offering a service that is likely to pollute the marketplace with a glut of fake reviews that would harm both consumers and honest competitors.

The proposed order settling the Commission’s complaint is designed to prevent Rytr from engaging in similar illegal conduct in the future. It would bar the company from advertising, promoting, marketing, or selling any service dedicated to – or promoted as – generating consumer reviews or testimonials.

The Commission vote authorizing the staff to issue the complaint and proposed administrative order was 3-2, with Commissioners Melissa Holyoak and Andrew Ferguson voting no. Commissioners Holyoak and Ferguson issued statements. The FTC will publish a description of the consent agreement package in the Federal Register soon. The agreement will be subject to public comment for 30 days, after which the Commission will decide whether to make the proposed consent order final. Instructions for filing comments appear in the published notice. Once processed, comments will be posted on Regulations.gov.

FBA Machine

In June, the FTC took action against a business opportunity scheme that allegedly falsely promised consumers that they would make guaranteed income through online storefronts that utilized AI-powered software. According to the FTC, the scheme, which has operated under the names Passive Scaling and FBA Machine, cost consumers more than $15.9 million based on deceptive earnings claims that rarely, if ever, materialize.

The complaint alleges that Bratislav Rozenfeld (also known as Steven Rozenfeld and Steven Rozen) has operated the scheme since 2021, initially as Passive Scaling. When Passive Scaling failed to live up to its promises and consumers sought refunds and brought lawsuits, Rozenfeld rebranded the scheme as FBA Machine in 2023. The rebranded marketing materials claim that FBA Machine uses “AI-powered” tools to help price products in the stores and maximize profits.

The scheme’s claims were wide-ranging, promising consumers that they could operate a “7-figure business” and citing supposed testimonials from clients who “generate over $100,000 per month in profit.” Company sales agents told consumers that the business was “risk-free” and falsely guaranteed refunds to consumers who did not make back their initial investments, which ranged from tens of thousands to hundreds of thousands of dollars.

As a result of the FTC’s complaint, a federal court issued an order temporarily halting the scheme and putting it under the control of a receiver. The case against the scheme is still under way and will be decided by a federal court.

The Commission vote authorizing the staff to file the complaint against Rozenfeld and a number of companies involved in the scheme was 5-0. The complaint was filed in the U.S. District Court for the District of New Jersey.

The Operation AI Comply cases being announced today build on a number of recent FTC cases involving claims about artificial intelligence, including: Automators, another online storefront scheme; Career Step, a company that allegedly used AI technology to convince consumers to enroll in bogus career training; NGL Labs, a company that allegedly claimed to use AI to provide moderation in an anonymous messaging app it unlawfully marketed to children; Rite Aid, which allegedly used AI facial recognition technology in its stores without reasonable safeguards; and CRI Genetics, a company that allegedly deceived users about the accuracy of its DNA reports, including claims it used an AI algorithm to conduct genetic matching.

--

The Federal Trade Commission works to promote competition and protect and educate consumers. The FTC will never demand money, make threats, tell you to transfer money, or promise you a prize. Learn more about consumer topics at consumer.ftc.gov, or report fraud, scams, and bad business practices at ReportFraud.ftc.gov. Follow the FTC on social media, read consumer alerts and the business blog, and sign up to get the latest FTC news and alerts."

 

For original post, please visit: 

https://www.ftc.gov/news-events/news/press-releases/2024/09/ftc-announces-crackdown-deceptive-ai-claims-schemes 

 
No comment yet.
Scooped by Roxana Marachi, PhD
November 19, 2024 12:47 PM
Scoop.it!

Refusing Generative AI in Writing Studies // Sano-Franchini, McIntyre, & Fernandes (2024)

Refusing Generative AI in Writing Studies // Sano-Franchini, McIntyre, & Fernandes (2024) | Educational Psychology & Emerging Technologies: Critical Perspectives and Updates | Scoop.it
 
No comment yet.
Scooped by Roxana Marachi, PhD
November 14, 2024 2:45 PM
Scoop.it!

The EdTech Revolution Has Failed: The case against student use of computers, tablets, and smartphones in the classroom // Jared Cooney Horvath

The EdTech Revolution Has Failed: The case against student use of computers, tablets, and smartphones in the classroom

by Jared Cooney Horvath 

 

"In May of 2023, schools minister Lotta Edholm announced that Swedish classrooms would aim to significantly reduce student-facing digital technology and embrace more traditional practices like reading hardcopy books and taking handwritten notes. The announcement was met with disbelief among pundits and the wider international public: why would an entire country willingly forgo those digital technologies which are widely touted to be the future of education? 

In a recent survey, 92% of students worldwide reported having access to a computer at school. In New Zealand, 99% of schools are equipped with high-speed internet while in Australia the student-to-computer ratio has dipped below 1:1 (meaning there are more computers than students in school). In the U.S., government expenditure on EdTech products for public schools exceeds $30 billion annually."...

 


 

No comment yet.
Scooped by Roxana Marachi, PhD
December 1, 2023 5:43 PM
Scoop.it!

AI Technology Threatens Educational Equity for Marginalized Students // The Progressive

AI Technology Threatens Educational Equity for Marginalized Students // The Progressive | Educational Psychology & Emerging Technologies: Critical Perspectives and Updates | Scoop.it

By Tiera Tanksley

The fall semester is well underway, and schools across the United States are rushing to implement artificial intelligence (AI) in ways that bring about equity, access, and efficiency for all members of the school community. Take, for instance, Los Angeles Unified School District’s (LAUSD) recent decision to implement “Ed.” 

 

Ed is an AI chatbot meant to replace school advisors for students with Individual Education Plans (IEPs), who are disproportionately Black. Announced on the heels of a national uproar about teachers being unable to read IEPs due to lack of time, energy, and structural support, Ed might seem to many like a sliver of hope—the silver bullet needed to address the chronic mismanagement of IEPs and ongoing disenfranchisement of Black students in the district. But for Black students with IEPs, AI technologies like Ed might be more akin to a nightmare.

Since the pandemic, public schools have seen a proliferation of AI technologies that promise to remediate educational inequality for historically marginalized students. These technologies claim to predict behavior and academic performance, manage classroom engagement, detect and deter cheating, and proactively stop campus-based crimes before they happen. Unfortunately, because anti-Blackness is often baked into the design and implementation of these technologies, they often do more harm than good.

Proctorio, for example, is a popular remote proctoring platform that uses AI to detect perceived behavior abnormalities by test takers in real time. Because the platform employs facial detection systems that fail to recognize Black faces more than half of the time, Black students have an exceedingly hard time completing their exams without triggering the faulty detection systems, which results in locked exams, failing grades, and disciplinary action.

 

While being falsely flagged by Proctorio might induce test-taking anxiety or result in failed courses, the consequences for inadvertently triggering school safety technologies are much more devastating. Some of the most popular school safety platforms, like Gaggle and GoGuardian, have been known to falsely identify discussions about LGBTQ+ identity, race-related content, and language used by Black youth as dangerous or in violation of school disciplinary policies. Because many of these platforms are directly connected to law enforcement, students who are falsely identified are contacted by police both on campus and in their homes. Considering that Black youth endure the highest rates of discipline, assault, and carceral contact on school grounds and are six times more likely than their white peers to have fatal encounters with police, the risk of experiencing algorithmic bias can be life threatening.

These examples speak to the dangers of educational technologies designed specifically for safety, conduct, and discipline. But what about education technology (EdTech) intended for learning? Are the threats to student safety, privacy, and academic wellbeing the same?

Unfortunately, the use of educational technologies for purposes other than discipline seems to be the exception, not the rule. A national study examining the use of EdTech found an overall decrease in the use of the tools for teaching and learning, with over 60 percent of teachers reporting that the software is used to identify disciplinary infractions. 

What's more, Black students and students with IEPs endure significantly higher rates of discipline not only from being disproportionately surveilled by educational technologies, but also from using tools like ChatGPT to make their learning experience more accommodating and accessible. This could include using AI technologies to support executive functioning, access translated or simplified language, or provide alternative learning strategies.

 

"Many of these technologies are more likely to exacerbate educational inequities like racialized gaps in opportunity, school punishment, and surveillance."

 

To be sure, the stated goals and intentions of educational technologies are laudable, and speak to our collective hopes and dreams for the future of schools—places that are safe, engaging, and equitable for all students regardless of their background. But many of these technologies are more likely to exacerbate educational inequities like racialized gaps in opportunity, school punishment, and surveillance, dashing many of these idealistic hopes.  

To confront the disparities wrought by racially-biased AI, schools need a comprehensive approach to EdTech that addresses the harms of algorithmic racism for vulnerable groups. There are several ways to do this. 

One possibility is recognizing that EdTech is not neutral. Despite popular belief, educational technologies are not unbiased, objective, or race-neutral, and they do not inherently support the educational success of all students. Oftentimes, racism becomes encoded from the onset of the design process, and can manifest in the data set, the code, the decision making algorithms, and the system outputs.

Another option is fostering critical algorithmic literacy. Incorporating critical AI curriculum into K-12 coursework, offering professional development opportunities for educators, or hosting community events to raise awareness of algorithmic bias are just a few of the ways schools can support bringing students and staff up to speed. 

A third avenue is conducting algorithmic equity audits. Each year, the United States spends nearly $13 billion on educational technologies, with the LAUSD spending upwards of $227 million on EdTech in the 2020-2021 academic year alone. To avoid a costly mistake, educational stakeholders can work with third-party auditors to identify biases in EdTech programs before launching them. 

Regardless of the imagined future that Big Tech companies try to sell us, the current reality of EdTech for marginalized students is troubling and must be reckoned with. For LAUSD—the second largest district in the country and the home of the fourteenth largest school police force in California—the time to tackle the potential harms of AI systems like Ed the IEP Chatbot is now."

 

For full post, please visit: 

https://progressive.org/public-schools-advocate/ai-educational-equity-for-marginalized-students-tanksley-20231125/ 

Samantha Alanís's curator insight, February 1, 2024 10:19 PM
Certainly!! The potential threat of AI technology to educational equity for students raises concerns... It is crucial to consider how technological implementations may inadvertently exacerbate existing disparities, emphasizing the need for thoughtful, inclusive approaches in education technology to ensure equitable access and opportunities for all!!! :D
Scooped by Roxana Marachi, PhD
August 1, 2020 7:03 PM
Scoop.it!

The Color of Surveillance: Monitoring of Poor and Working People // Center on Privacy and Technology // Georgetown Law

The Color of Surveillance: Monitoring of Poor and Working People // Center on Privacy and Technology // Georgetown Law | Educational Psychology & Emerging Technologies: Critical Perspectives and Updates | Scoop.it

https://www.law.georgetown.edu/privacy-technology-center/events/color-of-surveillance-2019/ 

No comment yet.
Scooped by Roxana Marachi, PhD
May 18, 2024 7:59 PM
Scoop.it!

AI Is Taking Water From the Desert // The Atlantic

AI Is Taking Water From the Desert // The Atlantic | Educational Psychology & Emerging Technologies: Critical Perspectives and Updates | Scoop.it

...New data centers are springing up every week. Can the Earth sustain them?

By Karen Hao

"One scorching day this past September, I made the dangerous decision to try to circumnavigate some data centers. The ones I chose sit between a regional airport and some farm fields in Goodyear, Arizona, half an hour’s drive west of downtown Phoenix. When my Uber pulled up beside the unmarked buildings, the temperature was 97 degrees Fahrenheit. The air crackled with a latent energy, and some kind of pulsating sound was emanating from the electric wires above my head, or maybe from the buildings themselves. With no shelter from the blinding sunlight, I began to lose my sense of what was real.

Microsoft announced its plans for this location, and two others not so far away, back in 2019—a week after the company revealed its initial $1 billion investment in OpenAI, the buzzy start-up that would later release ChatGPT. From that time on, OpenAI began to train its models exclusively on Microsoft’s servers; any query for an OpenAI product would flow through Microsoft’s cloud-computing network, Azure. In part to meet that demand, Microsoft has been adding data centers at a stupendous rate, spending more than $10 billion on cloud-computing capacity in every quarter of late."...

For full post, please visit: 

https://www.theatlantic.com/technology/archive/2024/03/ai-water-climate-microsoft/677602/ 

No comment yet.
Scooped by Roxana Marachi, PhD
May 18, 2024 4:21 PM
Scoop.it!

US Senate AI Report Meets Mostly Disappointment, Condemnation // TechPolicy.Press

US Senate AI Report Meets Mostly Disappointment, Condemnation // TechPolicy.Press | Educational Psychology & Emerging Technologies: Critical Perspectives and Updates | Scoop.it

By Gabby Miller and Justin Hendrix, May 16, 2024

"On Wednesday, May 15, 2024, a bipartisan US Senate working group led by Majority Leader Sen. Chuck Schumer (D-NY) released a report titled "Driving U.S. Innovation in Artificial Intelligence: A Roadmap for Artificial Intelligence Policy in the United States Senate." The report follows a series of off-the-record "educational briefings," a classified briefing, and nine "AI Insight Forums" hosted in the fall of 2023 that drew on the participation of more than 150 experts from industry, academia, and civil society.

While some industry voices praised the report and its recommendations – for instance, IBM Chairman and CEO Arvind Krishna issued a statement commending the report and lauding Congressional leaders, indicating the company intends to “help bring this roadmap to life” – civil society reactions, with some exceptions, were almost uniformly negative, with perspectives ranging from disappointment to condemnation."...

For full post, please visit: 

https://www.techpolicy.press/us-senate-ai-report-meets-mostly-disappointment-condemnation/ 

No comment yet.
Scooped by Roxana Marachi, PhD
January 29, 2024 2:48 PM
Scoop.it!

Beware the ‘botshit’: why generative AI is such a real and imminent threat to the way we live // André Spicer via The Guardian

Beware the ‘botshit’: why generative AI is such a real and imminent threat to the way we live // André Spicer via The Guardian | Educational Psychology & Emerging Technologies: Critical Perspectives and Updates | Scoop.it

"Unless checks are put in place, citizens and voters may soon face AI-generated content that bears no relation to reality"

By André Spicer
"During 2023, the shape of politics to come appeared in a video. In it, Hillary Clinton – the former Democratic party presidential candidate and secretary of state – says: “You know, people might be surprised to hear me saying this, but I actually like Ron DeSantis a lot. Yeah, I know. I’d say he’s just the kind of guy this country needs.”

It seems odd that Clinton would warmly endorse a Republican presidential hopeful. And it is. Further investigations found the video was produced using generative artificial intelligence (AI).

The Clinton video is only one small example of how generative AI could profoundly reshape politics in the near future. Experts have pointed out the consequences for elections. These include the possibility of false information being created at little or no cost and highly personalised advertising being produced to manipulate voters. The results could be so-called “October surprises” – ie a piece of news that breaks just before the US elections in November, where misinformation is circulated and there is insufficient time to refute it – and the generation of misleading information about electoral administration, such as where polling stations are.

Concerns about the impact of generative AI on elections have become urgent as we enter a year in which billions of people across the planet will vote. During 2024, it is projected that there will be elections in Taiwan, India, Russia, South Africa, Mexico, Iran, Pakistan, Indonesia, the European Union, the US and the UK. Many of these elections will not determine just the future of nation states; they will also shape how we tackle global challenges such as geopolitical tensions and the climate crisis. It is likely that each of these elections will be influenced by new generative AI technologies in the same way the elections of the 2010s were shaped by social media.

While politicians spent millions harnessing the power of social media to shape elections during the 2010s, generative AI effectively reduces the cost of producing empty and misleading information to zero. This is particularly concerning because during the past decade, we have witnessed the role that so-called “bullshit” can play in politics. In a short book on the topic, the late Princeton philosopher Harry Frankfurt defined bullshit specifically as speech intended to persuade without regard to the truth. Throughout the 2010s this appeared to become an increasingly common practice among political leaders. With the rise of generative AI and technologies such as ChatGPT, we could see the rise of a phenomenon my colleagues and I label “botshit”.

In a recent paper, Tim Hannigan, Ian McCarthy and I sought to understand what exactly botshit is and how it works. It is well known that generative AI technologies such as ChatGPT can produce what are called “hallucinations”. This is because generative AI answers questions by making statistically informed guesses. Often these guesses are correct, but sometimes they are wildly off. The result can be artificially generated “hallucinations” that bear little relationship to reality, such as explanations or images that seem superficially plausible, but aren’t actually the correct answer to whatever the question was.

Humans might use untrue material created by generative AI in an uncritical and thoughtless way. And that could make it harder for people to know what is true and false in the world. In some cases, these risks might be relatively low, for example if generative AI were used for a task that was not very important (such as to come up with some ideas for a birthday party speech), or if the truth of the output were easily verifiable using another source (such as when did the battle of Waterloo happen). The real problems arise when the outputs of generative AI have important consequences and the outputs can’t easily be verified....
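
The "statistically informed guesses" described above can be illustrated with a toy next-word sampler. The sketch below relies on a small, invented probability table rather than a real model; it shows how a system that samples fluent continuations can just as fluently return a wrong one, which is the mechanism behind so-called hallucinations.

```python
import random

# A toy next-word sampler. The probability table below is invented for
# illustration only; it is not taken from any real language model.
# Every continuation reads fluently, but only one answers a given question
# correctly, so a "statistically informed guess" can be confidently wrong.
next_word_probs = {
    ("the", "battle", "of"): {
        "Waterloo": 0.55,
        "Hastings": 0.30,
        "Trafalgar": 0.15,
    },
}

def sample_next_word(context, temperature=1.0):
    """Sample one continuation for a context, weighting by probability.

    Higher temperature flattens the distribution, so less likely
    continuations (including incorrect ones) are chosen more often.
    """
    probs = next_word_probs[context]
    words = list(probs)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(words, weights=weights, k=1)[0]

if __name__ == "__main__":
    context = ("the", "battle", "of")
    for temperature in (0.5, 1.0, 2.0):
        draws = [sample_next_word(context, temperature) for _ in range(10)]
        print(f"temperature={temperature}: {draws}")
```

Raising the sampling temperature flattens the distribution, so less likely (and potentially incorrect) continuations are drawn more often; real systems such as ChatGPT face a similar trade-off between varied, fluent output and factual reliability, at vastly larger scale.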

For full post, please visit: 

https://theconversation.com/could-a-court-really-order-the-destruction-of-chatgpt-the-new-york-times-thinks-so-and-it-may-be-right-221717 

No comment yet.
Scooped by Roxana Marachi, PhD
February 10, 2023 2:27 PM
Scoop.it!

Resources to Learn about AI Hype, AI Harms, BigData, Blockchain Harms, and Data Justice 

Resources to Learn about AI Hype, AI Harms, BigData, Blockchain Harms, and Data Justice  | Educational Psychology & Emerging Technologies: Critical Perspectives and Updates | Scoop.it

Resources to Learn about AI, BigData, Blockchain, Algorithmic Harms, and Data Justice

Shortlink to share this page: http://bit.ly/DataJusticeLinks 

No comment yet.