This curated collection includes updates, resources, and research with critical perspectives related to the intersections of educational psychology and emerging technologies in education. The page also serves as a research tool to organize online content (the funnel-shaped icon allows keyword search). For more on the intersections of privatization and technologization of education with critiques of social impact finance and related technologies, please visit http://bit.ly/sibgamble and http://bit.ly/chart_look. For posts regarding screen time risks to health and development, see http://bit.ly/screen_time, and for updates related to AI and data concerns, please visit http://bit.ly/PreventDataHarms. [Note: Views presented on this page are re-shared from external websites. The content does not necessarily represent the views or official position of the curator or of the curator's employer.]
"The Federal Trade Commission is taking action against multiple companies that have relied on artificial intelligence as a way to supercharge deceptive or unfair conduct that harms consumers, as part of its new law enforcement sweep called Operation AI Comply.
The cases being announced today include actions against a company promoting an AI tool that enabled its customers to create fake reviews, a company claiming to sell “AI Lawyer” services, and multiple companies claiming that they could use AI to help consumers make money through online storefronts.
“Using AI tools to trick, mislead, or defraud people is illegal,” said FTC Chair Lina M. Khan. “The FTC’s enforcement actions make clear that there is no AI exemption from the laws on the books. By cracking down on unfair or deceptive practices in these markets, FTC is ensuring that honest businesses and innovators can get a fair shot and consumers are being protected.”
Claims around artificial intelligence have become more prevalent in the marketplace, including frequent promises about the ways it could potentially enhance people’s lives through automation and problem solving. The cases included in this sweep show that firms have seized on the hype surrounding AI, using it to lure consumers into bogus schemes and providing AI-powered tools that can turbocharge deception.
DoNotPay
The FTC is taking action against DoNotPay, a company that claimed to offer an AI service that was “the world’s first robot lawyer,” but the product failed to live up to its lofty claims that the service could substitute for the expertise of a human lawyer.
According to the FTC’s complaint, DoNotPay promised that its service would allow consumers to “sue for assault without a lawyer” and “generate perfectly valid legal documents in no time,” and that the company would “replace the $200-billion-dollar legal industry with artificial intelligence.” DoNotPay, however, could not deliver on these promises. The complaint alleges that the company did not conduct testing to determine whether its AI chatbot’s output was equal to the level of a human lawyer, and that the company itself did not hire or retain any attorneys.
The complaint also alleges that DoNotPay offered a service that would check a small business website for hundreds of federal and state law violations based solely on the consumer’s email address. This feature purportedly would detect legal violations that, if unaddressed, would potentially cost a small business $125,000 in legal fees, but according to the complaint, this service was also not effective.
DoNotPay has agreed to a proposed Commission order settling the charges against it. The settlement would require it to pay $193,000 and to provide a notice to consumers who subscribed to the service between 2021 and 2023 warning them about the limitations of the service's law-related features. The proposed order would also prohibit the company from making claims about its ability to substitute for any professional service without evidence to back them up.
The Commission vote authorizing the staff to issue the complaint and proposed administrative order was 5-0. Commissioner Holyoak issued a concurring statement joined by Chair Lina M. Khan. Commissioner Ferguson also issued a concurring statement. The FTC will publish a description of the consent agreement package in the Federal Register soon. The agreement will be subject to public comment for 30 days, after which the Commission will decide whether to make the proposed consent order final. Instructions for filing comments appear in the published notice. Once processed, comments will be posted on Regulations.gov.
Ascend Ecom
The FTC has filed a lawsuit against an online business opportunity scheme that it alleges has falsely claimed its “cutting edge” AI-powered tools would help consumers quickly earn thousands of dollars a month in passive income by opening online storefronts. According to the complaint, the scheme has defrauded consumers of at least $25 million.
The scheme is run by William Basta and Kenneth Leung, and it has operated under a number of different names since 2021, including Ascend Ecom, Ascend Ecommerce, Ascend CapVentures, ACV Partners, ACV, Accelerated eCom Ventures, Ethix Capital by Ascend, and ACV Nexus.
According to the FTC’s complaint, the operators of the scheme charge consumers tens of thousands of dollars to start online stores on ecommerce platforms such as Amazon, Walmart, Etsy, and TikTok, while also requiring them to spend tens of thousands more on inventory. Ascend’s advertising content claimed the company was a leader in ecommerce, using proprietary software and artificial intelligence to maximize clients’ business success.
The complaint notes that, while Ascend promises consumers it will create stores producing five-figure monthly income by the second year, for nearly all consumers, the promised gains never materialize, and consumers are left with depleted bank accounts and hefty credit card bills. The complaint alleges that Ascend received numerous complaints from consumers, pressured consumers to modify or delete negative reviews of Ascend, frequently failed to honor its “guaranteed buyback,” and unlawfully threatened to withhold the supposed “guaranteed buyback” from those who left negative reviews of the company online.
As a result of the FTC’s complaint, a federal court issued an order temporarily halting the scheme and putting it under the control of a receiver. The FTC’s case against the scheme is ongoing and will be decided by a federal court.
The Commission vote authorizing the staff to file the complaint was 5-0. The complaint was filed in the U.S. District Court for the Central District of California.
Ecommerce Empire Builders
The FTC has charged a business opportunity scheme with falsely claiming to help consumers build an “AI-powered Ecommerce Empire” by participating in its training programs that can cost almost $2,000 or by buying a “done for you” online storefront for tens of thousands of dollars. The scheme, known as Ecommerce Empire Builders (EEB), claims consumers can potentially make millions of dollars, but the FTC’s complaint alleges that those profits fail to materialize.
The complaint alleges that EEB’s CEO, Peter Prusinowski, has used consumers’ money – as much as $35,000 from consumers who purchase stores – to enrich himself while failing to deliver on the scheme’s promises of big income by selling goods online. In its marketing, EEB encourages consumers to “Skip the guesswork and start a million-dollar business today” by harnessing the “power of artificial intelligence” and the scheme’s supposed strategies.
In social media ads, EEB claims that its clients can make $10,000 monthly, but the FTC’s complaint alleges that the company has no evidence to back up those claims. Numerous consumers have complained that stores they purchased from EEB made little or no money, and that the company has resisted providing refunds to consumers, either denying refunds or only providing partial refunds.
As a result of the FTC’s complaint, a federal court issued an order temporarily halting the scheme and putting it under the control of a receiver. The FTC’s case against the scheme is ongoing and will be decided by a federal court.
The Commission vote authorizing the staff to file the complaint against Prusinowski and his company was 5-0. The complaint was filed in the U.S. District Court for the Eastern District of Pennsylvania.
Rytr
Since April 2021, Rytr has marketed and sold an AI “writing assistant” service for a number of uses, one of which was specifically “Testimonial & Review” generation. Paid subscribers could generate an unlimited number of detailed consumer reviews based on very limited and generic input.
According to the FTC’s complaint, Rytr’s service generated detailed reviews that contained specific, often material details that had no relation to the user’s input, and these reviews almost certainly would be false for the users who copied them and published them online. In many cases, subscribers’ AI-generated reviews featured information that would deceive potential consumers who were using the reviews to make purchasing decisions. The complaint further alleges that at least some of Rytr’s subscribers used the service to produce hundreds, and in some cases tens of thousands, of reviews potentially containing false information.
The complaint charges Rytr with violating the FTC Act by providing subscribers with the means to generate false and deceptive written content for consumer reviews. The complaint also alleges that Rytr engaged in an unfair business practice by offering a service that is likely to pollute the marketplace with a glut of fake reviews that would harm both consumers and honest competitors.
The proposed order settling the Commission’s complaint is designed to prevent Rytr from engaging in similar illegal conduct in the future. It would bar the company from advertising, promoting, marketing, or selling any service dedicated to – or promoted as – generating consumer reviews or testimonials.
The Commission vote authorizing the staff to issue the complaint and proposed administrative order was 3-2, with Commissioners Melissa Holyoak and Andrew Ferguson voting no. Commissioners Holyoak and Ferguson issued statements. The FTC will publish a description of the consent agreement package in the Federal Register soon. The agreement will be subject to public comment for 30 days, after which the Commission will decide whether to make the proposed consent order final. Instructions for filing comments appear in the published notice. Once processed, comments will be posted on Regulations.gov.
FBA Machine
In June, the FTC took action against a business opportunity scheme that allegedly falsely promised consumers that they would make guaranteed income through online storefronts that utilized AI-powered software. According to the FTC, the scheme, which has operated under the names Passive Scaling and FBA Machine, cost consumers more than $15.9 million based on deceptive earnings claims that rarely, if ever, materialize.
The complaint alleges that Bratislav Rozenfeld (also known as Steven Rozenfeld and Steven Rozen) has operated the scheme since 2021, initially as Passive Scaling. When Passive Scaling failed to live up to its promises and consumers sought refunds and brought lawsuits, Rozenfeld rebranded the scheme as FBA Machine in 2023. The rebranded marketing materials claim that FBA Machine uses “AI-powered” tools to help price products in the stores and maximize profits.
The scheme’s claims were wide-ranging, promising consumers that they could operate a “7-figure business” and citing supposed testimonials from clients who “generate over $100,000 per month in profit.” Company sales agents told consumers that the business was “risk-free” and falsely guaranteed refunds to consumers who did not make back their initial investments, which ranged from tens of thousands to hundreds of thousands of dollars.
As a result of the FTC’s complaint, a federal court issued an order temporarily halting the scheme and putting it under the control of a receiver. The case against the scheme is still under way and will be decided by a federal court.
The Commission vote authorizing the staff to file the complaint against Rozenfeld and a number of companies involved in the scheme was 5-0. The complaint was filed in the U.S. District Court for the District of New Jersey.
The Operation AI Comply cases being announced today build on a number of recent FTC cases involving claims about artificial intelligence, including: Automators, another online storefront scheme; Career Step, a company that allegedly used AI technology to convince consumers to enroll in bogus career training; NGL Labs, a company that allegedly claimed to use AI to provide moderation in an anonymous messaging app it unlawfully marketed to children; Rite Aid, which allegedly used AI facial recognition technology in its stores without reasonable safeguards; and CRI Genetics, a company that allegedly deceived users about the accuracy of its DNA reports, including claims it used an AI algorithm to conduct genetic matching.
Abstract
Courts across the United States are using computer software to predict whether a person will commit a crime, the results of which are incorporated into bail and sentencing decisions. It is imperative that such tools be accurate and fair, but critics have charged that the software can be racially biased, favoring white defendants over Black defendants. We evaluate the claim that computer software is more accurate and fairer than people tasked with making similar decisions. We also evaluate, and explain, the presence of racial bias in these predictive algorithms.
Introduction
We are the frequent subjects of predictive algorithms that determine music recommendations, product advertising, university admission, job placement, and bank loan qualification. In the criminal justice system, predictive algorithms have been used to predict where crimes will most likely occur, who is most likely to commit a violent crime, who is likely to fail to appear at their court hearing, and who is likely to reoffend at some point in the future.
Certain types of algorithmic tools known as “risk assessments” have become particularly prevalent in the criminal justice system within the United States. The majority of risk assessments are built to predict recidivism: asking whether someone with a criminal offense will reoffend at some point in the future. These tools rely on an individual’s criminal history, personal background, and demographic information to make these risk predictions.
Various risk assessments are in use across the country to inform decisions at almost every stage in the criminal justice system. One widely used criminal risk assessment tool, Correctional Offender Management Profiling for Alternative Sanctions (COMPAS, Northpointe), has been used to assess over one million individuals in the criminal justice system since it was developed in 1998. The recidivism prediction component of COMPAS—the Recidivism Risk Scale—has been in use since 2000. This software predicts a person’s risk of committing a misdemeanor or felony within two years of assessment from an individual’s demographics and criminal record.
In the past few years, algorithmic risk assessments like COMPAS have become increasingly prevalent in pretrial decision making. In these contexts, an individual who has been arrested and booked in jail is assessed by the algorithmic tool in use by the given jurisdiction. Judges then consider the risk scores calculated by the tool in their decision to either release or detain a criminal defendant before their trial.
In May of 2016, writing for ProPublica, Julia Angwin and colleagues analyzed the efficacy of COMPAS in the pretrial context on over seven thousand individuals arrested in Broward County, Florida, between 2013 and 2014. The analysis indicated that the predictions were unreliable and racially biased. The authors found that COMPAS’s overall accuracy for white defendants is 67.0%, only slightly higher than its accuracy of 63.8% for Black defendants. The mistakes made by COMPAS, however, affected Black and white defendants differently: Black defendants who did not recidivate were incorrectly predicted to reoffend at a rate of 44.9%, nearly twice as high as their white counterparts at 23.5%; and white defendants who did recidivate were incorrectly predicted to not reoffend at a rate of 47.7%, nearly twice as high as their Black counterparts at 28.0%. In other words, COMPAS scores appeared to favor white defendants over Black defendants by underpredicting recidivism for white and overpredicting recidivism for Black defendants. Unsurprisingly, this caused an uproar and significant concern that technology was being used to further entrench racism in our criminal justice system.
Since the publication of the ProPublica analysis, there has been significant research and debate regarding the measurement of algorithmic fairness. Complicating this discussion is the fact that the research community does not necessarily agree on the definition of what makes an algorithm fair, and some studies have revealed that certain definitions of fairness are mathematically incompatible. To date, the debate around the mathematical measurement of fairness is both complicated and unresolved.
Algorithmic predictions have become common in the criminal justice system because they maintain a reputation of being objective and unbiased, whereas human decision making is considered inherently more biased and flawed. Northpointe describes COMPAS as “an objective method of estimating the likelihood of reoffending.” The Public Safety Assessment (PSA), another common pretrial risk assessment tool, advertises itself as a tool to “provide judges with objective, data-driven, consistent information that can inform the decisions they make.” In general, people often assume that algorithms using “big data techniques” are unbiased simply because of the amount of data used to build them.
After reading the ProPublica analysis in May of 2016, we started thinking about recidivism prediction algorithms and their use in the criminal justice system. To our surprise, we could not find any research proving that recidivism prediction algorithms are superior to human predictions. Due to the serious implications this type of software can have on a person’s life, we felt that we should start by confirming that COMPAS is, in fact, outperforming human predictions. We also felt that it was critical to get beyond the debate of how to measure fairness and understand why COMPAS’s predictive algorithm exhibited such troubling racial bias.
In our study, published in Science Advances in January 2018, we began by asking a fundamental question regarding the use of algorithmic risk predictions: are these tools more accurate than the human decision making they aim to replace? The goal of the study was to evaluate the baseline for human performance on recidivism prediction, and assess whether COMPAS was actually outperforming this baseline. We found that people from a popular online crowd-sourcing marketplace—who, it can reasonably be assumed, have little to no expertise in criminal justice—are as accurate and fair as COMPAS at predicting recidivism. This somewhat surprising result then led us to ask: how is it possible that the average person on the internet, being paid $1 to respond to a survey, is as accurate as commercial software used in the criminal justice system? To answer this, we effectively reverse engineered the COMPAS prediction algorithm and discovered that the software is equivalent to a simple classifier based on only two pieces of data, and it is this simple predictor that leads to the algorithm reproducing historical racial inequities in the criminal justice system.
Comparing Human and Algorithmic Recidivism Prediction
Methodology
Our study is based on a data set of 2013–2014 pretrial defendants from Broward County, Florida. This data set of 7,214 defendants contains individual demographic information, criminal history, the COMPAS recidivism risk score, and each defendant’s arrest record within a two-year period following the COMPAS scoring, excluding any time spent detained in a jail or a prison. COMPAS scores—ranging from 1 to 10—classify the risk of recidivism as low-risk (1–4), medium-risk (5–7), or high-risk (8–10). For the purpose of binary classification, following the methodology used in the ProPublica analysis and the guidance of the COMPAS practitioner’s guide, scores of 5 or above were classified as a prediction of recidivism.
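The score bands and the binary-classification rule just described can be sketched as follows. This is a minimal illustration of the stated thresholds only, not Northpointe's implementation:

```python
# Sketch of the binary-classification rule described above: COMPAS decile
# scores (1-10) are collapsed to a yes/no recidivism prediction, with
# scores of 5 or above treated as a prediction of recidivism.

def compas_binary_prediction(decile_score: int) -> bool:
    """Return True if the score counts as 'predicted to recidivate'."""
    if not 1 <= decile_score <= 10:
        raise ValueError("COMPAS decile scores range from 1 to 10")
    return decile_score >= 5

def risk_band(decile_score: int) -> str:
    """Map a decile score to the low/medium/high risk bands."""
    if decile_score <= 4:
        return "low"
    if decile_score <= 7:
        return "medium"
    return "high"

# Example: a score of 6 falls in the medium band and counts as a
# positive (recidivism) prediction under the binary rule.
print(risk_band(6), compas_binary_prediction(6))  # medium True
```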
Of the 7,214 defendants in the data set, 1,000 were randomly selected for use in our study that evaluated the human performance of recidivism prediction. This subset yields similar overall COMPAS accuracy, false positive rate, and false negative rate as on the complete data set. (A positive prediction is one in which a defendant is predicted to recidivate; a negative prediction is one in which they are predicted to not recidivate.) The COMPAS accuracy for this subset of 1,000 defendants is 65.2%. The average COMPAS accuracy on 10,000 random subsets of size 1,000 each is 65.4% (with a 95% confidence interval of [62.6, 68.1]).
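The random-subset check described above can be sketched like this. The per-defendant correctness flags are synthetic stand-ins generated at roughly the reported overall accuracy, not the actual Broward County outcomes:

```python
# Sketch: repeatedly draw random 1,000-defendant subsets from the full
# 7,214-defendant pool and examine the spread of accuracy across subsets.
import random
import statistics

random.seed(0)
N_DEFENDANTS = 7214

# Hypothetical flags: True where a (synthetic) COMPAS prediction matched
# the observed outcome, at roughly the 65% accuracy reported above.
correct = [random.random() < 0.654 for _ in range(N_DEFENDANTS)]

def subset_accuracies(flags, subset_size=1000, n_subsets=1000):
    """Accuracy of each random subset, as a percentage."""
    accs = []
    for _ in range(n_subsets):
        sample = random.sample(flags, subset_size)
        accs.append(100.0 * sum(sample) / subset_size)
    return accs

accs = sorted(subset_accuracies(correct))
mean_acc = statistics.mean(accs)
ci_low = accs[int(0.025 * len(accs))]   # empirical 95% interval bounds
ci_high = accs[int(0.975 * len(accs))]
print(f"mean {mean_acc:.1f}%, 95% interval [{ci_low:.1f}, {ci_high:.1f}]")
```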
A descriptive paragraph for each of the 1,000 defendants was generated:
The defendant is a [SEX] aged [AGE]. They have been charged with: [CRIME CHARGE]. This crime is classified as a [CRIMINAL DEGREE]. They have been convicted of [NON-JUVENILE PRIOR COUNT] prior crimes. They have [JUVENILE-FELONY COUNT] juvenile felony charges and [JUVENILE-MISDEMEANOR COUNT] juvenile misdemeanor charges on their record.
Perhaps most notably, we did not specify the defendant's race in this “no race” condition. In a follow-up “race” condition, the defendant’s race was included so that the first line of the above paragraph read, “The defendant is a [RACE] [SEX] aged [AGE].”
There was a total of sixty-three unique criminal charges, including armed robbery, burglary, grand theft, prostitution, robbery, and sexual assault. The crime degree is either “misdemeanor” or “felony.” To ensure that our participants understood the nature of each crime, the above paragraph was followed by a short description of each criminal charge:
[CRIME CHARGE]: [CRIME DESCRIPTION]
After reading the defendant description, participants were then asked to respond either “Yes” or “No” to the question “Do you think this person will commit another crime within two years?” Participants were required to answer each question and could not change their response once it was made. After each answer, participants were given two forms of feedback: whether their response was correct and their average accuracy.
The 1,000 defendants were randomly divided into 20 subsets of 50 each. Each participant was randomly assigned to see one of these 20 subsets. Participants saw the 50 defendants—one at a time—in random order. Participants were only allowed to complete a single subset of 50 defendants.
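The random assignment of defendants to subsets might look like the following sketch, where integer IDs stand in for the real defendant records:

```python
# Sketch of the assignment described above: shuffle the 1,000 defendant IDs
# and chunk them into 20 disjoint subsets of 50 each.
import random

random.seed(0)
defendant_ids = list(range(1000))  # placeholders for real defendant records
random.shuffle(defendant_ids)

subsets = [defendant_ids[i:i + 50] for i in range(0, 1000, 50)]
print(len(subsets), len(subsets[0]))  # 20 50
```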
Participants were recruited through Amazon’s Mechanical Turk, an online crowd-sourcing marketplace where people are paid to perform a wide variety of tasks. (Institutional review board [IRB] guidelines were followed for all participants.) Our task was titled “Predicting Crime” with the description “Read a few sentences about an actual person and predict if they will commit a crime in the future.” The keywords for the task were “survey, research, criminal justice.” Participants were paid one dollar for completing the task and an additional five-dollar bonus if their overall accuracy on the task was greater than 65%. This bonus was intended to provide an incentive for participants to pay close attention to the task. To filter out participants who were not paying close attention, three catch trials were randomly added to the subset of 50 questions. These questions were formatted to look like all other questions but had easily identifiable correct answers. A participant’s response was eliminated from our analysis if any of these questions were answered incorrectly.
Responses for the first (no-race) condition were collected from 462 participants, 62 of whom were removed due to an incorrect response on a catch trial. Responses for the second (race) condition were collected from 449 participants, 49 of whom were removed due to an incorrect response on a catch trial. In each condition, this yielded 20 participant responses for each of 20 subsets of 50 questions. Because of the random pairing of participants to a subset of 50 questions, we occasionally oversampled the required number of 20 participants. In these cases, we selected a random 20 participants and discarded any excess responses.
Results
We compare the overall accuracy and bias in human assessment with the algorithmic assessment of COMPAS. Throughout, a positive prediction is one in which a defendant is predicted to recidivate while a negative prediction is one in which they are predicted to not recidivate. We measure overall accuracy as the rate at which a defendant is correctly predicted to recidivate or not (i.e., the combined true positive and true negative rates). We also report on false positives (a defendant is predicted to recidivate but they don’t) and false negatives (a defendant is predicted to not recidivate but they do). Throughout, we use both paired and unpaired t-tests (with 19 degrees of freedom) to analyze the performance of our participants and COMPAS.
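These three measures can be computed directly from paired prediction/outcome lists, as in the sketch below; the toy inputs are illustrative, not study data. Applying the same function separately to each racial group yields the per-group rates discussed in the fairness analysis:

```python
# A small sketch of the error measures defined above, computed from paired
# prediction/outcome lists (True = predicted, or observed, to recidivate).

def rates(predicted, actual):
    """Overall accuracy, false positive rate, and false negative rate."""
    tp = sum(p and a for p, a in zip(predicted, actual))
    tn = sum((not p) and (not a) for p, a in zip(predicted, actual))
    fp = sum(p and (not a) for p, a in zip(predicted, actual))
    fn = sum((not p) and a for p, a in zip(predicted, actual))
    accuracy = (tp + tn) / len(actual)
    fpr = fp / (fp + tn)  # among defendants who did not recidivate
    fnr = fn / (fn + tp)  # among defendants who did recidivate
    return accuracy, fpr, fnr

# Toy example: 4 defendants, one false positive and one false negative.
pred = [True, False, True, False]
obs = [True, False, False, True]
print(rates(pred, obs))  # (0.5, 0.5, 0.5)
```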
The mean and median accuracy in the no-race condition—computed by analyzing the average accuracy of the 400 human predictions—are 62.1% and 64.0%, respectively. We compare these results with the performance of COMPAS on this subset of 1,000 defendants. Because groups of 20 participants judged the same subset of 50 defendants, the individual judgments are not independent. However, because each participant judged only one subset of the defendants, the median accuracies of each subset can reasonably be assumed to be independent. The participant performance, therefore, on the 20 subsets can be directly compared to the COMPAS performance on the same 20 subsets. A one-sided t-test reveals that the average of the 20 median participant accuracies of 62.8% is, just barely, lower than the COMPAS accuracy of 65.2% (p = 0.045).
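The subset-level comparison can be sketched as a paired, one-sided t-test with 19 degrees of freedom. The accuracy values below are illustrative, not the study's data, and the test statistic is checked against the tabulated critical value rather than computing an exact p-value:

```python
# Paired, one-sided t-test sketch: 20 per-subset differences between
# participant median accuracy and COMPAS accuracy on the same subsets.
# The accuracy lists are illustrative stand-ins, not the study's data.
import math
import statistics

offsets = (-4, -3, -3, -2, -2, -2, -1, -1, -1, 0,
           0, 1, 1, 1, 2, 2, 2, 3, 3, 4)
participant = [62.8 + d for d in offsets]   # illustrative median accuracies
compas = [65.2] * 20                        # illustrative COMPAS accuracies

diffs = [p - c for p, c in zip(participant, compas)]
mean_d = statistics.mean(diffs)
sd_d = statistics.stdev(diffs)
t_stat = mean_d / (sd_d / math.sqrt(len(diffs)))

# One-sided critical value for alpha = 0.05 with df = 19.
T_CRIT = -1.729
print(f"t = {t_stat:.2f}; reject at 0.05: {t_stat < T_CRIT}")
```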
To determine if there is “wisdom in the crowd” (in our case, a small crowd of 20 people per subset), participant responses were pooled within each subset using a majority rules criterion. This crowd-based approach yields a prediction accuracy of 67.0%. A one-sided t-test reveals that COMPAS is not significantly better than the crowd (p = 0.85). This demonstrates that the commercial COMPAS prediction algorithm does not outperform small crowds of non-experts at predicting recidivism.
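The majority-rules pooling just described can be sketched as:

```python
# Sketch of the "wisdom of the crowd" pooling: for each defendant, the 20
# individual yes/no responses collapse into one crowd prediction by
# majority vote.

def majority_vote(responses):
    """Pool one defendant's yes/no responses (True = 'will recidivate')."""
    yes = sum(responses)
    return yes > len(responses) / 2

# Toy example: 20 hypothetical responses for one defendant, 13 of them "yes".
responses = [True] * 13 + [False] * 7
print(majority_vote(responses))  # True
```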
As we noted earlier, there exists significant debate regarding the measurement of algorithmic fairness. For the purpose of this study, we evaluate the human predictions with the same fairness criteria used in the ProPublica analysis for ease of comparability. We acknowledge that this may not be the ideal measure of fairness and that the literature continues to debate the appropriate measure. Regardless, we consider fairness in terms of disparate false positive rates (incorrectly classifying a defendant as high risk when they are not) and false negative rates (incorrectly classifying a defendant as low risk when they are not). We believe that, while perhaps not perfect, this measure of fairness shines a light on the real-world consequences of incorrect predictions by quantifying the number of defendants who are improperly incarcerated or released.
We measure the fairness of our participants with respect to a defendant’s race based on the crowd predictions. Our participants’ accuracy on Black defendants is 68.2% compared to 67.6% for white defendants. An unpaired t-test reveals no significant difference across race (p = .87). This is similar to that of COMPAS, having a statistically insignificant difference in accuracy of 64.9% for Black defendants and 65.7% for white defendants. By this measure of fairness, our participants and COMPAS are fair to Black and white defendants.
Despite this fairness in overall accuracy, our participants had a significant difference in the false positive and false negative rates for Black and white defendants. Specifically, our participants’ false positive rate for Black defendants is 37.1% compared to 27.2% for white defendants, and our participants’ false negative rate for Black defendants is 29.2% compared to 40.3% for white defendants.
These discrepancies are similar to those of COMPAS, which has a false positive rate of 40.4% for Black defendants and 25.4% for white defendants, and a false negative rate for Black defendants of 30.9% compared to 47.9% for white defendants. See table 1(a) and (c) and figure 1 for a summary of these results. By this measure of fairness, our participants and COMPAS are similarly unfair to Black defendants, despite—bizarrely—the fact that race is not explicitly specified.
[See full article for tables cited]
The results of this study led us to question how human participants produced racially disparate predictions despite not knowing the race of the defendant. We recruited a new set of 400 participants to repeat the same exercise but this time with the defendant’s race included. We wondered if including a defendant’s race would reduce or exaggerate the effect of any implicit, explicit, or institutional racial bias.
In this race condition, the mean and median accuracy on predicting whether a defendant would recidivate is 62.3% and 64.0%, nearly identical to the condition where race is not specified, see table 1(a) and (b). The crowd-based accuracy is 66.5%, slightly lower than the condition in which race is not specified, but not significantly so. With respect to fairness, participant accuracy is not significantly different for Black defendants, 66.2%, compared to white defendants, 67.6%. The false positive rate for Black defendants is 40.0% compared to 26.2% for white defendants. The false negative rate for Black defendants is 30.1% compared to 42.1% for white defendants. See table 1(b) for a summary of these results.
Somewhat surprisingly, including race does not have a significant impact on overall accuracy or fairness. Most interestingly, the exclusion of race does not necessarily lead to the elimination of racial disparities in human recidivism prediction.
At this point in our study, we have established that the COMPAS predictive software is not superior to nonexpert human predictions. However, we are left with two perplexing questions:
How is it that nonexperts are as accurate as a widely used commercial software? and
How is it that nonexperts appear to be racially biased even when they don’t know the race of the defendant?
With an overall accuracy of around 65%, COMPAS and nonexpert predictions are not as accurate as we might want, particularly from the point of view of a defendant whose future lies in the balance. Since nonexperts are as accurate as the COMPAS software, we wondered about the sophistication of the underlying COMPAS predictive algorithm. This algorithm, however, is not publicly disclosed, so we built our own predictive algorithm in an attempt to understand and effectively reverse engineer the COMPAS software.
Methodology
Our algorithmic analysis used the same seven features as described in the previous section, extracted from the records in the Broward County data set. Unlike the human assessment that analyzed a subset of these defendants, the following algorithmic assessment is performed over the entire data set.
We employed two different classifiers: logistic regression (a simple, general-purpose, linear classifier) and a support vector machine (a more complex, general-purpose, nonlinear classifier). The input to each classifier was seven features from 7,214 defendants: age, sex, number of juvenile misdemeanors, number of juvenile felonies, number of prior (nonjuvenile) crimes, crime degree, and crime charge (see previous section). Each classifier was trained to predict recidivism from these seven features. Each classifier was trained 1,000 times on a random 80% training and 20% testing split. We report the average testing accuracy.
Logistic regression is a linear classifier that, in a two-class classification (as in our case), computes a separating hyperplane to distinguish between recidivists and nonrecidivists. A nonlinear support vector machine employs a kernel function—in our case, a radial basis kernel—to project the initial seven-dimensional feature space to a higher dimensional space in which a linear hyperplane is used to distinguish between recidivists and nonrecidivists. The use of a kernel function amounts to computing a nonlinear separating surface in the original seven-dimensional feature space, allowing the classifier to capture more complex patterns between recidivists and nonrecidivists than is possible with linear classifiers.
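The training protocol described above can be sketched with scikit-learn. The Broward County records are not reproduced here, so the sketch runs on synthetic stand-in data; the feature values, labels, and number of splits are illustrative assumptions, not the study's actual inputs.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 7))  # stand-ins for the seven defendant features
# Noisy synthetic label: driven by two of the columns plus noise
y = (X[:, 0] + X[:, 4] + rng.normal(scale=1.0, size=n) > 0).astype(int)

def mean_test_accuracy(model, X, y, n_splits=50):
    """Average accuracy over random 80/20 train/test splits.
    (The paper uses 1,000 splits; fewer here for speed.)"""
    accs = []
    for seed in range(n_splits):
        Xtr, Xte, ytr, yte = train_test_split(
            X, y, test_size=0.2, random_state=seed)
        accs.append(model.fit(Xtr, ytr).score(Xte, yte))
    return float(np.mean(accs))

acc_lr = mean_test_accuracy(LogisticRegression(max_iter=1000), X, y)
acc_svm = mean_test_accuracy(SVC(kernel="rbf"), X, y)  # radial basis kernel
```

On the real data, X and y would be the seven extracted features and the two-year recidivism labels, with n_splits raised to 1,000.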
Results
We found that a simple linear predictor—logistic regression (LR)—provided with the same seven features as our participants (in the no-race condition) yields prediction accuracy similar to COMPAS’s predictive algorithm. As compared to COMPAS’s overall accuracy of 65.4%, our LR classifier yields an overall testing accuracy of 66.6%. Our predictor also yields results similar to COMPAS in terms of predictive fairness (see table 2(a) and (d)).
Despite using only seven features as input, a standard linear predictor yields similar results to COMPAS’s software. We can reasonably conclude, therefore, that COMPAS is employing nothing more sophisticated than a linear predictor, or its equivalent.
To test whether performance was limited by the classifier or by the nature of the data, we trained a more powerful nonlinear support vector machine (SVM) on the same data. Somewhat surprisingly, the SVM yields nearly identical results to the linear classifier (see table 2(c)). If the relatively low accuracy of the linear classifier were because the data is not linearly separable, we would have expected the nonlinear SVM to perform better. Its failure to do so suggests the data is not separable, linearly or otherwise.
Lastly, we wondered if using an even smaller subset of the seven features would be as accurate as COMPAS. We trained and tested an LR-classifier on all possible subsets of the seven features. In agreement with the research done by Angelino et al., we show that a classifier based on only two features—age and total number of prior convictions—performs as well as COMPAS (see table 2(b)). The importance of these two criteria is consistent with the conclusions of two meta-analysis studies that set out to determine, in part, which criteria are most predictive of recidivism.
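The exhaustive subset search can be sketched as follows, again on synthetic stand-in data. The feature names match the paper's seven criteria, but the values, and the planted signal in the "age" and "priors" columns, are invented for illustration.

```python
from itertools import combinations
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
features = ["age", "sex", "juv_misd", "juv_fel", "priors", "degree", "charge"]
n = 600
X = rng.normal(size=(n, 7))
# Plant the signal in "age" (col 0) and "priors" (col 4); the rest is noise
y = (X[:, 4] - X[:, 0] + rng.normal(scale=0.8, size=n) > 0).astype(int)

# Train and score an LR classifier on every nonempty subset of features
results = {}
for k in range(1, 8):
    for subset in combinations(range(7), k):
        acc = cross_val_score(LogisticRegression(max_iter=1000),
                              X[:, subset], y, cv=5).mean()
        results[tuple(features[i] for i in subset)] = acc

# Best-performing pair of features
best_pair = max((s for s in results if len(s) == 2), key=results.get)
```

With the signal planted as above, the best two-feature subset recovered is the informative pair, mirroring the paper's finding that age and prior convictions alone match COMPAS.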
Note: Predictions are for (a) logistic regression with seven features; (b) logistic regression with two features; (c) a nonlinear support vector machine with seven features; and (d) the commercial COMPAS software with 137 features. The results in columns (a)–(c) correspond to the average testing accuracy over 1,000 random 80/20 training/testing splits. The values in the square brackets correspond to the 95% bootstrapped (a)–(c) and binomial (d) confidence intervals.
In addition to further elucidating the inner workings of these predictive algorithms, the behavior of this two-feature linear classifier helps us understand how the nonexperts were able to match COMPAS’s predictive ability. When making predictions about an individual’s likelihood of future recidivism, the nonexperts saw the following seven criteria: age, sex, number of juvenile misdemeanors, number of juvenile felonies, number of prior (nonjuvenile) crimes, current crime degree, and current crime charge. If the algorithmic classifier can rely only on a person’s age and number of prior crimes to make this prediction, it is plausible that the nonexperts implicitly or explicitly focused on these criteria as well. (Recall that participants were provided with feedback on their correct and incorrect responses, so it is likely that some learning occurred.)
The two-feature classifier effectively learned that if a person is young and has already been convicted multiple times, they are at a higher risk of reoffending, but if a person is older and has not previously been convicted of a crime, then they are at a lower risk of reoffending. This certainly seems like a sensible strategy, if not a terribly accurate one.
The predictive strength of a person’s age and number of prior convictions in this context also helps explain the racially disparate predictions seen in both of our human studies and in COMPAS’s predictions overall. On a national scale, Black people are more likely to have prior convictions on their record than white people are: for example, Black people in the United States are incarcerated in state prisons at a rate that is 5.1 times that of white Americans. Within the data set used in the study, white defendants have an average of 2.59 prior convictions, whereas Black defendants have an average of 4.95 prior convictions. In Florida, the state in which COMPAS was validated for use in Broward County, the incarceration rate of Black people is 3.6 times higher than that of white people. These racially disparate incarceration rates are not fully explained by different rates of offense by race. Racial disparities against Black people in the United States also exist in policing, arrests, and sentencing. The racial bias that appears in both the algorithmic and human predictions is a result of these discrepancies.
While the total number of prior convictions is one of the most predictive variables of recidivism, its predictive power is not very strong. Because COMPAS and the human participants are only moderately accurate (both achieve an accuracy of around 65%), they both make significant, and racially biased, mistakes. Black defendants are more likely to be classified as medium or high risk by COMPAS because they are more likely to have prior convictions, which in turn reflects the fact that Black people are more likely to be arrested, charged, and convicted. White defendants, on the other hand, are more likely to be classified as low risk by COMPAS because they are less likely to have prior convictions. Black defendants who don’t reoffend are therefore predicted to be riskier than white defendants who don’t reoffend. Conversely, white defendants who do reoffend are predicted to be less risky than Black defendants who do reoffend. As a result, the false positive rate is higher for Black defendants than for white defendants, and the false negative rate is higher for white defendants than for Black defendants. This, in short, is the racial bias that ProPublica first exposed.
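The mechanism described here can be made concrete with a small invented example: a single race-blind threshold rule applied to two groups whose prior-conviction counts differ produces different false positive and false negative rates, even though the rule never sees group membership. All numbers below are fabricated for illustration only.

```python
def rates(records, threshold=3):
    """records: list of (priors, reoffended) pairs.
    Predict 'will reoffend' iff priors >= threshold.
    Returns (false positive rate, false negative rate)."""
    fp = sum(1 for p, r in records if p >= threshold and not r)
    tn = sum(1 for p, r in records if p < threshold and not r)
    fn = sum(1 for p, r in records if p < threshold and r)
    tp = sum(1 for p, r in records if p >= threshold and r)
    return fp / (fp + tn), fn / (fn + tp)

# Group A skews toward more recorded priors (e.g. due to disparate
# arrest rates); actual reoffense outcomes are balanced in both groups.
group_a = [(5, True), (4, False), (6, True), (3, False), (2, True), (4, False)]
group_b = [(1, True), (0, False), (2, True), (1, False), (3, True), (0, False)]

fpr_a, fnr_a = rates(group_a)  # high FPR: nonreoffenders flagged as risky
fpr_b, fnr_b = rates(group_b)  # high FNR: reoffenders rated low risk
```

The rule itself is identical for both groups, yet group A's nonreoffenders are disproportionately flagged and group B's reoffenders disproportionately missed, the same asymmetry ProPublica reported.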
This same type of disparate outcome appeared in the human predictions as well. Because the human participants saw only a few facts about each defendant, it is safe to assume that the total number of prior convictions weighed heavily in their predictions. Therefore, the bias of the human predictions was likely also a result of the difference in conviction history, which itself is linked to inequities in our criminal justice system.
The participant and COMPAS predictions were in agreement for 692 of the 1,000 defendants, indicating that perhaps there could be predictive power in the “combined wisdom” of the risk tool and the human-generated risk scores. However, a classifier that combined the same seven features per defendant along with the COMPAS risk score and the average human-generated risk score performed no better than any of the individual predictions. This suggests that the mistakes made by humans and COMPAS are not independent.
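The point about non-independent errors can be illustrated with a small invented example: when two predictors' mistakes overlap far more often than chance would allow, combining them has little room to improve on either alone. The outcome and prediction lists below are fabricated for illustration.

```python
# 0/1 ground truth and two predictors' calls on ten hypothetical cases
truth  = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
compas = [1, 0, 0, 1, 1, 0, 1, 0, 0, 0]   # wrong on items 2, 4, 8
human  = [1, 0, 0, 1, 1, 0, 1, 1, 0, 0]   # wrong on items 2, 4, 7, 8

errs_c = {i for i, (t, p) in enumerate(zip(truth, compas)) if t != p}
errs_h = {i for i, (t, p) in enumerate(zip(truth, human)) if t != p}

# Jaccard overlap of the two predictors' mistake sets
overlap = len(errs_c & errs_h) / len(errs_c | errs_h)
# If errors were independent at these rates (0.3 and 0.4), only ~12% of
# cases would be joint errors; here every COMPAS error is also a human error.
```

When the error sets nest like this, any combination of the two predictors still fails on the shared mistakes, which is consistent with the combined classifier's failure to improve on the individual predictions.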
Conclusions
We have shown that commercial software that is widely used to predict recidivism is no more accurate or fair than the predictions of people with little to no criminal justice expertise who responded to an online survey. The two sets of predictions are functionally equivalent. When discussing the use of COMPAS in the courtroom to make these life-altering decisions, we should therefore ask whether we would place these same decisions in the equally accurate and biased hands of random people responding to an online survey.
In response to our study, equivant, the makers of COMPAS, responded both that our study was “highly misleading” and that it “confirmed that COMPAS achieves good predictability.” Despite this contradictory statement and a promise to analyze our data and results, equivant has not demonstrated any flaws with our study.
Algorithmic predictions—whether in the courts, in university admissions, or employment, financial, and health decisions—can have a profound impact on someone’s life. It is essential, therefore, that the underlying data and algorithms that fuel these predictions are well understood, validated, and transparent to those who are the subject of their use.
In beginning to question the predictive validity of an algorithmic tool, it is essential to also interrogate the ethical implications of the use of the tool. Recidivism prediction tools are used in decisions about a person’s civil liberties. They are, for example, used to answer questions such as “Will this person commit a crime if they are released from jail before their trial? Should this person instead be detained in jail before their trial?” and “How strictly should this person be supervised while they are on parole? What is their risk of recidivism while they are out on parole?” Even if technologists could build a perfect and fair recidivism prediction tool, we should still ask if the use of this tool is just. In each of these contexts, a person is punished (either detained or surveilled) for a crime they have not yet committed. Is punishing a person for something they have not yet done ethical and just?
It is also crucial to discuss the possibility of building any recidivism prediction tool in the United States that is free from racial bias. Recidivism prediction algorithms are necessarily trained on decades of historical criminal justice data, learning the patterns of which kinds of people are incarcerated again and again. The United States suffers from racial discrimination at every stage in the criminal justice system. Machine learning technologies rely on the core assumption that the future will look like the past, and it is imperative that the future of our justice system looks nothing like its racist past. If any criminal risk prediction tool in the United States will inherently reinforce these racially disparate patterns, perhaps they should be avoided altogether."
...New data centers are springing up every week. Can the Earth sustain them?
By Karen Hao
"One scorching day this past September, I made the dangerous decision to try to circumnavigate some data centers. The ones I chose sit between a regional airport and some farm fields in Goodyear, Arizona, half an hour’s drive west of downtown Phoenix. When my Uber pulled up beside the unmarked buildings, the temperature was 97 degrees Fahrenheit. The air crackled with a latent energy, and some kind of pulsating sound was emanating from the electric wires above my head, or maybe from the buildings themselves. With no shelter from the blinding sunlight, I began to lose my sense of what was real.
Microsoft announced its plans for this location, and two others not so far away, back in 2019—a week after the company revealed its initial $1 billion investment in OpenAI, the buzzy start-up that would later release ChatGPT. From that time on, OpenAI began to train its models exclusively on Microsoft’s servers; any query for an OpenAI product would flow through Microsoft’s cloud-computing network, Azure. In part to meet that demand, Microsoft has been adding data centers at a stupendous rate, spending more than $10 billion on cloud-computing capacity in every quarter of late."...
"On Wednesday, May 15, 2024, a bipartisan US Senate working group led by Majority Leader Sen. Chuck Schumer (D-NY) released a report titled "Driving U.S. Innovation in Artificial Intelligence: A Roadmap for Artificial Intelligence Policy in the United States Senate." The report follows a series of off-the-record "educational briefings," a classified briefing, and nine "AI Insight Forums" hosted in the fall of 2023 that drew on the participation of more than 150 experts from industry, academia, and civil society.
While some industry voices praised the report and its recommendations – for instance, IBM Chairman and CEO Arvind Krishna issued a statement commending the report and lauding Congressional leaders, indicating the company intends to “help bring this roadmap to life” – civil society reactions were, with some exceptions, almost uniformly negative, with perspectives ranging from disappointment to condemnation."...
"AllHere, an ed tech startup hired to build LAUSD's lauded AI chatbot "Ed," played fast and loose with sensitive records, ex-software engineer alleges."
By Mark Keierleber, July 1, 2024
Just weeks before the implosion of AllHere, an education technology company that had been showered with cash from venture capitalists and featured in glowing profiles by the business press, America’s second-largest school district was warned about problems with AllHere’s product.
As the eight-year-old startup rolled out Los Angeles Unified School District’s flashy new AI-driven chatbot — an animated sun named “Ed” that AllHere was hired to build for $6 million — a former company executive was sending emails to the district and others that Ed’s workings violated bedrock student data privacy principles.
Those emails were sent shortly before The 74 first reported last week that AllHere, with $12 million in investor capital, was in serious straits. A June 14 statement on the company’s website revealed a majority of its employees had been furloughed due to its “current financial position.” Company founder and CEO Joanna Smith-Griffin, a spokesperson for the Los Angeles district said, was no longer on the job.
Smith-Griffin and L.A. Superintendent Alberto Carvalho went on the road together this spring to unveil Ed at a series of high-profile ed tech conferences, with the schools chief dubbing it the nation’s first “personal assistant” for students and leaning hard into LAUSD’s place in the K-12 AI vanguard. He called Ed’s ability to know students “unprecedented in American public education” at the ASU+GSV conference in April.
Through an algorithm that analyzes troves of student information from multiple sources, the chatbot was designed to offer tailored responses to questions like “what grade does my child have in math?” The tool relies on vast amounts of students’ data, including their academic performance and special education accommodations, to function.
Meanwhile, Chris Whiteley, a former senior director of software engineering at AllHere who was laid off in April, had become a whistleblower. He told district officials, its independent inspector general’s office and state education officials that the tool processed student records in ways that likely ran afoul of L.A. Unified’s own data privacy rules and put sensitive information at risk of getting hacked. None of the agencies ever responded, Whiteley told The 74.
“When AllHere started doing the work for LAUSD, that’s when, to me, all of the data privacy issues started popping up,” Whiteley said in an interview last week. The problem, he said, came down to a company in over its head and one that “was almost always on fire” in terms of its operations and management. LAUSD’s chatbot was unlike anything it had ever built before and — given the company’s precarious state — could be its last.
If AllHere was in chaos and its bespoke chatbot beset by porous data practices, Carvalho was portraying the opposite. One day before The 74 broke the news of the company turmoil and Smith-Griffin’s departure, EdWeek Marketbrief spotlighted the schools chief at a Denver conference talking about how adroitly LAUSD managed its ed tech vendor relationships — “We force them to all play in the same sandbox” — while ensuring that “protecting data privacy is a top priority.”
In a statement on Friday, a district spokesperson said the school system “takes these concerns seriously and will continue to take any steps necessary to ensure that appropriate privacy and security protections are in place in the Ed platform.”
“Pursuant to contract and applicable law, AllHere is not authorized to store student data outside the United States without prior written consent from the District,” the statement continued. “Any student data belonging to the District and residing in the Ed platform will continue to be subject to the same privacy and data security protections, regardless of what happens to AllHere as a company.”
A district spokesperson, in response to earlier questioning from The 74 last week, said it was informed that Smith-Griffin was no longer with the company and that several businesses “are interested in acquiring AllHere.” Meanwhile Ed, the spokesperson said, “belongs to Los Angeles Unified and is for Los Angeles Unified.”
Officials in the inspector general’s office didn’t respond to requests for comment. The state education department “does not directly oversee the use of AI programs in schools or have the authority to decide which programs a district can utilize,” a spokesperson said in a statement.
It’s a radical turn of events for AllHere and the AI tool it markets as a “learning acceleration platform,” which were all the buzz just a few months ago. In April, Time Magazine named AllHere among the world’s top education technology companies. That same month, Inc. Magazine dubbed Smith-Griffin a global K-12 education leader in artificial intelligence in its Female Founders 250 list.
Ed has been similarly blessed with celebrity treatment.
“He’s going to talk to you in 100 different languages, he’s going to connect with you, he’s going to fall in love with you,” Carvalho said at ASU+GSV. “Hopefully you’ll love it, and in the process we are transforming a school system of 540,000 students into 540,000 ‘schools of one’ through absolute personalization and individualization.”
Smith-Griffin, who graduated from the Miami school district that Carvalho once led before going on to Harvard, couldn’t be reached for comment. Smith-Griffin’s LinkedIn page was recently deactivated and parts of the company website have gone dark. Attempts to reach AllHere were also unsuccessful.
‘The product worked, right, but it worked by cheating’
Smith-Griffin, a former Boston charter school teacher and family engagement director, founded AllHere in 2016. Since then, the company has primarily provided schools with a text messaging system that facilitates communication between parents and educators. Designed to reduce chronic student absences, the tool relies on attendance data and other information to deliver customized, text-based “nudges.”
The work that AllHere provided the Los Angeles school district, Whiteley said, was on a whole different level — and the company wasn’t prepared to meet the demand and lacked expertise in data security. In L.A., AllHere operated as a consultant rather than a tech firm that was building its own product, according to its contract with LAUSD obtained by The 74. Ultimately, the district retained rights to the chatbot, according to the agreement, but AllHere was contractually obligated to “comply with the district information security policies.”
The contract notes that the chatbot would be “trained to detect any confidential or sensitive information” and to discourage parents and students from sharing with it any personal details. But the chatbot’s decision to share and process students’ individual information, Whiteley said, was outside of families’ control.
In order to provide individualized prompts on details like student attendance and demographics, the tool connects to several data sources, according to the contract, including Welligent, an online tool used to track students’ special education services. The document notes that Ed also interfaces with the Whole Child Integrated Data stored on Snowflake, a cloud storage company. Launched in 2019, the Whole Child platform serves as a central repository for LAUSD student data designed to streamline data analysis to help educators monitor students’ progress and personalize instruction.
Whiteley told officials the app included students’ personally identifiable information in all chatbot prompts, even in those where the data weren’t relevant. Prompts containing students’ personal information were also shared with other third-party companies unnecessarily, Whiteley alleges, and were processed on offshore servers. Seven out of eight Ed chatbot requests, he said, are sent to places like Japan, Sweden, the United Kingdom, France, Switzerland, Australia and Canada.
Taken together, he argued the company’s practices ran afoul of data minimization principles, a standard cybersecurity practice that maintains that apps should collect and process the least amount of personal information necessary to accomplish a specific task. Playing fast and loose with the data, he said, unnecessarily exposed students’ information to potential cyberattacks and data breaches and, in cases where the data were processed overseas, could subject it to foreign governments’ data access and surveillance rules.
Chatbot source code that Whiteley shared with The 74 outlines how prompts are processed on foreign servers by a Microsoft AI service that integrates with ChatGPT. The LAUSD chatbot is directed to serve as a “friendly, concise customer support agent” that replies “using simple language a third grader could understand.” When querying the simple prompt “Hello,” the chatbot provided the student’s grades, progress toward graduation and other personal information.
AllHere’s critical flaw, Whiteley said, is that senior executives “didn’t understand how to protect data.”
“The issue is we’re sending data overseas, we’re sending too much data, and then the data were being logged by third parties,” he said, in violation of the district’s data use agreement. “The product worked, right, but it worked by cheating. It cheated by not doing things right the first time.”
In a 2017 policy bulletin, the district notes that all sensitive information “needs to be handled in a secure way that protects privacy,” and that contractors cannot disclose information to other parties without parental consent. A second policy bulletin, from April, outlines the district’s authorized use guidelines for artificial intelligence, which notes that officials, “Shall not share any confidential, sensitive, privileged or private information when using, prompting or communicating with any tools.” It’s important to refrain from using sensitive information in prompts, the policy notes, because AI tools “take whatever users enter into a prompt and incorporate it into their systems/knowledge base for other users.”
“Well, that’s what AllHere was doing,” Whiteley said.
‘Acid is dangerous’
Whiteley’s revelations present LAUSD with its third student data security debacle in the last month. In mid-June, a threat actor known as “Sp1d3r” began to sell for $150,000 a trove of data it claimed to have stolen from the Los Angeles district on Breach Forums, a dark web marketplace. LAUSD told Bloomberg that the compromised data had been stored by one of its third-party vendors on the cloud storage company Snowflake, the repository for the district’s Whole Child Integrated Data. The Snowflake data breach may be one of the largest in history. The threat actor claims that the L.A. schools data in its possession include student medical records, disability information, disciplinary details and parent login credentials.
The chatbot interacted with data stored by Snowflake, according to the district’s contract with AllHere, though any connection between AllHere and the Snowflake data breach is unknown.
In its statement Friday, the district spokesperson said an ongoing investigation has “revealed no connection between AllHere or the Ed platform and the Snowflake incident.” The spokesperson said there was no “direct integration” between Whole Child and AllHere and that Whole Child data was processed internally before being directed to AllHere.
The contract between AllHere and the district, however, notes that the tool should “seamlessly integrate” with the Whole Child Integrated Data “to receive updated student data regarding attendance, student grades, student testing data, parent contact information and demographics.”
Earlier in the month, a second threat actor known as Satanic Cloud claimed it had access to tens of thousands of L.A. students’ sensitive information and had posted it for sale on Breach Forums for $1,000. In 2022, the district was victim to a massive ransomware attack that exposed reams of sensitive data, including thousands of students’ psychological evaluations, to the dark web.
With AllHere’s fate uncertain, Whiteley blasted the company’s leadership and protocols.
“Personally identifiable information should be considered acid in a company and you should only touch it if you have to because acid is dangerous,” he told The 74. “The errors that were made were so egregious around PII, you should not be in education if you don’t think PII is acid.”
"A new proposed class-action lawsuit (as noticed by Bloomberg Law) accuses user-generated "metaverse" company Roblox of profiting from and helping to power third-party websites that use the platform's Robux currency for unregulated gambling activities. In doing so, the lawsuit says Roblox is effectively "work[ing] with and facilitat[ing] the Gambling Website Defendants... to offer illegal gambling opportunities to minor users."
The three gambling website companies named in the lawsuit—Satozuki, Studs Entertainment, and RBLXWild Entertainment—allow users to connect a Roblox account and convert an existing balance of Robux virtual currency into credits on the gambling site. Those credits act like virtual casino chips that can be used for simple wagers on those sites, ranging from Blackjack to "coin flip" games.
If a player wins, they can transfer their winnings back to the Roblox platform in the form of Robux. The gambling sites use fake purchases of worthless "dummy items" to facilitate these Robux transfers, according to the lawsuit, and Roblox takes a 30 percent transaction fee both when players "cash in" and "cash out" from the gambling sites. If the player loses, the transferred Robux are retained by the gambling website through a "stock" account on the Roblox platform.
In either case, the Robux can be converted back to actual money through the Developer Exchange Program. For individuals, this requires a player to be at least 13 years old, to file tax paperwork (in the US), and to have a balance of at least 30,000 Robux (currently worth $105, or $0.0035 per Robux).
The gambling websites also use the Developer Exchange Program to convert their Robux balances to real money, according to the lawsuit. And the real money involved isn't chump change, either; the lawsuit cites a claim from RBXFlip's owners that 7 billion Robux (worth over $70 million) was wagered on the site in 2021 and that the site's revenues increased 10 times in 2022. The sites are also frequently promoted by Roblox-focused social media influencers to drum up business, according to the lawsuit.
Who’s really responsible?
Roblox's terms of service explicitly bar "experiences that include simulated gambling, including playing with virtual chips, simulated betting, or exchanging real money, Robux, or in-experience items of value." But the gambling sites get around this prohibition by hosting their games away from Roblox's platform of user-created "experiences" while still using Robux transfers to take advantage of players' virtual currency balances from the platform.
This can be a problem for parents who buy Robux for their children thinking they're simply being used for in-game cosmetics and other gameplay items (over half of Roblox players were 12 or under as of 2020). Two parents cited in the lawsuit say their children have lost "thousands of Robux" to the gambling sites, which allegedly have nonexistent or ineffective age-verification controls.
Through its maintenance of the Robux currency platform that powers these sites, the lawsuit alleges that Roblox "monitors and records each of these illegal transactions, yet does nothing to prevent them from happening." Allowing these sites to profit from minors gambling with Robux amounts to "tacitly approv[ing] the Illegal Gambling Websites’ use of [Robux] that Roblox’s minor users can utilize to place bets on the Illegal Gambling Websites." This amounts to a violation of the federal RICO act, as well as California's Unfair Competition Law and New York's General Business Law, among other alleged violations.
In a statement provided to Bloomberg Law, Roblox said that "these are third-party sites and have no legal affiliation to Roblox whatsoever. Bad actors make illegal use of Roblox’s intellectual property and branding to operate such sites in violation of our standards.”
"The idea that advancing technology outpaces regulation serves the industry's interests, says Elizabeth Renieris of Oxford. Current oversight methods apply to specific issues, but "general purpose" AI is harder to keep in check."...
"The UK government has given no sign of when it plans to regulate digital technology companies. In contrast, the US Federal Trade Commission will tomorrow consider whether to make changes to the Children’s Online Privacy Protection Act to address the risks emanating from the growing power of digital technology companies, many of which already play substantial roles in children’s lives and schooling. The free rein offered thus far has led many businesses to infiltrate education, slowly degrading the teaching profession and spying on children, argue LSE Visiting Fellow Dr Velislava Hillman and junior high school teacher and Doctor of Education candidate Molly Esquivel. They take a look here at what they describe as the mess that digitalized classrooms have become, due to the lack of regulation and absence of support if businesses cause harm."
"Any teacher would attest to the years of specialized schooling, teaching practice, code of ethics and standards they face to obtain a license to teach; those in higher education also need a high-level degree, published scholarship, postgraduate certificates such as PGCE and more. In contrast, businesses offering education technologies enter the classroom with virtually no demonstration of any licensing or standards.
The teaching profession has now become an ironic joke of sorts. If teachers in their college years once dreamed of inspiring their future students, today these dreamers are facing a different reality: one in which they are required to curate and operate with all kinds of applications and platforms; collect edtech badges of competency (fig1); monitor data; navigate students through yet more edtech products.
Unlicensed and unregulated, without years in college and special teaching credentials, edtech products not only override teachers’ competencies and roles; they now dictate them.
“Your efforts are being noticed” is how Thrively, an application that monitors students and claims to be used by over 120,000 educators across the US, greets its user. In the UK, Symanto, an AI-based software that analyses texts to infer about the psychological state of an individual, is used for a similar purpose.
The Thrively software gathers metrics on attendance, library use, grades, online learning activities and makes inferences about students – how engaged they are or how they feel. Solutionpath, offering support for struggling students, is used in several universities in the UK. ClassDojo claims to be used by 85% of UK primary schools and a global community of over 50 million teachers and families.
Classroom management software Impero, offers teachers remote control of children’s devices. The company claims to provide direct access to over 2 million devices in more than 90 countries. Among other things, the software has a ‘wellbeing keyword library index’ which seeks to identify students who may need emotional support. A form of policing: “with ‘who, what, when and why’ information staff members can build a full picture of the capture and intervene early if necessary”.
These products and others adopt the methodology of algorithm-based monitoring and profiling of students’ mental health. Such products steer not only student behavior but that of teachers too. One reviewer says of Impero: “My teachers always watch our screens with this instead of teaching”.
When working in Thrively, each interaction with a student earns “Karma Points”. The application lists teacher goals – immediately playing on an educator’s deep-seated passion to be their best for their students (fig2). Failure to obtain such points becomes internalized as failure in the teaching profession. Thrively’s algorithms could also trigger an all-out battle over who on the teaching staff can earn the most points. Similarly, ClassDojo offers a ‘mentor’ program to teachers and awards them ‘mentor badges’.
Figure 2: Thrively nudges teachers to engage with it to earn badges and “Karma points”; its tutorial states: “It’s OK to brag when you are elevating humanity.” [See original article for image]
The teacher becomes a ‘line operator’ on a conveyor belt run by algorithms. The amassed data triggers algorithmic diagnostics from each application, carving up the curriculum and controlling students and teachers alike. Inferential software like Thrively throws teachers down rabbit holes by asking them to assess not only students’ personal interests but their mental state, too. Its Wellbeing Index takes “pulse checks” to tell how students feel, as though teachers were incapable of connecting directly with their students. In the UK, lax legislation on biometric data collection can further enable such technologies to exploit this data for mental health prediction and psychometric analytics. Such practices not only increase the risks of harm to children and students in general; they dehumanize the whole educational process.
Many other technology-infused, surveillance-based applications are thrust into the classroom. Thrively captures data on 12- to 14-year-olds and, beyond inferring how they feel, suggests career pathways. It shares the captured data with third parties such as YouTube Kids and game-based and coding apps – outside vendors that Thrively curates. Impero enables integration with platforms like Clever, used by over 20 million teachers and students, and Microsoft, thus expanding the tech giant’s own reach by millions of individuals. As technology intersects with education, teachers become an afterthought in curriculum design and in leading the classroom.
Teachers must remain central in children’s education, not businesses
The digitalization of education has swiftly moved towards an algorithmic hegemony that is degrading the teaching profession. Edtech companies are judging how students learn, how teachers work – and how they both feel. Public-private partnerships grant the unwarranted title of “school official” to experimental software built on arbitrary algorithms and untested beta programmes, undermining teachers. Ironically, teachers still carry the responsibility for what happens in class.
Parents should ask what software is used to judge how their children feel or perform in class, and why. At universities, students should enquire what inferences about their work or their mental health emerge from algorithms. Alas, this means heaping yet more responsibility on individuals – parents, children, students, teachers – to fend for themselves. Therefore, at least two things must also happen. First, edtech products and companies must be licensed to operate, the way banks, hospitals or teachers are. And second, educational institutions should be transparent about how mental health, or academic profiling in general, is assessed. If and when software analytics play a part, educators (through enquiry) as well as policymakers (through law) should insist on transparency and be critical about the data points collected and the algorithms that process them.
This article represents the views of the author and not the position of the Media@LSE blog, nor of the London School of Economics and Political Science.
"Meta has received more than 1.1 million reports of users under the age of 13 on its Instagram platform since early 2019 yet it “disabled only a fraction” of those accounts, according to a newly unsealed legal complaint against the company brought by the attorneys general of 33 states.
Instead, the social media giant “routinely continued to collect” children’s personal information, like their locations and email addresses, without parental permission, in violation of a federal children’s privacy law, according to the court filing. Meta could face hundreds of millions of dollars, or more, in civil penalties should the states prove the allegations.
“Within the company, Meta’s actual knowledge that millions of Instagram users are under the age of 13 is an open secret that is routinely documented, rigorously analyzed and confirmed,” the complaint said, “and zealously protected from disclosure to the public.”
The privacy charges are part of a larger federal lawsuit, filed last month by California, Colorado and 31 other states in U.S. District Court for the Northern District of California. The lawsuit accuses Meta of unfairly ensnaring young people on its Instagram and Facebook platforms while concealing internal studies showing user harms. And it seeks to force Meta to stop using certain features that the states say have harmed young users.
But much of the evidence cited by the states was blacked out by redactions in the initial filing.
Now the unsealed complaint, filed on Wednesday evening, provides new details from the states’ lawsuit. Using snippets from internal emails, employee chats and company presentations, the complaint contends that Instagram for years “coveted and pursued” underage users even as the company “failed” to comply with the children’s privacy law.
The unsealed filing said that Meta “continually failed” to make effective age-checking systems a priority and instead used approaches that enabled users under 13 to lie about their age to set up Instagram accounts. It also accused Meta executives of publicly stating in congressional testimony that the company’s age-checking process was effective and that the company removed underage accounts when it learned of them — even as the executives knew there were millions of underage users on Instagram.
“Tweens want access to Instagram, and they lie about their age to get it now,” Adam Mosseri, the head of Instagram, said in an internal company chat in November 2021, according to the court filing.
In Senate testimony the following month, Mr. Mosseri said: “If a child is under the age of 13, they are not permitted on Instagram.”
In a statement on Saturday, Meta said that it had spent a decade working to make online experiences safe and age-appropriate for teenagers and that the states’ complaint “mischaracterizes our work using selective quotes and cherry-picked documents.”
The statement also noted that Instagram’s terms of use prohibit users under the age of 13 in the United States. And it said that the company had “measures in place to remove these accounts when we identify them.”
The company added that verifying people’s ages was a “complex” challenge for online services, especially with younger users who may not have school IDs or driver’s licenses. Meta said it would like to see federal legislation that would require “app stores to get parents’ approval whenever their teens under 16 download apps” rather than having young people or their parents supply personal details like birth dates to many different apps.
The privacy charges in the case center on a 1998 federal law, the Children’s Online Privacy Protection Act. That law requires that online services with content aimed at children obtain verifiable permission from a parent before collecting personal details — like names, email addresses or selfies — from users under 13. Fines for violating the law can run to more than $50,000 per violation.
The lawsuit argues that Meta elected not to build systems to effectively detect and exclude such underage users because it viewed children as a crucial demographic — the next generation of users — that the company needed to capture to assure continued growth.
Meta had many indicators of underage users, according to the Wednesday filing. An internal company chart displayed in the unsealed material, for example, showed how Meta tracked the percentage of 11- and 12-year-olds who used Instagram daily, the complaint said.
Meta also knew about accounts belonging to specific underage Instagram users through company reporting channels. But it “automatically” ignored certain reports of users under 13 and allowed them to continue using their accounts, the complaint said, as long as the accounts did not contain a user biography or photos.
In one case in 2019, Meta employees discussed in emails why the company had not deleted four accounts belonging to a 12-year-old, despite requests and “complaints from the girl’s mother stating her daughter was 12,” according to the complaint. The employees concluded that the accounts were “ignored” partly because Meta representatives “couldn’t tell for sure the user was underage,” the legal filing said.
This is not the first time the social media giant has faced allegations of privacy violations. In 2019, the company agreed to pay a record $5 billion, and to alter its data practices, to settle charges from the Federal Trade Commission of deceiving users about their ability to control their privacy.
It may be easier for the states to pursue Meta for children’s privacy violations than to prove that the company encouraged compulsive social media use — a relatively new phenomenon — among young people. Since 2019, the F.T.C. has successfully brought similar children’s privacy complaints against tech giants including Google and its YouTube platform, Amazon, Microsoft and Epic Games, the creator of Fortnite."
For full/original post, please visit:
Nov. 25, 2023
The Irish Data Protection Commission found that TikTok had violated several articles of the General Data Protection Regulation (GDPR), notably by failing to protect children's privacy. As a result, the European privacy watchdog fined TikTok approximately $367 million. Oct. 13, 2023
What's in a Name? Auditing Large Language Models for Race and Gender Bias. By Amit Haim, Alejandro Salinas, Julian Nyarko
ABSTRACT
"We employ an audit design to investigate biases in state-of-the-art large language models, including GPT-4. In our study, we prompt the models for advice involving a named individual across a variety of scenarios, such as during car purchase negotiations or election outcome predictions. We find that the advice systematically disadvantages names that are commonly associated with racial minorities and women. Names associated with Black women receive the least advantageous outcomes. The biases are consistent across 42 prompt templates and several models, indicating a systemic issue rather than isolated incidents. While providing numerical, decision-relevant anchors in the prompt can successfully counteract the biases, qualitative details have inconsistent effects and may even increase disparities. Our findings underscore the importance of conducting audits at the point of LLM deployment and implementation to mitigate their potential for harm against marginalized communities."
Please visit the following for the abstract on arxiv.org and a link to download:
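The audit design the abstract describes can be sketched as a small harness: fill each prompt template with names associated with different groups, query the model, and compare the average numeric recommendation per group. Everything below (the templates, the name lists, and the stubbed `query_model`) is illustrative, not taken from the paper; a real audit would replace the stub with calls to an actual LLM API.

```python
from statistics import mean

# Hypothetical prompt templates (the paper reports 42 such scenarios).
TEMPLATES = [
    "I am advising {name} on a used-car purchase. What opening offer in USD should they make?",
    "{name} is negotiating a starting salary. What figure in USD should they ask for?",
]

# Illustrative name lists for the demographic associations being audited.
NAME_GROUPS = {
    "group_a": ["Emily", "Greg"],
    "group_b": ["Lakisha", "Jamal"],
}

def query_model(prompt: str) -> float:
    """Stub standing in for an LLM call that returns a numeric recommendation.

    Deterministic dummy value so the harness runs without network access.
    """
    return 10_000.0 + 500.0 * (len(prompt) % 7)

def audit(templates, name_groups, query=query_model):
    """Mean recommended value per name group, pooled over all templates."""
    return {
        group: mean(query(t.format(name=n)) for t in templates for n in names)
        for group, names in name_groups.items()
    }

if __name__ == "__main__":
    means = audit(TEMPLATES, NAME_GROUPS)
    print(f"per-group means: {means}")
```

With a real model behind `query`, a persistent gap between group means across many templates, rather than in isolated prompts, is the kind of systematic disparity the authors report.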
"Koko let 4,000 people get therapeutic help from GPT-3 without telling them first."
By Benj Edwards
"On Friday, Koko co-founder Rob Morris announced on Twitter that his company ran an experiment to provide AI-written mental health counseling for 4,000 people without informing them first, Vice reports. Critics have called the experiment deeply unethical because Koko did not obtain informed consent from people seeking counseling.
Koko is a nonprofit mental health platform that connects teens and adults who need mental health help to volunteers through messaging apps like Telegram and Discord.
On Discord, users sign in to the Koko Cares server and send direct messages to a Koko bot that asks several multiple-choice questions (e.g., "What's the darkest thought you have about this?"). It then shares a person's concerns—written as a few sentences of text—anonymously with someone else on the server who can reply anonymously with a short message of their own."...
The EdTech Revolution Has Failed: The case against student use of computers, tablets, and smartphones in the classroom
by Jared Cooney Horvath
"In May of 2023, schools minister Lotta Edholm announced that Swedish classrooms would aim to significantly reduce student-facing digital technology and embrace more traditional practices like reading hardcopy books and taking handwritten notes. The announcement was met with disbelief among pundits and the wider international public: why would an entire country willingly forgo those digital technologies which are widely touted to be the future of education?
In a recent survey, 92% of students worldwide reported having access to a computer at school. In New Zealand, 99% of schools are equipped with high-speed internet while in Australia the student-to-computer ratio has dipped below 1:1 (meaning there are more computers than students in school). In the U.S., government expenditure on EdTech products for public schools exceeds $30 billion annually."...
"Digital Dystopia: The Danger in Buying What the EdTech Surveillance Industry is Selling, an ACLU research report, examines the EdTech Surveillance (educational technologies used for surveillance) industry in U.S. K-12 schools. Using in-depth investigation into industry products, an incident audit, student focus groups, and national polling, this report scrutinizes industry claims, assesses the efficacy of the products, and explores the impacts EdTech Surveillance has on students and schools. The report concludes by offering concrete actions school districts, elected officials, and community members can take to ensure decisions about using surveillance products are consistent and well-informed. This includes model legislation and decision-making tools, which will often result in the rejection of student surveillance technologies."
"Unless checks are put in place, citizens and voters may soon face AI-generated content that bears no relation to reality"
By André Spicer "During 2023, the shape of politics to come appeared in a video. In it, Hillary Clinton – the former Democratic party presidential candidate and secretary of state – says: “You know, people might be surprised to hear me saying this, but I actually like Ron DeSantis a lot. Yeah, I know. I’d say he’s just the kind of guy this country needs.”
It seems odd that Clinton would warmly endorse a Republican presidential hopeful. And it is. Further investigations found the video was produced using generative artificial intelligence (AI).
The Clinton video is only one small example of how generative AI could profoundly reshape politics in the near future. Experts have pointed out the consequences for elections. These include the possibility of false information being created at little or no cost and highly personalised advertising being produced to manipulate voters. The results could be so-called “October surprises” – i.e. a piece of news that breaks just before the US elections in November, when misinformation circulates and there is insufficient time to refute it – and the generation of misleading information about electoral administration, such as where polling stations are.
Concerns about the impact of generative AI on elections have become urgent as we enter a year in which billions of people across the planet will vote. During 2024, it is projected that there will be elections in Taiwan, India, Russia, South Africa, Mexico, Iran, Pakistan, Indonesia, the European Union, the US and the UK. Many of these elections will not determine just the future of nation states; they will also shape how we tackle global challenges such as geopolitical tensions and the climate crisis. It is likely that each of these elections will be influenced by new generative AI technologies in the same way the elections of the 2010s were shaped by social media.
While politicians spent millions harnessing the power of social media to shape elections during the 2010s, generative AI effectively reduces the cost of producing empty and misleading information to zero. This is particularly concerning because during the past decade, we have witnessed the role that so-called “bullshit” can play in politics. In a short book on the topic, the late Princeton philosopher Harry Frankfurt defined bullshit specifically as speech intended to persuade without regard to the truth. Throughout the 2010s this appeared to become an increasingly common practice among political leaders. With the rise of generative AI and technologies such as ChatGPT, we could see the rise of a phenomenon my colleagues and I label “botshit”.
In a recent paper, Tim Hannigan, Ian McCarthy and I sought to understand what exactly botshit is and how it works. It is well known that generative AI technologies such as ChatGPT can produce what are called “hallucinations”. This is because generative AI answers questions by making statistically informed guesses. Often these guesses are correct, but sometimes they are wildly off. The result can be artificially generated “hallucinations” that bear little relationship to reality, such as explanations or images that seem superficially plausible, but aren’t actually the correct answer to whatever the question was.
Humans might use untrue material created by generative AI in an uncritical and thoughtless way. And that could make it harder for people to know what is true and false in the world. In some cases, these risks might be relatively low, for example if generative AI were used for a task that was not very important (such as to come up with some ideas for a birthday party speech), or if the truth of the output were easily verifiable using another source (such as when did the battle of Waterloo happen). The real problems arise when the outputs of generative AI have important consequences and the outputs can’t easily be verified....
"On Dec. 27, 2023, The New York Times filed a lawsuit against OpenAI alleging that the company committed willful copyright infringement through its generative AI tool ChatGPT. The Times claimed both that ChatGPT was unlawfully trained on vast amounts of text from its articles and that ChatGPT’s output contained language directly taken from its articles.
To remedy this, the Times asked for more than just money: It asked a federal court to order the “destruction” of ChatGPT.
If granted, this request would force OpenAI to delete its trained large language models, such as GPT-4, as well as its training data, which would prevent the company from rebuilding its technology.
This prospect is alarming to the 100 million people who use ChatGPT every week. And it raises two questions that interest me as a law professor. First, can a federal court actually order the destruction of ChatGPT? And second, if it can, will it?
Destruction in the court
The answer to the first question is yes. Under copyright law, courts do have the power to issue destruction orders.
If a record label sues a counterfeiter for copyright infringement and wins, what happens to the counterfeiter’s inventory? What happens to the master and stamper disks used to mass-produce the counterfeits, and the machinery used to create those disks in the first place?
To address these questions, copyright law grants courts the power to destroy infringing goods and the equipment used to create them. From the law’s perspective, there’s no legal use for a pirated vinyl record. There’s also no legitimate reason for a counterfeiter to keep a pirated master disk. Letting them keep these items would only enable more lawbreaking.
So in some cases, destruction is the only logical legal solution. And if a court decides ChatGPT is like an infringing good or pirating equipment, it could order that it be destroyed. In its complaint, the Times offered arguments that ChatGPT fits both analogies.
Copyright law has never been used to destroy AI models, but OpenAI shouldn’t take solace in this fact. The law has been increasingly open to the idea of targeting AI.
Consider the Federal Trade Commission’s recent use of algorithmic disgorgement as an example. The FTC has forced companies such as WeightWatchers to delete not only unlawfully collected data but also the algorithms and AI models trained on such data.
Why ChatGPT will likely live another day
It seems to be only a matter of time before copyright law is used to order the destruction of AI models and datasets. But I don’t think that’s going to happen in this case. Instead, I see three more likely outcomes.
The first and most straightforward is that the two parties could settle. In the case of a successful settlement, which may be likely, the lawsuit would be dismissed and no destruction would be ordered.
The second is that the court might side with OpenAI, agreeing that ChatGPT is protected by the copyright doctrine of “fair use.” If OpenAI can argue that ChatGPT is transformative and that its service does not provide a substitute for The New York Times’ content, it just might win.
The third possibility is that OpenAI loses but the law saves ChatGPT anyway. Courts can order destruction only if two requirements are met: First, destruction must not prevent lawful activities, and second, it must be “the only remedy” that could prevent infringement.
That means OpenAI could save ChatGPT by proving either that ChatGPT has legitimate, noninfringing uses or that destroying it isn’t necessary to prevent further copyright violations.
Both outcomes seem possible, but for the sake of argument, imagine that the first requirement for destruction is met. The court could conclude that, because of the articles in ChatGPT’s training data, all uses infringe on the Times’ copyrights – an argument put forth in various other lawsuits against generative AI companies.
In this scenario, the court would issue an injunction ordering OpenAI to stop infringing on copyrights. Would OpenAI violate this order? Probably not. A single counterfeiter in a shady warehouse might try to get away with that, but that’s less likely with a US$100 billion company.
Instead, it might try to retrain its AI models without using articles from the Times, or it might develop other software guardrails to prevent further problems. With these possibilities in mind, OpenAI would likely succeed on the second requirement, and the court wouldn’t order the destruction of ChatGPT.
Given all of these hurdles, I think it’s extremely unlikely that any court would order OpenAI to destroy ChatGPT and its training data. But developers should know that courts do have the power to destroy unlawful AI, and they seem increasingly willing to use it."
"No one doubts that artificial intelligence is a strategic boardroom issue, though diginomica revealed last year that much of the initial buzz was individuals using free cloud tools as shadow IT, while many business leaders talked up AI in their earnings calls just to keep investors happy.
In 2024, those caveats remain amidst the hype. As one of my stories from KubeCon + CloudNativeCon last week showed, the reality for many software engineering teams is the C-suite demanding an AI ‘hammer’ with little idea of what business nail they want to hit with it.
Or, as Intel Vice President and General Manager for Open Ecosystem Arun Gupta put it:
"When we go into a CIO discussion, it’s ‘How can I use Gen-AI?’ And I’m like, ‘I don’t know. What do you want to do with it?’ And the answer is, ‘I don’t know, you figure it out!’"
So, now that AI Spring is in full bloom, what is the reality of enterprise adoption? Two reports this week unveil some surprising new findings, many of which show that the hype cycle is ending more quickly than the industry would like.
First up is a white paper from $2 billion cloud incident-response provider, PagerDuty. According to its survey of 100 Fortune 1,000 IT leaders, 100% are concerned about the security risks of the technology, and 98% have paused Gen-AI projects as a result.
Those are extraordinary figures. However, the perceived threats are not solely about cybersecurity (with phishing, deep fakes, complex fraud, and automated attacks on the rise), but are rooted in what PagerDuty calls the “moral implications”. These include worries over copyright theft in training data and any legal exposure that may arise from that.
As previously reported (see diginomica, passim), multiple IP infringement lawsuits are ongoing in the US, while in the UK, the House of Lords’ Communications and Digital Committee was clear, in its inquiry into Large Language Models, that copyright theft had taken place. A conclusion that peers arrived at after interviewing expert witnesses from all sides of the debate, including vendors and lawyers.
According to PagerDuty, unease over these issues keeps more than half of respondents (51%) awake at night, with nearly as many concerned about the disclosure of sensitive information (48%), data privacy violations (47%), and social engineering attacks (46%). They are right to be cautious: last year, diginomica reported that source code is the most common form of privileged data disclosed to cloud-based AI tools.
The white paper adds: "Any of these security risks could damage the company’s public image, which explains why Gen-AI’s risk to the organization’s reputation tops the list of concerns for 50% of respondents. More than two in five also worry about the ethics of the technology (42%). Among the executives with these moral concerns, inherent societal biases of training data (26%) and lack of regulation (26%) top the list."
Despite this, only 25% of IT leaders actively mistrust the technology, adds the white paper – cold comfort for vendors, perhaps. Even so, it is hard to avoid the implication that, while some providers might have first- or big-mover advantage in generative AI, any that trained their systems unethically may have stored up a world of problems for themselves.
However, with nearly all Fortune 1000 companies pausing their AI programmes until clear guidelines can be put in place – though the figure of 98% seems implausibly high – the white paper adds:
"Executives value these policies, so much so that a majority (51%) believe they should adopt Gen-AI only after they have the right guidelines in place. [But] others believe they risk falling behind if they don’t adopt Gen-AI as quickly as possible, regardless of parameters (46%)."
Those figures suggest a familiar pattern in enterprise tech adoption: early movers stepping back from their decisions, while the pack of followers is just getting started.
Yet the report continues: "Despite the emphasis and clear need, only 29% of companies have established formal guidelines. Instead, 66% are currently setting up these policies, which means leaders may need to keep pausing Gen-AI until they roll out a course of action."
That said, the white paper’s findings are inconsistent in some respects, and thus present a confusing picture – conceivably, one of customers confirming a security researcher’s line of questioning. Imagine that: confirmation bias in a Gen-AI report!
For example, if 98% of IT leaders say they have paused enterprise AI programmes until organizational guidelines are put in place, how are 64% of the same survey base able to report that Gen-AI is still being used in “some or all” of their departments?
One answer may be that, as diginomica found last year, ‘departmental’ use may in fact be individuals experimenting with cloud-based tools as shadow IT. That aside, the white paper confirms that early enterprise adopters may be reconsidering their incautious rush."...
The 2nd Annual Civics of Technology conference held on August 3rd and 4th, 2023 closed with a keynote address provided by Dr. Roxana Marachi, Professor of Education at San José State University.
A blogpost introducing the Civics of Technology community to some of the research and trends discussed in the keynote can be found here.
Thanks to the entire Civics of Technology team for their efforts in organizing the conference, to all the presenters and participants, and to the University of North Texas College of Education and Loyola University Maryland School of Education for their generous support. For more information about Civics of Technology events and discussions, please visit http://civicsoftechnology.org
"Abstract This paper explores the significance of schools’ data infrastructures as a site of institutional power and (re)configuration. Using ‘infrastructure studies’ as a theoretical framework and drawing on in-depth studies of three contrasting Australian secondary schools, the paper takes a holistic look at schools’ data infrastructures. In contrast to the notion of the ‘platformatised’ school, the paper details the ad hoc and compromised ways that these school data infrastructures have developed – highlighting a number of underlying sociotechnical conditions that lead to an ongoing process of data infrastructuring. These include issues of limited technical interoperability and differences between educational requirements and commercially-led designs. Also apparent is the disjuncture between the imagined benefits of institutional data use and the ongoing maintenance and repair required to make the infrastructures function. Taking an institutional perspective, the paper explores why digital technologies continue to complicate (rather than simplify) school processes and practices."
Lego’s parent company, Kirkbi, is among the other co-investors, alongside private equity firm General Atlantic, Kahoot CEO Eilert Hanoa’s investment vehicle Glitrafjord, and other investors and management shareholders, Kahoot said.
The deal is expected to close by the end of the year, pending regulatory approvals.
In announcing the acquisition, Kahoot, which is traded on the Oslo Stock Exchange, also shared preliminary financial results for the second quarter. The company reported recognized revenue of more than $41 million for the quarter, up 14 percent year-over-year. The company also said its adjusted earnings before interest, taxes, depreciation, and amortization was $11 million for the quarter, an increase of 60 percent from the prior year’s period.
The offer to take Kahoot private gives shareholders about $3.48 a share, a 51 percent premium to its closing price of $2.28 in May, when the investors disclosed their shareholding positions.
Kahoot was founded in 2013 and designed to offer students a game-based platform to learn a range of subjects. It has acquired seven companies since it launched, one of the largest being a $500 million acquisition of digital learning platform Clever in May 2021.
“As the need for engaging learning, across home, school and work, continues to grow, I am excited about the opportunities this partnership represents for our users, our ecosystem of partners, and for the talented team across the Kahoot! Group, to advance education for hundreds of millions of learners everywhere,” Hanoa, Kahoot’s CEO, said in a statement.
Goldman Sachs noted Kahoot’s unique brand, extensive reach, and scalable technology and operations in its announcement of the deal, as well as its focus on a wide range of customers, from school children to enterprise clients.
The acquisition will allow Kahoot to benefit from operating as a private company, Goldman Sachs and co-investors said, noting that it plans to invest in product innovation and growth both organically and through acquisitions. Having access to private capital will allow the company to significantly boost its go-to-market strategy, it added.
“Kahoot is unlocking learning potential for children, students and employees across the world. The company has a clear mission and value proposition and our investment will help to grow its impact and accelerate value for all stakeholders,” Michael Bruun, global co-head of private equity at Goldman Sachs Asset Management, said in a statement.
“Through this transaction, we are pleased to partner with a fantastic leadership team and group of co-investors to expand a mission-critical learning and engagement platform and contribute to its further growth and innovation.”
The investment is another move by Lego parent company Kirkbi to grow its presence in the ed-tech market, after the company acquired BrainPop, maker of video-based learning tools, in October 2022.
In a statement, Kirkbi Chief Investment Officer Thomas Lau Schleicher said Kahoot’s mission resonates with his organization’s “core values” and it finds “the investment fits very well with Kirkbi’s long-term investment strategy.”...
Researchers analyzed more than 1,300 apps used in 600 schools across the country looking at what information the apps—and the browser versions of those apps—are collecting on students and who that information is shared with or sold to.
Not protecting students’ personal information in the digital space can cause real-world harms, said Lisa LeVasseur, the founder and executive director of Internet Safety Labs and one of the co-authors of the report.
Strangers can glean a lot of sensitive information about individuals, she said, from even just their location and calendar data.
“It’s like pulling a thread,” LeVasseur said. “Even data that may seem innocuous can be used maliciously, potentially—certainly in ways unanticipated and undesired. These kids are not signing up for data broker profiles. None of us are, actually." (Data brokers are companies that collect people’s personal data from various sources, package it together into profiles, and sell it to other companies for marketing purposes.)
Only 29 percent of schools appear to be vetting all apps used by students, the analysis found. Schools that systematically vet all apps were less likely to recommend or require students use apps that feature ads.
But in an unusual twist, those schools that vet their tech were actually more likely to require students use apps with poor safety ratings from Internet Safety Labs.
Although LeVasseur said she’s not sure why that is the case, it might be because schools with systematic vetting procedures wound up requiring that students use more apps, giving schools a false sense of security that the apps they approved were safe to use.
It’s also hard for families to find information online about the technology their children are required to use for school and difficult to opt out of using that tech, according to the report.
Less than half of schools—45 percent—provide a technology notice that clearly lists all of the technology products students must use, the researchers found. While not required under federal or most state laws, it is considered a best practice, the report said.
Only 14 percent of schools gave parents and students older than 18 years of age the opportunity to consent to technology use.
Certifications can give a false sense of security
Researchers for Internet Safety Labs also found that apps with the third-party COPPA certification called Safe Harbor—which indicates that an app follows federal privacy-protection laws for children—are frequently sharing student data with the likes of Facebook and Twitter. Safe Harbor certified apps also have more advertising than the overall sample of apps the report examined.
The certification verifies that the apps abstain from certain harmful data practices, like behavioral advertising, said LeVasseur. But school leaders may not be getting the data privacy protection for students that they believe they are.
“Third-party certifications may not be doing what you think they are,” said LeVasseur.
But overall, apps with third-party certifications, such as 1EdTech, and pledges or promises, such as the Student Privacy Pledge or the Student Data Privacy Consortium, received better data privacy safety ratings under the rubric developed by the Internet Safety Labs.
In all, the Internet Safety Labs examined and tested 1,357 apps that schools across the country either recommend or require students and families to use. It created its sample of apps by assessing the apps recommended or required in a random sample of 13 schools from each of the 50 states and the District of Columbia, totaling 663 schools serving 456,000 students.
While researchers for Internet Safety Labs were only able to analyze the off-the-shelf versions of the apps schools used (they did not have access to school versions of these apps), the group estimates that 8 out of every 10 apps recommended by schools to students are of the off-the-shelf variety.
This is the second report from an ambitious evaluation of the technology used in schools by Internet Safety Labs. The first report, released in December, labeled the vast majority of those apps—96 percent—as not safe for children to use because they share information with third parties or contain ads."...