Educational Psychology & Emerging Technologies: Critical Perspectives and Updates
This curated collection includes updates, resources, and research with critical perspectives related to the intersections of educational psychology and emerging technologies in education. The page also serves as a research tool to organize online content (the funnel-shaped icon allows keyword search). For more on the intersections of the privatization and technologization of education, with critiques of social impact finance and related technologies, please visit http://bit.ly/sibgamble and http://bit.ly/chart_look. For posts regarding screen time risks to health and development, see http://bit.ly/screen_time, and for updates related to AI and data concerns, please visit http://bit.ly/DataJusticeLinks. [Note: Views presented on this page are re-shared from external websites. The content does not necessarily represent the views or official position of the curator or of the curator's employer.]
Scooped by Roxana Marachi, PhD

Breaking Free from Blockchaining Babies & "Cradle-to-Career" Data Traps // #CivicsOfTech23 Closing Keynote

The 2nd Annual Civics of Technology conference, held August 3–4, 2023, closed with a keynote address by Dr. Roxana Marachi, Professor of Education at San José State University.

The video for this talk is available here; accompanying slides are available at http://bit.ly/MarachiKeynote23, and links to additional resources can be found at http://bit.ly/DataJusticeLinks.

 

A blogpost introducing the Civics of Technology community to some of the research and trends discussed in the keynote can be found here.


Thanks to the entire Civics of Technology team for their efforts in organizing the conference, to all the presenters and participants, and to the University of North Texas College of Education and Loyola University Maryland School of Education for their generous support. For more information about Civics of Technology events and discussions, please visit http://civicsoftechnology.org

 

https://www.youtube.com/watch?v=OiuKT-LwH4Q

Scooped by Roxana Marachi, PhD

Resources to Learn about AI, BigData, Blockchain, Algorithmic Harms, and Data Justice 


Shortlink to share this page: http://bit.ly/DataJusticeLinks 

Scooped by Roxana Marachi, PhD

Tokenizing Toddlers: Cradle-to-Career Behavioral Tracking on Blockchain, Web3, and the "Internet of Education" [Slidedeck] // Marachi, 2022


http://bit.ly/Tokenizing_Toddlers 

Rescooped by Roxana Marachi, PhD from Social Impact Bonds, "Pay For Success," Results-Based Contracting, and Blockchain Digital Identity Systems

Ableism And Disability Discrimination In New Surveillance Technologies: How new surveillance technologies in education, policing, health care, and the workplace disproportionately harm disabled people

Full report – PDF 

Plain language version – PDF

By Lydia X. Z. Brown, Ridhi Shetty, Matt Scherer, and Andrew Crawford

"Algorithmic technologies are everywhere. At this very moment, you can be sure students around the world are complaining about homework, sharing gossip, and talking about politics — all while computer programs observe every web search they make and every social media post they create, sending information about their activities to school officials who might punish them for what they look at. Other things happening right now likely include:

  • Delivery workers are trawling up and down streets near you while computer programs monitor their location and speed to optimize schedules, routes, and evaluate their performance;
  • People working from home are looking at their computers while their computers are staring back at them, timing their bathroom breaks, recording their computer screens, and potentially listening to them through their microphones;
  • Your neighbors – in your community or the next one over – are being tracked and designated by algorithms targeting police attention and resources to some neighborhoods but not others;
  • Your own phone may be tracking data about your heart rate, blood oxygen level, steps walked, menstrual cycle, and diet, and that information might be going to for-profit companies or your employer. Your social media content might even be mined and used to diagnose a mental health disability.

This ubiquity of algorithmic technologies has pervaded every aspect of modern life, and the algorithms are improving. But while algorithmic technologies may become better at predicting which restaurants someone might like or which music a person might enjoy listening to, not all of their possible applications are benign, helpful, or just.

Scholars and advocates have demonstrated myriad harms that can arise from the types of encoded prejudices and self-perpetuating cycles of discrimination, bias, and oppression that may result from automated decision-makers. These potentially harmful technologies are routinely deployed by government entities, private enterprises, and individuals to make assessments and recommendations about everything from rental applications to hiring, allocation of medical resources, and whom to target with specific ads. They have been deployed in a variety of settings including education and the workplace, often with the goal of surveilling activities, habits, and efficiency.

Disabled people comprise one such community that experiences discrimination, bias, and oppression resulting from automated decision-making technology. Disabled people continually experience marginalization in society, especially those who belong to other marginalized communities such as disabled women of color. Yet, not enough scholars or researchers have addressed the specific harms and disproportionate negative impacts that surveillance and algorithmic tools can have on disabled people. This is in part because algorithmic technologies that are trained on data that already embeds ableist (or relatedly racist or sexist) outcomes will entrench and replicate the same ableist (and racial or gendered) bias in the computer system. For example, a tenant screening tool that considers rental applicants’ credit scores, past evictions, and criminal history may prevent poor people, survivors of domestic violence, and people of color from getting an apartment because they are disproportionately likely to have lower credit scores, past evictions, and criminal records due to biases in the credit and housing systems and in policing disparities.

This report examines four areas where algorithmic and/or surveillance technologies are used to surveil, control, discipline, and punish people, with particularly harmful impacts on disabled people. They include: (1) education; (2) the criminal legal system; (3) health care; and (4) the workplace. In each section, we describe several examples of technologies that can violate people’s privacy, contribute to or accelerate existing harm and discrimination, and undermine broader public policy objectives (such as public safety or academic integrity).



https://cdt.org/insights/ableism-and-disability-discrimination-in-new-surveillance-technologies-how-new-surveillance-technologies-in-education-policing-health-care-and-the-workplace-disproportionately-harm-disabled-people/ 

 

 

Scooped by Roxana Marachi, PhD

How Surveillance Capitalism Ate Education For Lunch // Marachi, 2022 // Univ. of Pittsburgh Data & Society Speaker Series Presentation [Slidedeck]


Presentation prepared for the University of Pittsburgh's Year of Data & Society Speaker Series. Slidedeck accessible by clicking title above or here: http://bit.ly/Surveillance_Edtech

Scooped by Roxana Marachi, PhD

Algorithmic personalization is disrupting a healthy teaching environment // LSE

By Velislava Hillman and Molly Esquivel

"The UK government has given no sign of when it plans to regulate digital technology companies. In contrast, the US Federal Trade Commission will tomorrow consider whether to make changes on the Children’s Online Privacy Protection Act to address the risks emanating from the growing power of digital technology companies, many of which already play substantial roles in children’s lives and schooling. The free rein offered thus far has so far led many businesses to infiltrate education, slowly degrading the teaching profession and spying on children, argue LSE Visiting fellow Dr Velislava Hillman and junior high school teacher and Doctor of Education candidate Molly Esquivel. They take a look here at what they describe as the mess that digitalized classrooms have become, due to the lack of regulation and absence of support if businesses cause harm."

 

"Any teacher would attest to the years of specialized schooling, teaching practice, code of ethics and standards they face to obtain a license to teach; those in higher education also need a high-level degree, published scholarship, postgraduate certificates such as PGCE and more. In contrast, businesses offering education technologies enter the classroom with virtually no demonstration of any licensing or standards.

The teaching profession has now become an ironic joke of sorts. If teachers in their college years once dreamed of inspiring their future students, today these dreamers are facing a different reality: one in which they are required to curate and operate all kinds of applications and platforms; collect edtech badges of competency (fig. 1); monitor data; and navigate students through yet more edtech products.

Unlicensed and unregulated, without years in college and special teaching credentials, edtech products not only override teachers’ competencies and roles; they now dictate them.

  

Figure 1: Teachers race to collect edtech badges

[See original article for image]

Wellbeing indexes and Karma Points

“Your efforts are being noticed” is how Thrively, an application that monitors students and claims to be used by over 120,000 educators across the US, greets its user. In the UK, Symanto, an AI-based software that analyses texts to infer the psychological state of an individual, is used for a similar purpose.

 

The Thrively software gathers metrics on attendance, library use, grades, online learning activities and makes inferences about students – how engaged they are or how they feel. Solutionpath, offering support for struggling students, is used in several universities in the UK. ClassDojo claims to be used by 85% of UK primary schools and a global community of over 50 million teachers and families.

 

Classroom management software Impero offers teachers remote control of children’s devices. The company claims to provide direct access to over 2 million devices in more than 90 countries. Among other things, the software has a ‘wellbeing keyword library index’ which seeks to identify students who may need emotional support. A form of policing: “with ‘who, what, when and why’ information staff members can build a full picture of the capture and intervene early if necessary”.

These products and others adopt the methodology of  algorithm-based monitoring and profiling of students’ mental health. Such products steer not only student behavior but that of teachers too. One reviewer says of Impero: “My teachers always watch our screens with this instead of teaching”.

 

When working in Thrively, each interaction with a student earns “Karma Points”. The application lists teacher goals – immediately playing on an educator’s deep-seated passion to be their best for their students (fig. 2). Failure to obtain such points becomes internalized as failure in the teaching profession. Thrively’s algorithms could also trigger an all-out battle over who on the teaching staff can earn the most points. Similarly, ClassDojo offers a ‘mentor’ program to teachers and awards them ‘mentor badges’.

Figure 2: Thrively nudges teachers to engage with it to earn badges and “Karma Points”; its tutorial states: “It’s OK to brag when you are elevating humanity.” [See original article for image]

 

The teacher becomes a ‘line operator’ on a conveyor belt run by algorithms. The amassed data triggers algorithmic diagnostics from each application, carving up the curriculum and controlling students and teachers. Inferential software like Thrively throws teachers into rabbit holes by asking them not only to assess students’ personal interests, but their mental state, too. Its Wellbeing Index takes “pulse checks” to tell how students feel, as though teachers are incapable of direct connection with their students. In the UK, lax legislation with regard to biometric data collection can further enable advancing technologies to exploit such data for mental health prediction and psychometric analytics. Such practices not only increase the risks of harm towards children and students in general; they dehumanize the whole educational process.

Many other technology-infused, surveillance-based applications are thrust into the classroom. Thrively captures data on 12- to 14-year-olds and suggests career pathways in addition to inferring how they feel. It shares the captured data with third parties such as YouTube Kids and game-based and coding apps – outside vendors that Thrively curates. Impero enables integration with platforms like Clever, used by over 20 million teachers and students, and Microsoft, thus expanding the tech giant’s own reach by millions of individuals. As technology intersects with education, teachers become merely an afterthought in curriculum design and in leading the classroom.

Teachers must remain central in children’s education, not businesses

The digitalization of education has swiftly moved towards an algorithmic hegemony that is degrading the teaching profession. Edtech companies are judging how students learn, how teachers work – and how they both feel. Public-private partnerships are granting untested, experimental software built on arbitrary algorithms the unwarranted title of “school official”, undermining teachers. Ironically, teachers still carry the responsibility for what happens in class.

Parents should ask what software is used to judge how their children feel or do in class, and why. At universities, students should enquire what inferences algorithms make about their work or their mental health. Alas, this means heaping yet more responsibility on individuals – parents, children, students, teachers – to fend for themselves. Therefore, at least two things must also happen. First, edtech products and companies must be licensed to operate, the way banks, hospitals or teachers are. And second, educational institutions should be transparent about how mental health and academic profiling in general are assessed. If and when software analytics play a part, educators (through enquiry) as well as policymakers (through law) should insist on transparency and be critical about the data points collected and the algorithms that process them.

This article represents the views of the author and not the position of the Media@LSE blog, nor of the London School of Economics and Political Science.

 

For original post, please visit: 

https://blogs.lse.ac.uk/medialse/2022/05/18/algorithmic-personalization-is-disrupting-a-healthy-teaching-environment/ 

Scooped by Roxana Marachi, PhD

Generating Harms: Generative AI's Impact and Paths Forward // EPIC, Electronic Privacy Information Center

https://epic.org/wp-content/uploads/2023/05/EPIC-Generative-AI-White-Paper-May2023.pdf 

Scooped by Roxana Marachi, PhD

Districts, Take Note: Privacy Is Rare in Apps Used in Schools // EdWeek 


By Arianna Prothero


"Schools are falling short on vetting the apps and internet services they require or recommend that students use.


That’s among the findings of a comprehensive analysis of school technology practices by Internet Safety Labs, a nonprofit group that researches tech product safety.

Researchers analyzed more than 1,300 apps used in 600 schools across the country looking at what information the apps—and the browser versions of those apps—are collecting on students and who that information is shared with or sold to.

 

Not protecting students’ personal information in the digital space can cause real-world harms, said Lisa LeVasseur, the founder and executive director of Internet Safety Labs and one of the co-authors of the report.

Strangers can glean a lot of sensitive information about individuals, she said, from even just their location and calendar data.

“It’s like pulling a thread,” LeVasseur said. “Even data that may seem innocuous can be used maliciously, potentially—certainly in ways unanticipated and undesired. These kids are not signing up for data broker profiles. None of us are, actually." (Data brokers are companies that collect people’s personal data from various sources, package it together into profiles, and sell it to other companies for marketing purposes.)

Only 29 percent of schools appear to be vetting all apps used by students, the analysis found. Schools that systematically vet all apps were less likely to recommend or require students use apps that feature ads.

 

But in an unusual twist, those schools that vet their tech were actually more likely to require students to use apps with poor safety ratings from Internet Safety Labs.

Although LeVasseur said she’s not sure why that is the case, it might be because schools with systematic vetting procedures wound up requiring that students use more apps, giving schools a false sense of security that the apps they approved were safe to use.

It’s also hard for families to find information online about the technology their children are required to use for school and difficult to opt out of using that tech, according to the report.

Less than half of schools—45 percent—provide a technology notice that clearly lists all of the technology products students must use, the researchers found. While not required under federal or most state laws, it is considered a best practice, the report said.

Only 14 percent of schools gave parents and students older than 18 years of age the opportunity to consent to technology use.

Certifications can give a false sense of security

Researchers for Internet Safety Labs also found that apps with the third-party COPPA certification called Safe Harbor—which indicates that an app follows federal privacy-protection laws for children—are frequently sharing student data with the likes of Facebook and Twitter. Safe Harbor certified apps also have more advertising than the overall sample of apps the report examined.

The certification verifies that the apps abstain from some problematic data privacy practices, like behavioral advertising, said LeVasseur. But school leaders may not be getting the data privacy protection for students that they believe they are.

 

“Third-party certifications may not be doing what you think they are,” said LeVasseur.

But overall, apps with third-party certifications, such as 1EdTech, and pledges or promises, such as the Student Privacy Pledge or the Student Data Privacy Consortium, received better data privacy safety ratings under the rubric developed by the Internet Safety Labs.

In all, the Internet Safety Labs examined and tested 1,357 apps that schools across the country either recommend or require students and families to use. It created its sample of apps by assessing the apps recommended or required in a random sample of 13 schools from each of the 50 states and the District of Columbia, totaling 663 schools serving 456,000 students.

While researchers for Internet Safety Labs were only able to analyze the off-the-shelf versions of the apps schools used (they did not have access to school versions of these apps), the group estimates that 8 out of every 10 apps recommended by schools to students are of the off-the-shelf variety.

This is the second report from an ambitious evaluation of the technology used in schools by Internet Safety Labs. The first report, released in December, labeled the vast majority of those apps—96 percent—as not safe for children to use because they share information with third parties or contain ads."...

 

For full post, please visit:

https://www.edweek.org/technology/districts-take-note-privacy-is-rare-in-apps-used-in-schools/2023/07 

 

For report published by Internet Safety Labs, visit here:
https://internetsafetylabs.org/wp-content/uploads/2023/06/2022-K12-Edtech-Safety-Benchmark-Findings-Report-2.pdf 

Scooped by Roxana Marachi, PhD

Tracked: How colleges use AI to monitor student protests // Dallas Morning News


By Ari Sen & Derêka K. Bennett

 

The pitch was attractive and simple.

 

For a few thousand dollars a year, Social Sentinel offered schools across the country sophisticated technology to scan social media posts from students at risk of harming themselves or others. Used correctly, the tool could help save lives, the company said.

For some colleges that bought the service, it also served a different purpose — allowing campus police to surveil student protests.

 

During demonstrations over a Confederate statue at UNC-Chapel Hill, a Social Sentinel employee entered keywords into the company’s monitoring tool to find posts related to the protests. At Kennesaw State University in Georgia five years ago, authorities used the service to track protesters at a town hall with a U.S. senator, records show. And at North Carolina A&T, a campus official told a Social Sentinel employee to enter keywords to find posts related to a cheerleader’s allegation that the school mishandled her rape complaint.

 

An investigation by The Dallas Morning News and the Investigative Reporting Program at UC Berkeley’s Graduate School of Journalism reveals for the first time that as more students have embraced social media as a digital town square to express opinions and organize demonstrations, many college police departments have been using taxpayer dollars to pay for Social Sentinel’s services to monitor what they say. At least 37 colleges, including four in North Texas, collectively educating hundreds of thousands of students, have used Social Sentinel since 2015.

The true number of colleges that used the tool could be far higher. In an email to a UT Dallas police lieutenant, the company’s co-founder, Gary Margolis, said it was used by “hundreds of colleges and universities in 36 states.” Margolis declined to comment on this story.

 

The News examined thousands of pages of emails, contracts and marketing material from colleges around the country, and spoke to school officials, campus police, activists and experts. The investigation shows that, despite publicly saying its service was not a surveillance tool, Social Sentinel representatives promoted the tool to universities for “mitigating” and “forestalling” protests. The documents also show the company has been moving in a new and potentially more invasive direction — allowing schools to monitor student emails on university accounts.

 

For colleges struggling to respond to high-profile school shootings and a worsening campus mental health crisis, Social Sentinel’s low-cost tool can seem like a good deal. In addition to the dozens of colleges that use the service, a News investigation last year revealed that at least 52 school districts in Texas have adopted Social Sentinel as an additional security measure since 2015, including Uvalde CISD, where a gunman killed 19 children and two teachers in May. The company’s current CEO, J.P. Guilbault, also said its services are used by one in four K-12 schools in the country.

 

Some experts said AI tools like Social Sentinel are untested, and even if they are adopted for a worthwhile purpose, they have the potential to be abused.

 

For public colleges, the use of the service sets up an additional conflict between protecting students' Constitutional rights of free speech and privacy and schools’ duty to keep them safe on campus, said Andrew Guthrie Ferguson, a law professor at American University’s Washington College of Law.

 

“What the technology allows you to do is identify individuals who are associated together or are associated with a place or location,” said Ferguson. “That is obviously somewhat chilling for First Amendment freedoms of people who believe in a right to protest and dissent.”

 

Navigate360, the private Ohio-based company that acquired Social Sentinel in 2020, called The News’ investigation “inaccurate, speculative or by opinion in many instances and significantly outdated.” The company also changed the name of the service from Social Sentinel to Navigate360 Detect earlier this year."...

 

For full post, please visit: 

https://interactives.dallasnews.com/2022/social-sentinel/ 

 

 

Scooped by Roxana Marachi, PhD

Students’ psychological reports, abuse allegations leaked by ransomware hackers // NBC News


"The leak is a stark reminder of the reams of sensitive information held by schools, and that such leaks often leave parents and administrators with little recourse."

 

By Kevin Collier

Hackers who broke into the Minneapolis Public Schools earlier this year have circulated an enormous cache of files that appear to include highly sensitive documents on schoolchildren and teachers, including allegations of teacher abuse and students’ psychological reports.

The files appeared online in March after the school district announced that it had been the victim of a ransomware cyberattack. NBC News was able to download the cache of documents and reviewed about 500 files. Some were printed on school letterheads. Many were listed in folder sets named after Minneapolis schools. 

 

NBC News was able to view the leaked files after downloading them from links posted to the hacker group’s Telegram account. NBC News has not verified the authenticity of the cache, which totals about 200,000 files, and Minneapolis Public Schools declined to answer specific questions about the documents, instead pointing to its previous public statements.

The files reviewed by NBC News include everything from relatively benign data like contact information to far more sensitive information including descriptions of students’ behavioral problems and teachers’ Social Security numbers. 

In addition to leaking the documents, the hacking group appeared to go a step further, posting about the documents on Twitter and Facebook as well as on a website, which hosted a video that opens with an animated short of a flaming motorcycle, followed by 50 minutes of screengrabs of the stolen files. NBC News is not naming the group.

It’s a stark reminder that schools often hold reams of sensitive information, and that such leaks often leave parents and administrators with little recourse once their information is released.

“The fact of the matter is, school districts really should be treating this more like nuclear waste, where they need to identify it and contain it and make sure that access to it is restricted,” said Doug Levin, the director of the K12 Security Information Exchange, a nonprofit that helps schools protect themselves from hackers.

 

“Organizations that are supposed to be helping to uplift children and prepare them for the future could instead be introducing significant headwinds to their lives just for participating in public school.”


 

In an update published to the Minneapolis Public Schools website on April 11, Interim Superintendent Rochelle Cox said the school district was working with “external specialists and law enforcement to review the data” that was posted online. Cox said the district was reaching out to individuals whose information had been found in the leak, and warned about reports that people had received messages telling them their information had been leaked.

 

“This week, we’re seeing an uptick in reports of messages — sometimes multiple messages — sent to people in our community stating something like ‘your social security number has been posted on the dark web,’” Cox wrote. “First — I want to remind everyone to NOT interact with such messages unless you KNOW the sender.”

Cybersecurity experts who are familiar with the leak have said it is among the worst they can remember.

“It’s awful. As bad as I’ve seen,” Brett Callow, an analyst who tracks ransomware attacks for the cybersecurity company Emsisoft, said about the breach.

Ransomware attacks on schools, which often end with the hackers releasing sensitive information, have become frequent across the U.S. since 2015. 

At least 122 public school districts in the U.S. have been hit with ransomware since 2021, Callow said, with more than half — 76 — resulting in the hackers leaking sensitive school and student data.

In such cases, districts often provide parents and students with identity theft protection services, though it’s impossible for them to keep the files from being shared after they’re posted.

The leak has left some Minneapolis parents wondering what to do next.

“I feel like my hands are tied and I feel like the information that the district is giving us is just very limited,” said Heather Paulson, who teaches high school in the district and is the mother of a younger child who attends school in Minneapolis.

 

Lydia Kauppi, a parent of a student in the district, said it’s unsettling to know that her family’s private information may have been shared by hackers.

“It causes anxiety on multiple, multiple fronts for everybody involved,” she said. “And it’s just kind of one of those weird, vague, unsettling feelings because you just don’t know how long do I have to worry about it?”

 


Minneapolis Public Schools, which oversees around 30,000 students across 68 schools, said on April 11 it was continuing to notify people who had been affected by the breach, and that it was offering free credit monitoring and identity theft protection services to victims.

 

Ransomware hackers have drastically escalated their tactics in recent years, increasing how much they ask for and launching efforts to pressure schools to pay up — including by contacting people whose information has been leaked. The group that hacked the Minneapolis schools publicly demanded $1 million. The district announced in March that it had not paid, and ransomware gangs usually only leak large datasets of victims who refuse to pay.


Since last year, various criminal hacker groups have leaked troves of files on some of the largest school districts in the country, including in Los Angeles and Chicago.

The leaked Minneapolis files appear to include dossiers on hundreds of children with special needs, identifying each by name, birthday and school. Those dossiers often include pages of details about students, including problems at home like divorcing or incarcerated parents, conditions like Attention Deficit Disorder, documented indications where they appear to have been injured, results of intelligence tests and what medications they take.

Other files include databases of instances where teachers have written up students for behavioral issues, sorted by school, student ID number, behavioral issue and the students’ race. 

The leaked files also include hundreds of forms documenting times when faculty learned that a student had been potentially mistreated. Most of those are allegations that a student had suffered neglect or was physically harmed by a teacher or student. Some are extraordinarily sensitive, and allege incidents like a student being sexually abused by a teacher or by another student. Each report names the victim and cites their birthday and address.

 


In one report, a special education student claimed her bus driver groped her and made her touch him. Minnesota police later charged a man whose name matches the driver named in the report and the date of the incident.

Others describe a teacher accused of having had romantic relationships with two students. Another describes a student whom faculty suspected was the victim of female genital mutilation. NBC News was able to verify that faculty listed in those reports worked for Minneapolis schools, but has not verified those reports.

 

Those files have been promoted online in what experts said is an unorthodox and particularly aggressive manner."... 

 

For full post, please visit: 

https://www.nbcnews.com/tech/security/students-psychological-reports-abuse-allegations-leaked-ransomware-hac-rcna79414 

 

Scooped by Roxana Marachi, PhD

Microsoft to pay $20 million over FTC charges surrounding kids' data collection // CBS News


[CBS News]

"Microsoft will pay a fine of $20 million to settle Federal Trade Commission charges that it illegally collected and retained the data of children who signed up to use its Xbox video game console.

The agency charged that Microsoft gathered the data without notifying parents or obtaining their consent, and that it also illegally held onto the data. Those actions violated the Children's Online Privacy Protection Act, which limits data collection on kids under 13, the FTC stated.

In a blog post, Microsoft corporate vice president for Xbox Dave McCarthy outlined additional steps the company is now taking to improve its age verification systems and to ensure that parents are involved in the creation of children's accounts for the service. These mostly concern efforts to improve age verification technology and to educate children and parents about privacy issues.

McCarthy also said the company had identified and fixed a technical glitch that failed to delete child accounts in cases where the account creation process never finished. Microsoft policy was to hold that data no longer than 14 days in order to allow players to pick up account creation where they left off if they were interrupted.

The settlement must be approved by a federal court before it can go into effect, the FTC said.

British regulators in April blocked Microsoft's $69 billion deal to buy video game maker Activision Blizzard over worries that the move would stifle competition in the cloud gaming market. The company is now "in search of solutions," Microsoft President Brad Smith said at a tech conference in London Tuesday."

For original post, please visit:

https://www.cbsnews.com/news/microsoft-settlement-ftc-charges-childrens-data-collection-20-million-dollars/ 

Scooped by Roxana Marachi, PhD

Trove of L.A. Students’ Mental Health Records Posted to Dark Web After Cyber Hack – The 74


By Mark Keierleber

"Update: After this story published, the Los Angeles school district acknowledged in a statement that “approximately 2,000” student psychological evaluations — including those of 60 current students — had been uploaded to the dark web.

Detailed and highly sensitive mental health records of hundreds — and likely thousands — of former Los Angeles students were published online after the city’s school district fell victim to a massive ransomware attack last year, an investigation by The 74 has revealed. 

The student psychological evaluations, published to a “dark web” leak site by the Russian-speaking ransomware gang Vice Society, offer a startling degree of personally identifiable information about students who received special education services, including their detailed medical histories, academic performance and disciplinary records. 


But people are likely unaware their sensitive information is readily available online because the Los Angeles Unified School District hasn’t alerted them, a district spokesperson confirmed, and leaders haven’t acknowledged the trove of records even exists. In contrast, the district publicly acknowledged last month that the sensitive information of district contractors had been leaked. 

Cybersecurity experts said the revelation that student psychological records were exposed en masse and a lack of transparency by the district highlight a gap in existing federal privacy laws. Rules that pertain to sensitive health records maintained by hospitals and health insurers, which are protected by stringent data breach notification policies, differ from those that apply to education records kept by schools — even when the files themselves are virtually identical. Under existing federal privacy rules, school districts are not required to notify the public when students’ personal information, including medical records, is exposed."... 

 

For full article, please visit:

https://www.the74million.org/article/trove-of-l-a-students-mental-health-records-posted-to-dark-web-after-cyber-hack/ 

 

Scooped by Roxana Marachi, PhD

Online Proctoring - Impact on Student Equity // Online Network of Educators (by Francine Van Meter, Cabrillo College)


By Francine Van Meter

"A fundamental aspect of instruction is the assessment of student learning. The rapid response to move classes online in a pandemic has exposed concerns surrounding the practice of online proctoring. There are many online proctoring features offered by companies such as Proctorio, Examity, Honorlock, and Respondus. The methods that do not require a webcam include locking down the students’ browser so they cannot perform functions such as open another application or tab, use the toolbar, copy/paste, or print screen while taking an exam.

 

The intrusive methods include requesting a photo ID, activating facial recognition, and a live proctor monitoring for sounds and motions. Sessions are typically recorded from the exam start to finish and a live proctor can monitor potential testing infractions as they occur. Proctoring services say exam videos and other data are securely stored. Some store videos in a certified data center server, and then archive them after a defined period of time in line with Family Educational Rights and Privacy Act​ (FERPA) guidelines.

 

A 2017 study suggests that instructors familiarize themselves with how the services work so they can anticipate students’ concerns. Instructors should identify students’ technical difficulties and try to address them by spending time familiarizing students with how to get ready for and ultimately take their exams. In this pandemic, we know many students lack access to computers and wifi, and newly issued Chromebooks challenge students to operate another new device and establish wifi access.

 

Online testing may seem to make things easier, but the transition to new technology, or the lack of access when current technology doesn’t include a webcam, may complicate matters and lead to a significant level of discomfort with online proctoring. A survey of 748 students about technology and achievement gaps found about one in five struggled to use the technology at their disposal because of issues such as broken hardware and connectivity problems. Students of color or of lower socioeconomic status encountered these difficulties more often.

 

My colleague, Aloha Sargent, Technology Services Librarian, shared with me an article from Hybrid Pedagogy that asserts “algorithmic test proctoring’s settings have discriminatory consequences across multiple identities and serious privacy implications.” When Texas Tech rolled out online proctoring, they recognized students often take exams in their dorm or bedrooms, and students noted in a campus survey “They thought it was big brother invading their computers.” Some test takers were asked by live proctors to remove pictures from their surroundings and some students of color were told to shine more light on themselves. That’s a disturbing request in my opinion. Many of our community college students occupy multi-family or multi-person residences that include children. These proctoring settings will “disproportionately impact women who typically take on the majority of childcare, breast feeding, lactation, and care-taking roles for their family. Students who are parents may not be able to afford childcare, be able to leave the house, or set aside quiet, uninterrupted blocks of time to take a test.”

At the University of California, Davis, the administration is discouraging faculty members from using online proctoring this semester unless they have previous experience with such services. “It suggests faculty consider alternatives that will lower students’ anxiety levels during an already stressful time, such as requiring them to reflect on what they learned in the course.” The following article highlights a University of Washington story about adopting Proctorio during the rapid COVID-19 transition to online instruction.

 

Read the experience of one University of Washington student in “Paranoia about cheating is making online education terrible for everyone.” The students’ experiences “are another sign that, amid the pandemic, the hurried move to re-create in-person classes online has been far from smooth, especially when it comes to testing.” Live online proctoring is a way to preemptively communicate to students: we don’t trust you. It is a pedagogy of punishment and exclusion.

 

In higher education, traditional exams are often treated as the most appropriate assessment tool. But there are ways to cheat on exams no matter what method is used to deploy them. Even major “NSA-style” proctoring software is not “cheat-proof”; one company’s sales representative was very candid in showing me how it’s done. There are alternatives to typical exam questions—often referred to as authentic assessment.

 

According to Oxford Research Encyclopedia, “authentic assessment is an effective measure of intellectual achievement or ability because it requires students to demonstrate their deep understanding, higher-order thinking, and complex problem solving through the performance of exemplary tasks.” 

 

Given the limited timeframe, there will be limits to what you can use now. That’s OK. Consider using Canvas question pools and randomizing questions, or even different versions of the final. For example, replacing six multiple-choice or true-and-false questions with two short-answer items may better indicate how well a question differentiates between students who know the subject matter and those who do not. Or ask students to record a brief spoken-word explanation for the question using the Canvas media tool. Just keep in mind, there are a dozen or more ways to assess learning without “biometric-lockdown-retinal scan-saliva-sample-genetic-mapping-fingerprint-analysis.”

 

References

1. Dimeo, Jean. “Online Exam Proctoring Catches Cheaters, Raises Concerns.” Inside Higher Ed, 2017.

2. Woldeab, Daniel, et al. “Under the Watchful Eye of Online Proctoring.” Innovative Learning and Teaching: Experiments Across the Disciplines, University of Minnesota Libraries Publishing Services’ open book and open textbook initiative, 2017.

3. Schwartz, Natalie. “Colleges flock to online proctors, but equity concerns remain.” Education Dive, April 2020.

4. Swauger, Shea. “Our Bodies Encoded: Algorithmic Test Proctoring in Higher Education.” Hybrid Pedagogy, April 2020."

 

Francine is the Distance Education Coordinator at Cabrillo College. She also serves as Cabrillo's Title V Grant Activity Coordinator, CVC-OEI Project Lead, Peer Online Course Reviewer, and Flex Calendar Coordinator.


 

For original post, please visit: 
https://onlinenetworkofeducators.org/2020/06/01/online-proctoring-impact-on-student-equity/ 

Scooped by Roxana Marachi, PhD

Roblox facilitates “illegal gambling” for minors, according to new lawsuit // Ars Technica


By Kyle Orland

"A new proposed class-action lawsuit (as noticed by Bloomberg Law) accuses user-generated "metaverse" company Roblox of profiting from and helping to power third-party websites that use the platform's Robux currency for unregulated gambling activities. In doing so, the lawsuit says Roblox is effectively "work[ing] with and facilitat[ing] the Gambling Website Defendants... to offer illegal gambling opportunities to minor users."

 

The three gambling website companies named in the lawsuit—Satozuki, Studs Entertainment, and RBLXWild Entertainment—allow users to connect a Roblox account and convert an existing balance of Robux virtual currency into credits on the gambling site. Those credits act like virtual casino chips that can be used for simple wagers on those sites, ranging from Blackjack to "coin flip" games.

If a player wins, they can transfer their winnings back to the Roblox platform in the form of Robux. The gambling sites use fake purchases of worthless "dummy items" to facilitate these Robux transfers, according to the lawsuit, and Roblox takes a 30 percent transaction fee both when players "cash in" and "cash out" from the gambling sites. If the player loses, the transferred Robux are retained by the gambling website through a "stock" account on the Roblox platform.

 

In either case, the Robux can be converted back to actual money through the Developer Exchange Program. For individuals, this requires a player to be at least 13 years old, to file tax paperwork (in the US), and to have a balance of at least 30,000 Robux (currently worth $105, or $0.0035 per Robux).

The gambling websites also use the Developer Exchange Program to convert their Robux balances to real money, according to the lawsuit. And the real money involved isn't chump change, either; the lawsuit cites a claim from RBXFlip's owners that 7 billion Robux (worth over $70 million) was wagered on the site in 2021 and that the site's revenues increased 10 times in 2022. The sites are also frequently promoted by Roblox-focused social media influencers to drum up business, according to the lawsuit.


Who’s really responsible?

Roblox's terms of service explicitly bar "experiences that include simulated gambling, including playing with virtual chips, simulated betting, or exchanging real money, Robux, or in-experience items of value." But the gambling sites get around this prohibition by hosting their games away from Roblox's platform of user-created "experiences" while still using Robux transfers to take advantage of players' virtual currency balances from the platform.

 

This can be a problem for parents who buy Robux for their children thinking they're simply being used for in-game cosmetics and other gameplay items (over half of Roblox players were 12 or under as of 2020). Two parents cited in the lawsuit say their children have lost "thousands of Robux" to the gambling sites, which allegedly have nonexistent or ineffective age-verification controls.

Through its maintenance of the Robux currency platform that powers these sites, the lawsuit alleges that Roblox "monitors and records each of these illegal transactions, yet does nothing to prevent them from happening." Allowing these sites to profit from minors gambling with Robux amounts to "tacitly approv[ing] the Illegal Gambling Websites’ use of [Robux] that Roblox’s minor users can utilize to place bets on the Illegal Gambling Websites." This amounts to a violation of the federal RICO act, as well as California's Unfair Competition Law and New York's General Business Law, among other alleged violations.

In a statement provided to Bloomberg Law, Roblox said that "these are third-party sites and have no legal affiliation to Roblox whatsoever. Bad actors make illegal use of Roblox’s intellectual property and branding to operate such sites in violation of our standards.”


This isn't the first time a game platform has run into problems with its virtual currency powering gambling. In 2016, Valve faced a lawsuit and government attention from Washington state over third-party sites that use Counter-Strike skins as currency for gambling games. The lawsuit against Steam was eventually dismissed last year."
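As a quick check on the conversion figures quoted above, here is a minimal sketch (Python). The rate, cash-out minimum, and 30 percent fee are the article's figures; the constant and function names are illustrative, not Roblox's actual API.

```python
# Figures quoted in the article; names here are illustrative only.
DEVEX_RATE_USD_PER_ROBUX = 0.0035   # Developer Exchange payout rate
DEVEX_MINIMUM_ROBUX = 30_000        # minimum balance required to cash out
TRANSACTION_FEE = 0.30              # fee taken on "cash in" and "cash out"

def robux_to_usd(robux: int) -> float:
    """Dollar value of a Robux balance at the quoted Developer Exchange rate."""
    return robux * DEVEX_RATE_USD_PER_ROBUX

def after_fee(robux: int) -> float:
    """Robux remaining after the 30 percent transaction fee described in the lawsuit."""
    return robux * (1 - TRANSACTION_FEE)

print(f"${robux_to_usd(DEVEX_MINIMUM_ROBUX):,.2f}")  # $105.00 -- the cited minimum payout
print(f"{after_fee(1_000):,.0f} Robux")              # 700 Robux left from a 1,000-Robux transfer
```

If the fee applies to the full amount in both directions, as the lawsuit describes, a round trip through a gambling site would leave only about 49 percent of the original balance (0.7 × 0.7), before any wins or losses.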

 

For original post, please visit:

https://arstechnica.com/gaming/2023/08/roblox-facilitates-illegal-gambling-for-minors-according-to-new-lawsuit/ 

Scooped by Roxana Marachi, PhD

The Complications of Regulating AI // Marketplace.org Interview with Elizabeth Renieris

Podcast produced by Lydia Morell
"The idea that advancing technology outpaces regulation serves the industry's interests, says Elizabeth Renieris of Oxford. Current oversight methods apply to specific issues, but "general purpose" AI is harder to keep in check."...

 

https://www.marketplace.org/shows/marketplace-tech/the-complications-of-regulating-ai/ 

 

Scooped by Roxana Marachi, PhD

A patchwork of platforms: mapping data infrastructures in schools // Pangrazio, Selwyn, & Cumbo (2022)


"Abstract
This paper explores the significance of schools’ data infrastructures as a site of institutional power and (re)configuration. Using ‘infrastructure studies’ as a theoretical framework and drawing on in-depth studies of three contrasting Australian secondary schools, the paper takes a holistic look at schools’ data infrastructures. In contrast to the notion of the ‘platformatised’ school, the paper details the ad hoc and compromised ways that these school data infrastructures have developed – highlighting a number of underlying sociotechnical conditions that lead to an ongoing process of data infrastructuring. These include issues of limited technical interoperability and differences between educational requirements and commercially-led designs. Also apparent is the disjuncture between the imagined benefits of institutional data use and the ongoing maintenance and repair required to make the infrastructures function. Taking an institutional perspective, the paper explores why digital technologies continue to complicate (rather than simplify) school processes and practices."

 

For journal access, please visit: 

https://www.tandfonline.com/doi/abs/10.1080/17439884.2022.2035395?journalCode=cjem20 

Scooped by Roxana Marachi, PhD

Behind the AI boom, an army of overseas workers in ‘digital sweatshops’ // The Washington Post


By Rebecca Tan and Regine Cabato


CAGAYAN DE ORO, Philippines — In a coastal city in the southern Philippines, thousands of young workers log online every day to support the booming business of artificial intelligence.

In dingy internet cafes, jampacked office spaces or at home, they annotate the masses of data that American companies need to train their artificial intelligence models. The workers differentiate pedestrians from palm trees in videos used to develop the algorithms for automated driving; they label images so AI can generate representations of politicians and celebrities; they edit chunks of text to ensure language models like ChatGPT don’t churn out gibberish.

 

More than 2 million people in the Philippines perform this type of “crowdwork,” according to informal government estimates, as part of AI’s vast underbelly. While AI is often thought of as human-free machine learning, the technology actually relies on the labor-intensive efforts of a workforce spread across much of the Global South and often subject to exploitation.

The mathematical models underpinning AI tools get smarter by analyzing large data sets, which need to be accurate, precise and legible to be useful. Low-quality data yields low-quality AI. So click by click, a largely unregulated army of humans is transforming the raw data into AI feedstock.

In the Philippines, one of the world’s biggest destinations for outsourced digital work, former employees say that at least 10,000 of these workers do this labor on a platform called Remotasks, which is owned by the $7 billion San Francisco start-up Scale AI.

Scale AI has paid workers at extremely low rates, routinely delayed or withheld payments and provided few channels for workers to seek recourse, according to interviews with workers, internal company messages and payment records, and financial statements. Rights groups and labor researchers say Scale AI is among a number of American AI companies that have not abided by basic labor standards for their workers abroad.

 

Of 36 current and former freelance workers interviewed, all but two said they’ve had payments from the platform delayed, reduced or canceled after completing tasks. The workers, known as “taskers,” said they often earn far below the minimum wage — which in the Philippines ranges from $6 to $10 a day depending on region — though at times they do make more than the minimum.

Scale AI, which does work for firms like Meta, Microsoft and generative AI companies like OpenAI, the creator of ChatGPT, says on its website that it is “proud to pay rates at a living wage.” In a statement, Anna Franko, a Scale AI spokesperson, said the pay system on Remotasks “is continually improving” based on worker feedback and that “delays or interruptions to payments are exceedingly rare.”

But on an internal messaging platform for Remotasks, which The Washington Post accessed in July, notices of late or missing payments from supervisors were commonplace. On some projects, there were multiple notices in a single month. Sometimes, supervisors told workers payments were withheld because work was inaccurate or late. Other times, supervisors gave no explanation. Attempts to track down lost payments often went nowhere, workers said — or worse, led to their accounts being deactivated.

 


Charisse, 23, said she spent four hours on a task that was meant to earn her $2, and Remotasks paid her 30 cents.

Jackie, 26, said he worked three days on a project that he thought would earn him $50, and he got $12.

Benz, 36, said he’d racked up more than $150 in payments when he was suddenly booted from the platform. He never got the money, he said.

Paul, 25, said he’s lost count of how much money he’s been owed over three years of working on Remotasks. Like other current Remotasks freelancers, Paul spoke on the condition of being identified only by first name to avoid being expelled from the platform. He started “tasking” full time in 2020 after graduating from university. He was once excited to help build AI, he said, but these days, he mostly feels embarrassed by how little he earns.

“The budget for all this, I know it’s big,” Paul said, staring into his hands at a coffee shop in Cagayan de Oro. “None of that is trickling down to us.”
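The cases above translate into effective hourly rates far below even the low end of the Philippine minimum wage. A quick back-of-the-envelope check, assuming eight-hour workdays for Jackie's three-day project (the article does not give his hours):

```python
# Effective hourly pay implied by the cases above, versus the Philippine
# minimum wage of $6-$10 per day (roughly 8 hours). Jackie's hours are an
# assumption; the article reports only "three days."
cases = {
    "Charisse": {"paid_usd": 0.30, "hours": 4},    # task was meant to pay $2
    "Jackie":   {"paid_usd": 12.00, "hours": 3 * 8},
}

min_hourly = (6 / 8, 10 / 8)  # $0.75 to $1.25 per hour

for name, c in cases.items():
    rate = c["paid_usd"] / c["hours"]
    print(f"{name}: ${rate:.2f}/hour vs. minimum of "
          f"${min_hourly[0]:.2f}-${min_hourly[1]:.2f}/hour")
```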

 
 

Much of the ethical and regulatory debate over AI has focused so far on its propensity for bias and potential to go rogue or be abused, such as for disinformation. But companies producing AI technology are also charting a new frontier in labor exploitation, researchers say.

In enlisting people in the Global South as freelance contractors, micro-tasking platforms like Remotasks sidestep labor regulations — such as a minimum wage and a fair contract — in favor of terms and conditions they set independently, said Cheryll Soriano, a professor at De La Salle University in Manila who studies digital labor in the Philippines. “What it comes down to,” she said, “is a total absence of standards.”

Dominic Ligot, a Filipino AI ethicist, called these new workplaces “digital sweatshops.”

 
Overseas outposts

Founded in 2016 by young college dropouts and backed by some $600 million in venture capital, Scale AI has cast itself as a champion of American efforts in the race for AI supremacy. In addition to working with large technology companies, Scale AI has been awarded hundreds of millions of dollars to label data for the U.S. Department of Defense. To work on such sensitive, specialized data sets, the company has begun seeking out more contractors in the United States, though the vast majority of the workforce is still located in Asia, Africa and Latin America.... "

 

For full article, please visit: 

https://www.washingtonpost.com/world/2023/08/28/scale-ai-remotasks-philippines-artificial-intelligence/ 

No comment yet.
Scooped by Roxana Marachi, PhD
Scoop.it!

Silicon Valley, Philanthrocapitalism, and Policy Shifts from Teachers to Tech // Marachi & Carpenter (2020) 


To order book, please visit: https://www.press.umich.edu/11621094/strike_for_the_common_good 

 

Marachi, R., & Carpenter, R. (2020). Silicon Valley, philanthrocapitalism, and policy shifts from teachers to tech. In Givan, R. K. and Lang, A. S. (Eds.). Strike for the Common Good: Fighting for the Future of Public Education (pp.217-233). Ann Arbor: University of Michigan Press.

 

To download pdf of final chapter manuscript, click here. 

No comment yet.
Scooped by Roxana Marachi, PhD
Scoop.it!

The Big Business of Tracking and Profiling Students [Interview] // The Markup

View original post published January 15, 2022
_______________
"Hello, friends,

The United States is one of the few countries that does not have a federal baseline privacy law that lays out minimum standards for data use. Instead, it has tailored laws that are supposed to protect data in different sectors—including health, children’s and student data. 

But despite the existence of a law—the Family Educational Rights and Privacy Act—that is specifically designed to protect the privacy of student educational records, there are loopholes in the law that still allow data to be exploited. The Markup reporter Todd Feathers has uncovered a booming business in monetizing student data gathered by classroom software. 

In two articles published this week as part of our Machine Learning series, Todd identified a private equity firm, Vista Equity Partners, that has been buying up educational software companies that have collectively amassed a trove of data about children all the way from their first school days through college.

Vista Equity Partners, which declined to comment for Todd’s story, has acquired controlling ownership stakes in EAB, which provides college counseling and recruitment products to thousands of schools, and PowerSchool, which provides software for K-12 schools and says it holds data on more than 45 million children. 

Some of this data is used to create risk-assessment scores that claim to predict students’ future success. Todd filed public records requests for schools across the nation, and using those documents, he was able to discover that PowerSchool’s algorithm, in at least one district, considered a student who was eligible for free or reduced lunch to be at a higher risk of dropping out. 

 

Experts told us that using a proxy for wealth as a predictor for success is unfair because students can’t change that status and could be steered into less challenging opportunities as a result.

 

“I think that having [free and reduced lunch status] as a predictor in the model is indefensible in 2021,” said Ryan Baker, the director of the University of Pennsylvania’s Center for Learning Analytics. PowerSchool defended the use of the factor as a way to help educators provide additional services to students who are at risk.
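The objection is easy to demonstrate. In the hedged sketch below, with invented weights standing in for whatever PowerSchool's model actually uses, two students with identical records receive different risk scores solely because one carries the free/reduced-lunch flag; auditing for proxy features means checking exactly this kind of counterfactual.

```python
# Invented weights for illustration; this is not PowerSchool's actual model.
WEIGHTS = {"gpa": -0.8, "absences": 0.05, "free_reduced_lunch": 0.6}

def dropout_risk(student):
    return sum(weight * student[field] for field, weight in WEIGHTS.items())

student_a = {"gpa": 3.2, "absences": 4, "free_reduced_lunch": 0}
student_b = {"gpa": 3.2, "absences": 4, "free_reduced_lunch": 1}  # identical otherwise

# A basic proxy audit: identical records should not score differently on a
# status the student cannot change.
print(dropout_risk(student_a))  # -2.36
print(dropout_risk(student_b))  # -1.76, flagged as higher risk
```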

 

Todd also found public records showing how student data is used by colleges to target potential applicants through PowerSchool’s Naviance software using controversial criteria such as the race of the applicant. For example, Todd uncovered a 2015 contract between Naviance and the University of Kansas revealing that the school paid for a year-long advertising campaign targeting only White students in three states.

The University of Kansas did not respond to requests for comment. PowerSchool’s chief privacy officer Darron Flagg said Naviance has since stopped colleges from using targeting “criteria that excludes under-represented groups.” He also said that PowerSchool complies with the student privacy law and “does not sell student or school data.”

But, as we have written at The Markup many times, not selling data does not mean not profiting from that data. To understand the perils of the booming educational data market, I spoke this week with Roxana Marachi, a professor of education at San José State University, who researches school violence prevention, high-stakes testing, privatization, and the technologization of teaching and learning. Marachi served as education chair of the CA/HI State NAACP from 2019 to 2021 and has been active in local, state, and national efforts to strengthen and protect public education. Her views do not necessarily reflect the policy or position of her employer.

Her written responses to my questions are below, edited for brevity.

_______________________________

Angwin: You have written that ed tech companies are engaged in a “structural hijacking of education.” What do you mean by this?

Marachi: There has been a slow and steady capture of our educational systems by ed tech firms over the past two decades. The companies have attempted to replace many different practices that we have in education. So, initially, it might have been with curriculum, say a reading or math program, but has grown over the years into wider attempts to extract social, emotional, behavioral, health, and assessment data from students. 

What I find troubling is that there hasn’t been more scrutiny of many of the ed tech companies and their data practices. What we have right now can be called “pinky promise” privacy policies that are not going to protect us. We’re getting into dangerous areas where many of the tech firms are being afforded increased access to the merging of different kinds of data and are actively engaged in the use of “predictive analytics” to try to gauge children’s futures.   

Angwin: Can you talk more about the harmful consequences this type of data exploitation could have?


Marachi: Yes, researchers at the Data Justice Lab at Cardiff University have documented numerous data harms with the emergence of big data systems and related analytics—some of these include targeting based on vulnerability (algorithmic profiling), misuse of personal information, discrimination, data breaches, political manipulation and social harms, and data and system errors.

As an example in education, several data platforms market their products as providing “early warning systems” to support students in need, yet these same systems can also set students up for hyper-surveillance and racial profiling.

One of the catalysts of my inquiry into data harms happened a few years ago when I was using my university’s learning management system. When reviewing my roster, I hovered the cursor over the name of one of my doctoral students and saw that the platform had marked her with one out of three stars, in effect labeling her as in the “lowest third” of students in the course in engagement. This was both puzzling and disturbing as it was such a false depiction—she was consistently highly engaged and active both in class and in correspondence. But the platform’s metric of page views as engagement made her appear otherwise.
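A minimal sketch of how such a label can arise, assuming (as the anecdote suggests) the platform simply buckets students into thirds by page views:

```python
# Hypothetical tercile-based "engagement" stars, assuming page views are the
# platform's only engagement signal (as in the anecdote above).
page_views = {"ana": 310, "ben": 150, "carla": 95, "dev": 260, "elena": 40}

def star_ratings(views, buckets=3):
    ranked = sorted(views, key=views.get)         # fewest page views first
    size = -(-len(ranked) // buckets)             # ceiling division
    return {name: ranked.index(name) // size + 1  # 1 star = bottom third
            for name in ranked}

print(star_ratings(page_views))
# elena gets 1 star regardless of how engaged she actually is in class:
# the metric measures clicks, not engagement.
```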

Many tech platforms don’t allow instructors or students to delete such labels or to untether at all from algorithms set to compare students with these rank-based metrics. We need to consider what consequences will result when digital labels follow students throughout their educational paths, what longitudinal data capture will mean for the next generation, and how best to systemically prevent emerging, invisible data harms.

One of the key principles of data privacy is the “right to be forgotten”—for data to be able to be deleted. Among the most troubling of emerging technologies I’ve seen in education are blockchain digital ID systems that do not allow for data on an individual’s digital ledger to ever be deleted.
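That design choice follows from how append-only ledgers work. The sketch below, illustrative rather than modeled on any specific credentialing product, builds a tiny hash chain and shows why "forgetting" one record invalidates every block after it:

```python
import hashlib, json

def block_hash(record, prev_hash):
    payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

# A tiny ledger of hypothetical student records, each block sealed by a hash
# that includes the previous block's hash.
records = ["enrolled 2031", "reading: below basic", "suspended 2034"]
chain, prev = [], "0" * 64
for r in records:
    h = block_hash(r, prev)
    chain.append({"record": r, "hash": h, "prev": prev})
    prev = h

chain[1]["record"] = None  # try to "forget" the middle record

for i in range(1, len(chain)):
    intact = chain[i]["prev"] == block_hash(chain[i-1]["record"],
                                            chain[i-1]["prev"])
    print(f"link into block {i} intact: {intact}")
# The link into block 2 breaks, which is why such ledgers resist deletion.
```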

Angwin: There is a law that is supposed to protect student privacy, the Family Educational Rights and Privacy Act (FERPA). Is it providing any protection?

Marachi: FERPA is intended to protect student data, but unfortunately it’s toothless. While schools that refuse to address FERPA violations may have federal funding withheld by the Department of Education, in practice this has never happened.

One of the ways that companies can bypass FERPA is to have educational institutions designate them as an educational employee or partner. That way they have full access to the data in the name of supporting student success.

The other problem is that with tech platforms as the current backbone of the education system, in order for students to participate in formal education, they are in effect required to relinquish many aspects of their privacy rights. The current situation appears designed to allow ed tech programs to be in “technical compliance” with FERPA by effectively bypassing its intended protections and allowing vast access to student data.

Angwin: What do you think should be done to mitigate existing risks?

Marachi: There needs to be greater awareness that these data vulnerabilities exist, and we should work collectively to prevent data harms. What might this look like? Algorithmic audits and stronger legislative protections. Beyond these strategies, we also need greater scrutiny of the programs that come knocking on education’s door. One of the challenges is that many of these companies have excellent marketing teams that pitch their products with promises to close achievement gaps, support students’ mental health, improve school climate, strengthen social and emotional learning, support workforce readiness, and more. They’ll use the language of equity, access, and student success, issues that as educational leaders, we care about. 

 

Many of these pitches in the end turn out to be what I call equity doublespeak, or the Theranos-ing of education, meaning there’s a lot of hype without the corresponding delivery on promises. The Hechinger Report has documented numerous examples of high-profile ed tech programs making dubious claims of the efficacy of their products in the K-12 system. We need to engage in ongoing and independent audits of efficacy, data privacy, and analytic practices of these programs to better serve students in our care.

Angwin: You’ve argued that, at the very least, companies implementing new technologies should follow IRB guidelines for working with human subjects. Could you expand on that?

Marachi: Yes, Institutional Review Boards (IRBs) review research to ensure ethical protections of human subjects. Academic researchers are required to provide participants with full informed consent about the risks and benefits of research they’d be involved in and to offer the opportunity to opt out at any time without negative consequences.

 

Corporate researchers, it appears, are allowed free rein to conduct behavioral research without any formal disclosure to students or guardians of the potential risks or harms of their interventions, what data they may be collecting, or how they would be using students’ data. We know of numerous risks and harms documented with the use of online remote proctoring systems, virtual reality, facial recognition, and other emerging technologies, but rarely if ever do we see disclosure of these risks in the implementation of these systems.

If corporate researchers in ed tech firms were to be contractually required by partnering public institutions to adhere to basic ethical protections of the human participants involved in their research, it would be a step in the right direction toward data justice." 

__________

https://themarkup.org/newsletter/hello-world/the-big-business-of-tracking-and-profiling-students 

No comment yet.
Scooped by Roxana Marachi, PhD
Scoop.it!

K-12 Dealmaking: Kahoot Acquired by Investor Group That Includes Goldman Sachs // EdWeek Market Brief


By Michelle Caffrey

"Norwegian ed-tech company Kahoot agreed to be acquired by a group of investors led by Goldman Sachs Asset Management for $1.7 billion, or 17.2 Norwegian Krones.

Lego’s parent company, Kirkbi, is among the other co-investors, alongside private equity firm General Atlantic, Kahoot CEO Eilert Hanoa’s investment vehicle Glitrafjord, and other investors and management shareholders, Kahoot said.

The deal is expected to close by the end of the year, pending regulatory approvals.

In announcing the acquisition, Kahoot, which is traded on the Oslo Stock Exchange, also shared preliminary financial results for the second quarter. The company reported recognized revenue of more than $41 million for the quarter, up 14 percent year-over-year. The company also said its adjusted earnings before interest, taxes, depreciation, and amortization was $11 million for the quarter, an increase of 60 percent from the prior year’s period.

 

The offer to take Kahoot private gives shareholders about $3.48 a share, roughly a 53 percent premium to its closing price of $2.28 in May, when the investors disclosed their shareholding positions.

Kahoot was founded in 2013 and designed to offer students a game-based platform to learn a range of subjects. It has acquired seven companies since it launched, one of the largest being a $500 million acquisition of digital learning platform Clever in May 2021.

“As the need for engaging learning, across home, school and work, continues to grow, I am excited about the opportunities this partnership represents for our users, our ecosystem of partners, and for the talented team across the Kahoot! Group, to advance education for hundreds of millions of learners everywhere,” Hanoa, Kahoot’s CEO, said in a statement.

Goldman Sachs noted Kahoot’s unique brand, extensive reach, and scalable technology and operations in its announcement of the deal, as well as its focus on a wide range of customers, from school children to enterprise clients.

The acquisition will allow Kahoot to benefit from operating as a private company, Goldman Sachs and co-investors said, noting that it plans to invest in product innovation and growth both organically and through acquisitions. Having access to private capital will allow the company to significantly boost its go-to-market strategy, it added.

“Kahoot is unlocking learning potential for children, students and employees across the world. The company has a clear mission and value proposition and our investment will help to grow its impact and accelerate value for all stakeholders,” Michael Bruun, global co-head of private equity at Goldman Sachs Asset Management, said in a statement.

“Through this transaction, we are pleased to partner with a fantastic leadership team and group of co-investors to expand a mission-critical learning and engagement platform and contribute to its further growth and innovation.”

The investment is another move by Lego parent company Kirkbi to grow its presence in the ed-tech market, after the company acquired BrainPop, maker of video-based learning tools, in October 2022.

In a statement, Kirkbi Chief Investment Officer Thomas Lau Schleicher said Kahoot’s mission resonates with his organization’s “core values” and it finds “the investment fits very well with Kirkbi’s long-term investment strategy.”...

 

For full post:

https://marketbrief.edweek.org/marketplace-k-12/k-12-dealmaking-kahoot-acquired-investor-group-includes-goldman-sachs/?cmp=eml-enl-mb+20230724&id=1712683 

No comment yet.
Scooped by Roxana Marachi, PhD
Scoop.it!

Controversy erupts over non-consensual AI mental health experiment [Updated] // ArsTechnica


"Koko let 4,000 people get therapeutic help from GPT-3 without telling them first."

 

By Benj Edwards

"On Friday, Koko co-founder Rob Morris announced on Twitter that his company ran an experiment to provide AI-written mental health counseling for 4,000 people without informing them first, Vice reports. Critics have called the experiment deeply unethical because Koko did not obtain informed consent from people seeking counseling.

 

Koko is a nonprofit mental health platform that connects teens and adults who need mental health help to volunteers through messaging apps like Telegram and Discord.

 

On Discord, users sign in to the Koko Cares server and send direct messages to a Koko bot that asks several multiple-choice questions (e.g., "What's the darkest thought you have about this?"). It then shares a person's concerns—written as a few sentences of text—anonymously with someone else on the server who can reply anonymously with a short message of their own."...

 

For full article, please visit:

https://arstechnica.com/information-technology/2023/01/contoversy-erupts-over-non-consensual-ai-mental-health-experiment/ 

No comment yet.
Scooped by Roxana Marachi, PhD
Scoop.it!

Securly Sued Over Surveillance of Students on School Chromebooks // 7/17/23


By Christopher Brown

https://news.bloomberglaw.com/privacy-and-data-security/securly-sued-over-surveillance-of-students-on-school-chromebooks 

 

Court case docket - public complaint file

https://www.bloomberglaw.com/public/desktop/document/BateetalvSecurlyIncDocketNo323cv01304SDCalJul172023CourtDocket?doc_id=X6CKD5LOJEB9EMRO1KU3RFCR8K7

No comment yet.
Scooped by Roxana Marachi, PhD
Scoop.it!

Report – Hidden Harms: The Misleading Promise of Monitoring Students Online // Center for Democracy and Technology 


By Elizabeth Laird, Hugh Grant-Chapman, Cody Venzke, Hannah Quay-de la Vallee

 

"The pressure on schools to keep students safe, especially to protect them physically and support their mental health, has never been greater. The mental health crisis, which has been exacerbated by the COVID-19 pandemic, and concerns about the increasing number of school shootings have led to questions about the role of technology in meeting these goals. From monitoring students’ public social media posts to tracking what they do in real-time on their devices, technology aimed at keeping students safe is growing in popularity. However, the harms that such technology inflicts are increasingly coming to light. 

 

CDT conducted survey research among high school students and middle and high school parents and teachers to better understand the promise of technologies aimed at keeping students safe and the risks that they pose, as reported by those most directly interacting with such tools. In particular, the research focused on student activity monitoring, the nearly ubiquitous practice of schools using technology to monitor students’ activities online, especially on devices provided by the school. CDT built on its previous research, which showed that this monitoring is conducted primarily to comply with perceived legal requirements and to keep students safe. While stakeholders are optimistic that student activity monitoring will keep students safe, in practice it creates significant efficacy and equity gaps: 

  • Monitoring is used for discipline more often than for student safety: Despite assurances and hopes that student activity monitoring will be used to keep students safe, teachers report that it is more frequently used for disciplinary purposes in spite of parent and student concerns. 

  • Teachers bear considerable responsibility but lack training for student activity monitoring: Teachers are generally tasked with responding to alerts generated by student activity monitoring, despite only a small percentage having received training on how to do so privately and securely. 

  • Monitoring is often not limited to school hours despite parent and student concerns: Students and parents are the most comfortable with monitoring being limited to when school is in session, but monitoring frequently occurs outside of that time frame. 

  • Stakeholders demonstrate large knowledge gaps in how monitoring software functions: There are significant gaps between what teachers report is communicated about student activity monitoring, often via a form provided along with a school-issued device, and what parents and students retain and report about it. 

Additionally, certain groups of students, especially those who are already more at risk than their peers, disproportionately experience the hidden harms of student activity monitoring: 

  • Students are at risk of increased interactions with law enforcement: Schools are sending student data collected from monitoring software to law enforcement officials, who use it to contact students. 

  • LGBTQ+ students are disproportionately targeted for action: The use of student activity monitoring software is resulting in the nonconsensual disclosure of students’ sexual orientation and gender identity (i.e., “outing”), as well as more LGBTQ+ students reporting they are being disciplined or contacted by law enforcement for concerns about committing a crime compared to their peers. 

  • Students’ mental health could suffer: While students report they are being referred to school counselors, social workers, and other adults for mental health support, they are also experiencing detrimental effects from being monitored online. These effects include avoiding expressing their thoughts and feelings online, as well as not accessing important resources that could help them. 

  • Students from low-income families, Black students, and Hispanic students are at greater risk of harm: Previous CDT research showed that certain groups of students, including students from low-income families, Black students, and Hispanic students, rely more heavily on school-issued devices. Therefore, they are subject to more surveillance and the aforementioned harms, including interacting with law enforcement, being disciplined, and being outed, than those using personal devices. 


Given that the implementation of student activity monitoring falls short of its promises, this research suggests that education leaders should consider alternative strategies to keep students safe that do not simultaneously put students’ safety and well-being in jeopardy.
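One of the findings above, that monitoring often continues outside school hours, comes down to a single missing condition in the filtering logic. A generic, hedged sketch (not any vendor's actual product) of a keyword filter with and without a school-day limit:

```python
from datetime import datetime

FLAG_TERMS = {"self-harm", "fight"}  # illustrative watchlist, not a real one

def should_alert(text, when, school_hours=(8, 15), limit_to_school_day=True):
    """Generic sketch of a student-activity filter, not any vendor's code."""
    if limit_to_school_day and not school_hours[0] <= when.hour < school_hours[1]:
        return False  # the limit parents and students prefer, often absent
    return any(term in text.lower() for term in FLAG_TERMS)

late_night = datetime(2022, 5, 2, 22, 30)
print(should_alert("thinking about self-harm", late_night))            # False
print(should_alert("thinking about self-harm", late_night,
                   limit_to_school_day=False))                         # True
```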


See below for our complete report, summary brief, and in-depth research slide deck. For more information, see our letter calling for action from the U.S. Department of Education’s Office for Civil Rights — jointly signed by multiple civil society groups — as well as our related press release and recent blog post discussing findings from our parent and student focus groups.

Read the full report here.

Read the summary brief here.

Read the research slide deck here.

 
 
No comment yet.
Scooped by Roxana Marachi, PhD
Scoop.it!

FTC Says EdTech Provider Edmodo Unlawfully Used Children’s Personal Information for Advertising and Outsourced Compliance to School Districts


"The Federal Trade Commission has obtained an order against education technology provider Edmodo for collecting personal data from children without obtaining their parent’s consent and using that data for advertising, in violation of the Children’s Online Privacy Protection Act Rule (COPPA Rule), and for unlawfully outsourcing its COPPA compliance responsibilities to schools. 

Under the proposed order, filed by the Department of Justice on behalf of the FTC, Edmodo, Inc. will be prohibited from requiring students to hand over more personal data than is necessary in order to participate in an online educational activity. This is a first for an FTC order and is in line with a policy statement the FTC issued in May 2022 that warned education technology companies about forcing parents and schools to provide personal data about children in order to participate in online education. During the course of the FTC’s investigation, Edmodo suspended operations in the United States. The order, if approved by the court, will bind the company, including if it resumes U.S. operations.

“This order makes clear that ed tech providers cannot outsource compliance responsibilities to schools, or force students to choose between their privacy and education,” said Samuel Levine, Director of the FTC’s Bureau of Consumer Protection. “Other ed tech providers should carefully examine their practices to ensure they’re not compromising students’ privacy.”

In a complaint, also filed by DOJ, the FTC says Edmodo violated the COPPA Rule by failing to provide information about the company’s data collection practices to schools and teachers, and failing to obtain verifiable parental consent. The COPPA Rule requires online services and websites directed to children under 13 to notify parents about the personal information they collect and obtain verifiable parental consent for the collection and use of that information.

Until approximately September 2022, California-based Edmodo offered an online platform and mobile app with virtual class spaces to host discussions, share materials and other online resources for teachers and schools in the United States via a free and subscription-based service. The company collected personal information about students including their name, email address, date of birth and phone number as well as persistent identifiers, which it used to provide ads.

Under the COPPA Rule, schools can authorize collection of children’s personal information on behalf of parents. But a website operator must provide notice to the school of the operator’s collection, use and disclosure practices, and the school can only authorize collection and use of personal information for an educational purpose.

Edmodo required schools and teachers to authorize data collection on behalf of parents or to notify parents about Edmodo’s data collection practices and obtain their consent to that collection. Edmodo, however, failed to provide schools and teachers with the information they would need to comply in either scenario as required by the COPPA Rule, according to the complaint. For example, during the signup process for Edmodo’s free service, Edmodo provided minimal information about the COPPA Rule to teachers—providing only a link to the company’s terms of service and privacy policy, which teachers were not required to review before signing up for the company’s service.

Those teachers and schools that did read Edmodo’s terms of service were falsely told that they were “solely” responsible for complying with the COPPA Rule. The terms of service also failed to adequately disclose what personal information the company actually collects or indicate how schools or teachers should go about obtaining parental consent.

These failures led to the illegal collection of personal information from children, according to the complaint.

In addition, Edmodo could not rely on schools to authorize collection on behalf of parents because the company used the personal information it collected from children for a non-educational purpose—to serve advertising. For such commercial uses, the COPPA Rule required Edmodo to obtain consent directly from parents. 
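The consent rule the complaint describes reduces to a few lines of logic. A hedged sketch of that rule as the FTC states it here (not a real compliance library, and not legal advice):

```python
def consent_sufficient(purpose, school_authorized, parental_consent):
    """Sketch of the COPPA consent logic described in the complaint: a school
    may authorize collection on behalf of parents only for an educational
    purpose; commercial uses such as advertising require consent directly
    from parents."""
    if purpose == "educational":
        return school_authorized or parental_consent
    return parental_consent

print(consent_sufficient("educational",
                         school_authorized=True, parental_consent=False))  # True
print(consent_sufficient("advertising",
                         school_authorized=True, parental_consent=False))  # False
```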

Edmodo also violated the COPPA Rule by retaining personal information indefinitely until at least 2020 when it put in place a policy to delete the data after two years, according to the complaint. COPPA prohibits retaining personal information about children for longer than is reasonably necessary to fulfill the purpose for which it was collected.

In addition to violating the COPPA Rule, the FTC says Edmodo violated the FTC Act’s prohibition on unfair practices by relying on schools to obtain verifiable parental consent. Specifically, the FTC says that Edmodo outsourced its COPPA compliance responsibilities to schools and teachers while providing confusing and inaccurate information about obtaining consent. This is the first time the FTC has alleged an unfair trade practice in the context of an operator’s interaction with schools.

Proposed Order

The proposed order with Edmodo includes a $6 million monetary penalty, which will be suspended due to the company’s inability to pay. Other order provisions, which will provide protections for children’s data should Edmodo resume operations in the United States, include:

  • prohibiting Edmodo from conditioning a child’s participation in an activity on the child disclosing more information than is reasonably necessary to participate in such activity;

  • requiring the company to complete several requirements before obtaining school authorization to collect information from a child;

  • prohibiting the company from using children’s information for non-educational purposes such as advertising or building user profiles;

  • banning the company from using schools as intermediaries in the parental consent process;

  • requiring the company to implement and adhere to a retention schedule that details what information it collects, what the data is used for and a time frame for deleting it (see the sketch after this list); and

  • requiring Edmodo to delete models or algorithms developed using personal information collected from children without verifiable parental consent or school authorization.
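For the retention-schedule provision above, here is a minimal sketch of what such a schedule might look like in practice, with invented categories and timeframes:

```python
from datetime import date, timedelta

# Invented categories and timeframes, for illustration only.
RETENTION_SCHEDULE = {
    "account_info": {"purpose": "provide the service",    "keep_days": 730},
    "usage_logs":   {"purpose": "security and debugging", "keep_days": 90},
}

def records_to_purge(records, today):
    """Return records whose retention window has elapsed."""
    return [r for r in records
            if today - r["collected"] >
               timedelta(days=RETENTION_SCHEDULE[r["category"]]["keep_days"])]

sample = [{"category": "usage_logs", "collected": date(2023, 1, 1)}]
print(records_to_purge(sample, today=date(2023, 6, 1)))  # past 90 days, purge
```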

The Commission voted 3-0 to refer the civil penalty complaint and proposed federal order to the Department of Justice. The DOJ filed the complaint and stipulated order in the U.S. District Court for the Northern District of California.

NOTE: The Commission authorizes the filing of a complaint when it has “reason to believe” that the named defendant is violating or is about to violate the law and it appears to the Commission that a proceeding is in the public interest. Stipulated orders have the force of law when approved and signed by the District Court judge.

The lead FTC attorneys on this matter are Gorana Neskovic and Peder Magee from the FTC’s Bureau of Consumer Protection.

The Federal Trade Commission works to promote competition and protect and educate consumers. Learn more about consumer topics at consumer.ftc.gov, or report fraud, scams, and bad business practices at ReportFraud.ftc.gov. Follow the FTC on social media, read consumer alerts and the business blog, and sign up to get the latest FTC news and alerts."

 

For original post, please visit: 

https://www.ftc.gov/news-events/news/press-releases/2023/05/ftc-says-ed-tech-provider-edmodo-unlawfully-used-childrens-personal-information-advertising?utm_source=govdelivery 

No comment yet.
Scooped by Roxana Marachi, PhD
Scoop.it!

Ransomware Gang Claims Edison Learning Data Theft // THE Journal


By Kristal Kuykendall 05/02/23

"The Royal Ransomware is claiming to have infiltrated public school management and virtual learning provider Edison Learning, posting on its dark web data leak site on Wednesday, April 26, that it had stolen 20GB of the company’s data “including personal information of employees and students” and threatening to post the data “early next week.”

Typically, when Royal and similar ransomware groups post such warnings, it indicates they have likely made a ransomware demand and may be in negotiations with the targeted organization, said cybersecurity expert Doug Levin, who is national director at K12 Security Information Exchange and sits on CISA’s Cybersecurity Advisory Committee.

Edison Learning confirmed a cyber incident has occurred and said it could not divulge anything else. "Our investigation into this incident is ongoing, and we are unable to provide additional details at this time," Edison Learning Director of Communications Michael Serpe told THE Journal in an email. "We do not have any student data on impacted systems." 


Based in Fort Lauderdale, Florida, Edison Learning was founded in 1992 as the Edison Project to provide school management services for public charter schools and struggling districts in the United States and United Kingdom. 

According to an archived 2015 website page, Edison Learning has managed hundreds of schools in 32 states, serving millions of students over the years. A 2012 Edison Learning sales presentation found online by THE Journal states that during the 2009–2010 school year, the company’s services were providing schooling for 400,000 children in 25 states, the U.K., and the United Arab Emirates.

More recently, Edison Learning has expanded to provide virtual schooling for middle and high school students as well as CTE courses for high school students, social-emotional learning courses for middle and high school, and more. The company operates its own in-house learning management system, called eSchoolware, and on its website touts other services such as “management solutions, alternative education, personal learning plans, and turnaround services for underperforming schools.”

The Royal ransomware gang — whose tactics were the subject of a CISA cybersecurity advisory in March 2023 — wrote on its data leak site on the dark web: “Looks like knowledge providers missed some lessons of cyber security [sic]. Recently we gave one to EdisonLearning and they have failed.”

Levin at K12SIX said that while “occasionally, these groups list victims they didn’t actually compromise,” the opposite is true more often than not. For example, on Royal’s data leak site, scores of companies — including a handful of public school districts, community colleges, and universities — are listed as victims targeted since the beginning of this year, and many include links to the stolen data files for the respective victims, who presumably did not pay the ransom.... 

 

For full post, please visit: 

https://thejournal.com/articles/2023/05/01/ransomware-gang-claims-edison-learning-data-theft.aspx?s=the_nu_020523&oly_enc_id=8831J2755401H5M 

No comment yet.