Educational Psychology & Technology: Critical Perspectives and Resources
This curated collection includes news, resources, and research related to the intersections of Educational Psychology and Technology. The page also serves as a research tool for organizing online content; the grey funnel-shaped icon at the top allows searching by keyword. Companion collections cover research specific to tech, screen time, and health/safety concerns; the next wave of privatization involving technology intersections with Pay For Success, Social Impact Bonds, and Results-Based Financing (often marketed with language promoting 'public-private partnerships'); and additional Educator Resources.
Curated by EduResearcher

When "Innovation" is Exploitation: Data Ethics, Data Harms and Why We Need to Demand Data Justice // Marachi, 2019, Summer Institute of A Black Education Network 



For more on the data brokers selling personal information from a variety of platforms, including education, please see: 


Please also visit: the Parent Coalition for Student Privacy, the Data Justice Lab, and the Algorithmic Justice League.



Normalizing Surveillance (Selinger & Rhee, 2021) // Northern European Journal of Philosophy (via SSRN)



Definitions of privacy change, as do norms for protecting it. Why, then, are privacy scholars and activists currently worried about “normalization”? This essay explains what normalization means in the context of surveillance concerns and clarifies why normalization has significant governance consequences. We emphasize two things. First, the present is a transitional moment in history. AI-infused surveillance tools offer a window into the unprecedented dangers of automated real-time monitoring and analysis. Second, privacy scholars and activists can better integrate supporting evidence to counter skepticism about their most disturbing and speculative claims about normalization. Empirical results in moral psychology support the assertion that widespread surveillance typically will lead people to become favorably disposed toward it. If this causal dynamic is pervasive, it can diminish autonomy and contribute to a slippery slope trajectory that diminishes privacy and civil liberties.


Keywords: normalization, surveillance, privacy, civil liberties, moral psychology, function creep, surveillance creep, slippery slope arguments

For original post and to download full pdf, please visit: 

The Color of Surveillance: Monitoring of Poor and Working People // Center on Privacy and Technology // Georgetown Law



UM study finds facial recognition technology in schools presents many problems, recommends ban // University of Michigan 


Contact: Jeff Karoub; Daniel Rivkin


"Facial recognition technology should be banned for use in schools, according to a new study by the University of Michigan’s Ford School of Public Policy that cites the heightened risk of racism and potential for privacy erosion.


The study by the Ford School’s Science, Technology, and Public Policy Program comes at a time when debates over returning to in-person school in the face of the COVID-19 pandemic are consuming administrators and teachers, who are deciding which technologies will best serve public health, educational and privacy requirements.

Among the concerns is facial recognition, which could be used to monitor student attendance and behavior as well as contact tracing. But the report argues this technology will “exacerbate racism,” an issue of particular concern as the nation confronts structural inequality and discrimination.

In the pre-COVID-19 debate about the technology, deployment of facial recognition was seen as a potential panacea to assist with security measures in the aftermath of school shootings. Schools also have begun using it to track students and automate attendance records. Globally, facial recognition technology represents a $3.2 billion business.

The study, “Cameras in the Classroom,” led by Shobita Parthasarathy, asserts that not only is the technology not suited to security purposes, but it also creates a web of serious problems beyond racial discrimination, including normalizing surveillance and eroding privacy, institutionalizing inaccuracy and creating false data on school life, commodifying data and marginalizing nonconforming students.




“We have focused on facial recognition in schools because it is not yet widespread and because it will impact particularly vulnerable populations. The research shows that prematurely deploying the technology without understanding its implications would be unethical and dangerous,” said Parthasarathy, STPP director and professor of public policy.


The study is part of STPP’s Technology Assessment Project, which focuses on emerging technologies and seeks to influence public and policy debate with interdisciplinary, evidence-based analysis.


The study used an analogical case comparison method, looking specifically at previous uses of security technology like CCTV cameras and metal detectors, as well as biometric technologies, to anticipate the implications of facial recognition. The research team also included one undergraduate and one graduate student from the Ford school.


Currently, there are no national laws regulating facial recognition technology anywhere in the world.


“Some people say, ‘We can’t regulate a technology until we see what it can do.’ But looking at technology that has already been implemented, we can predict the potential social, economic and political impacts, and surface the unintended consequences,” said Molly Kleinman, STPP’s program manager.


Though the study recommends a complete ban on the technology’s use, it concludes with a set of 15 policy recommendations for those at the national, state and school district levels who may be considering using it, as well as a set of sample questions for stakeholders, such as parents and students, to consider as they evaluate its use."


More information:


For original post, please visit: 


I Have a Lot to Say About Signal’s Cellebrite Hack // Center for Internet and Society


By Riana Pfefferkorn on May 12, 2021

This blog post is based off of a talk I gave on May 12, 2021 at the Stanford Computer Science Department’s weekly lunch talk series on computer security topics. Full disclosure: I’ve done some consulting work for Signal, albeit not on anything like this issue. (I kinda doubt they’ll hire me again if they read this, though.)

You may have seen a story in the news recently about vulnerabilities discovered in the digital forensics tool made by Israeli firm Cellebrite. Cellebrite's software extracts data from mobile devices and generates a report about the extraction. It's popular with law enforcement agencies as a tool for gathering digital evidence from smartphones in their custody. 

In April, the team behind the popular end-to-end encrypted (E2EE) chat app Signal published a blog post detailing how they had obtained a Cellebrite device, analyzed the software, and found vulnerabilities that would allow for arbitrary code execution by a device that's being scanned with a Cellebrite tool. 

As coverage of the blog post pointed out, the vulnerability draws into question whether Cellebrite's tools are reliable in criminal prosecutions after all. While Cellebrite has since taken steps to mitigate the vulnerability, there's already been a motion for a new trial filed in at least one criminal case on the basis of Signal's blog post. 

Is that motion likely to succeed? What will be the likely ramifications of Signal's discovery in court cases? I think the impact on existing cases will be negligible, but that Signal has made an important point that may help push the mobile device forensics industry towards greater accountability for their often sloppy product security. Nevertheless, I have a raised eyebrow for Signal here too.

Let’s dive in.


What is Cellebrite? 

Cellebrite is an Israeli company that, per Signal’s blog post, “makes software to automate physically extracting and indexing data from mobile devices.” A common use case here in the U.S. is to be used by law enforcement in criminal investigations, typically with a warrant under the Fourth Amendment that allows them to search someone’s phone and seize data from it. 

Cellebrite’s products are part of the industry of “mobile device forensics” tools. “The mobile forensics process aims to recover digital evidence or relevant data from a mobile device in a way that will preserve the evidence in a forensically sound condition,” using accepted methods, so that it can later be presented in court. 

Who are their customers?

Between Cellebrite and the other vendors in the industry of mobile device forensics tools, there are over two thousand law enforcement agencies across the country that have such tools — including 49 of the 50 biggest cities in the U.S. Plus, ICE has contracts with Cellebrite worth tens of millions of dollars. 

But Cellebrite has lots of customers besides U.S. law enforcement agencies. And some of them aren’t so nice. As Signal’s blog post notes, “Their customer list has included authoritarian regimes in Belarus, Russia, Venezuela, and China; death squads in Bangladesh; military juntas in Myanmar; and those seeking to abuse and oppress in Turkey, UAE, and elsewhere.” 

The vendors of these kinds of tools love to get up on their high horse and talk about how they’re the “good guys,” they help keep the world safe from criminals and terrorists. Yes, sure, fine. But a lot of vendors in this industry, the industry of selling surveillance technologies to governments, sell not only to the U.S. and other countries that respect the rule of law, but also to repressive governments that persecute their own people, where the definition of “criminal” might just mean being gay or criticizing the government. The willingness of companies like Cellebrite to sell to unsavory governments is why there have been calls from human rights leaders and groups for a global moratorium on selling these sorts of surveillance tools to governments.

What do Cellebrite’s products do?

Cellebrite has a few different products, but as relevant here, there’s a two-part system in play: the first part, called UFED (which stands for Universal Forensic Extraction Device), extracts the data from a mobile device and backs it up to a Windows PC, and the second part, called Physical Analyzer, parses and indexes the data so it’s searchable. So, take the raw data out, then turn it into something useful for the user, all in a forensically sound manner. 

As Signal’s blog post explains, this two-part system requires physical access to the phone; these aren’t tools for remotely accessing someone’s phone. And the kind of extraction (a “logical extraction”) at issue here requires the device to be unlocked and open. (A logical extraction is quicker and easier, but also more limited, than the deeper but more challenging type of extraction, a “physical extraction,” which can work on locked devices, though not with 100% reliability. Plus, logical extractions won’t recover deleted or hidden files, unlike physical extractions.) As the blog post says, think of it this way: “if someone is physically holding your unlocked device in their hands, they could open whatever apps they would like and take screenshots of everything in them to save and go over later. Cellebrite essentially automates that process for someone holding your device in their hands.”

Plus, unlike some cop taking screenshots, a logical data extraction preserves the recovered data “in its original state with forensically-sound integrity admissible in a court of law.” Why show that the data were extracted and preserved without altering anything? Because that’s what is necessary to satisfy the rules for admitting evidence in court. U.S. courts have rules in place to ensure that the evidence that is presented is reliable — you don’t want to convict or acquit somebody on the basis of, say, a file whose contents or metadata got corrupted. Cellebrite holds itself out as meeting the standards that U.S. courts require for digital forensics.
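The "forensically sound integrity" claim above typically rests on cryptographic hashes: the tool records a checksum for each extracted file, and any later alteration should make the recomputed hash fail to match. As a rough illustration of that general idea (a generic sketch, not Cellebrite's actual implementation; the function names and manifest format here are hypothetical), a hash manifest over an extraction directory might look like:

```python
import hashlib
from pathlib import Path

def build_manifest(extraction_dir: str) -> dict:
    """Record a SHA-256 digest for every extracted file,
    so later tampering is detectable."""
    manifest = {}
    for path in sorted(Path(extraction_dir).rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            manifest[str(path.relative_to(extraction_dir))] = digest
    return manifest

def verify_manifest(extraction_dir: str, manifest: dict) -> list:
    """Return the files whose current hash no longer matches
    the digest recorded at extraction time."""
    tampered = []
    for rel_path, recorded in manifest.items():
        data = (Path(extraction_dir) / rel_path).read_bytes()
        if hashlib.sha256(data).hexdigest() != recorded:
            tampered.append(rel_path)
    return tampered
```

The catch, and the crux of Signal's finding, is that such a check only detects tampering that happens after the hashes are computed. If the analysis machine itself is compromised while the scan runs, the manifest can simply be regenerated to match the altered data, which is how a report could be modified "with no detectable timestamp changes or checksum failures."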

But what Signal showed is that Cellebrite tools actually have really shoddy security that could, unless the problem is fixed, allow alteration of data in the reports the software generates when it analyzes phones. Demonstrating flaws in the Cellebrite system calls into question the integrity and reliability of the data extracted and of the reports generated about the extraction. 

That undermines the entire reason for these tools’ existence: compiling digital evidence that is sound enough to be admitted and relied upon in court cases.


What was the hack?

As background: Late last year, Cellebrite announced that one of their tools (the Physical Analyzer tool) could be used to extract Signal data from unlocked Android phones. Signal wasn’t pleased.


Apparently in retaliation, Signal struck back. As last month’s blog post details, Signal creator Moxie Marlinspike and his team obtained a Cellebrite kit (they’re coy about how they got it), analyzed the software, and found vulnerabilities that would allow for arbitrary code execution by a device that's being scanned with a Cellebrite tool.


According to the blog post:

Looking at both UFED and Physical Analyzer, ... we were surprised to find that very little care seems to have been given to Cellebrite’s own software security. Industry-standard exploit mitigation defenses are missing, and many opportunities for exploitation are present. ...

“[W]e found that it’s possible to execute arbitrary code on a Cellebrite machine simply by including a specially formatted but otherwise innocuous file in any app on a device that is subsequently plugged into Cellebrite and scanned. There are virtually no limits on the code that can be executed.

“For example, by including a specially formatted but otherwise innocuous file in an app on a device that is then scanned by Cellebrite, it’s possible to execute code that modifies not just the Cellebrite report being created in that scan, but also all previous and future generated Cellebrite reports from all previously scanned devices and all future scanned devices in any arbitrary way (inserting or removing text, email, photos, contacts, files, or any other data), with no detectable timestamp changes or checksum failures. This could even be done at random, and would seriously call the data integrity of Cellebrite’s reports into question.

Signal also created a video demo to show their proof of concept (PoC), which you can watch in the blog post or their tweet about it. They summarized what’s depicted in the video:

[This] is a sample video of an exploit for UFED (similar exploits exist for Physical Analyzer). In the video, UFED hits a file that executes arbitrary code on the Cellebrite machine. This exploit payload uses the MessageBox Windows API to display a dialog with a message in it. This is for demonstration purposes; it’s possible to execute any code, and a real exploit payload would likely seek to undetectably alter previous reports, compromise the integrity of future reports (perhaps at random!), or exfiltrate data from the Cellebrite machine.".... 


Kahoot acquires Clever, the US-based edtech portal, for up to $500M // TechCrunch


By Ingrid Lunden

"Kahoot, the popular Oslo-based edtech company that has built a big business out of gamifiying education and creating a platform for users to build their own learning games, is making an acquisition to double down on K-12 education and its opportunities to grow in the U.S. It is acquiring Clever, a startup that has built a single sign-on portal for educators, students and their families to build and engage in digital learning classrooms, currently used by about 65% of all U.S. K-12 schools. Kahoot said that the deal — coming in a combination of cash and shares — gives Clever an enterprise value of between $435 million and $500 million, dependent on meeting certain performance milestones.

The plan will be to continue growing Clever’s business in the U.S. — which currently employs 175 people — as well as give it a lever for expanding globally alongside Kahoot’s wider stable of edtech software and services.

“Clever and Kahoot are two purpose-led organizations that are equally passionate about education and unleashing the potential within every learner,” said Eilert Hanoa, CEO at Kahoot, in a statement. “Through this acquisition we see considerable potential to collaborate on education innovation to better service all our users — schools, teachers, students, parents and lifelong learners — and leveraging our global scale to offer Clever’s unique platform worldwide. I’m excited to welcome Tyler and his team to the Kahoot family.”

The news came on the same day that Kahoot, which is traded in Oslo with a market cap of $4.3 billion, also announced strong Q1 results in which it also noted it has closed its acquisition of, a provider of whiteboard tools for teachers, for an undisclosed sum.

The same tides that have been lifting Kahoot have also been playing out for Clever and other edtech companies.


The startup was originally incubated in Y Combinator and launched with a vision to be a “Twilio for education“, which in its vision was to create a unified way of being able to tap into the myriad student sign-on systems and educational databases to make it easier for those building edtech services to scale their products and bring on more customers (schools, teachers, students, families) to use them. As with payments, financial services in general, and telecommunications, it turns out that education is also a pretty fragmented market, and Clever wanted to figure out a way to fix the complexity and put it behind an API to make it easier for others to tap into it.


Over time it built that out also with a marketplace (application gallery in its terminology) of some 600 software providers and application developers that integrate with its SSO, which in turn becomes a way for a school or district to subsequently expand the number of edtech tools that it can use. This has been especially critical in the last year as schools have been forced to close in-person learning and go entirely virtual to help stave off the spread of the COVID-19 pandemic.


Clever has found a lot of traction for its approach both with schools and investors. With the former, Clever says that it’s used by 89,000 schools and some 65% of K-12 school districts (13,000 overall) in the U.S., with that figure including 95 of the 100 largest school districts in the country. This works out to 20 million students logging in monthly and 5.6 billion learning sessions.

The latter, meanwhile, has seen the company raise from a pretty impressive range of investors, including YC current and former partners like Paul Graham and Sam Altman, GSV, Founders Fund, Lightspeed and Sequoia. It raised just under $60 million, which may sound modest these days but remember that it’s been around since 2012, when edtech was not so cool and attention-grabbing, and hasn’t raised money since 2016, which in itself is a sign that it’s doing something right as a business.


Indeed, Kahoot noted that Clever projects $44 million in billed revenues for 2021, with an annual revenue growth rate of approximately 25% CAGR in the last three years, and it has been running the business on “a cash flow neutral basis, redeploying all cash into development of its offerings,” Kahoot noted.


Kahoot itself has had a strong year driven in no small part by the pandemic and the huge boost that resulted in remote learning and remote work. It noted in its results that it had 28 million active accounts in the last twelve months representing 68% growth on the year before, with the number of hosted games in that period at 279 million (up 28%) with more than 1.6 billion participants of those games (up 24%). Paid subscriptions in Q1 were at 760,000, with 255,000 using the “work” (B2B) tier; 275,000 school accounts; and 230,000 in its “home and study” category. Annual recurring revenue is now at $69 million ($18 million a year ago for the same quarter), while actual revenue for the quarter was $16.2 million (up from $4.2 million a year ago), growing 284%.


The company, which is also backed by the likes of Disney, Microsoft and Softbank, has made a number of acquisitions to expand. Clever is the biggest of these to date."...


For full post, please visit: 


Privacy and Security in State Postsecondary Data Systems: Strong Foundations 2020 // State Higher Education Executive Officers Association (SHEEO)

State postsecondary data systems contain a wealth of information—including detailed records about individuals—that allow states to analyze and improve their postsecondary education systems. The entities that maintain these systems...


For SHEEO landing page, click here


To download full report, click here: 


As schools reopen with billions in federal aid, surveillance vendors are hawking expensive tools like license plate readers and facial recognition //

By Todd Feathers

"As vaccination rates rise and schools prepare to reopen, surveillance companies have trained their sights on the billions of dollars in federal COVID-19 relief funds being provided to schools across the US, hoping to make a profit by introducing a bevy of new snooping devices.


“$82 BILLION,” reads the huge front-page font on one Motorola Solutions brochure distributed to K-12 schools after the passage of the Coronavirus Response and Relief Supplemental Appropriations Act. “Consider COVID-19 technology from Motorola Solutions for your Education Stabilization Fund dollars.”


Other vendors are using similar language and marketing tactics that attempt to latch on to the amount of money Congress set aside for K-12 schools, colleges, and universities in the COVID-19 stimulus packages.


School administrators are used to receiving constant sales pitches from ed tech vendors. But many of the pricey products now being offered have previously been reserved for cops, or have been spun up over the last year to be marketed as solutions for reopening schools during the pandemic. Privacy experts fear that, if deployed, many of these technologies will remain in schools long after classrooms return to normal.


Motorola Solutions' suite of products in its "safe schools solutions" line includes automated license plate readers, watch lists that send automatic alerts when people enter a building, and anonymous “tip” submission apps for students, according to a copy of the brochure shared with Motherboard. The document also advertises artificial intelligence-powered camera systems that purportedly detect “unusual motion,” track individuals using facial recognition as they move around a school, and allow staff to search through hours of video to find footage of a person simply by typing in their “physical descriptors.”


Verkada, a smart surveillance camera company, and its sales partners have been aggressively pushing AI surveillance tools as a response to COVID-19, according to the company’s blog posts and emails obtained by Motherboard through public records requests.


“Whether leveraging features like Face Search for contact tracing or Crowd Notifications to enforce social distancing, schools can proactively protect their students and staff,” a sales associate offering Verkada facial recognition products wrote in a March 8th email to technology staff at the Morgan-Hill Unified School District in California. 

He added that the products qualify for “ESSER II funding,” a reference to the federal Elementary and Secondary School Emergency Relief Fund created by Congress to help schools cope with the pandemic. It was just one in a long series of emails that district officials received from Verkada and its third-party sellers during the first few months of the year, many of them offering to drop off demonstration products or provide Amazon gift cards and Yeti ramblers in exchange for attending sales webinars.

A day after that email was sent, hackers announced that they had breached Verkada, gaining access to live feeds at hospitals, schools, and company offices.

Motorola Solutions, Verkada, and the other companies mentioned in this article, did not respond to multiple requests for comment.


“Unscrupulous vendors are taking every single technology they can think of and offering them to schools as if it’s going to make them safer,” Lia Holland, campaigns and communications director for the privacy group Fight for the Future, told Motherboard. “The push for surveillance of children in every aspect of their lives, especially in schools, just keeps accelerating and it’s an incredible threat to children’s lifetime privacy, their mental health, and their physical safety to deploy these technologies that are often racially biased.”

Neither states nor the U.S. Department of Education have published detailed data on exactly how local districts have spent their relief funding, so it’s unclear just how successful the surveillance vendors’ marketing strategy has been. But the companies have found at least a small number of buyers and convinced them to provide glowing testimonials.


Given the cost of the surveillance equipment being offered, it’s easy to see why the relief funds are so appetizing to the sellers.

The Godley Independent School District in Texas, for example, purchased 51 Verkada cameras and software licenses for a new building in June 2020 at a cost of $82,000, according to records obtained by Motherboard. The original cost would have been more than $100,000, but the district received a discount from the vendor.

While Godley ISD didn’t use relief funds, the purchase demonstrates what a large chunk of the money a single surveillance project can suck up—it was equivalent to 45 percent of the $182,000 in COVID-19 relief funds the district has received so far, according to federal grant records.

The relief money is intended to help districts implement remote learning systems, reopen schools, reduce the risk of virus transmission, and provide extra aid to low-income, minority, and special needs students. Surveillance vendors have interpreted those purposes liberally.

SchoolPass is one of several companies that have taken the opportunity to sell automated license plate reader (ALPR) systems to schools, going so far as to host webinars for district officials during which experts explain how to apply for and access the new federal funds.


The company explains that by tracking cars as they enter and leave school property, schools can ensure that students are physically distanced when they’re dropped off, thus reducing the risk of transmitting the virus.

What’s not clear is what happens to the ALPR data, and who else—including local police and federal agencies—may have access to it. The company and districts that use SchoolPass did not respond to requests for comment.


As Motherboard has previously reported, ALPR data is uploaded into vast databases that are then used by cops, private investigators, and repo companies to track people across the country—in some cases, illegally.

Motorola Solutions owns two of the largest license plate databases through its subsidiaries Vigilant Solutions and Digital Recognition Network. It’s not clear from the company’s marketing material whether the location data scooped up by the ALPR systems it sells to schools are added to those databases.

Despite vendors’ proclamations about student safety and well-being, research shows that the increase in surveillance is likely to have a severely negative effect on students.

A recent study of more than 6,000 high school students conducted by researchers from Johns Hopkins University and Washington University in St. Louis found that students attending “high surveillance” schools were far more likely to be suspended and have lower math achievement rates than students at low-surveillance schools, and they were less likely to go to college. The study controlled for other variables, such as rates of student misbehavior.

It also found that the burden fell particularly hard on Black students, who were four times more likely to attend a high-surveillance high school.

“There’s actually no evidence that it works,” Rory Mir, a grassroots advocacy organizer with the Electronic Frontier Foundation, told Motherboard. “What there is clear proof for is how this technology is biased and disproportionately impacts more at-risk students, and it creates an environment where students are constantly surveilled. It’s treating students like criminals and making money while doing so.” 


For full post, please visit: 



Artificial intelligence is infiltrating higher ed, from admissions to grading: As colleges' use of the technology grows, so do questions about bias and accuracy


By Derek Newton

Students newly accepted by colleges and universities this spring are being deluged by emails and texts in the hope that they will put down their deposits and enroll. If they have questions about deadlines, financial aid and even where to eat on campus, they can get instant answers.


The messages are friendly and informative. But many of them aren’t from humans.

Artificial intelligence, or AI, is being used to shoot off these seemingly personal appeals and deliver pre-written information through chatbots and text personas meant to mimic human banter.  It can help a university or college by boosting early deposit rates while cutting down on expensive and time-consuming calls to stretched admissions staffs.

AI has long been quietly embedding itself into higher education in ways like these, often to save money — a need that’s been heightened by pandemic-related budget squeezes.

Now, simple AI-driven tools like these chatbots, plagiarism-detecting software and apps to check spelling and grammar are being joined by new, more powerful – and controversial – applications that answer academic questions, grade assignments, recommend classes and even teach.


The newest can evaluate and score applicants’ personality traits and perceived motivation, and colleges are increasingly using these tools to make admissions and financial aid decisions.

As the presence of this technology on campus grows, so do concerns about it. In at least one case, a seemingly promising use of AI in admissions decisions was halted because, by using algorithms to score applicants based on historical precedence, it perpetuated bias."...


For full post, please visit: 

Scooped by EduResearcher!

Protecting Youth from Data Exploitation by Online Technologies and Applications // Proposed by CA-HI State NAACP // Passed at National NAACP Conference 9/26/20

The following resolution was proposed and passed in April 2020 at the CA-HI State NAACP Resolutions Conference, and passed by the national delegates at the NAACP National Conference on September 26th, 2020.  


"WHEREAS the COVID-19 pandemic has resulted in widespread school closures that are disproportionately disadvantaging families in under-resourced communities; and


WHEREAS resulting emergency learning tools have primarily been comprised of untested, online technology apps and software programs; and


WHEREAS, the National Education Policy Center has documented evidence of widespread adoptions of apps and online programs that fail to meet basic safety and privacy protections; and


WHEREAS privately managed cyber/online schools, many of which have been involved in wide-reaching scandals involving fraud, false marketing, and unethical practices, have seized the COVID crisis to increase marketing of their programs that have resulted in negative outcomes for students most in need of resources and supports; and


WHEREAS, parents and students have a right to be free from intrusive monitoring of their children’s online behaviors, have a right to know what data are being collected, what entities have access, how long data will be held, in what ways data would be combined, and how data could be protected against exploitation;


WHEREAS increased monitoring and use of algorithmic risk assessments on students’ behavioral data are likely to disproportionately affect students of color and other underrepresented or underserved groups, such as immigrant families, students with previous disciplinary issues or interactions with the criminal justice system, and students with disabilities; and


WHEREAS serious harms resulting from the use of big data and predictive analytics have been documented to include targeting based on vulnerability, misuse of personal information, discrimination, data breaches, political manipulation and social harm, data and system errors, and limiting or creating barriers to access for services, insurance, employment, and other basic life necessities;


BE IT THEREFORE RESOLVED that the NAACP will advocate for strict enforcement of the Family Education Rights and Privacy Act to protect youth from exploitative data practices that violate their privacy rights or lead to predictive harms; and


BE IT FURTHER RESOLVED that the NAACP will advocate for federal, state, and local policy to ensure that schools, districts, and technology companies contracting with schools will neither collect, use, share, nor sell student information unless given explicit permission by parents in plain language and only after being given full informed consent from parents about what kinds of data would be collected and how it would be utilized; and


BE IT FURTHER RESOLVED that the NAACP will work independently and in coalition with like-minded civil rights, civil liberties, social justice, education and privacy groups to collectively advocate for stronger protection of data and privacy rights; and



BE IT FURTHER RESOLVED that the NAACP will oppose state and federal policies that would promote longitudinal data systems that track and/or share data from infancy/early childhood in exploitative, negatively impactful, discriminatory, or racially profiling ways through their education path and into adulthood; and


BE IT FINALLY RESOLVED that the NAACP will urge Congress and state legislatures to enact legislation that would prevent technology companies engaged in big data and predictive analytics from collecting, sharing, using, and/or selling children’s educational or behavioral data."


Scooped by EduResearcher!

"How Do You Feel Today?": The Invasion of SEL Software in K-12 // by Shelley Buchanan // 


By Shelley Buchanan
"The recent appeal for more mental health services has caused school districts to adopt software touting SEL (social-emotional learning) capabilities. Such programs as GoGuardian, Panorama, and Harmony SEL are now in thousands of schools across the nation. While the need for more mental health supports in schools is evident, the rapid adoption of technology has occurred without adequate scrutiny and parental awareness. Even teachers and district administrators blindly accept these companies’ claims to improve behavior and dramatically drop suicide rates. But such businesses base their product’s effectiveness on few research studies of little value.¹ The valid studies cited may be focusing on the SEL lessons delivered by humans without using the digital program.

One such program called PBIS Rewards touts the benefits of daily student “check-ins.” Students log into the program on their devices and click on an emoji reflecting their current emotional state. This information is then automatically sent to a central database that allows the teacher to track students’ emotions on their computer. The program makers tout the benefits by emphasizing how easy it is to collect and track such student data. Teachers and schools can set goals for students using this data and assign points to desired behaviors. The PBIS Rewards website states, “Students love to see their point totals grow, and to think about which rewards they’ll get with their points.” Parents are encouraged to download the associated app onto their phones to reinforce the program at home. The company assures schools that “Parents enjoy seeing their student’s progress, and are alerted when a referral is given.” ²


Within PBIS Rewards and other SEL software, teachers and administrators can use data collected online from students to create reports.³ Schools can refine these reports by gender and race. Let’s say a school compiles a database that shows their Black male students were angry 70% of the time. It is not difficult to imagine how schools could inadvertently use this information to reinforce pre-existing bias and racial stereotyping. Just because we have data doesn’t mean this leads to equity.⁴ It matters what people do with the data.⁵

The school also keeps this information about students throughout the year. If they do not delete it, there’s a potential for future teachers to develop a bias towards a student even before they meet them.⁶ Some will say knowledge is helpful, but are we not giving kids a chance to start over with a new school year? What if they had a parent who went to prison that year and they were depressed or angry because of it? Yet, a teacher merely sees that the particular student was angry 70% of the time. Now consider: what if the school shares this information with law enforcement?⁶

According to FERPA, school resource officers and other law enforcement cannot access student information without a specified exception, but districts can creatively interpret these limits.⁷

SEL tech providers will often claim their products promote mental health awareness and can be used to reduce the number of suicidal or dangerous students. Even before the pandemic, the Guardian reported that with such technology, “privacy experts — and students — said they are concerned that surveillance at school might actually be undermining students’ wellbeing.” ⁸ Over-reliance upon potentially invasive technology can erode students’ trust.

Reliance on mental health digital applications during distance learning can also lead to several ethical concerns rarely brought up among staff untrained in mental health issues.⁹ Use of such programs such as GoGuardian to monitor students’ screens for concerning websites can lead to legal problems for unaware educators.¹⁰

This district website sends parents directly to the company’s site to encourage the download of the app.

In addition to requiring children to use these programs in school, ed-tech companies are now encouraging schools to have students and parents download apps. Such actions can create several privacy concerns. The student is downloading an app on their personal device; therefore, they will be using it outside of school networks and all their security. Thus personal information in these apps could be accessed by outside parties. While companies may claim that they have ensured their software is safe, online apps installed on phones are routinely not secure.¹¹ COPPA guidelines often are not abided by.¹² School districts have even been known to put direct links to these apps on their websites, encouraging parents and students to use apps with privacy issues.¹³

The integration of digital SEL programs with other software platforms like Clever adds another layer of privacy concerns. What if another student hacks into Clever or Google Classroom? What if the SEL screen on a teacher’s computer became visible? Teachers often will display their laptop screen to the class. What if they accidentally had a student’s SEL screen open and projected this? Technical issues occur all the time, and it is easy to see how such an incident could happen.

The potential privacy issues surrounding digital SEL programs abound. For example, a popular app called Thrively shares information with third party users (despite their company privacy statement).¹⁴ Many widely used applications in schools are too new for privacy specialists to know to what extent they violate individual privacy.¹⁵ Therefore, schools using these programs often act as experimental laboratories for the legal limits of data collection and usage. We must keep in mind that just because there are no reported incidences of privacy violations doesn’t mean they don’t occur.

Frequently, companies that produce such online programs will offer their product for free to districts. Let us be clear; no one merely gives away software with no compensation in return. Educational technology companies have routinely taken data as payment for the use of their products.¹⁶ Sales of data to third-party digital operators are big money. Information is the most expensive commodity there is today.¹⁷

Educational technology companies can trade influence for payment.¹⁸ The student usage of Google or Microsoft products can lead to parents purchasing such products for home use. As adults, former students will also be more likely to buy these brand name products. The free license for school districts ends up paying off in such cases. And it’s not only the big guys like Google that are making such an investment. Organizations like Harmony SEL have a whole line of products for home use geared towards parents. Harmony is also associated with a private university, an online charter school, a professional development company, and a company that sells fundraising training for schools. These programs all rely heavily upon funding by billionaire T. Denny Sanford.¹⁹ Of course, consumers of the Harmony SEL system are encouraged to use these other businesses and organizations’ products.


Online educational software does sometimes disregard privacy laws regarding children. In 2020, New Mexico’s attorney general sued Google claiming the tech giant used its educational products to spy on the state’s children and families despite Google’s privacy statement ensuring schools and families that children’s data wouldn’t be tracked.²⁰ The lack of comprehensive and sufficient laws protecting children’s online information makes the ubiquitous use of educational technology all the more troubling.²¹ If schools are not aware of the potential violations, how can parents be? Even more concerning, The State Student Privacy Report Card states, “FERPA contains no specific protections against data breaches and hacking, nor does it require families be notified when inadvertent disclosures occur.” ²²

Educational technology providers can adhere to COPPA guidelines by claiming they require parental consent before children use their products.²³ But frequently, school districts will merely have parents sign a universal consent form covering all digital tools. Although they can, and should, require additional consent for specific applications, they often do not. Besides, if the parental consent form includes all necessary tools such as Google Suite, a student could be denied any devices until a parent signs the form. Such conditions place tremendous pressure on parents to consent.

Equally insidious are the tech marketing claims that feed into school accountability mandates. Makers of SEL software craft their messaging to reflect the mission statements and goals of school districts. For example, Panorama claims that their SEL tracking program can predict “college and career readiness.” Popular terms like “grit” and “growth mindset” are generously sprinkled throughout marketing literature. Other programs claim their products produce a rise in standardized test scores.²⁴ Some even have convinced school districts to do marketing for them, promoting their products for free.²⁵

Underlying many such behavioral programs is the reliance on extrinsic motivators. Yet, the use of rewards for learning is highly problematic.²⁶ Dan Pink found that extrinsic rewards such as gold stars and gift certificates were harmful in the school environment.²⁷ Teachers themselves are even speaking out against the damaging effects of such programs.²⁸


These concerns lead us to the larger question: who decides what feelings are acceptable? How does SEL technology discourage the expression of certain feelings? If we reward students for a “positive mind set,” does that mean we actively should try to stifle negative emotions? Evan Selinger, the author of Reengineering Humanity, warns that “technology, by taking over what were once fundamental functions…has begun to dissociate us from our own humanity.” ²⁹ School SEL programs with objectives to produce more positive feelings may have the unintended effect of telling the child that their emotional reactions are something they entirely create, not a reflection of their environment. Suppose a child is frustrated because they don’t understand the curriculum. In that case, the school may emphasize the child controlling their feelings rather than adapting the material to the student’s needs. Students rarely have the ability or courage to tell teachers why they are feeling what they are feeling. In a system where adults advise students that they alone are responsible for their feelings, a child can easily take the blame for adult behaviors. Districts can then use such data to explain away low standardized test scores, asserting that “students with higher social-emotional competencies tend to have higher scores on Smarter Balanced ELA and math assessments.” Therefore, it is easy to assume that student academic failure has little to do with the quality of instruction in the school but rather the student’s emotional competencies.

“Technology, by taking over what were once fundamental functions, has begun to dissociate us from our own humanity.” — Evan Selinger, author of Reengineering Humanity

In our modern western culture, society encourages parents to minimize negative emotions in their children.³⁰ Child psychologists stress children need to be allowed to express negative feelings. Not only does this tell the child that fear, anger, frustration, etc., are normal, but it also allows the child to practice dealing with negative feelings. It is not sufficient or helpful to encourage positive emotions but censor negative ones. Expression of negative feelings is necessary for mental health.³¹ (Take a look at the millions of adults stuffing their anger and sadness away with alcohol, food, and drugs.) Parental discouragement of negative feelings is one thing, though. It’s another to allow a school, and worse yet, a technology company to regulate a child’s emotion. One can only envision a dystopian future where we are not allowed to feel anything but happiness.

“And that,” put in the Director sententiously, “that is the secret of happiness and virtue — liking what you’ve got to do. All conditioning aims at that: making people like their unescapable social destiny.” — Brave New World

If we take Huxley’s writings seriously, the intention of societal enforced happiness is the control of the individual. One cannot help but think of this when reading about behavioral programs that reward “good” feelings with happy face emojis, stars, or even pizza parties.

Instead of relying on software to monitor and shape children’s behaviors, teachers should be focusing on improving relationships built on trust. Even if a school uses software to identify a child’s feelings, no change will occur because of mere identification. The difference is in the steps schools take to address student anger, frustration, apathy, and the conditions that create them. Over and over again, the one thing that improves student mental health is teachers’ and counselors’ support. Without such beneficial relationships, destructive behavior occurs. Research consistently finds that poor relationships between teachers and pupils can cause bad behavior.³²

When SEL software is adopted, and there are limited counselors and social workers, the teacher decides the meaning of a student’s emotions and mental health. What does depression look like, and how many days of “sad” is symptomatic of a mental health issue? Teachers are not trained mental health providers. But the reliance on and assumed efficacy of such programs may give teachers the false feeling that they can rely on their perspective without contacting a counselor. Broad adoption of such software could be a money-saving measure to cash-strapped districts pressured to deal with a rising level of child mental health issues. The annual cost of a software license is far less than the salaries of certified school counselors and social workers.


Parents and teachers need to be aware of SEL software, its use, and its purpose. The simple addition of a list of licensed applications on a district website is not enough to ensure parental awareness. Often SEL technology is adopted without parent review and feedback. While districts allow parents to review and opt their child out of sex education programs, SEL programs do not have such a requirement in place. This lack of clarity has led to parents (and teachers) voicing their concerns over SEL curriculums and lessons.³³ ³⁴ Rapid adoption without critical voices could lead to school encroachment into families’ values and norms. Whether or not one agrees with the beliefs of individual families, as a society, we need to be aware of how specific policies may negatively impact the civil liberties of individuals.³⁵


Technology is changing at a rapid pace never previously experienced. If we are to harness its benefits, we must first take stock of its harmful impact on our institutions. Quick adoption of SEL programs needs reassessment given the risks associated with their misuse. We must first insist upon the humanity from which all good teaching emanates. Only within this framework can we create environments in which children can develop and flourish."...


  1. García Mathewson, Tara, and Sarah Butrymowicz. Ed Tech Companies Promise Results, but Their Claims Are Often Based on Shoddy Research. The Hechinger Report, 20 May 2020.
  2. PBIS Rewards also has a teacher behavior reward system. The PBIS Rewards website states that principals can give reward points just like they do for students. Teachers can get reward points for Bath and Body Works baskets, a dress-down pass, or even a gift card for groceries. (Not making enough money teaching to buy dinner? If you earn enough points, you too can buy food for your family!) Ironically, principals can even give teachers points for “buying into” the PBIS system. No mention of how such systems can negatively contribute to our teacher attrition problem. Source: “Introducing the SEL Check-In Feature with PBIS Rewards.” PBIS Rewards, Motivating Systems, LLC., 4 Sept. 2020.
  3. For example, a school district in Nevada used data collected through the Panorama application to create reports of behavioral trends based on gender and race. Source: Davidson, Laura. How Washoe County School District Uses SEL Data to Advance Equity and Excellence, Panorama Education, October 2020.
  4. Bump, Philip. Cops Tend to See Black Kids as Less Innocent Than White Kids. The Atlantic, 27 Nov. 2014
  5. Skibba, Ramin. The Disturbing Resilience of Scientific Racism. Smithsonian Magazine, 20 May 2019
  6. An EFF report found few school privacy policies address deletion of data after periods of inactivity, which would allow applications to retain information even after students graduate. Source: Alim, F., Cardoza, N., Gebhart, G., Gullo, K., & Kalia, A. Spying on Students: School-Issued Devices and Student Privacy. Electronic Frontier Foundation, 13 April 2017
  7. Education, Privacy, Disability Rights, and Civil Rights Groups Send Letter to Florida Governor About Discriminatory Student Database. Future of Privacy Forum, 14 Dec. 2020
  8. It is estimated that as many as a third of America’s school districts may already be using technology that monitors students’ emails and documents for phrases that might flag suicidal thoughts. Source: Beckett, Lois. Clear Backpacks, Monitored Emails: Life for US Students under Constant Surveillance. The Guardian, 2 Dec. 2019.
  9. Florell, D., et al. “Legal and Ethical Considerations for Remote School Psychological Services.” National Association of School Psychologists (NASP). Accessed 12 February 2021.
  10. Buchanan, Shelley. “The Abuses and Misuses of GoGuardian in Schools.” Medium, Teachers on Fire Magazine, 23 Jan. 2021
  11. COPPA requires that websites and online services directed to children obtain parental consent before collecting personal information from anyone younger than 13; however, many popular apps do not comply. A University of Texas at Dallas study of 100 mobile apps for kids found that 72 violated a federal law aimed at protecting children’s online privacy. Source: University of Texas at Dallas. Tool to protect children’s online privacy: Tracking instrument nabs apps that violate federal law with 99% accuracy. Science Daily, 23 June 2020.
  12. ibid.
  13. For example, Second Step, a program used in many school districts has a link to a children’s app that collects personally identifiable information which is sold to third parties.
  14. “Common Sense Privacy Evaluation for Thrively.” The Common Sense Privacy Program, Common Sense Media. Accessed 12 February 2021.
  15. Tate, Emily. Is School Surveillance Going Too Far? Privacy Leaders Urge a Slow Down. EdSurge News, 10 June 2019
  16. Educator Toolkit for Teacher and Student Privacy: A Practical Guide for Protecting Personal Data. Parent Coalition for Student Privacy & Badass Teachers Association. October 2018.
  17. Jossen, Sam. The World’s Most Valuable Resource Is No Longer Oil, but Data. The Economist, 6 May 2017.
  18. Klein, Alyson. What Does Big Tech Want From Schools? (Spoiler Alert: It’s Not Money). Education Week, 29 Dec. 2020.
  19. T. Denny Sanford also has heavily funded and lent his name to a number of other organizations. Although recently, in late 2020, Sanford Health decided to drop the founders name from their title after reported child pornography investigations of their benefactor. National University (home of the college associated with the Harmony SEL program) also adopted the name of the philanthropist, yet recently reconsidered the change.
  20. Singer, N. and Wakabayashi, D. New Mexico Sues Google Over Children's Privacy Violations. New York Times, 20 February 2020
  21. The State Student Privacy Report Card: Grading the States on Protecting Student Data Privacy. Parent Coalition for Student Privacy & The Network for Public Education, January 2019.
  22. ibid.
  23. COPPA protects children under the age of 13 who use commercial websites, online games, and mobile apps. While schools must ensure the services their students use treat the data they collect responsibly, COPPA ultimately places the responsibility on the online service operator. At the same time, COPPA generally does not apply when a school has hired a website operator to collect information from students for educational purposes for use by the school. In those instances, the school (not an individual teacher) can provide consent on behalf of the students when required, as long as the data is used only for educational purposes.
  24. Such correlation assumes that standardized assessments such as the SBAC are accurate measurements of student’s academic abilities. There are multiple reasons why this is not the case. To blame a student’s success on their emotional state is harmful, considering the tests themselves have serious flaws. If a school decides to use data collected about SEL competencies and sort according to socio-economic status, it would be too easy to assume that poor SEL skills rather than ineffective schools or poverty causes low test scores. It would not be difficult to imagine how this flawed logic could then be used to substantiate a claim that low social-emotional skills cause poverty instead of any societal attributes.
  25. Current WCSD Superintendent Kristen McNeill stated in 2017, “I can’t think of better data to help our 64,000 students on their path to graduation.” Source: Serving 5 Million Students, Panorama Education Raises $16M to Expand Reach of Social-Emotional Learning and Increase College Readiness in Schools, Panorama Education, 26 June 2018.
  26. Kohn, Alfie. The Risks of Rewards. ERIC Digest, 17 Nov. 2014.
  27. Truby, Dana. “Motivation: The Secret Behind Student Success.” Accessed 12 January 2021.
  28. “To monitor students like items on a conveyer belt does more for District PR machine than how to assist real students with real complex emotional and social issues.” Source: Rubin, Lynda. “Action Item 16 Contracts with Motivating Systems LLC and Kickboard, Inc.” Alliance for Philadelphia Public Schools, 20 Jan. 2021.
  29. Furness, Dylan. “Technology Makes Our Lives Easier, but is it at the Cost of Our Humanity?” Digital Trends, Digital Trends, 28 Apr. 2018.
  30. Denham, S. A. “Emotional Competence During Childhood and Adolescence.” Handbook of Emotional Development, edited by Vanessa LoBue et al., 2019, pp. 493–541.
  31. Rodriguez, Tori. Negative Emotions Are Key to Well-Being. Scientific American, 1 May 2013.
  32. Cadima J, Leal T, Burchinal M. “The Quality of Teacher-Student Interactions: Associations with First Graders’ Academic and Behavioral Outcomes.” Journal of School Psychology. 2010;48:457–82.
  33. Callahan, Joe. Marion School Board Shelves Sanford Harmony Curriculum Over Gender Norm Themes. Ocala Star-Banner, 24 Oct. 2020.
  34. Bailey, Nancy. “Social-Emotional Learning: The Dark Side.” Nancy Bailey’s Education Website, 6 Nov. 2020.
  35. “Problems with Social-Emotional Learning in K-12 Education: New Study.” Pioneer Institute, 10 Dec. 2020.
Scooped by EduResearcher!

Okta to Acquire Identity Tech Startup Auth0 for $6.5B // Subscription Insider


"On Wednesday, Okta announced it will acquire identity management company Auth0 for $6.5 billion in an all-stock


Scooped by EduResearcher!

The End Of Student Privacy? Remote Proctoring's Invasiveness and Bias Symposium // Saturday March 6th, 10am-1:30pm PST// Mobilizon


"On Saturday, March 6th, from 1:00 – 4:30 pm ET, please join the Surveillance Technology Oversight Project (S.T.O.P.) and Privacy Lab (an initiative of the Information Society Project at Yale Law School) for a symposium on remote proctoring technology. This interdisciplinary discussion will examine how remote proctoring software promotes bias, undermines privacy, and creates barriers to accessibility.

Please join with your web browser at on the day of the event.

We are using privacy-respecting web conferencing software BigBlueButton (provided by PrivacySafe) which only requires a web browser. Best viewed via Firefox or Chrome/Chromium. You may test your setup at

Sessions (all times Eastern):

1:00 pm: Opening Remarks.

1:10 pm – 2:10 pm: Session one will provide an overview of the technology used for remote proctoring, which ranges from keyloggers, to facial recognition, and other forms of artificial intelligence. Panelists will highlight the rapid growth of remote proctoring technology during the COVID-19 pandemic and its potential role in the future.

Expert Panel:

2:15 pm – 3:15 pm: Part two will explore the numerous technical, pedagogical, and sociological drivers of racial bias in remote proctoring technology. Speakers will examine sources of bias for existing software, its legal ramifications, and likely changes in future remote proctoring systems.

Expert Panel:

  • David Brody, Lawyer's Committee for Civil Rights Under Law

2:20 pm – 3:20 pm: Lastly, our final session will explore remote proctoring’s impact on accessibility for students with disabilities. Panelists will detail the difficulties students have already experienced using such software, as well as the potential legal ramifications of such discrimination.

Expert Panel:


4:20 pm: Closing Remarks.

  • Sean O'Brien, Information Society Project at Yale Law School


To register, please visit: 

Scooped by EduResearcher!

Parents Nationwide File Complaints with U.S. Department of Education; Seek to Address Massive Student Data Privacy Protection Failures // Parents' Coalition of Montgomery County, MD

 "On July 9, 2021, parents of school-age children from Maryland to Alaska, in collaboration with the Student Data Privacy Project (SDPP), will file over a dozen complaints with the U.S. Department of Education (DoE) demanding accountability for the student data that schools share with Educational Technology (EdTech) vendors.
Formed during the pandemic, SDPP is composed of parents concerned about how their children’s personally identifiable information (PII) is increasingly being mined by EdTech vendors, with the consent of our schools, and without parental consent or school oversight.
With assistance and support from SDPP, 14 parents from 9 states filed requests with their school districts under the Family Educational Rights and Privacy Act (FERPA) seeking access to the PII collected about their children by EdTech vendors. No SDPP parents were able to obtain all of the requested PII held by EdTech vendors, a clear violation of FERPA.
One parent in Maryland never received a response. A New Jersey parent received a generic reply with no date, school name or district identification. Yet a Minnesota parent received over 2,000 files, none of which contained the metadata requested, but did reveal a disturbing amount of personal information held by an EdTech vendor, including the child’s baby pictures, videos of her in an online yoga class, her artwork and answers to in-class questions.
Lisa Cline, SDPP co-founder and parent in Maryland said, “When I tried to obtain data gathered by one app my child uses in class, the school district said, ‘Talk to the vendor.’ The vendor said, ‘Talk to the school.’ This is classic passing of the buck. And the DoE appears to be looking the other way.”
FERPA, a statute enacted in 1974 — almost two decades before the Internet came into existence, at a time when technology in schools was limited to mimeograph machines and calculators  — affords parents the right to obtain their children’s education records, to seek to have those records amended, and to have control over the disclosure of the PII in those records.
Unfortunately, this law is now outdated. Since the digital revolution, schools are either unaware, unable or unwilling to apply FERPA to EdTech vendors. Before the pandemic, the average school used 400-1,000 online tools, according to the Student Data Privacy Consortium. Remote learning has increased this number exponentially.
SDPP co-founder, privacy consultant, law professor and parent Joel Schwarz noted that “DoE’s failure to enforce FERPA means that EdTech providers are putting the privacy of millions of children at risk, leaving these vendors free to collect, use and monetize student PII, and share it with third parties at will.”
A research study released by the Me2B Alliance in May 2021 showed that 60% of school apps send student data to potentially high-risk third parties without knowledge or consent. SDPP reached out to Me2B and requested an audit of the apps used by schools in the districts involved in the Project. Almost 70% of the apps reviewed used Software Development Kits (SDKs) that posed a “High Risk” to student data privacy, and almost 40% of the apps were rated “Very High Risk,” meaning the code used is known to be associated with registered Data Brokers. Even more concerning, Google showed up in approximately 80% of the apps that included an SDK, and Facebook ran a close second, showing up in about 60% of the apps.
Emily Cherkin, an SDPP co-founder who writes and speaks nationally about screen use as The Screentime Consultant, noted, “because these schools failed to provide the data requested, we don’t know what information is being collected about our children, how long these records are maintained, who has access to them, and with whom they’re being shared.”
“FERPA says that parents have a right to know what information is being collected about their children, and how that data is being used,” according to Andy Liddell, a federal court litigator in Austin, TX and another SDPP co-founder. “But those rights are being trampled because neither the schools nor the DoE are focused on this issue.”

The relief sought of the DoE includes requiring schools to:
•  actively oversee their EdTech vendors, including regular audits of vendors’ access, use and disclosure of student PII and publicly posting the results of those audits so that parents can validate that their children’s data is being adequately protected;

•  provide meaningful access to records held by EdTech in response to a FERPA request, clarifying that merely providing a student’s account log-in credentials, or referring the requester to the Vendor, does not satisfy the school’s obligations under FERPA;

•  ensure that when their EdTech vendors share student PII with third parties, the Vendor and the school maintain oversight of third-party access and use of that PII, and apply all FERPA rights and protections to that data, including honoring FERPA access requests;

•  protect all of a student’s digital footprints — including browsing history, searches performed, websites visited, etc. (i.e., metadata) — under FERPA, and that all of this data be provided in response to a FERPA access request.

# # # 

If you would like more information, please contact Joel Schwarz at

Parents are invited to join the Student Data Privacy Project. A template letter to school districts can be downloaded from the SDPP website:

SDPP is an independent parent-led organization founded by Joel Schwarz, Andy Liddell, Emily Cherkin and Lisa Cline. Research and filing assistance provided pro bono by recent George Washington University Law School graduate Gina McKlaveen.
Scooped by EduResearcher!

Rejecting Test Surveillance in Higher Education (Barrett, 2021) // Georgetown University Law Center 

The rise of remote proctoring software during the COVID-19 pandemic illustrates the dangers of surveillance-enabled pedagogy built on the belief that students can’t be trusted. These services, which deploy a range of identification protocols, computer and internet access limitations, and human or automated observation of students as they take tests remotely, are marketed as necessary to prevent cheating. But the success of these services in their stated goal is ill-supported at best and discredited at worst, particularly given their highly over-inclusive criteria for “suspicious” behavior. Meanwhile, the harms they inflict on students are clear: severe anxiety among test-takers, concerning data collection and use practices, and discriminatory flagging of students of color and students with disabilities have provoked widespread outcry from students, professors, privacy advocates, policymakers, and sometimes universities themselves.


To make matters worse, the privacy and civil rights laws most relevant to the use of these services are generally inadequate to protect students from the harms they inflict.

Colleges and universities routinely face difficult decisions that require reconciling conflicting interests, but whether to use remote proctoring software isn’t one of them. Remote proctoring software is not pedagogically beneficial, institutionally necessary, or remotely unavoidable, and its use further entrenches inequities in higher education that schools should be devoted to rooting out. Colleges and universities should abandon remote proctoring software, and apply the lessons from this failed experiment to their other existing or potential future uses of surveillance technologies and automated decision-making systems that threaten students’ privacy, access to important life opportunities, and intellectual freedom.


Keywords: privacy, surveillance, automated decision-making, algorithmic discrimination, COVID-19, higher education, remote proctoring software, FERPA, FTC, ADA


Suggested Citation:

Barrett, Lindsey, Rejecting Test Surveillance in Higher Education (June 21, 2021). Available at SSRN: or
For original post on SSRN, please visit 

Scooped by EduResearcher!

The Impossibility of Automating Ambiguity // Abeba Birhane, MIT Press


"On the one hand, complexity science and enactive and embodied cognitive science approaches emphasize that people, as complex adaptive systems, are ambiguous, indeterminable, and inherently unpredictable. On the other, Machine Learning (ML) systems that claim to predict human behaviour are becoming ubiquitous in all spheres of social life. I contend that ubiquitous Artificial Intelligence (AI) and ML systems are close descendants of the Cartesian and Newtonian worldview in so far as they are tools that fundamentally sort, categorize, and classify the world, and forecast the future. Through the practice of clustering, sorting, and predicting human behaviour and action, these systems impose order, equilibrium, and stability to the active, fluid, messy, and unpredictable nature of human behaviour and the social world at large. Grounded in complexity science and enactive and embodied cognitive science approaches, this article emphasizes why people, embedded in social systems, are indeterminable and unpredictable. When ML systems “pick up” patterns and clusters, this often amounts to identifying historically and socially held norms, conventions, and stereotypes. Machine prediction of social behaviour, I argue, is not only erroneous but also presents real harm to those at the margins of society."


To download full document: 

Scooped by EduResearcher!

Protecting Kids Online: Internet Privacy and Manipulative Marketing - U.S. Senate Subcommittee on Consumer Protection, Product Safety, and Data Security


"WASHINGTON, D.C.— U.S. Senator Richard Blumenthal (D-CT), the Chair of the Subcommittee on Consumer Protection, Product Safety, and Data Security, will convene a hearing titled, “Protecting Kids Online: Internet Privacy and Manipulative Marketing” at 10 a.m. on Tuesday, May 18, 2021. Skyrocketing screen time has deepened parents’ concerns about their children’s online safety, privacy, and wellbeing. Apps such as TikTok, Facebook Messenger, and Instagram draw younger audiences onto their platforms, raising concerns about how their data is being used and how marketers are targeting them. This hearing will examine the issues posed by Big Tech, child-oriented apps, and manipulative influencer marketing. The hearing will also explore needed improvements to our laws and enforcement, such as the Children’s Online Privacy Protection Act, child safety codes, and the Federal Trade Commission’s advertising disclosure guidance.


  • Ms. Angela Campbell, Professor Emeritus, Georgetown Law
  • Mr. Serge Egelman, Research Director, Usable Security and Privacy, International Computer Science Institute, University of California Berkeley
  • Ms. Beeban Kidron, Founder and Chair, 5Rights

Hearing Details:

Tuesday, May 18, 2021

10:00 a.m. EDT

Subcommittee on Consumer Protection, Product Safety, and Data Security (Hybrid)  


Witness Panel 1 

Scooped by EduResearcher!

More than 40 attorneys general ask Facebook to abandon plans to build Instagram for kids //


By Lauren Feiner

"Attorneys general from 44 states and territories urged Facebook to abandon its plans to create an Instagram service for kids under the age of 13, citing detrimental health effects of social media on kids and Facebook’s reportedly checkered past of protecting children on its platform.

Monday’s letter follows questioning from federal lawmakers who have also expressed concern over social media’s impact on children. The topic was a major theme that emerged from lawmakers at a House hearing in March with Facebook CEO Mark Zuckerberg, Google CEO Sundar Pichai and Twitter CEO Jack Dorsey. Republican staff for that committee later highlighted online protection for kids as the main principle lawmakers should consider in their legislation.


BuzzFeed News reported in March that Facebook had been exploring creating an Instagram service for children, based on internal documents it obtained.

Protecting children from harm online appears to be one of the rare motivators both Democrats and Republicans can agree on, which puts additional pressure on any company creating an online service for kids.

In Monday’s letter to Zuckerberg, the bipartisan group of AGs cited news reports and research findings that social media and Instagram, in particular, had a negative effect on kids’ mental well-being, including lower self-esteem and suicidal ideation.

The attorneys general also said young kids “are not equipped to handle the range of challenges that come with having an Instagram account.” Those challenges include online privacy, the permanence of internet posts, and navigating what’s appropriate to view and share. They noted that Facebook and Instagram had reported 20 million child sexual abuse images in 2020.

Officials also based their skepticism on Facebook’s history with products aimed at children, saying it “has a record of failing to protect the safety and privacy of children on its platform, despite claims that its products have strict privacy controls.” Citing news reports from 2019, the AGs said that Facebook’s Messenger Kids app for children between 6 and 12 years old “contained a significant design flaw that allowed children to circumvent restrictions on online interactions and join group chats with strangers that were not previously approved by the children’s parents.” They also referenced a recently reported “mistake” in Instagram’s algorithm that served diet-related content to users with eating disorders.


“It appears that Facebook is not responding to a need, but instead creating one, as this platform appeals primarily to children who otherwise do not or would not have an Instagram account,” the AGs wrote. “In short, an Instagram platform for young children is harmful for myriad reasons. The attorneys general urge Facebook to abandon its plans to launch this new platform.”

In a statement, a Facebook spokesperson said the company has “just started exploring a version of Instagram for kids,” and committed to not show ads “in any Instagram experience we develop for people under the age of 13.”

“We agree that any experience we develop must prioritize their safety and privacy, and we will consult with experts in child development, child safety and mental health, and privacy advocates to inform it. We also look forward to working with legislators and regulators, including the nation’s attorneys general,” the spokesperson said.

After publication, Facebook sent an updated statement acknowledging that since children are already using the internet, “We want to improve this situation by delivering experiences that give parents visibility and control over what their kids are doing. We are developing these experiences in consultation with experts in child development, child safety and mental health, and privacy advocates.”

Facebook isn’t the only social media platform that’s created services for children. Google-owned YouTube has a kids service, for example, though with any internet service, there are usually ways for children to lie about their age to access the main site. In 2019, YouTube reached a $170 million settlement with the Federal Trade Commission and New York attorney general over claims it illegally earned money from collecting the personal information of kids without parental consent, allegedly violating the Children’s Online Privacy Protection Act (COPPA).

Following the settlement, YouTube said in a blog post it will limit data collection on videos aimed at children, regardless of the age of the user actually watching. It also said it will stop serving personalized ads on child-focused content and disable comments and notifications on them."... 


For full post, please visit: 

Scooped by EduResearcher!

Silicon Valley, Philanthrocapitalism, and Policy Shifts from Teachers to Tech // Chapter in Strike for the Common Good: Fighting for the Future of Public Education 

Marachi, R., & Carpenter, R. (2020). Silicon Valley, philanthrocapitalism, and policy shifts from teachers to tech. In Givan, R. K. and Lang, A. S. (Eds.). Strike for the Common Good: Fighting for the Future of Public Education. Ann Arbor: University of Michigan Press. 

Scooped by EduResearcher!

The Rise—and the Recurring Bias—of Risk Assessment Algorithms // Revue

By Julia Angwin (Editor In Chief, The Markup)
"Hello, friends, 
I first learned the term “risk assessments” in 2014 when I read a short paper called “Data & Civil Rights: A Criminal Justice Primer,” written by researchers at Data & Society. I was shocked to learn that software was being used throughout the criminal justice system to predict whether defendants were likely to commit future crimes. It sounded like science fiction.

I didn’t know much about criminal justice at the time, but as a longtime technology reporter, I knew that algorithms for predicting human behavior didn’t seem ready for prime time. After all, Google’s ad targeting algorithm thought I was a man, and most of the ads that followed me around the web were for things I had already bought. 

So I decided I should test a criminal justice risk assessment algorithm to see if it was accurate. Two years and a lot of hard work later, my team at ProPublica published “Machine Bias,” an investigation proving that a popular criminal risk assessment tool was biased against Black defendants, possibly leading them to be unfairly kept longer in pretrial detention. 

Specifically what we found—and detailed in an extensive methodology—was that the risk scores were not particularly accurate (60 percent) at predicting future arrests and that when they were wrong, they were twice as likely to incorrectly predict that Black defendants would be arrested in the future compared with White defendants.

In other words, the algorithm overestimated the likelihood that Black defendants would later be arrested and underestimated the likelihood that White defendants would later be arrested. 

But despite those well-known flaws, risk assessment algorithms are still popular in the criminal justice system, where judges use them to help decide everything from whether to grant pretrial release to the length of prison sentences.

And the idea of using software to predict the risk of human behaviors is catching on in other sectors as well. Risk assessments are being used by police to identify future criminals and by social service agencies to predict which children might be abused.

Last year, The Markup investigative reporter Lauren Kirchner and Matthew Goldstein of The New York Times investigated the tenant screening algorithms that landlords use to predict which applicants are likely to be good tenants. They found that the algorithms use sloppy matching techniques that often generate incorrect reports, falsely labeling people as having criminal or eviction records. The problem is particularly acute among minority groups, which tend to have fewer unique last names. For example, more than 12 million Latinos nationwide share just 26 surnames, according to the U.S. Census Bureau.

And this week, reporter Todd Feathers broke the news for The Markup that hundreds of universities are using risk assessment algorithms to predict which students are likely not to graduate within their chosen major.

Todd obtained documents from four large universities that showed that they were using race as a predictor, and in some cases a “high impact predictor,” in their risk assessment algorithms. In criminal justice risk algorithms, race has not been included as an input variable since the 1960s. 

At the University of Massachusetts Amherst, the University of Wisconsin–Milwaukee, the University of Houston, and Texas A&M University, the software predicted that Black students were “high risk” at as much as quadruple the rate of their White peers."
Representatives of Texas A&M, UMass Amherst, and UW-Milwaukee noted that they were not aware of exactly how EAB’s proprietary algorithms weighed race and other variables. A spokesperson for the University of Houston did not respond specifically to our request for comment on the use of race as a predictor.

The risk assessment software being used by the universities is called Navigate and is provided by an education research company called EAB. Ed Venit, an EAB executive, told Todd it is up to the universities to decide which variables to use and that the existence of race as an option is meant to “highlight [racial] disparities and prod schools to take action to break the pattern.”
If the risk scores were being used solely to provide additional support to the students labeled as high risk, then perhaps the racial disparity would be less concerning. But faculty members told Todd that the software encourages them to steer high-risk students into “easier” majors—and particularly, away from math and science degrees. 

“This opens the door to even more educational steering,” Ruha Benjamin, a professor of African American studies at Princeton and author of “Race After Technology,” told The Markup. “College advisors tell Black, Latinx, and indigenous students not to aim for certain majors. But now these gatekeepers are armed with ‘complex’ math.”

There are no standards and no accountability for the ‘complex math’ that is being used to steer students, rate tenants, and rank criminal defendants. So we at The Markup are using the tools we have at our disposal to fill this gap. As I wrote in this newsletter last week, we employ all sorts of creative techniques to try to audit the algorithms that are proliferating across society. 

It’s not easy work, and we can’t always obtain data that lets us definitively show how an algorithm works. But we will continue to try to peer inside the black boxes that have been entrusted with making such important decisions about our lives.
As always, thanks for reading.
Julia Angwin
The Markup
Scooped by EduResearcher!

Academic Integrity and Anti-Black Aspects of Educational Surveillance and E-Proctoring (Parnther & Eaton, 2021) // Teachers College Record 


Academic Integrity and Anti-Black Aspects of Educational Surveillance and E-Proctoring

by Ceceilia Parnther & Sarah Elaine Eaton - June 23, 2021


"In this commentary, we address issues of equity, diversity, and inclusion as related to academic integrity. We speak specifically to the ways in which Black and other racialized minorities may be over-represented in those who get reported for academic misconduct, compared to their White peers. We further address the ways in which electronic and remote proctoring software (also known as e-proctoring) discriminates against students of darker skin tones. We conclude with a call to action to educational researchers everywhere to pay close attention to how surveillance technologies are used to propagate systemic racism in our learning institutions.



The rapid pivot to remote teaching during the COVID-19 pandemic resulted in colleges and universities turning to technological services to optimize classroom management with more frequency than ever before. Electronic proctoring technology (also known as e-proctoring or remote invigilation) is one such fast-growing service, with an expected industry valuation estimated to be $10 Billion by 2026 (Learning Light, 2016). Students and faculty are increasingly concerned about the role e-proctoring technologies play in college exams.




We come to this work as educators, advocates, and scholars of academic integrity and educational ethics.


Ceceilia’s connection to this work lies in her personal and professional identities: “I am a Black, low socioeconomic status, first-generation college graduate and faculty member. The experiences of students who share my identity deeply resonate with me. While I’ve been fortunate to have support systems that helped me navigate college, I am keenly aware that my experience and opportunities are often the exceptions rather than the norm in a system historically designed to disregard, if not exclude, the experiences of minoritized populations. There were many moments where, in honor of the support I received, my career represents a commitment as an advocate, researcher, and teacher to student success and equitable systems in education.”


Sarah’s commitment to equity, diversity, and inclusion stems from experiences as a (White) first-generation student who grew up under the poverty line: “My formative experiences included living in servants’ quarters while my single mother worked as a full-time servant to a wealthy British family (see Eaton, 2020). Later, we moved to Halifax, Nova Scotia, where we settled in the North End, a section of the city that is home to many Black and Irish Catholic residents. Social and economic disparities propagated by race, social class, and religion impacted my lived experiences from an early age. I now work as a tenured associate professor of education, focusing on ethics and integrity in higher education, taking an advocacy and social justice approach to my research.”




Higher rates of reporting and adjudicated instances of academic misconduct make Black students especially susceptible to cheating accusations. The disproportionality of Black students charged and found responsible for student misconduct is most readily seen in a K–12 context (Fabelo et al., 2011). However, research supports this as a reasonable assertion in the higher education context (Trachtenberg, 2017; Bobrow, 2020), primarily due to implicit bias (Gillo, 2017). In other words, Black and other minoritized students are already starting from a position of disadvantage in terms of being reported for academic misconduct.


The notion of over-representation is important here. Over-representation happens when individuals from a particular sub-group are reported for crimes or misconduct more often than those of the dominant White population. When we extend this notion to academic misconduct, we see evidence that Black students are reported more often than their White peers. This is not indicative that Black students engage in more misconduct behaviors, but rather it is more likely that White students are forgiven or simply not reported for misconduct as often. The group most likely to be forgiven for student conduct issues without ever being reported are White females, leaving non-White males to be among those most frequently reported for misconduct (Fabelo et al., 2011). Assumptions such as these perpetuate a system that views White student behavior as appropriate, unchallenged, normative, and therefore more trustworthy. These issues are of significant concern in an increasingly diverse student environment.




For Black and other students of color, e-proctoring represents a particular threat to equity in academic integrity. Although technology in and of itself is not racist, a disproportionate impact of consequences experienced by Black students is worthy of further investigation. Many educational administrators have subscribed to the idea that outsourcing test proctoring to a neutral third party is an effective solution. The problem is these ideas are often based on sales pitches, rather than actual data. There is a paucity of data about the effectiveness of e-proctoring technologies in general and even less about its impact on Black and other racialized minority students.

However, there are plenty of reports that show that facial recognition software unfairly discriminates against people with darker skin tones. For example, Robert Julian-Borchak Williams, a Black man from Detroit, was wrongfully accused and arrested on charges of larceny on the basis of facial recognition software—which, as it turns out, was incorrect (Hill, 2020). Williams described the experience as “humiliating” (Hill, 2020). This example highlights not only the inequities of surveillance technologies, but also the devastating effects the software can have when the system is faulty.


Algorithms often make Whiteness normative, with Blackness then reduced to a measure of disparity. Facial recognition software viewing White as normative is often unable to distinguish phenotypical Black individuals at higher rates than Whites (Hood, 2020). Surveillance of living spaces for authentication creates uncomfortable requirements that are anxiety-inducing and prohibitive.


E-proctoring companies often provide colleges and universities contracts releasing them of culpability while also allowing them to collect biodata. For Black students, biodata collection for unarticulated purposes represents concerns rooted in a history of having Black biological information used in unethical and inappropriate ways (Williams, 2020).




As educators and researchers specializing in ethics and integrity, we do not view academic integrity research as being objective. Instead, we see academic integrity inquiry as the basis for advocacy and social justice. We conclude with a call to action to educational researchers everywhere to pay close attention to how surveillance technologies are used to propagate systemic racism in our learning institutions. This call should include increased research on the impact of surveillance on student success, examination, and accountability of the consequences of institutional use of and investment in e-proctoring software, and centering of student advocates who challenge e-proctoring."




Bobrow, A. G. (2020). Restoring honor: Ending racial disparities in university honor systems. Virginia Law Review, 106, 47–70.


Eaton, S. E. (2020). Challenging and critiquing notions of servant leadership: Lessons from my mother. In S. E. Eaton & A. Burns (Eds.), Women negotiating life in the academy: A Canadian perspective (pp. 15–23). Springer.


Fabelo, T., Thompson, M. D., Plotkin, M., Carmichael, D., Marchbanks III, M. P., & Booth, E. A. (2011). Breaking schools’ rules: A statewide study of how school discipline relates to students’ success and juvenile justice involvement. Retrieved from


Hood, J. (2020). Making the body electric: The politics of body-worn cameras and facial recognition in the United States. Surveillance & Society, 18(2), 157–169.


Learning Light. (2019, February 19). Online Proctoring / Remote Invigilation – Soon a multibillion dollar market within eLearning & assessment. Retrieved May 23, 2020, from


Trachtenberg, B. (2017). How university Title IX enforcement and other discipline processes (probably) discriminate against minority students. Nevada Law Journal, 18(1), 107–164.


Williams, D. P. (2020). Fitting the description: Historical and sociotechnical elements of facial recognition and anti-Black surveillance. Journal of Responsible Innovation, 7(sup1), 1–10."

Cite This Article as: Teachers College Record, Date Published: June 23, 2021 ID Number: 23752, Date Accessed: 6/25/2021 5:07:12 PM


For full post, please visit: 

Scooped by EduResearcher!

Hackers post 26,000 Broward school files online 

Scooped by EduResearcher!

Online learning's toll on kids' privacy // Axios


Scooped by EduResearcher!

Algorithmic Racism, Artificial Intelligence, and Emerging Educational Technologies // SJSU


The slides above were presented at a faculty research sharing event sponsored by the Office of Innovation and Research at San José State University on March 5, 2021, under the theme "Artificial Intelligence, Machine Learning, and Ethics." Each presenter had five minutes to share how their research connected with AI, ML, and ethics.


The slides above can also be accessed at


Scooped by EduResearcher!

The End Of Student Privacy? Remote Proctoring's Invasiveness and Bias // Symposium of the Surveillance Technology Oversight Project (S.T.O.P)

On Saturday, March 6th, from 1:00 – 4:30 pm ET, the Surveillance Technology Oversight Project (S.T.O.P.) and Privacy Lab (an initiative of the Information Society Project at Yale Law School) convened a symposium on remote proctoring technology. This interdisciplinary discussion examined how remote proctoring software promotes bias, undermines privacy, and creates barriers to accessibility.

1:00 pm: Opening Remarks.
Albert Fox Cahn, Surveillance Technology Oversight Project

1:10 pm – 2:10 pm: Session one will provide an overview of the technology used for remote proctoring, which ranges from keyloggers to facial recognition and other forms of artificial intelligence. Panelists will highlight the rapid growth of remote proctoring technology during the COVID-19 pandemic and its potential role in the future.

Expert Panel:
Lindsey Barrett, Institute for Technology Law & Policy at Georgetown Law
Rory Mir, Electronic Frontier Foundation
sava saheli singh, University of Ottawa AI + Society Initiative

2:15 pm – 3:15 pm: Part two will explore the numerous technical, pedagogical, and sociological drivers of racial bias in remote proctoring technology. Speakers will examine sources of bias for existing software, its legal ramifications, and likely changes in future remote proctoring systems.

Expert Panel:
David Brody, Lawyers' Committee for Civil Rights Under Law
Chris Gilliard, Harvard Kennedy School Shorenstein Center
Lia Holland, Fight For The Future

3:20 pm – 4:20 pm: Lastly, our final session will explore remote proctoring’s impact on accessibility for students with disabilities. Panelists will detail the difficulties students have already experienced using such software, as well as the potential legal ramifications of such discrimination.

Expert Panel:
Chancey Fleet, Data & Society
Marci Miller, Potomac Law Group, PLLC
Tara Roslin, National Disabled Law Students Association

4:20 pm: Closing Remarks.
Sean O'Brien, Information Society Project at Yale Law School

Scooped by EduResearcher!

Jane Doe (Class Action Plaintiff) vs. Northwestern University (Defendant) // Cook County, Illinois Circuit Court
