Educational Psychology & Technology
Scooped by Roxana Marachi, PhD!

Do Reading Logs Ruin Reading? // The Atlantic


"...Rather than creating a new generation of pleasure-readers, forcing kids to keep track of their reading time can turn it into a chore..."

By Erica Reischer

"Children who read regularly for pleasure, who are avid and self-directed readers, are the holy grail for parents and educators. Reading for pleasure has considerable current and future benefits: Recreational readers tend to have higher academic achievement and greater economic success, and even display more civic-mindedness.


But recreational reading is on the decline. According to a National Endowment for the Arts report based on longitudinal data from a series of large, national surveys, the rate at which teens voluntarily read for pleasure has declined by 50 percent over the last 20 years. Reading now competes for children’s time with many other alluring activities, including television, social media, and video games. Most leisure time is now spent in front of a screen.


To ensure that kids are spending at least some time every day reading, classrooms across the country have instituted student reading logs, which typically require kids to read for a certain amount of time—about 20 minutes—each night at home and then record the book title and number of pages read. In some cases, parents must also sign this log before their child turns it in to the teacher."...



For the full post, please click on the title above.



This curated collection includes news, resources, and research related to the intersections of Educational Psychology and Technology. The page also serves as a research tool to organize online content. The grey funnel-shaped icon at the top allows for searching by keyword. Companion pages cover research more specific to tech, screen time, and health/safety concerns; the next wave of privatization involving technology intersections with Pay For Success, Social Impact Bonds, and Results-Based Financing (often marketed with language promoting "public-private partnerships"); and additional Educator Resources [links to external sites].
Scooped by Roxana Marachi, PhD!

When "Innovation" is Exploitation: Data Ethics, Data Harms and Why We Need to Demand Data Justice // Marachi, 2019, Summer Institute of A Black Education Network 

To download the PDF, please click on the title or arrow above.


For more on the data brokers selling personal information from a variety of platforms, including education, please see: 


Please also visit the Parent Coalition for Student Privacy and the Data Justice Lab.


Scooped by Roxana Marachi, PhD!

Getting Back To School: Is There Peril or Promise in Online Learning? // In The Public Interest 


"Policymakers, school administrators, educators, and parents continue to make difficult choices due to lost school days, continued social distancing, and precarious public budgets.

While the pandemic has required a shift to distance learning, ultimately we know that in-person classroom teaching, learning, and interaction is the best way to provide a well-rounded and equitable education. Despite this, edtech and online education companies see the crisis as an opportunity to further privatize and profit off of public school students.

In the Public Interest, the National Education Policy Center, the National Superintendents Roundtable, the Network for Public Education, Local Progress, the Schott Foundation, and the Shanker Institute are hosting a webinar on the limitations and possibilities of online learning in public education in the midst of the coronavirus pandemic.

- Dr. Gary Miron, Professor, Western Michigan University College of Education
- Dr. Daniela Kruel DiGiacomo, Assistant Professor, University of Kentucky School of Information Science
- Kiki Ochoa, High school teacher and parent, San Diego, California
- Dr. Morcease Beasley, Superintendent, Clayton County School District, Georgia"


Register at 

Scooped by Roxana Marachi, PhD!

Ed tech companies promise results, but their claims are often based on shoddy research // Hechinger Report


By Tara García Mathewson and Sarah Butrymowicz [Hechinger Report]

"School closures in all 50 states have sent educators and parents alike scrambling to find online learning resources to keep kids busy and productive at home. Website traffic to the homepage for IXL, a popular tool that lets students practice skills across five subjects through online quizzes, spiked in March. Same for Matific, which gives students math practice tailored to their skill level, and Edgenuity, which develops online courses.


All three of these companies try to hook prospective users with claims on their websites about their products’ effectiveness. Matific boasts that its game-based activities are “proven to help increase results by 34 percent.” IXL says its program is “proven effective” and that research “has shown over and over that IXL produces real results.” Edgenuity boasts that the first case study in its long list of “success stories” shows how 10th grade students using its program “demonstrated more than an eightfold increase in pass rates on state math tests.”


These descriptions of education technology research may comfort educators and parents looking for ways to mitigate the devastating effects of lost learning time because of the coronavirus. But they are all misleading.


None of the studies behind IXL’s or Matific’s research claims were designed well enough to offer reliable evidence of their products’ effectiveness, according to a team of researchers at Johns Hopkins University who catalog effective educational programs. And Edgenuity’s boast takes credit for substantial test score gains that preceded the use of its online classes.




Misleading research claims are increasingly common in the world of ed tech. In 2002, federal education law began requiring schools to spend federal dollars on research-based products only. As more schools went online and demand for education software grew, more companies began designing and commissioning their own studies about their products. But with little accountability to make sure companies conduct quality research and describe it accurately, they’ve been free to push the limits as they try to hook principals and administrators.


This problem has only been exacerbated by the coronavirus, as widespread school closures have forced districts to turn to online learning. Many educators have been making quick decisions about what products to lean on as they try to provide remote learning options for students during school closures.


A Hechinger Report review found dozens of companies promoting their products’ effectiveness on their websites, in email pitches and in vendor brochures with little or shoddy evidence to support their claims. Some companies are trying to gain a foothold in a crowded market. Others sell some of the most widely used education software in schools today.

Many companies claim that their products have “dramatic” and “proven” results. In some cases, they tout student growth that their own studies admit is not statistically significant. Others claim their studies found effects that independent evaluators say didn’t exist."...


For full story, please visit: 


See also accompanying video:

Scooped by Roxana Marachi, PhD!

Stop and Frisk Online: Theorizing Everyday Racism in Digital Policing in the Use of Social Media for Identification of Criminal Conduct and Associations // Patton et al. 2017



"Police are increasingly monitoring social media to build evidence for criminal indictments. In 2014, 103 alleged gang members residing in public housing in Harlem, New York, were arrested in what has been called “the largest gang bust in history.” The arrests came after the New York Police Department (NYPD) spent 4 years monitoring the social media communication of these suspected gang members. In this article, we explore the implications of using social media for the identification of criminal activity. We describe everyday racism in digital policing as a burgeoning conceptual framework for understanding racialized social media surveillance by law enforcement. We discuss implications for law enforcement agencies utilizing social media data for intelligence and evidence in criminal cases."

Scooped by Roxana Marachi, PhD!

How emerging technologies amplify racism - even when they're intended to be neutral // CBC

Rescooped by Roxana Marachi, PhD from Charter Schools & "Choice": A Closer Look!

Two children sue Google for allegedly collecting students' biometric data // CNET


By Richard Nieva

"Two children from Illinois are suing Google for allegedly collecting biometric data, including face scans, of millions of students through the search giant's software tools for classrooms. 

The lawsuit, filed Thursday in a federal court in San Jose, California, is seeking class-action status. The children, known only as H.K. and J.C. in the complaint, are suing through their father, Clinton Farwell.


Google is using its services to create face templates and "voiceprints" of children, the complaint says, through a program in which the search giant provides school districts across the country with Chromebooks and free access to G Suite for Education apps. Those apps include student versions of Gmail, Calendar and Google Docs. 


The data collection would likely violate Illinois' Biometric Information Privacy Act, or BIPA, which regulates facial recognition, fingerprinting and other biometric technologies in the state. The practice would also likely run afoul of the Children's Online Privacy Protection Act, or COPPA, a federal law that requires sites to get parental consent when collecting personal information from users who are under 13 years old.


"Google has complete control over the data collection, use, and retention practices of the 'G Suite for Education' service, including the biometric data and other personally identifying information collected through the use of the service, and uses this control not only to secretly and unlawfully monitor and profile children, but to do so without the knowledge or consent of those children's parents," the lawsuit says.

Google declined to comment. Bloomberg earlier reported news of the lawsuit. 
The complaint is asking for damages of $1,000 for each member of the class for BIPA violations Google committed "negligently," or $5,000 each for each violation committed "intentionally or recklessly."


The lawsuit underscores Google's dominance in American classrooms, which has only grown in recent weeks. Schools are depending more on the tech giant's educational tools as physical classes around the nation are canceled in response to the coronavirus pandemic.

As several states enact stay-at-home orders, usage of Google's tools has skyrocketed. Downloads of Google Classroom, which helps teachers manage classes online, have swelled to 50 million, making it the No. 1 education app on Apple's iOS and Google's Android platforms. On Thursday, Google announced a partnership with California Gov. Gavin Newsom to donate 4,000 Chromebooks to students across the state."


For original story, please visit: 

Scooped by Roxana Marachi, PhD!

Mass school closures in the wake of the coronavirus are driving a new wave of student surveillance // Washington Post


"Colleges are racing to sign deals with 'online proctor' companies that watch students through their webcams while they take exams. Education advocates say the surveillance software forces students to choose between privacy and their grades."

By Drew Harwell

"When University of Florida sophomore Cheyenne Keating felt a rush of nausea a few weeks ago during her at-home statistics exam, she looked into her webcam and asked the stranger on the other side: Is it okay to throw up at my desk?


He said yes. So halfway through the two-hour test, during which her every movement was scrutinized for cheating and no bathroom breaks were permitted, she vomited into a wicker basket, dabbed the mess with a blanket and got right back to work. The stranger saw everything. When the test was finished, he said she was free to log off. Only then could she clean herself up.


“Online proctor” services like these have already policed millions of American college exams, tapping into students’ cameras, microphones and computer screens when they take their tests at home. Now these companies are enjoying a rush of new business as the coronavirus pandemic closes thousands of American schools, and executives are racing to capture new clients during what some are calling a once-in-a-lifetime opportunity.


The live proctors these companies hire ensure test-takers abide by a strict set of rules. They watch the students’ faces, listen to them talk and can demand they aim their cameras around the room to prove their honesty. Some companies also use facial-recognition, eye-tracking and other software that purports to detect cheating and rates the students’ “academic integrity.”


Looking off-screen for too long, for instance, can raise a test-taker’s “suspicion” score, potentially leading them to fail the exam. The companies sign contracts with the schools, which cover some of the cost, though many charge students, too: One company, ProctorU, charges students about $15 per test, while another, Proctorio, offers a $100 “lifetime fee.”

“It’s insanity. I shouldn’t be happy. I know a lot of people aren’t doing so well right now, but for us — I can’t even explain it,” Proctorio’s chief executive Mike Olsen said in an interview. “We’ll probably increase our value by four to five X just this year.”


The explosive growth casts light on what could be a pivotal moment for mass surveillance in the United States as privacy concerns clash with the unprecedented realities of a modern pandemic. Hundreds of thousands of students have been sent home from universities, and millions of high school students have seen their local schools closed for the rest of the year.

With more schools pushing to track students’ locations across campus and their testing behaviors at home, education advocates worry the systems are invading students’ personal lives and reducing the practice of learning to a forensic investigation, where students are presumed cheaters until proven upright.


“To take a test you need to let a stranger have a video recording of your room? Are you kidding me?” said Bill Fitzgerald, a researcher at the nonprofit group Consumer Reports who specializes in education technology.


“These platforms exist because they are selling a narrative that students can’t be trusted,” he said. “The people who have the most to lose here are the students, and they’re the farthest away from the decision. … Students are paying tens of thousands of dollars to have their higher-ed institutions sell them out.”


Students bothered by the system’s intrusive eye previously were given the option of taking their exams the old-fashioned way, in a classroom or a testing center. But with campuses shut down, students’ participation has become effectively mandatory — just before their final exams.


The systems have already unnerved students like Neil Buettner, a 28-year-old Marine Corps veteran and student at Harford Community College in Churchville, Md., who was incensed by the demands made by the online proctor service Honorlock before taking his microeconomics exam.


“It’s talking about how it wants to access my computer, my microphone, the webcam. Monitor what’s in the room around me, scan my room. It wants to scan my ID!” he said in an interview. When his professor said he had no option to take the test in person, he opted instead to drop the class. “It’s just a huge step backward,” Buettner said. “Everyone’s giving up their freedom just for the virus.”


Those concerns have not dented the appeal of companies like Proctorio, which staffs four sales offices in the United States and Europe and oversaw more than 1.2 million students during the December peak. Olsen said he expects their business could more than triple by the school year’s end.


The company, which typically adds 100 new universities as clients in a single year, is now fielding about 120 leads a day. Big universities that would normally churn through a months-long negotiation now want to rush deals through in a matter of days. And reluctant administrators and professors, he said, are suddenly finding “they’re being forced” to try it out.


The coronavirus lockdowns have also forced some companies to allow their proctors to work remotely instead of in a supervised office — raising alarms among privacy advocates over who’s gaining access to students’ bedroom video streams. One company, Examity, whose proctor centers in India were recently closed, has posted job listings for full-time contractors who would start watching test-takers as early as this month.


The software’s invasive demands on students have also sparked fury among some professors. A faculty group at the University of California at Santa Barbara wrote a letter to campus leaders last month that argued that the adoption of ProctorU could turn the university into “a surveillance tool.”


“We recognize … there are trade-offs and unfortunate aspects of the migration online that we must accept,” they wrote. “This is not one of them. We are not willing to sacrifice the privacy and digital rights of our students for the expediency of a take-home final exam.” (A ProctorU attorney responded with a letter threatening legal action over the group’s “defamatory correspondence.”)


ProctorU’s chief executive, Scott McFarland, said the skeptics are outnumbered by newly interested school leaders: On a single day last month, his office fielded nearly 1,000 calls from educators asking about the service. The company, he said, has worked largely with colleges and private high schools, but the pandemic has opened the possibility of expanding into grade school exams.


“It was a slow wave, but this changes everything and makes it more like a tsunami event,” he added. “There’s just so much opportunity in places we haven’t really chased before.”


At the start of a ProctorU test, students are told to show the proctor their student ID cards, their rooms and the tops of their desks to prove they don’t have any cheating material at hand. During the test, the proctor listens through the student’s microphone to ensure he or she does not ask for help from someone out of view.


The proctor gains access to the test-takers’ computer screens and receives alerts if they do something unacceptable, like copying and pasting text or opening a new browser tab. A video system analyzes the students’ eyes: If they look off-screen for four straight seconds more than two times in a single minute, the motion will be flagged as a suspect event — a hint that they could be referencing notes posted off-screen.


To ensure the right student is taking the exam, the software uses facial-recognition software to match them to the image on their ID. Random scans are performed throughout the exam to prevent another test-taker from jumping in.


The company also verifies identities with a typing test: A student may be asked to type 140 words at the beginning of the semester, then again just before testing to verify the speed and rhythms of the student’s keystrokes. Any discrepancies can be flagged for closer inspection.


A human proctor watches every second of an exam, though the student cannot see the proctor’s face. In previous versions of the software, the student could see the person watching them, but “the creepiness factor always sort of came up,” McFarland said. If a proctor suspects cheating, they alert a more aggressive specialist, known as an “interventionist,” who can demand that the student aim his or her webcam at a suspicious area or face academic penalty.


Proctors typically work out of one of 11 centers across Alabama, California, India, Jamaica, Panama and the Philippines. But with many of those offices closed, the company said, it is opening backup centers in Canada, hiring more than 100 new workers and instructing many proctors to work from home.


ProctorU, which oversaw 2 million tests last year from more than 750,000 students, has compiled years of data on students’ 15 “behavioral cheating types,” McFarland said. Students’ tests are live-streamed and recorded for later review: The worst offenders, McFarland said, have had their videos edited together into what he called a cheating “Hall of Fame.”

ProctorU’s competitors offer similar anti-cheating surveillance with different strategies. Honorlock, a Florida-based company that CEO Michael Hemlepp said has seen “a massive spike in inquiries,” uses software that looks for “attempted dishonesty” and then sends in a human proctor for further review.


Proctorio goes further, using a completely software-driven approach. After students consent to letting Proctorio monitor their webcams, microphones, desktops or “any other means necessary to uphold integrity,” the system tracks their speech and eye movements, how long they took to complete the test and how many times they clicked the mouse. It then gives professors an automated report ranking test-takers by “suspicion level” and the number of testing “abnormalities.” Students deemed untrustworthy by the computer are color-coded in red and given an icon of two shadowy figures..."


For full post, please visit:

Scooped by Roxana Marachi, PhD!

Zoom Sued for Allegedly Illegally Disclosing Personal Data // Bloomberg News


By Joel Rosenblatt

"Zoom Video Communications Inc. was sued by a user who claims the popular video-conferencing service is illegally disclosing personal information.


The company collects information when users install or open the Zoom application and shares it, without proper notice, to third parties including Facebook Inc., according to the lawsuit, filed Monday in federal court in San Jose, California.


Zoom’s shares have more than doubled this year as investors bet that the teleconferencing company would be one of the rare winners from the coronavirus pandemic. The surge in its use comes as more and more people work from home or use the video-conferencing software to keep in touch rather than meet in person.


According to the suit, Zoom’s privacy policy doesn’t explain to users that its app contains code that discloses information to Facebook and potentially other third parties.

The company’s “wholly inadequate program design and security measures have resulted, and will continue to result, in unauthorized disclosure of its users’ personal information,” according to the complaint.

Robert Cullen of Sacramento is seeking to represent other users and asked for a declaration that Zoom violated California’s Consumer Privacy Act. He’s seeking damages under the act and punitive damages."...


For full post, please visit:

Scooped by Roxana Marachi, PhD!

The Risks of Recording (Especially Synchronous) Classes


By Diane Klein
"In two recent posts, I have presented some arguments in favor of recording your classes and captioning those recordings, based primarily on accessibility issues, including economic ("digital divide") and pedagogical/legal concerns, and the costs and risks of failing to do so.  Without taking any of that back, I'd now like to present the other side: not arguments against taping per se, but some of the distinctive risks associated with recording your classes - and especially, synchronous classes in which students participate.  In the absence of clear institutional taping policies, the problems are non-trivial, the best way to negotiate through them is far from obvious, and the right choice for one class, school, or professor may not be the same as for another.
When it comes to university taping policy, the late Boston College law professor Alexis Anderson's 2017 Journal of Legal Education article, "Classroom Taping Under Legal Scrutiny," is a must-read.  Anderson identifies at least four key components of an appropriate taping policy: advance notice to all faculty and students; clear "conditions of access and use of any recordings"; a way for students to "acknowledge restrictions on dissemination and use...before access [is] granted to posted recordings"; and a way to "identify and publicize the consequences attached to a breach."  She notes, with characteristic understatement, "Absent a well-developed taping policy, a significant likelihood exists that inadvertent legal lapses and pedagogical challenges will occur."

In many universities, the nearly-overnight transition of all face-to-face (F2F) courses to online delivery was imposed with no clear taping policies in place at all.  That state of affairs makes Anderson's likelihood a certainty.  Creating suitable policies, with faculty input, needs to move up on the priority list of every university (yes, even now).

But until that happens, the risks fall into three categories: purely legal; professional/personal; and (last but not least!) pedagogical.

The primary legal risks that I see include laws against nonconsensual recording (wiretapping); Family Educational Rights and Privacy Act (FERPA) and other forms of invasion of privacy; and loss of intellectual property.  Anderson wrote about taping of F2F class meetings, and remarked (in a footnote) that "[d]iscussion of educational recording inherent in online and distance-learning courses is beyond the scope of this article." Because synchronous online classes are not "inherently" recorded, nor were the F2F classes that have now been transitioned to all-online, Anderson's analysis applies to the issues we now face, including but not limited to the absence of the implied consent to record attached to an online course.

In more than one recent conversation about the all-online transition, I have seen professors (not law professors, thankfully!) blithely reassure one another that all you have to do to avoid any legal problems with recording is ask students to "consent" - and voila! problem solved.  They seem not to have considered either what to do if any students say "no," or whether the consent they seek to obtain is genuinely knowing and voluntary.  In Anderson's view, "A non disabled student who asserts that he will not attend a required class if it is recorded might pose a compelling challenge" to recording a class. If a class has an attendance policy, and suddenly in the middle of the semester, in order to get credit, a student is required to consent to being recorded, in what sense are they genuinely free to refuse?  In the midst of this crisis, I have seen faculty members - including law school faculty members, and including law school faculty members with tenure - agree to waive all sorts of rights, including privacy rights and rights to their own intellectual property, for the sake of a smooth all-online transition or for "cybersecurity."  How likely is it that students are better informed than their own professors about the scope of the use rights they are transferring when they not only participate in online education but allow themselves to be videotaped?

Even if students do consent, consent to recording is not necessarily "all or nothing"; the chance of accidental taping (for example before or after the "official" class time) is real, and problematic.  Anderson offers a fictional vignette about a student-teacher colloquy recorded after the class ended but with the taping system still running, but we need not resort to imagined scenarios.  Professors, especially those using systems like Zoom for the very first time, have already reported taped classes that included audible student comments clearly not intended to be recorded.  As Anderson puts it, "even a class taping policy involving advance notice and administrative oversight can go awry."

Her conclusion?  "A published taping policy providing appropriate advance notice to all members of the community is best designed to pass scrutiny under federal law."  But the all-online transition did not include any such advance notice or taping policy, published or otherwise.  Announcements like the one promulgated at my home institution, that "all classes, including those offered at regional campuses, graduate programs, and the College of Law, are to be available online-only" as of some date a few days away, don't even come close.

With respect to invasion of privacy (often protected by state law), including the unauthorized use of someone's name, image, and likeness, as with the coronavirus, we are all vulnerable to one another, and we all may unwittingly put ourselves and others at risk.  Some FERPA requirements are addressed by the Department of Education in a 2014 document called "Protecting Student Privacy While Using Online Educational Services: Requirements and Best Practices," though it focuses primarily on the K-12 setting.  But we all need to understand how telling your colleague in the next office some "unbelievable" thing a student said or did is not at all the same as sending them a link to it.  Posts of screenshots, of comments, etc., may contain so-called "personally identifiable information" (PII), triggering FERPA protections of students.  A student who takes a screenshot of you or other students (and that means of your home office, maybe) and shares it on TikTok or Instagram may be sharing information that is meaningless to them, but valuable to someone who may wish to do you harm.

Folks who think they'll be safe as long as they do everything inside their school's "learning management system" ought not to be so sanguine.  Blackboard, for example, is both hackable (here and here), and proprietary, raising still another set of concerns: intellectual property.  Faculty who are encouraged or even required to create lectures and upload them into Blackboard or a similar system would do well to inquire about who now "owns" your lecture.  (This is the "robot Diane" issue I raised formerly, and it is significant enough that the AAUP has created the Faculty Anti-Privatization Network in part to address it.)  This goes for students, too.  No, there is probably not a lot of commercial value in your students' comments about Pennoyer v. Neff - but suppose Blackboard owned video- and audiotape of Lady Gaga when she was enrolled in music classes at NYU's Tisch School of the Arts from 2003-2005?

Even when you only record yourself, the professional and personal risks unfortunately go beyond the embarrassment of students seeing your (well, my) messy house, or a dog or a child interrupting an important point in your lecture.  On Sunday, March 22, 2020, Charlie Kirk of Turning Point USA tweeted this: "To all college students who have their professors switching to online classes: Please share any and ALL videos of blatant indoctrination with @TPUSA [or] at  Now is the time to document & expose the radicalism that has been infecting our schools[.] Transparency!"

If you don't know who Charlie Kirk and TPUSA are, I'd be inclined to say you should count yourself fortunate - except that President Trump has appeared in person to address Kirk's right-wing high school and college student group twice in the past year, to a rapturous response.  Kirk has 1.7 million followers on Twitter; in less than a day, the tweet above was retweeted four thousand times and "liked" nearly ten thousand times.  Kirk himself, the group's founder, represents the latest stage in the right-wing intelligentsia's bow-tied devolution from William F. Buckley (Yale) to George Will (Princeton), on down to Dinesh D'Souza (Dartmouth), to Tucker Carlson (Trinity College).  Not to be outdone by Sean Hannity, who enrolled at but did not graduate from NYU and UC Santa Barbara, Kirk dropped out of community college at age 19, and immediately set about attacking American higher education and the allegedly left-wing radical faculty comprising it.

The virulence with which TPUSA goes after those it decides to target should not be underestimated.  Yale philosophy Prof. Jason Stanley describes their M.O. this way, in a tweet thread responding to Kirk's on Sunday: "They try to enlist outrage among alumni, who then contact the admin and chair. They will try to get an article in your student paper. The end goal is to get this into the 'real' media, first Fox News, then others....The volume of hatred is much, much worse for underrepresented minority faculty, especially Black faculty, and women." Stanley speaks "as an individual who has been hit by this before," and emphasized its seriousness.  "This is an attack on academic freedom," he tells us, likeliest to target public university faculty, and reminds us that "the most vulnerable faculty are adjuncts, lecturers, and untenured (in that order). They are right now at very serious risk."  He concludes, "Universities who open their faculty to this abuse without adequate mental health and legal protections are in dereliction of duty."  The AAUP's chair of Committee A, Hank Reichman, agrees."...


For full post, please visit:


Image credit:

Scooped by Roxana Marachi, PhD!

Racial Bias Skews Algorithms Widely Used to Guide Patient Care // Stat News


By Sharon Begley

Decision aids that U.S. physicians use to guide patient care on everything from who receives heart surgery to who needs kidney care and who should try to give birth vaginally are racially biased, scientists reported on Wednesday in the New England Journal of Medicine.

It is the latest evidence that algorithms used by hospitals and physicians to guide the health care given to tens of millions of Americans are shot through with implicit racism that their creators are often unaware of, but which nevertheless often results in Black people receiving inferior care.


The new findings cut across more medical specialties than any previous study of race and algorithm-driven patient care.

Malika Fair, an emergency physician and senior director for health equity partnerships and programs at the Association of American Medical Colleges, called the study “impressive.”

“I am delighted that the use of race in medical decision-making is being questioned in such a thoughtful analysis,” said Fair, who was not involved in the study. “As a medical community, we have not fully embraced the notion that race is a social construct and not based in biology.”


The findings build on earlier studies that focused more narrowly. Last year, for instance, other scientists found that a widely used algorithm that identifies which patients should get additional health care services such as home visits routinely put healthier, white patients into the programs ahead of Black patients who were sicker and needed them more.


The bias resulted from the algorithm’s developers equating higher health care spending with worse health. But white Americans spend more on health care than Black Americans even when their health situations are identical, making spending a poor and racially biased proxy for health.
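The proxy failure described above can be sketched in a few lines. Everything below is a hypothetical illustration (the names, numbers, and field names are invented, not data from the study):

```python
# Two patients with identical health need, but unequal spending on their care.
patients = [
    {"name": "Patient A (white)", "chronic_conditions": 3, "annual_spending": 12000},
    {"name": "Patient B (Black)", "chronic_conditions": 3, "annual_spending": 8000},
]

# What the flawed algorithm effectively optimized: predicted spending.
by_spending = sorted(patients, key=lambda p: p["annual_spending"], reverse=True)

# What the care-management program was meant to target: actual health need.
by_need = sorted(patients, key=lambda p: p["chronic_conditions"], reverse=True)

# Ranked by spending, Patient A comes out "needier" despite identical need.
print([p["name"] for p in by_spending])
```

Nothing in the sketch is overtly racial; the bias enters entirely through the choice of proxy label used to rank patients, which is exactly the mechanism the researchers identified.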


The new study finds that algorithms used for medical decisions from cardiology to obstetrics are similarly tainted by implicit racial bias and adversely affect the quality of care Black patients receive.

Among them:

Heart Failure Risk Score: Developed by the American Heart Association to determine which hospitalized patients are at risk of dying from heart disease, the algorithm assigns three points to any “nonblack” patient; more points mean higher risk of death. Those deemed at higher risk (non-Blacks) are more likely to be referred to specialized care, said David Shumway Jones of Harvard Medical School, the study’s senior author. That possibility is not merely theoretical: At one Boston hospital, Black and Latinx patients arriving in the emergency room with cardiac symptoms were less likely than white patients with the same symptoms and medical history to be admitted to the cardiology unit, a 2019 study found.
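The race term described here can be sketched as follows; the function name and the 40-point base are illustrative stand-ins, with only the 3-point "nonblack" term taken from the article's description of the score:

```python
def heart_failure_risk_points(race: str, other_points: int) -> int:
    """Toy sketch of the race term: "nonblack" patients get 3 extra points.

    More points mean higher predicted risk of death, and higher-risk
    patients are more likely to be referred to specialized care.
    """
    race_points = 0 if race.lower() == "black" else 3
    return other_points + race_points

# Two otherwise-identical patients differ only on the race term:
print(heart_failure_risk_points("Black", 40))     # 40
print(heart_failure_risk_points("nonblack", 40))  # 43
```

The asymmetry is built into the score itself: identical vitals and history yield different predicted risk, and thus potentially different referrals.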

Chest surgery: In a risk calculator used by thoracic surgeons, being Black increases the supposed likelihood of post-operative complications such as kidney failure, stroke, and death. “That could make surgeons steer Black patients away from bypass surgery, mitral valve repair and replacement,” and other life-saving operations, Jones said. “If I have a Black patient and the risk calculator tells me he has a 20% higher risk of dying from this surgery, it might scare me off from offering that procedure.”

Kidney failure: It’s very difficult to measure kidney function directly, so physicians use creatinine levels in the blood as a proxy: less creatinine, better kidney function. A “race adjustment” in a widely used algorithm lowers Black patients’ supposed creatinine levels below what’s actually measured. That makes their kidney function appear better, potentially keeping them from receiving appropriate specialty care. The rationale for “adjusting” creatinine levels by race is that Black people are supposedly more muscular, which can increase the release of creatinine into the blood. As it is, Black people have higher rates of end-stage renal disease than whites.
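One widely used form of this adjustment is the race coefficient in the CKD-EPI 2009 creatinine equation. A minimal sketch, using the published 2009 coefficients (note that the race term acts as a multiplier on the estimated GFR, i.e., on inferred kidney function, rather than on the creatinine measurement itself):

```python
def egfr_ckd_epi_2009(scr_mg_dl: float, age: int, female: bool, black: bool) -> float:
    """Estimated GFR (mL/min/1.73 m^2) from serum creatinine, CKD-EPI 2009.

    The trailing 1.159 multiplier is the "race adjustment": it raises the
    estimated kidney function of Black patients, which can keep them above
    the thresholds used for specialty referral or transplant listing.
    """
    kappa = 0.7 if female else 0.9   # sex-specific creatinine scaling
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = 141 * min(ratio, 1) ** alpha * max(ratio, 1) ** -1.209 * 0.993 ** age
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

# Same measured creatinine, same age and sex -- a ~16% higher estimate:
print(round(egfr_ckd_epi_2009(1.4, 55, female=False, black=False), 1))
print(round(egfr_ckd_epi_2009(1.4, 55, female=False, black=True), 1))
```

Deleting the race multiplier lowers the estimate by roughly 14%, enough to move a borderline patient across a commonly cited referral cutoff.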

Kidney donation: An algorithm used by transplant surgeons says that kidneys from Black donors are more likely to fail than kidneys from donors of other races. Because Black patients are more likely to receive an organ from a Black donor, the algorithm reduces the number of kidneys available to them.

Giving birth: Recognizing that cesarean deliveries are more dangerous for both mothers and babies, obstetricians recommend that women who had a previous surgical birth not be automatically scheduled for another cesarean, as was once common practice. But the algorithm they use to determine whether a woman faces too high a risk from vaginal birth automatically says that Black and Latinx women face a higher risk. That was based on a study that found being unmarried and not having insurance also increases that risk. Neither of those socioeconomic factors is included in the algorithm."...


For full story, please visit:

Scooped by Roxana Marachi, PhD!

AI-Hiring firm HireVue faces FTC complaint over ‘unfair and deceptive’ practices // Washington Post

By Drew Harwell 

"A prominent rights group is urging the Federal Trade Commission to take on the recruiting-technology company HireVue, arguing that the firm has turned to unfair and deceptive trade practices in its use of face-scanning technology to assess job candidates’ “employability.”

The Electronic Privacy Information Center, known as EPIC, on Wednesday filed an official complaint calling on the FTC to investigate HireVue’s business practices, saying the company’s use of unproven artificial intelligence systems that scan people’s faces and voices constituted a wide-scale threat to American workers.

HireVue’s “AI-driven assessments,” which more than 100 employers have used on a million-plus job candidates, use video interviews to analyze hundreds of thousands of data points related to a person’s speaking voice, word selection and facial movements. The system then creates a computer-generated estimate of the candidates’ skills and behaviors, including their “willingness to learn” and “personal stability.”


Candidates aren’t told their scores, but employers can use those reports to decide whom to hire or disregard. The Utah-based company was the subject of a Washington Post report last month, in which AI researchers criticized its technology as “profoundly disturbing” and “opaque.”

HireVue’s “intrusive collection and secret analysis of biometric data” causes substantial privacy and financial harms, EPIC officials wrote. And “because these algorithms are secret,” they added, “ ... it is impossible for job candidates to know how their personal data is being used or to consent to such uses.”

The FTC declined to comment. HireVue did not respond to requests for comment.


The complaint could for the first time throw a federal spotlight on a growing industry of tech firms that advertise automated systems they say can assess candidates’ résumés, divine people’s personalities and pinpoint problematic recruits. Critics say the systems are dehumanizing, invasive and built on flawed science that could perpetuate discriminatory hiring practices.


The technology is also facing increasing pressure from lawmakers. In January, Illinois will enact a law forcing employers to tell job applicants and regulators how their AI video-interview systems work. Co-sponsors of the bill, approved in August by Gov. J.B. Pritzker (D), said they worried the systems could unfairly penalize candidates and hide biases in how they assess a “model employee.”


HireVue’s systems have become pervasive for employers because they can lower recruiting costs and speed up turnaround time for new hires. Some colleges now instruct students on how to impress the hidden algorithms: In the FTC filing, EPIC lawyers quote a guide from the University of Maryland business school, which tells interviewees, “Robots compare you against existing success stories; they don’t look for out-of-the-box candidates.”


EPIC, based in Washington, has become one of the tech industry’s most renowned and effective watchdogs, helping shape U.S. policy around online privacy, civil liberties and domestic surveillance for nearly 25 years. The group has challenged tech giants and government agencies, including Facebook, Google and the National Security Agency, through consumer complaints, agency filings and federal lawsuits.


EPIC urged the FTC to halt HireVue’s automatic scoring of job candidates and make public the algorithms and criteria used in analyzing people’s behavior. The technology is largely unregulated, but the FTC regularly enforces “unfair and deceptive acts or practices” statutes against companies found to be making claims to consumers without a “reasonable basis” in a way likely to “cause substantial injury.”


In its complaint, EPIC officials said HireVue’s AI-driven assessments produce results that are “biased, unprovable and not replicable.” The system, they argued, could unfairly score someone based on prejudices related to their gender, race, sexual orientation or neurological differences. HireVue says it uses “world-class bias testing” techniques to detect and prevent hiring discrimination.


HireVue advertises that its technology does not use “facial recognition technology” because its systems do not attempt to identify people. But EPIC argued that HireVue’s assertion is misleading, and that the FTC has ruled the term applies to any “technologies that analyze facial geometry to predict demographic characteristics, expression or emotions.”


EPIC also argued that HireVue had failed to meet international standards for AI systems set by the Organization for Economic Cooperation and Development and endorsed by the United States earlier this year. HireVue violated those principles, EPIC said, because its algorithmic assessments can’t be evaluated or “meaningfully challenged” by the job candidates they’ve assessed.


The company has not ensured the accuracy, reliability or validity of its computer-generated scores, the complaint added. It has also not “adequately evaluated whether the purpose, objectives, and benefits of its algorithmic assessments outweigh the risks.”



Read the Complaint: Full PDF

Scooped by Roxana Marachi, PhD!

Majority of VCU Students Refuse to Participate in Wi-Fi Tracking Program // MuckRock


By Tom Nash
"Nearly 60 percent of the 4,047 students targeted for a Virginia Commonwealth University Wi-Fi tracking pilot program refused to participate, according to documents released by VCU.  
The university began tracking attendance through the Wi-Fi on students’ phones and other devices in November after entering a $96,000, one-year agreement with vendor Degree Analytics.

VCU is using the software as part of a pilot program called RAM Attend. Three sections of freshmen and sophomore students were slated for inclusion. Of those, 2,414 opted out through a webpage offered by the university, leaving 1,633 students whose Wi-Fi signal could be tracked through any device used to register with VCU’s wireless network.

An additional 3,249 opt-outs came from students who had not been singled out for participation in the program, bringing the total number of VCU students rejecting Wi-Fi tracking to 5,842.

The university has said the goal behind the system is to increase student retention. “Academic advisors and faculty already depend on progress reports and mid-term grades to help students recognize and overcome challenges in their courses,” the university said on a webpage explaining the program. “They can do the same by identifying patterns in class attendance.”

You can read the full response here.


Is your college or university using Degree Analytics? Let us know, and we’ll submit a request to add to our project page."


For original post, please visit:


See also:

Scooped by Roxana Marachi, PhD!

Ed Tech Programs Often Overstate Their Effectiveness // The Hechinger Report

"A Hechinger Report review found dozens of ed tech companies overstating their research findings"


Direct Link to Youtube Video above:


For corresponding story, please visit:

Scooped by Roxana Marachi, PhD!

Researchers Raise Concerns About Algorithmic Bias in Online Course Tools // EdSurge 


By Jeffrey Young
"Awareness of the dangers of algorithmic bias in AI systems is growing. Earlier this year, a 42-year-old Detroit resident was wrongly arrested after a face-recognition system falsely matched his photo with an image from security camera footage. Such systems have been shown to give more false matches on photos of Black people than on photos of white people.

Some scholars worry that AI in learning management systems used by colleges could lead to misidentifications in academic settings, by doing things like falsely tagging certain students as low-performing, which could lead their professors to treat them differently or otherwise disadvantage them.

For instance, the popular LMS Canvas had a feature that red-flagged students who turned in late work, suggesting on a dashboard shown to professors that such students were less likely to do well in the class, says Roxana Marachi, an associate professor of education at San Jose State University. Yet she imagines scenarios in which students could be misidentified, such as when students turn in assignments on time but in alternative ways (like in paper rather than digital form), leading to false matches.

“Students are not aware that they are being flagged in these ways that their professors see,” she says.

Colleges insist that scholars be incredibly careful with data and research subjects in the research part of their jobs, but not with the tools they use for teaching. “That’s basic research ethics—inform the students about the way their data is being used,” she notes.

While that particular red flag feature is no longer used by Canvas, Marachi says she worries that colleges and companies are experimenting with learning analytics in ways that are not transparent and could be prone to algorithmic bias.

In an academic paper published recently in the journal Teaching in Higher Education: Critical Perspectives, she and a colleague call for “greater public awareness concerning the use of predictive analytics, impacts of algorithmic bias, need for algorithmic transparency, and enactment of ethical and legal protections for users who are required to use such software platforms.” The article was part of a special issue devoted to the “datafication of teaching in higher education.”


At a time when colleges and universities say they are renewing their commitment to fighting racism, data justice should be front and center, according to Marachi. “The systems we are putting into place are laying the tracks for institutional racism 2.0 unless we address it—and unless we put guardrails or undo the harms that are pending,” she adds.

Leaders of the LMS Canvas, which is produced by the company Instructure, insist they take data privacy seriously, and that they are working to make their policies clearer to students and professors.

Just three weeks ago the company hired a privacy attorney, Daisy Bennett, to assist in that work. She plans to write a plain-language version of the company’s user privacy policy and build a public portal explaining how data is used. And the company has convened a privacy council, made up of professors and students, that meets every two to three months to give advice on data practices. “We do our best to engage our end users and customers,” said Jared Stein, vice president of higher education strategy at Instructure, in an interview with EdSurge.

He stressed that Marachi’s article does not point to specific instances of student harm from data, and that the goal of learning analytics features is often to help students succeed. “Should we take those fears of what could go wrong and completely cast aside the potential to improve the teaching and learning experience?” he asked. “Or should we experiment and move forward?”

Marachi’s article raises concerns about a statement made at an Instructure earnings call by then-CEO Dan Goldsmith regarding a new feature:

“Our DIG initiative, it is first and foremost a platform for [Machine Learning] and [Artificial Intelligence], and we will deliver and monetize it by offering different functional domains of predictive algorithms and insights. Maybe things like student success, retention, coaching and advising, career pathing, as well as a number of the other metrics that will help improve the value of an institution or connectivity across institutions.”

Other scholars have focused on the comment as well, noting that the goals of companies sometimes prioritize monetizing features over helping students.

Stein, of Instructure, said that Goldsmith was “speaking about what was possible with data and not necessarily reflecting what we were actually building—he probably just overstated what we have as a vision for use of data.” He said he outlined the plans and strategy for the DIG initiative in a blog post, which points to its commitment to “ethical use of learning analytics.”

As to the concern about LMS and other tools leading to institutional racism? “Should we have guardrails? Absolutely.”

Competing Narratives

Marachi said she has talked with Instructure staff about her concerns, and that she appreciates their willingness to listen. But the argument she and other scholars are making questions whether learning analytics is worth doing at all.

In an introductory article to the journal series on the datafication of college teaching, Ben Williamson and Sian Bayne from the University of Edinburgh, and Suellen Shay from the University of Cape Town, lay out a broad list of concerns about the prospect of using big data in teaching.

“The fact that some aspects of learning are easier to measure than others might result in simplistic, surface level elements taking on a more prominent role in determining what counts as success,” they write. “As a result, higher order, extended, and creative thinking may be undermined by processes that favor formulaic adherence to static rubrics.”

They place datafication in the context of what they see as a commercialization of higher education—as a way to fill gaps caused by policy decisions that have reduced public funding of college.

“There is a clear risk here that pedagogy may be reshaped to ensure it ‘fits’ on the digital platforms that are required to generate the data demanded to assess students’ ongoing learning,” they argue. “Moreover, as students are made visible and classified in terms of quantitative categories, it may change how teachers view them, and how students understand themselves as learners.”

And the mass movement to online teaching due to the COVID-19 pandemic makes their concerns “all the more urgent,” they add.


The introduction ends with a call to rethink higher education more broadly as colleges look at data privacy issues. They cite a book by Raewyn Connell called “The Good University: What Universities Do and Why it Is Time For Radical Change,” that “outlined a vision for a ‘good university’ in which the forces of corporate culture, academic capitalism and performative managerialism are rejected in favour of democratic, engaged, creative, and sustainable practices.”


Their hope is that higher education will be treated as a social and public good rather than a product."


Jeffrey R. Young (@jryoung) is the higher education editor at EdSurge and the producer and co-host of the EdSurge Podcast. He can be reached at jeff [at] edsurge [dot] com.


For original post, please visit:

Scooped by Roxana Marachi, PhD!

Special Issue: “The datafication of teaching in higher education: Critical issues and perspectives” // Teaching in Higher Education 


By Ben Williamson, Sian Bayne and Suellen Shay

"Universities have been significantly disrupted by the ongoing COVID-19 pandemic, and are likely to remain in a fragile, uncertain condition for months if not years to come. The very rapid shift to online teaching seems increasingly likely to be just a first step on a long path to the expansion of digital or hybrid technologies in higher education.  For many education technology (edtech) vendors, the pandemic is not just a health crisis and an educational emergency, but a market opportunity fueled both by private capital calculations and by desperate university customers. With the very continuation of higher education teaching at stake as universities recover from the coronavirus crisis, companies providing vital digital infrastructure for distance education are attractive prospects for educational and market institutions alike.

Our special issue on The datafication of teaching in higher education was already in production as coronavirus spread around the planet. The issues confronted by many of the authors, however, anticipate discussions now occupying universities as they work out how far to increase their digital delivery, and what to do about the huge quantities of data these technologies collect about their students, staff and institutional performances. Although the use of statistics, metrics and data to measure student achievement, staff outputs and university performance is not new, as we show in the editorial introduction, digital forms of data are becoming increasingly prevalent with the widespread introduction of digital technologies for teaching and learning. Predictive learning analytics, learning management systems, online learning platforms, performance dashboards, plagiarism detection, library resource management, student experience apps, attendance monitoring, and even artificial intelligence assistants and tutors all depend on the persistent collection and analysis of data, and are part of a rapidly growing edtech sector and a multibillion dollar education market.


The datafication of teaching in higher education is transformative in three key ways. First, it is expanding processes of measurement, comparison and evaluation to as many functions and activities of higher education as possible, through increasingly automated systems that run on highly opaque and proprietary code and algorithms that are based on specific technical understandings of education. Second, datafication privileges performance comparison more than ever, and thereby reinforces longstanding political preoccupations with marketization and competition, as the comparative performances of students, staff, courses, programs and whole institutions are made visible for evaluation and assessment. And third, datafication fuses higher education to the business models of a global education industry, which then reshapes higher education to fit its preferred ideas about what constitutes a measurably beneficial university experience. In other words, technologies of datafication are the material embodiment of particular measurement practices, political priorities and business plans, and reshape institutions of education to fit those forms.


The collected papers in the special issue tease out a number of key concerns. Paul Prinsloo foregrounds issues of ‘data power’, arguing that data systems define what ‘counts’ as a good student, an effective educator, or a quality education. He raises significant questions regarding the ‘data colonialism’ of edtech companies from the Global North pushing into Global South contexts to reveal ‘truths’ about education, students and teachers. Data analytics and the dashboards that present information about students are the focus of Michael Brown, whose article identifies the role of dashboards in ‘seeing students’ and shaping educators’ pedagogical strategies. Educators, he reports, may find their normal pedagogical routines stymied by the demands of datafication, and struggle to make sense of the data presented to them by their dashboards. This, for Michaela Harrison and coauthors, raises the issue of how ‘student data subjects’ are created from data in ways that make them visible to the educator’s eye as digital traces, which they argue may result in a ‘process of (un)teaching’ rather than meaningful teacher-student interaction.


Learning management systems have acquired some of the most extensive databases of student information on the planet. Roxana Marachi and Lawrence Quill draw specific attention to the learning management system Canvas, arguing it enables ‘frictionless’ data transitions across K12, higher education, and workforce data through the integration of third party applications and interoperability or data-sharing across platforms. They make the important call for greater public awareness concerning the use of predictive analytics, impacts of algorithmic bias, and enactment of ethical and legal protections for users who are required to use such software platforms. Juliana Raffaghelli and Bonnie Stewart suggest that building educators’ ‘data literacy’, with an emphasis on critical, ethical and personal approaches to datafication, is an important response to the increase of algorithmic decision-making and data collection in higher education, enabling educators to make sense of the systems that shape life and learning in the twenty-first century. Extending a critical data literacies approach to a computer science classroom, Mary Loftus and Michael Madden report on an experimental teaching module where students both explore the construction of machine learning models and learn to reflect on their social consequences as ‘students who will be building the autonomous, connected systems of the future’.


A number of the papers examine how datafication reinforces logics of marketization and performativity. Annette Bamberger, Yifat Bronshtein and Miri Yemini, for example, argue that as social media has become central to university marketing and reputation management, techniques of datafication help produce persuasive information that can be circulated as social media marketing material in the context of competitive struggles for the international student market. Aneta Hayes and Jie Cheng then examine the shortcomings of international teaching excellence and higher education outcomes frameworks, arguing that ‘epistemic equality’ and non-discrimination should be officially considered as indicators of teaching excellence, and show how evaluating universities on epistemic equality could work in practice. Such an approach stands in contrast to the surveillance techniques of the ‘smart campus’ analysed by Michael Kwet and Paul Prinsloo, who foreground the risks of normalizing surveillance architectures on-campus, call for a ban on various forms of dataveillance, and argue for decentralized services, ‘public interest technology’ and more democratic pedagogic models.


Rounding out the special issue, Neil Selwyn and Dragan Gasevic stage a dialogue between critical social science and data science. They add a computational dimension to familiar social criticisms of data representativeness, reductionism and injustice, as well as exploring social tensions inherent in technical claims to data-based precision, clarity and predictability, and finally highlight opportunities for productive interdisciplinary exchange and collaboration. Their paper offers a productive way forward for research on datafication in higher education. But significant challenges remain to reimagine and reshape the role of HE in the 2020s, both during the coronavirus recovery and in the longer term. We hope the special issue helps to catalyse debate about the limits, potential and challenges of the datafied university, and about the role of datafication in higher education for the future.

Ben Williamson (University of Edinburgh), Sian Bayne (University of Edinburgh) and Suellen Shay (University of Cape Town)"


[Photo by Pixabay on] 

Scooped by Roxana Marachi, PhD!

Higher Education’s Microcredentialing Craze: a Postdigital-Deweyan Critique // Postdigital Science and Education, SpringerLink



"As the value of a university degree plummets, the popularity of the digital microcredential has soared. Similar to recent calls for the early adoption of Blockchain technology, the so-called ‘microcredentialing craze’ could be no more than a fad, marketing hype, or another case of ‘learning innovation theater.’ Alternatively, the introduction of these compact skills- and competency-based online certificate programs might augur the arrival of a legitimate successor to the four-year university diploma. The thesis of this article is that the craze for microcredentialing reflects (1) administrative urgency to unbundle higher education curricula and degree programs for greater efficiency and profitability and (2) a renascent movement among industry and higher education leaders to reorient the university curriculum towards vocational training."


Article DOI:  

For link to online version of article, please visit:


To download pdf, please visit:


The case of Canvas: Longitudinal datafication through learning management systems // Marachi & Quill, 2020 Teaching in Higher Education: Critical Perspectives  

The Canvas Learning Management System (LMS) is used in thousands of universities across the United States and internationally, with a strong and growing presence in K-12 and higher education markets. Analyzing the development of the Canvas LMS, we examine 1) ‘frictionless’ data transitions that bridge K-12, higher education, and workforce data, 2) integration of third-party applications and interoperability or data-sharing across platforms, 3) privacy and security vulnerabilities, and 4) predictive analytics and dataveillance. We conclude that institutions of higher education are currently ill-equipped to protect students and faculty required to use the Canvas Instructure LMS from data harvesting or exploitation. We challenge inevitability narratives and call for greater public awareness concerning the use of predictive analytics, impacts of algorithmic bias, need for algorithmic transparency, and enactment of ethical and legal protections for users who are required to use such software platforms.


Dallas ISD Zoom Call Hacked With Child Porn Images // 


Google Sued for Active, Illegal, and Ongoing Collection of Data from Schoolchildren in New Mexico  // Consumer Reports


By Allen St. John [Feb. 21st, 2020]
New Mexico's attorney general filed a lawsuit this week claiming that Google uses its education products and services to illegally collect data from children. The case highlights how the tech company has become deeply embedded in schools across the country, raising privacy concerns among child and privacy advocates. 


Katie McInnis, policy counsel for Consumer Reports, says the case demonstrates the need for school officials to look more closely at issues of student privacy.


“If the claims prove to be true, Google has been collecting highly sensitive information about children, including voice recordings, in violation of COPPA," she says. "Hopefully this example will make schools nationwide more fully examine the tech products and companies they invite into the classroom and children’s everyday lives."

The lawsuit, filed Thursday in a federal court in New Mexico, claims that Google engages in "active, illegal and ongoing" collection of data from New Mexico schoolchildren under 13 without parental consent, in violation of the Federal Children’s Online Privacy Protection Act, or COPPA, as well as a New Mexico law called the Unfair Practices Act.

According to the lawsuit, the data include geolocation information, websites visited, terms searched for on Google and YouTube, contact lists, and voice recordings.


Google offers its G Suite for Education products, including Gmail, Calendar, Drive, Docs, and Sheets, to school districts at no cost, and sells its Chromebook laptops at a low cost. Many students are required by their schools to use Google's products and services. 

For those students, there's no way to receive a public school education without having personal data collected by the internet giant, the lawsuit alleges. It calls that "unconscionable," even if the data aren't used for targeted advertising.

Google disputes the allegations. “These claims are factually wrong," a spokesman told Consumer Reports in an email. "G Suite for Education allows schools to control account access and requires that schools obtain parental consent when necessary. We do not use personal information from users in primary and secondary schools to target ads. School districts can decide how best to use Google for Education in their classrooms and we are committed to partnering with them.”

Google's products are widely used in public and private schools throughout the U.S. The New Mexico lawsuit reports that 80 million students and educators across the U.S. use G Suite for Education, and 25 million students use Chromebook laptops.

Google provides a separate privacy policy for its G Suite for Education core products, which include Gmail, Calendar, Classroom, and Drive. But students can also access other Google services, such as Google Maps, Blogger, and YouTube, which are covered under the company's broader privacy policy. That gives Google the option to "offer users tailored content, such as more relevant search results," and to "combine personal information from one service with information, including personal information, from other Google services."

There are legitimate reasons why school districts use the company's products and services. Chromebooks are relatively inexpensive, and the company's software provides technology support that most school districts would be unable to execute, especially in an era of shrinking budgets.

But privacy advocates say they're concerned about the lack of a firm separation between Google's education products and its regular offerings. And they say the company isn't transparent enough when describing how it uses student data.

"YouTube is used by teachers all over the country all the time, and it doesn't seem to have the same protection for school accounts that, say, Gmail does," says Josh Golin, executive director of the advocacy group Campaign for a Commercial-Free Childhood. "Tens of millions of schoolchildren are using Google products in school but no one really understands what's going on with that data, and that's both a problem and a violation of the law."

According to published reports, the Norwegian Data Protection Authority is investigating the use of Google services in Norwegian schools, evaluating potential violations of Europe's GDPR data protection law. Last July, Germany banned schools from using cloud-based productivity software from Microsoft, Google, and Apple because of privacy concerns.

Last September, Google paid a $170 million fine in a settlement with the Federal Trade Commission over charges that YouTube, which is owned by Google, was collecting data about children without parental consent and using that personal information to target ads to them."...

For full report, please visit: 

Yet Analytics, HP, the "Experience Graph" and the Future of Human Capital Analytics


[Note: The re-sharing of this post on the Educational Psychology & Technology collection does not indicate endorsement of its content]



By Shelly Blake-Plock

"Those of you who have followed Yet Analytics will know that to date, we've concentrated on xAPI. We've done so for two reasons. The first is the clear differentiators provided by xAPI as regards data interoperability in matters of learning and human capital data. The second is the ability xAPI provides in helping to capture granular data on the ground. We've released the Yet xAPI LRS and see xAPI as a core part of our strategy to revolutionize human capital analytics.
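As a rough sketch of the kind of granular record xAPI standardizes: a statement is a JSON "actor, verb, object" triple sent to a Learning Record Store (LRS) via an authenticated POST. The learner, activity, and endpoint below are illustrative placeholders, not Yet's actual data or API:

```python
import json

# Minimal xAPI statement: who did what to which activity.
# All identifiers below are hypothetical examples.
statement = {
    "actor": {
        "objectType": "Agent",
        "name": "Example Learner",
        "mbox": "mailto:learner@example.com",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "objectType": "Activity",
        "id": "http://example.com/activities/safety-course",
        "definition": {"name": {"en-US": "Safety Course"}},
    },
}

# An LRS receives this as the body of an authenticated request:
#   POST https://<lrs-host>/xapi/statements
#   X-Experience-API-Version: 1.0.3
#   Content-Type: application/json
body = json.dumps(statement)
```

Because every statement shares this shape, records captured by different tools stay interoperable: any conformant LRS can store and query them.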


But it is only one part of that strategy.

Whereas we could describe the xAPI on-the-ground approach as a microcosmic strategy, it perhaps becomes more clear what our aims are as regards human capital data when we take a look at Yet's developments on the macro side of the table. Earlier this year, I was invited to speak at the Education World Forum in London. On stage, I debuted the collaborative work we have done with our partners at HP, Inc. The short transcript below describes that work and how we came to build the HP Education Data Command Center powered by Yet's new EIDCC platform.

And with the development of the EIDCC — a platform which can take data of any variety and provide the power of machine learning and neural networks to derive insight from it — Yet Analytics is rolling out a total data solution for human capital analytics, particularly as it concerns human development and the factors that investment and activity have on it.

Part of this involves xAPI. But xAPI is one of many factors. And where xAPI is used, it should be done so in the most intelligent and applicable manner. The broader reality is that the Experience API is something that should not be applied just for its own sake as the next big thing, but rather that the power of xAPI should be applied where applicable in the record of human behavior and performance — alongside any other data source — in the development of Experience Intelligence.

In other words — xAPI is a vital piece of the emerging human capital analytics ecosystem. But a meaningful solution in the space must look at the fullness of that very ecosystem in order to leverage it to produce insight. Macro level data — whether we're looking at the impact of social investment on a country's GDP or the relation between investment in employee experience and a Fortune 500 corporation's bottom line — is equally a key part of this understanding. Yet's goal is increasingly to bring these aspects — micro and macro — together and to make the insight of their mingling available on a single platform.

We cannot imagine the modern learning experience without xAPI. And we cannot imagine the future of human capital analytics without the ability to power AI and derive meaning across the diverse and divergent data assets of the Web. Those two things are not mutually exclusive. Rather, they are complementary values in the build-out of the new architecture of human capital analytics. They are complementary and necessary in the build-out of the Experience Graph.

The Experience Graph is exactly that — it's a graph of experience. And just as experience is unlimited, the tools we use to leverage experience should be built to leverage it in all its unlimited and contextual facets. That means capturing it — and the macro context that surrounds it — through a matrix of strategies. Yet Analytics is deploying the tools necessary to meet the needs inherent in that project — xAPI databases and analytics engines, automated and interactive experience visualization tools and now a data platform leveraging AI to distill insights across contexts. We are dedicated to this work because as the Experience Graph grows, so too does the ability of organizations and employees to use data to improve their outcomes both in human capital development and in the impact that development has on organizational culture, economics and growth. This is something that improves people's lives.

By bringing together the on-the-ground nature of granular activity data collection about human engagement and learning behaviors with macro econometric data and the predictive capabilities of artificial intelligence, Yet Analytics is turning human capital data into a key piece of business and strategy intelligence. The following transcript describes how Yet and HP have begun applying these practices to the evaluation and forecasting of human capital investment at nation-state scale, beginning by solving problems for ministers of education across the globe. I think that you will quickly realize how this approach to a problem in the global education space as described below may provide a template for human capital investment and development solutions for any company or large organization.


Prepared Remarks from the Education World Forum 2017

For too long countries have been mass proliferators of educational laptops and tablets with no real proof of their impact. But increasingly, government leaders are being held accountable for responsible spending. As the transformation and automation of work means that the best jobs will be skilled labor fully immersed in ICT, how can governments demonstrate the return on investment of their human capital technology spending both in fiscal and socio-economic terms?


In the economic sphere, Dr. Eric Hanushek has been something of the godfather of education data. He is responsible for establishing the quantitative connection between cognitive skills and long-term economic growth. It is in continuing that path of study that Yet Analytics and HP, Inc. have worked together to develop an artificial brain capable of identifying these economic and cognitive ROI connections at scale and in real-time.


The synapses of our artificial brain leverage machine learning and are programmed to fire based on the ingestion and querying of big data comprising information on learning, economic and social factors and outcomes gathered by the World Bank, the World Economic Forum, the United Nations and elsewhere. The outcome is the ability to predict multi-year return on investment on a great variety of learning, economic and social measures.


We knew that variables including adolescent fertility rates, infant mortality rates and the balance of trade goods all had significant relationships with GDP per capita. The artificial brain now recognizes the trends in these factors alongside educational and cognitive trends and can forecast the effect of such educational, economic and social factors on GDP. For example, the machine computes the trends and finds that in a given country investment in the math literacy of females as measured by PISA when combined with educational gender parity and a sustained level of internet access will yield a significant percentage outcome in the future growth of GDP per capita. The same factors in a different context may result in a different forecast. Perhaps most importantly, the artificial brain also forecasts the timetable of such investment and provides objective guidance by weighting value in the context of dozens of concurrent social and economic factors.


Beginning with, but advancing beyond, methods of automated multilinear regression analysis, the artificial brain comprises neural networks, each trained to identify trends and relationships within and among key variables. Individual data records are treated as observations which together comprise layers of information. Training data is passed through these networks thousands, even hundreds of thousands, of times in order to learn the trend and relationship between the presented patterns and the individual country's GDP per capita. The neural networks learn the differences in variable values year to year in order to forecast GDP.
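The baseline the passage starts from, multilinear regression on year-to-year differences, can be sketched as follows. The indicator panel here is synthetic stand-in data, not actual World Bank or PISA series:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic yearly panel: 40 years of 3 indicators (stand-ins for,
# say, fertility rate, internet access, and trade balance) plus a
# GDP-per-capita series driven by them.
years = 40
indicators = rng.normal(size=(years, 3))
true_weights = np.array([-0.5, 1.2, 0.3])
gdp = indicators @ true_weights + rng.normal(scale=0.05, size=years)

# As described above, the model works on year-to-year differences:
# it learns how *changes* in the variables relate to changes in GDP.
dX = np.diff(indicators, axis=0)
dy = np.diff(gdp)

# Ordinary least squares fit of GDP change on indicator changes.
coef, *_ = np.linalg.lstsq(dX, dy, rcond=None)

# One-step-ahead forecast from the most recent observed changes.
forecast_gdp = gdp[-1] + dX[-1] @ coef
```

A neural-network version replaces the linear fit with a trained nonlinear model, but the framing the passage describes, differenced inputs with next-year GDP per capita as the target, stays the same.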


The result is an artificial brain purpose-built to assist in the identification and forecasting of return on investment in learning, economic, and social endeavors. Expressly built to take into account the temporality of data, the artificial brain and its neural networks can be trained for each and every country — meaning that countries themselves may add their own data in order to attain even more precise and relevant forecasts.


Yet Analytics' EIDCC artificial brain powers the HP Education Data Command Center.


The interactive analytics and data visualizations provide:

  • A real-time comparison of thousands of data points customizable by country
  • The ability to identify and choose key variables to drill down into micro-components of the time series such as relationship between technology spending, social trends, and education strategies
  • Hypothesis testing on past and future events
  • Real-time computation and visualization of multi-year strategic ROI including social and cognitive measures


For government leaders, this means the ability to demonstrate the responsible and strategic fiscal rigor of a government; the visionary education reform leadership of government and educational leaders; and the prediction of future payback and time horizons for economic and social outcomes. It makes a clear case for investing in human capital from early childhood through tertiary education. Drawing from a number of sources and data streams ranging from the internationally comparable to the hyper local, it renders complex data and statistics as elegant and accessible data visualizations. And it proves to international financing organizations that development ROI will be measured and met.

The HP Education Data Command Center, powered by Yet Analytics' EIDCC artificial brain, provides cross-modal quantitative evidence of return-on-investment of technology spending in education as well as predictive insights to maximize a country's economic and social outcomes as a result of investments in human capital." 


Despite assurances of flexibility, educators fear liability in online instruction of special ed students // EdSource


Image credit: Alison Lin / EdSource


"Some districts had been reluctant to offer online learning to all students, fearing lawsuits in the event they could not offer comparable opportunities for special ed students."

By Carolyn Jones [EdSource]

"State and federal education leaders have assured school districts they would have flexibility in serving out-of-school special education students, but some districts are still afraid of lawsuits if they are unable to appropriately educate those students amid the coronavirus crisis.


U.S. Secretary of Education Betsy DeVos announced over the weekend that school districts should continue providing special education services, despite the difficulties they may face in offering online instruction during the pandemic and the threat of lawsuits if they are unable to do so.


The department also said it would grant districts and parents greater flexibility in meeting timelines spelled out in several federal laws governing special education. The California Department of Education also provided guidance, giving districts more leeway in implementing the special education laws.


The issue has direct implications for the nearly 800,000 special education students in California, who comprise 12.5 percent of the state’s public school enrollment, and who are now at home, with their often carefully constructed education programs completely upended.


DeVos’ announcement was welcomed by special education advocates, who said it provided clear guidance but enough flexibility for districts to find effective ways to meet the needs of special education students.


However, school administrators feel that the guidance they are receiving from both Washington and Sacramento is inadequate to assure school districts that they won’t face legal action if they are unable to provide all special education students with what is termed an “appropriate education” using online tools.


“(The state and federal guidance) is not nearly enough,” said Wesley Smith, executive director of the Association of California School Administrators, representing over 17,000 superintendents, principals and other administrators. “We need explicit waivers of explicit provisions. … Our districts are asking for relief so they can enact the governor’s orders to continue providing high-quality education.”


Some educators are seeking more specific waivers, which may be permitted in the education portion of the massive bailout legislation now being debated in Congress.


Last week, thousands of school districts across the country closed campuses to stem the spread of the coronavirus and began offering instruction online, or were planning to do so. That could pose a challenge for students enrolled in special education, many of whom rely on in-person assistance for speech, occupational, physical or behavioral therapy, as well as instructional aides to help in regular classrooms.


DeVos’ announcement came late last week after reports that some school districts were limiting — or not providing — online lessons for all students for fear of running afoul of federal special education laws, which guarantee students with disabilities the right to an equal education.


“Some educators have been reluctant to provide any distance instruction because they believe that federal disability law presents insurmountable barriers to remote education,” DeVos said. “This is simply not true. We remind schools they should not opt to close or decline to provide distance instruction, at the expense of students, to address matters pertaining to services for students with disabilities.”


But some districts want more assurance from the state and federal governments that they will not be subject to lawsuits from parents for deviating from a student’s special education plan, typically referred to as an Individualized Education Program, or IEP, which outlines all the services that a student is entitled to under federal and state law."...


For full post, please visit:




30 Years of Gender Inequality and Implications on Curriculum Design in Open and Distance Learning (Koseoglu, Ozturk, Ucar, Karahan, and Bozkurt, 2020) // Journal of Interactive Media in Education 



"Gender inequality is a pressing issue on a global scale, yet studies on this important issue have stayed on the margins of open and distance learning (ODL) literature. In this study, we critically analyse a batch of ODL literature that is focused on gender inequality in post-secondary and higher education contexts. We use Therborn’s social justice framework to inform and guide the study. This is a comprehensive social justice lens that sees inequality as “a life and death issue,” approaching empowerment as a central area of concern. Qualitative content analysis of 30 years of peer-reviewed literature reveals patriarchy and androcentrism as significant mechanisms that continue to produce gender inequality, in particular in women’s access to educational resources and formal learning opportunities.
We highlight three themes that emerged in the content analysis: (1) ODL and equal opportunity; (2) Feminism and gender-sensitive curriculum design; and (3) Culturally relevant curriculum design. We critique views of access to technology-enabled education as an instrument for social justice, and provide a pedagogical model for an ODL curriculum centered on empowerment and agency, two concepts closely linked to existential inequality. We argue that such a curriculum is public service and requires a model of education that is based on participation and co-construction, and lies at the intersection of critical, feminist, and culturally relevant pedagogical practices."


Attorney General Balderas Sues Google for Illegally Collecting Personal Data of New Mexican School Children 

To download, click on title or arrow above.


See also:




Testimony of Ifeoma Ajunwa, JD, PhD for Hearing on The Future of Work: Protecting Workers' Civil Rights in the Digital Age, Feb. 5th, 2020, Washington, D.C. 

To download, click on title above or here: 


Decentering Technology in Discourse on Discrimination // Gangadharan & Niklas (2019), Information, Communication, and Society 


For access to full article, please visit:


Fight for the Future Launches Campaign to Keep Facial Recognition Off U.S. College Campuses // Venture Beat


By Kyle Wiggers
"Fight for the Future, the Boston-based nonprofit promoting causes related to copyright legislation, online privacy, and internet censorship, today announced that it will team up with advocacy group Students for Sensible Drug Policy in an effort to ban facial recognition on university campuses in the U.S. To kickstart the grassroots movement, the organizations this morning launched a website and an organizing toolkit for student groups.

The push is part of Fight for the Future’s broader Ban Facial Recognition campaign, which launched in July 2019 and calls on local, state, and federal lawmakers to prevent government and law enforcement use of facial recognition. While facial recognition isn’t widely deployed on U.S. campuses, Evan Greer, deputy director of Fight for the Future, asserts that it is likely to threaten privacy, civil liberties, and equity as companies increasingly market the tech to schools.


“Facial recognition surveillance spreading to college campuses would put students, faculty, and community members at risk. This type of invasive technology poses a profound threat to our basic liberties, civil rights, and academic freedom,” said Greer. “Schools that are already using this technology are conducting unethical experiments on their students. Students and staff have a right to know if their administrations are planning to implement biometric surveillance on campus … The data collected is vulnerable to hackers and in the wrong hands could be used to target and harm students. And it’s invasive, enabling anyone with access to the system to watch students’ movements; analyze facial expressions; [and] monitor who they talk to, what they do outside of class, and every move they make.”

Facial recognition found itself in the news last year more than perhaps any other application of AI.

A number of efforts to use facial recognition systems within schools have met with resistance from parents, students, alumni, community members, and lawmakers alike. The Lockport City School District in upstate New York abandoned plans to pilot components of a face-recording system after it was revealed that the district planned to flag suspended students. At the college level, a media firestorm erupted after it was revealed that a University of Colorado professor secretly photographed thousands of students, employees, and visitors on public sidewalks for a military anti-terrorism project, and after University of California San Diego researchers admitted to studying footage of students’ facial expressions to predict engagement levels."...


For full post, please visit:
