Safe Schools & Communities Resources and Research

Social Media Helpline for Schools // Open 9-4 on School Days in CA: (855) 997-0409


"iCanHelpline is where schools and districts can call or email to get help in resolving problems that surface in social media – problems such as cyberbullying, sexting, and reputation issues involving students, staff or anyone in the school community. It’s a free service for schools, so we ask that individuals seeking help with a social media issue ask their school to contact us so we can all work together.

As the first step in developing a national helpline, iCanHelpline is being piloted in California during the 2015-’16 school year. It’s a joint project of California-based #iCANHELP and Net Family News Inc., national nonprofit organizations with more than a decade and a half of experience in education, student leadership and Internet safety."... 

This collection includes resources for strengthening school climate, and improving health, safety, connectedness, and student engagement.  Readers are encouraged to explore related links for further information.  For events and community resources specific to Santa Clara County, check out:
Rescooped by Roxana Marachi, PhD from Health Education Resources!

Healing Together: Community-Level Trauma. Its Causes, Consequences, and Solutions // Johns Hopkins Urban Health Institute

Click here to download pdf of document:

Creatrixi54's curator insight, August 21, 2015 6:46 PM

This is how #hiphopbasededu #hiphoptherapy will pave the way for new ways to engage and heal the people. 

Fleur Harding's curator insight, November 16, 2017 12:35 AM
This Scoop perhaps best explains why I chose to do this OCHS unit as an elective for my social work degree. As a support worker for young people at risk of homelessness, I am very aware of the mental strain and emotional turmoil that comes from working with people who have experienced trauma. The article discusses the vicarious trauma that human service workers in the justice system can experience through exposure to dangerous and distressing events and situations. This all ties in with the mental disorders discussed during the lecture and my other scoops for mental health OHS and is something I am keen to learn more about in the future.
Scooped by Roxana Marachi, PhD!

Technological School Safety Initiatives: Considerations to Protect All Students // Center for Democracy & Technology and Brennan Center for Justice 

Download by clicking on title or arrow above or at the following link: 

Scooped by Roxana Marachi, PhD!

Advocates call on FTC to investigate manipulative design abuses in popular FIFA game // FairPlay 


David Monahan, Fairplay
Jeff Chester, Center for Digital Democracy, 202-494-7100

Advocates call on FTC to investigate manipulative design abuses in popular FIFA game // Groups say FIFA: Ultimate Team Preys on Children's Vulnerability with Loot Boxes, “Funny Money”

BOSTON and WASHINGTON, DC – Thursday, June 2, 2022 – Today, advocacy groups Fairplay and Center for Digital Democracy (CDD) led a coalition of 15 advocacy groups in calling on the Federal Trade Commission (FTC) to investigate video game company Electronic Arts (EA) for unfairly exploiting young users in EA’s massively popular game, FIFA: Ultimate Team. In a letter sent to the FTC, the advocates described how the use of loot boxes and virtual currency in FIFA: Ultimate Team exploits the many children who play the game, especially given their undeveloped financial literacy skills and poor understanding of the odds of receiving the most desirable loot box items.

Citing the Norwegian Consumer Council’s recent report, Insert Coin: How the Gaming Industry Exploits Consumers Using Lootboxes, the advocates’ letter details how FIFA: Ultimate Team encourages gamers to engage in a constant stream of microtransactions as they play the game. Users are able to buy FIFA points, a virtual in-game currency, which can then be used to purchase loot boxes called FIFA packs containing mystery team kits; badges; and player cards for soccer players who can be added to a gamer’s team.

In their letter, the advocates noted the game’s use of manipulative design abuses, such as “lightning round” sales of premium packs, to promote the purchase of FIFA packs, tactics to which children are particularly vulnerable. The advocates also cite the use of virtual currency in the game, which obscures the actual cost of FIFA packs to adult users, let alone children. Additionally, the actual probability of unlocking the best loot box prizes in FIFA: Ultimate Team is practically inscrutable to anyone who is not an expert in statistics, according to the advocates and the NCC report. In order to unlock a specific desirable player in the game, users would have to pay around $14,000 or spend three years continuously playing the game.

“By relentlessly marketing pay-to-win loot boxes, EA is exploiting children’s desire to compete with their friends, despite the fact that most adults, let alone kids, could not determine their odds of receiving a highly coveted card or what cards cost in real money. The FTC must use its power to investigate these design abuses and determine just how many kids and teens are being fleeced by EA.” Josh Golin, Executive Director, Fairplay

“Lootboxes, virtual currencies, and other gaming features are often designed deceptively, aiming to exploit players’ known vulnerabilities. Due to their unique developmental needs, children and teens are particularly harmed. Their time and attention is stolen from them, they’re financially exploited, and are purposely socialized to adopt gambling-like behaviors. Online gaming is a key online space where children and teens gather in millions, and regulators must act to protect them from these harmful practices.” Katharina Kopp, Deputy Director, Center for Digital Democracy

“As illustrated in our report, FIFA: Ultimate Team uses aggressive in-game marketing and exploits gamers’ cognitive biases – adults and children alike – to manipulate them into spending large sums of money. Children especially are vulnerable to EA’s distortion of real-world value of its loot boxes and the complex, misleading probabilities given to describe the odds of receiving top prizes. We join our US partners in urging the Federal Trade Commission to investigate these troubling practices.” Finn Lützow-Holm Myrstad, Digital Policy Director, Norwegian Consumer Council

“The greed of these video game companies is a key reason why we’re seeing a new epidemic of child gambling in our families. Thanks to this report, the FTC has more than enough facts to take decisive action to protect our kids from these predatory business practices.” Les Bernal, National Director of Stop Predatory Gambling and the Campaign for Gambling-Free Kids

“Exploiting consumers, especially children, by manipulating them into buying loot boxes that, in reality, rarely contain the coveted items they are seeking, is a deceptive marketing practice that causes real harm and needs to stop. strongly urges the FTC to take action.” Laura Smith, Legal Director at

Advocacy groups signing today’s FTC complaint include Fairplay; the Center for Digital Democracy; Campaign for Accountability; Children and Screens: Institute of Digital Media and Child Development; Common Sense Media; Consumer Federation of America; Electronic Privacy Information Center (EPIC); Florida Council on Compulsive Gambling, Inc.; Massachusetts Council on Gaming and Health; National Council on Problem Gambling; Parent Coalition for Student Privacy; Public Citizen; Stop Predatory Gambling and the Campaign for Gambling-Free Kids; (Truth in Advertising, Inc.); U.S. PIRG


Please visit the following website for original announcement: 

Scooped by Roxana Marachi, PhD!

Resources for Responding to a Crisis 

The following list of resources has been curated from colleagues whose work is focused in school safety, school climate, student mental health, and educational leadership. Thanks to California Council on Teacher Education and UCLA Center for Mental Health in Schools and Student/Learning Supports for contributions below in response to the Robb Elementary School Shooting in Uvalde, Texas. 


Enough is Enough Syllabus 

Resources provided by students and faculty of the University of Minnesota in response to the Stoneman Douglas High School shooting in 2018

Teaching and Learning on Days After: Tragedy and Trauma Resources // Curated by Dr. Kaitlin Popielarz


Responding to a Mass Casualty Event at a School: General Guidance for the First Stage of Recovery


Responding to School Violence: Tips for Administrators


Helping Youth after Community Trauma: Tips for Educators


Talking to Children About Violence: Tips for Parents and Teachers


From San Diego County Office of Education: Resources for Educators, Families to Discuss School Shootings


For more, see also:



Scooped by Roxana Marachi, PhD!

COVID-19 (Coronavirus) Scams and Tips to Help Prevent Exploitation // Santa Clara County Office of Privacy 

The resource sheet provided above was created by the Privacy Office of the County of Santa Clara, California. To download, please click on title or arrow above or link below: 

Rescooped by Roxana Marachi, PhD from Safe Schools & Communities Resources and Research!

Books to Help Kids Understand the Fight for Racial Equality // Brightly


By Olugbemisola Rhuday-Perkovich 

Via Roxana Marachi, PhD
Scooped by Roxana Marachi, PhD!

Help Stop Hate Crimes // Resources from NAACP 

To download flyer above, please click title above or here: 



See also: 

NAACP Letter 2016:

Rising Nazism and Racial Intolerance in the United States:
Preventing Youth Hate Crime: A Manual for Schools and Communities:

Resource list above provided by Rev. Jethroe Moore, President of the San Jose / Silicon Valley NAACP 
Scooped by Roxana Marachi, PhD!

NYCLU Criticizes Unproven, AI-driven Surveillance of Students // Times Union 


By Rachel Silberstein
[Selected quotes]

..."cybersecurity and privacy experts warn that school districts are entering uncharted territory by installing unproven, artificial intelligence-driven surveillance devices that raise new questions about student privacy and how minors' images may be stored and shared. School shootings are statistically rare, while the use and potential abuse of private student information presents an imminent risk, experts say.

Last week, the State Education Department (SED) greenlit the state's first school facial recognition systems as a means to prevent intruders from gaining access to school facilities in Lockport, Niagara County, despite objections from parents and civil liberties groups who asked the district to hold off until the state finalizes guidelines on how the images may be used.

After meeting with Lockport school leaders over several months, state education officials approved the cameras' use after the district agreed to revise its privacy policy to ensure that no student data would be retained, according to a letter sent by SED.

“With these additional revisions, the Department believes that the Education Law ... issues it has raised to date relating to the impact on the privacy of students and student data appear to be addressed," said Temitope Akinyemi, chief privacy officer at SED.

Lockport’s software, called Aegis, can recognize weapons and sex offenders as well as a small group of individuals whose images have been placed in the system, such as staff or students who have been suspended or people barred from school property, according to the policy.


Experts say there is no way to prevent the tech companies from using the facial impressions of children to fine-tune their algorithms. They warn of potential bias and false positives; studies have found that facial recognition programs tend to disproportionately flag people of color, women and young people.

"We don't think the State Education Department has done its due diligence in really getting a grasp on how this technology works," said Johanna Miller, a civil rights attorney and director at the New York Civil Liberties Union's Education Policy Center. "It seems from the letter that they are not familiar with the technology at all. The concept that no student data will be retained is flawed."

The cameras have been used in settings like airports, stores and sports arenas, but rarely in public schools. The use of biometric technology by a public entity is highly controversial and is being debated nationally.


New York recently eradicated the use of fingerprinting — an early use of biometrics — to determine eligibility for programs like food stamps and Medicaid, citing criticism that the process is intrusive and a potential deterrent to applying for services. San Francisco earlier this year passed legislation prohibiting the use of facial recognition cameras by any government entity; there is speculation that the state of California may do the same.

Pending legislation in New York, sponsored by Assemblywoman Monica Wallace, a Democrat from the suburbs east of Buffalo, would create a one-year moratorium on the technology's use in New York schools to allow policymakers time to study its application and issue regulations.

This year, the Legislature and Gov. Andrew M. Cuomo enacted the Shield Act, which broadens the legal definition of "private information" to include biometric data, and sets limits on how companies may handle it.

“It is very concerning to me that kids are in the middle of this," said state Sen. Kevin Thomas, D-Long Island, the bill's main sponsor and chair of the chamber's consumer protection committee. "There is too much surveillance going on. We as a society are trading our privacy for security, and this has become the new norm."

Ed-tech boom

A growing education-tech sector is fueled in part by state incentives for schools to invest in high-tech security systems – grants that policymakers tout as a policy response to mass shootings around the nation.

The facial recognition cameras in Lockport were funded by taxpayers through the Smart Schools Bond Act of 2014, which grants schools millions for technological upgrades like whiteboards and internet upgrades, as well as security technologies. Lockport's schools spent the bulk of a $2.7 million grant on the biometric software.

School officials are under pressure from communities to do all they can to keep students safe and are targeted by tech companies looking to capitalize on the unease around school safety, sometimes through hard-selling "security consultants" who act as middlemen.


Documents uncovered by the NYCLU through a freedom of information (FOIL) request show that the same security consultant who urged the Lockport district to utilize facial recognition technology may be benefiting financially from Aegis and from an electrical company that installed the system.


"The pattern in schools we are seeing is really technology in search of a market," the NYCLU's Miller said. "These are tech start-ups that see deep pockets ... and school districts are put in a position to go way out of their comfort zone and evaluate these vendors."


Other districts, including Courtland City schools, are buying programs that monitor social media or log keystrokes on school-issued devices to flag words associated with bullying or self-harm, despite little evidence that the technology makes schools safer, according to a recent report by the Brennan Center for Justice.

Civil rights groups say these tactics may stifle learning and free speech when school-supplied devices are the only way for some students to access the internet."... 

Scooped by Roxana Marachi, PhD!

Virtual Violence Impacts Children on Multiple Levels // American Academy of Pediatrics



Policy and commentary published in Pediatrics detail the impacts of media violence on children, including aggressive behavior and victimization

Virtual violence – violence experienced via media or realistic technologies – is an inescapable component of children's lives, and research shows that without guidance or controls it has the power to make children more aggressive, violent and fearful.

The American Academy of Pediatrics (AAP) will publish a policy statement, "Virtual Violence," in the August 2016 issue of Pediatrics (released online July 18), which reviews the evidence of how virtual violence impacts children, and offers guidance to parents, media producers and pediatricians. A related commentary published in the same issue expands on the impacts of social media, smart phones and apps like Instagram and YouTube on virtual violence and teens.

"The American Academy of Pediatrics continues to be concerned about the impact that virtual violence has on children, and we know that parents are also concerned, because it's a question that pediatricians often receive during wellness exams," said Dimitri Christakis, MD, FAAP, lead author of the policy statement. "Pediatricians can let parents know that there are ways to mitigate the impact of media violence, by co-viewing games and movies with their kids, making a media plan for their family and protecting children under age 6 from all violent media."

Media violence is very common. In the year 2000, every G-rated movie contained violence, as did 60 percent of prime-time television shows, according to a study published in JAMA. A comprehensive assessment of screen violence in 1998 estimated that by middle school a typical child would have seen 8,000 murders and 100,000 other acts of violence, including rape and assault. Today, children experience screen violence through an even greater number of devices and platforms.

"With the advent of smart phones and apps like Snapchat and Instagram, children can capture, view and share violent acts in ways that are new to millennials and centennials," said Rhea Boyd, MD, FAAP, a member of the Executive Committee of the AAP Council on Communications and Media and lead author of the Pediatrics commentary, "The Evolution of Virtual Violence: How Mobile Screens Provide Windows to Real Violence."

"Nearly three out of four teenagers have access to a smart phone, and exposure to real-world violence via these devices, often without parental knowledge or control, can create feelings of distress, victimization and even fear," Dr. Boyd said.

In the Pediatrics commentary, Dr. Boyd and her co-author, Wendy Sue Swanson, MD, MBE, argue that portable smartphone cameras can expose young people to real-world violence, which is fundamentally different than the simulated violence depicted in traditional media sources, like television, movies, or video games. This access to real-world violence can result in complex emotions and behaviors in youth that may vary based on the family, community, or cultural group with whom youth identify and process acts of violence. For example, a teenager viewing a video of police violence may be distressed by the images but also moved to social action.

While hundreds of studies have found violent media can raise aggression in children, research has also shown that exposing children to prosocial media content can decrease aggression and improve overall behavior.

The AAP recommends:

  • Pediatricians should consider a child's "media diet" as a part of wellness exams, considering not just the quantity of media but also the quality.
  • Parents should be mindful of their child's media consumption, and should co-view media and co-play games with their children.
  • Protect children under age 6 from all virtual violence, because they cannot always distinguish fantasy from reality.
  • Policy-makers should consider legislation to prohibit easy access to violent content for minors and should create a robust and useful "parent-centric" media rating system.
  • Pediatricians should advocate for and help create child-positive media, collaborating with the entertainment industry on shows and games that don't include violence as a central theme.
  • The entertainment industry should create content that doesn't glamorize guns or violence, doesn't use violence as a punch line and eliminates gratuitous portrayals of violence and hateful, misogynistic or homophobic language unless also portraying the impacts of these words and actions.
  • In video games, humans or living targets should never be shot for points.
  • The news media should acknowledge the proven scientific connection between virtual violence and real world aggression and stop portraying the link as controversial.


The policy updates a previous statement published in 2009.

For full post, click on title above or here:

Scooped by Roxana Marachi, PhD!

‘Thinstagram’: Instagram’s Algorithm Fuels Eating Disorder Epidemic // The Tech Transparency Project


"Instagram says it removes content that encourages anorexia and bulimia. New research finds that the platform still pushes such content to teen and adult accounts."


By The Tech Transparency Project

"Instagram continues to promote dangerous eating disorders to vulnerable users including young teenagers, according to a new investigation by Reset and the Tech Transparency Project (TTP) that highlights the platform’s role in amplifying unhealthy body ideals.

Researchers found that Instagram recommended accounts full of disturbing images of underweight women to users who showed an interest in getting thin. Many of the recommended accounts explicitly promoted anorexia and bulimia, listing goal weights as low as 77 pounds.

The investigation also revealed just how easy it is to get pulled into Instagram’s “thinfluencer” culture, with anorexia “coaches” reaching out with unsolicited offers to provide weight loss advice.

Meanwhile, Instagram makes it exceedingly easy to search for hashtags and terms associated with eating disorders on the platform.

According to documents leaked earlier this year by Facebook whistleblower Frances Haugen, Instagram executives are acutely aware of the effects of content promoting unhealthy body ideals on young users. An internal presentation by an Instagram employee in 2019 said, “We make body image issues worse for one in three teen girls.” But the platform continues to amplify content promoting extreme weight loss, failing to enforce its own moderation policies.

The research by Reset and TTP adds to growing questions about Instagram’s impact on users struggling with body image issues, especially young people. Such questions will be front and center for Adam Mosseri, the head of Instagram, when he testifies before a Senate panel on Wednesday.

“The conclusions of this research are deeply concerning but sadly not surprising,” said Dr. Elaine Lockhart, chair of the Faculty of Child and Adolescent Psychiatry at the Royal College of Psychiatrists in the U.K. “I’m seeing more and more young people affected by harmful online content.”

“We need to see tougher regulation and stricter penalties for organizations that promote or amplify this content to users,” she said, adding that “government must compel social media companies to hand over anonymized data to researchers.”

The world of ‘thinspo’

To assess the extent to which Instagram protects users from content that encourages eating disorders, our researchers created an account on Oct. 20, 2021 for a hypothetical user who showed an interest in getting thin, and used that to document the content that Instagram’s algorithm recommended.

The account was for a 29-year-old adult. Over the course of a week, we posted six pictures of thin bodies and used vocabulary in the bio section that is common in thinspiration communities—such as “My thinspo” and “TW,” short for trigger warning. (While originally meant to help people avoid triggering images, the term “TW” has become a signal that can attract users to eating disorder content.) We also subscribed to other “thinspo” Instagram accounts, both private and public.

To examine whether Instagram affords a higher level of protection to minors, we repeated the same experiment with a second account for a 14-year-old. We explicitly stated the age in the bio for that account.

Experimental teen account

During this process, when our hypothetical users followed just one account associated with eating disorders, Instagram started recommending similar accounts.

For example, when our first test user started following a verified account with over 700,000 followers, run by a figure with a fan base in the "thinfluencer" community, Instagram’s algorithm suggested we also follow so-called “pro-ana” accounts. (“Ana” is a common shorthand for anorexia nervosa.) It’s easy to see how this could send a vulnerable person down a rabbit hole that normalizes toxic body images and extreme weight loss.


Interestingly, many of the pro-ana accounts recommended by Instagram had smaller follower counts. Accounts like these would normally have a hard time getting traction on the platform, but Instagram’s algorithmic amplification actively promoted them to new users, helping them find a broader audience.

The growth curve of our first test account further illustrates the problem of algorithmic amplification on Instagram. The account’s audience increased by more than sevenfold in the three weeks after its last activity, suggesting that Instagram recommended it to other users.


Our findings for the 14-year-old account were equally alarming.

In the teen user’s Discovery tab, Instagram recommended a number of large “thinfluencer” accounts that had at least 1,000 followers and featured highly produced content with dangerous body images. At the same time, Instagram’s “Discover people” feature (found in a user's profile) recommended smaller private accounts of young users oriented around extreme weight loss. This all creates a troubling ecosystem: “Thinfluencer” accounts on Instagram promote unhealthy body ideals, while peer communities of young users encourage each other to pursue those ideals.


Anorexia ‘coaches’

Within the Instagram communities promoting extreme body images, self-described “coaches” provide weight-loss advice to other users. The media have reported for years on the dangers of these kinds of individuals preying on vulnerable young people, particularly girls, on social media, so it should come as no surprise to Instagram. But it took only four days for a “coach” to contact our first account.

The following are screenshots of the interaction between the “coach” and our test account. The “coach” immediately attempted to shift the interaction to other platforms such as Snapchat or Telegram, where personal conversations between users are harder to track by both the platforms and law enforcement authorities.


Unmoderated group chats

During the experiment, our first account received personal messages from other users who likely found our account through the Discovery tab. (Instagram does not provide researchers with the data needed to track amplification patterns.) Those other users asked for tips on “how to get skinny quickly” or whether we wanted to be “ana buddies.” In many cases, it was impossible to verify the users’ authenticity, and we did not respond to any of the messages.  


Upon being invited, we did join one group chat called “Supporting starvation,” which had 17 other members. We did not write any messages in that chat, but captured messages from other users documented below: [See article for image]


Enforcement loopholes

Instagram’s official policy states: “[W]e’ll remove content that promotes or encourages eating disorders” while allowing people to “share their own experiences and journeys around self-image and body acceptance.” But our investigation showed that the company’s enforcement of this policy is patchy at best.

During our experiment, Instagram blocked the hashtags #ana (short for anorexia) and #mia (short for bulimia), but our researchers found that the fully spelled out hashtags for #аnorexia, #bulimia, and #magersucht (anorexia in German) were still active. What’s more, typing “ana” or “mia” into the Instagram search bar as non-hashtags still yields a significant amount of content promoting eating disorders.


Instagram provides some resources to help people suffering from eating disorders. Some pages for hashtags associated with eating disorders include a pop-up message pointing to ways to get support, including a link to reach a helpline volunteer. But there are loopholes in this system. For example, if a user searches for a banned hashtag like #thinspo, they don't get the pop-up message. 

TTP also found discrepancies in how Instagram deploys safety features. Take the page for the hashtag #th1n, a reference to "thin." On the app, Instagram showed a warning on the #th1n page that the content may go against its Community Guidelines. Yet the same #th1n page on the Instagram website included no such warnings, despite the presence of some graphic eating disorder content.


Moreover, without a prompt, resources for people suffering from eating disorders are hard to find on Instagram. A user must click on their profile icon, then click "Settings," then click "Help" in the sidebar menu, then click "Help Center," then click on the drop-down menu for "Privacy, Safety, and Security," and finally click "About Eating Disorders."

‘Multiple accounts’ policy

According to Instagram's Help Center, one user can have up to five different Instagram accounts and switch between them without logging out—and our researchers found this feature plays an important role in the “thinspiration” community.

We analyzed thousands of “thinspiration” profiles and found that backup accounts are common, with users often featuring them in their bios. The apparent strategy with these backup accounts is to evade removals or suspensions by Instagram that might cut people off completely from the eating disorder content they seek.


We found examples of users arguing that reporting their accounts won’t help them recover. One 13-year-old Instagram user who was engaging with pro-anorexia content said “reporting me won’t make me magically wanna recover, all it does is annoy me.”



Our research reveals multiple loopholes in Instagram's product design and safety policies, which make Instagram a danger to the mental health and physical well-being of one of its most vulnerable user groups: people with eating disorders.

Instagram not only fails to enforce its own policies, but it also proactively recommends toxic body image content to its adult and teen users. In this way, Instagram fuels the idealization and marketization of dangerous body ideals, while fostering communities of young users prone to eating disorders.

The platform, meanwhile, hasn’t adequately addressed the threat of anorexia “coaches” who prey on young people. These shortcomings greatly increase the risk of users being drawn into communities of self-harm."


For original post, please visit: 

No comment yet.
Scooped by Roxana Marachi, PhD!

Restorative Justice as a Doubled-Edged Sword: Conflating Restoration of Black Youth with Transformation of Schools // Daneshzadeh & Sirrakos (2018)

"The anchoring weight of slavery continues to ground schools by design and implementation, 151 years after the 13th Amendment to the Constitution was ratified. Empirical literature is rife with evidence that Black and Brown youth are penalized more frequently and with greater harshness than their white, suburban counterparts for the same offenses (Gregory, Skiba, & Noguera, 2010; Welch & Payne, 2010), to the point where Triplett, Allen, and Lewis (2014) describe this phenomenon as a civil rights issue. The authors examine how a constellation of school-sanctioned discipline policies have connected the legacy of slavery with punishment. In order to curb burgeoning suspension rates that disproportionately target Black youth, schools and grassroots organizations have adopted various tiers of Restorative Justice (RJ). This article draws upon existing theoretical frameworks of Restorative Justice to discuss new approaches and directions, as well as the limitations of its hyper-individualized applications in K-12 schools. Finally, the authors assess two case studies that aim to transform schools and community engagement by refocusing restorative philosophy on the ecological conditions of student contexts, rather than the presumed intrapsychic symptoms habitually ascribed to youth behavior and Black culture."


To download, click on title, arrow above, or this link below: 

No comment yet.
Scooped by Roxana Marachi, PhD!

Information and Resources To Support Families During COVID19 Shelter-In-Place Order // County of Santa Clara

To download, click on title or arrow above. Document is also available at:

No comment yet.
Scooped by Roxana Marachi, PhD!

Advocates Warn Students' Privacy At Risk In GOP Gun Violence Bill // The Hill


By Emily Birnbaum
"A long-awaited GOP proposal to combat mass shootings has been receiving pushback from education groups and children's privacy advocates over language they say could result in the “over-surveillance” of minors.

After months of deliberations, including meetings with victims and law enforcement officials in communities wracked by deadly shootings, Sen. John Cornyn (R-Texas) introduced a Republican-backed “bill to help prevent mass shootings" on Wednesday.

The Restoring, Enhancing, Strengthening, and Promoting Our Nation’s Safety Efforts (Response) Act, which has several Republican co-sponsors, bundles some of the top GOP proposals to combat mass shootings into one bill. It would expand resources for mental health treatment, facilitate the creation of “behavioral intervention teams” to monitor students exhibiting disturbing behavior and offer new tools for law enforcement.


The bill’s school safety proposals are a response to years of school shootings perpetrated by young people described as isolated and troubled.

But advocates have raised red flags over the Response Act’s requirement that schools begin monitoring their computer networks to “detect [the] online activities of minors who are at risk of committing self-harm or extreme violence against others.”

Under Cornyn’s legislation, nearly all federally funded schools in the U.S. would be required to install software to surveil students’ online activities, potentially including their emails and searches, in order to flag “violent” or alarming content.

The proposal would significantly expand the Children’s Internet Protection Act, a 2000 law that is mostly interpreted today as blocking children from looking up pornography on school computers.

Privacy experts and education groups, many of which have resisted similar efforts at the state level, say that level of social media and network surveillance can discourage children from speaking their minds online and could disproportionately result in punishment against children of color, who already face higher rates of punishment in school."...


For full post, please visit: 

No comment yet.
Scooped by Roxana Marachi, PhD!

School Surveillance Will Never Protect Kids From Shootings // Chris Gilliard // Wired


By Chris Gilliard

"If we are to believe the purveyors of school surveillance systems, K-12 schools will soon operate in a manner akin to some agglomeration of Minority Report, Person of Interest, and Robocop. “Military grade” systems would slurp up student data, picking up on the mere hint of harmful ideations, and dispatch officers before the would-be perpetrators could carry out their vile acts. In the unlikely event that someone were able to evade the predictive systems, they would inevitably be stopped by next-generation weapon-detection systems and biometric sensors that interpret the gait or tone of a person, warning authorities of impending danger. The final layer might be the most technologically advanced—some form of drone or maybe even a robot dog, which would be able to disarm, distract, or disable the dangerous individual before any real damage is done. If we invest in these systems, the line of thought goes, our children will finally be safe.


Not only is this not our present, it will never be our future—no matter how expansive and intricate surveillance systems become.

In the past several years, a host of companies have sprouted up, all promising a variety of technological interventions that will curtail or even eliminate the risk of school shootings. The proposed “solutions” range from tools that use machine learning and human monitoring to predict violent behavior, to artificial intelligence paired with cameras that determine the intent of individuals via their body language, to microphones that identify potential for violence based on a tone of voice. Many of them use the specter of dead children to hawk their technology. Surveillance company AnyVision, for instance, uses images of the Parkland and Sandy Hook shootings in presentations pitching its facial- and firearm-recognition technology. Immediately after the Uvalde shooting last month, the company Axon announced plans for a taser-equipped drone as a means of dealing with school shooters. (The company later put the plan on pause, after members of its ethics board resigned.) The list goes on, and each company would have us believe that it alone holds the solution to this problem.


The failure here is not only in the systems themselves (Uvalde, for one, seemed to have at least one of these “security measures” in place), but in the way people conceive of them. Much like policing itself, every failure of a surveillance or security system most typically results in people calling for more extensive surveillance. If a danger is not predicted and prevented, companies often cite the need for more data to address the gaps in their systems—and governments and schools often buy into it. In New York, despite the many failures of surveillance mechanisms to prevent (or even capture) the recent subway shooter, the mayor of the city has decided to double down on the need for even more surveillance technology. Meanwhile, the city’s schools are reportedly ignoring the moratorium on facial recognition technology. The New York Times reports that US schools spent $3.1 billion on security products and services in 2021 alone. And Congress’ recent gun legislation includes another $300 million for increasing school security.


But at their root, what many of these predictive systems promise is a measure of certainty in situations about which there can be none. Tech companies consistently pitch the notion of complete data, and therefore perfect systems, as something that is just over the next ridge—an environment where we are so completely surveilled that any and all antisocial behavior can be predicted and thus violence can be prevented. But a comprehensive data set of ongoing human behavior is like the horizon: It can be conceptualized but never actually reached.


Currently, companies engage in a variety of bizarre techniques to train these systems: Some stage mock attacks; others use action movies like John Wick, hardly good indicators of real life. At some point, macabre as it sounds, it’s conceivable that these companies would train their systems on data from real-world shootings. Yet, even if footage from real incidents did become available (and in the large quantities these systems require), the models would still fail to accurately predict the next tragedy based on previous ones. Uvalde was different from Parkland, which was different from Sandy Hook, which was different from Columbine.


Technologies that offer predictions about intent or motivations are making a statistical bet on the probability of a given future based on what will always be incomplete and contextless data, no matter its source. The basic assumption when using a machine-learning model is that there is a pattern to be identified; in this case, that there’s some “normal” behavior that shooters exhibit at the scene of the crime. But finding such a pattern is unlikely. This is especially true given the near-continual shifts in the lexicon and practices of teens. Arguably more than many other segments of the population, young people are shifting the way they speak, dress, write, and present themselves—often explicitly to avoid and evade the watchful eye of adults. Developing a consistently accurate model of that behavior is near impossible.

Not only are these technologies incapable of preventing our worst nightmares, their presence is actively moving us toward a dystopian one. If society were to deploy every surveillance and analytical tool available, schools would be hardened to a point where even the most anodyne signs of resistance or nonconformity on the part of young people would be flagged as potentially dangerous—surely an ongoing disaster for the physical, social, and emotional well-being of children, for whom testing boundaries is an essential element of figuring out both themselves and the world they live in. This applies as well to the proposal for more hardware. It’s possible to envision schools as a site where drones and robots are ready to launch into action, such that they come to resemble some combination of a penitentiary and an Amazon warehouse. Worse yet, this hyper-surveilled future is likely to significantly increase the violence visited upon Black students, trans students, and now, given the overturning of Roe, students seeking information on sexual health. All without bringing us any closer to the intended goal of eliminating shootings.


There’s a long-standing maxim among scholars and activists who study the history of technology: Innovations by themselves will never solve social problems. The school shooting epidemic is a confluence of many issues, none of which as a society we will “tech” our way out of. The common refrain is that these attempts are “better than nothing.” Rick Smith, the CEO of Axon who briefly proposed the taser drones, told Motherboard that his plan was in fact motivated by the gridlock in Washington, DC.

In one sense, it is true that doing absolutely nothing may be worse than what we have now. But this artificial dichotomy obscures other options—such as making it harder to obtain weapons capable of inflicting incalculable damage in a matter of seconds—that many countries have already pursued. “Better than nothing” is a set of practices that arise at the expense of children. It’s a half measure because as a society we are unwilling to do what actually works.

Chris Gilliard is a visiting research fellow at the Harvard Kennedy School Shorenstein Center."


For original article, please visit: 

No comment yet.
Rescooped by Roxana Marachi, PhD from Social & Emotional Learning and Critical Perspectives on SEL Related Initiatives!

Restorative Practices Guide and Toolkit // Chicago Public Schools 

No comment yet.
Scooped by Roxana Marachi, PhD!

Sextortion- What Parents Need to Know

The following announcement is from a Screen Time Action Network email update (6/4/22):


"The FBI warns of an increase in incidents involving sextortion of young children, particularly 14- to 17-year-old boys. Sextortion occurs when criminals lure a social media user into sending explicit photos, then threaten to expose them publicly to friends and family.


As the school year winds down and youth gear up to have more time online this summer, it’s important to give preteens and teens tools to recognize how sextortion starts and what they can do before it’s too late. When it happens, teens don’t always understand they are the victim of a serious crime, and often resist turning to trusted adults for help because they are so embarrassed and traumatized about what they have done. 


Lisa Honold, of the Center for Online Safety, created this resource, which explains what sextortion is, how to prevent it, and what to do if it has happened. Lisa also co-chairs our Cyberbullying and Online Safety Work Group. We are grateful for her expertise and dedication to protecting children.


Read the full article here or visit Lisa’s blog for parents, caregivers, educators, and others who live or work with kids."


Document above may be downloaded from clicking title or this link below: 

No comment yet.
Scooped by Roxana Marachi, PhD!

What Gun Violence Does to Our Mental Health // The New York Times


"Mass shootings and other types of trauma can have ripple effects not only for survivors but also for those who follow the news of the events."


By Christina Caron 

"Heather Martin was a senior at Columbine High School in 1999 when two gunmen, also teenagers, killed 13 people and wounded 21 more before taking their own lives. She ended up barricaded in a room for three hours. And although she wasn’t physically injured, she witnessed the aftermath of the shooting, which she described as “horrifying.”

Despite having survived such a traumatic event, she did not consider how deeply her mental health might have been affected. “I minimized my own experience and always thought, Someone has it worse. I should just be fine or be better,” she said.

But she wasn’t fine. Ms. Martin had recurring nightmares for years and eventually dropped out of college after developing an eating disorder and taking recreational drugs.

It wasn’t until the 10th anniversary of the shooting that she finally found the support she needed and reconnected with some of her classmates “who got it, who were also struggling, who didn’t judge me,” she said.


Mass shootings have become more common during the pandemic, and so, too, have other types of gun violence. So far this year there have been more than 200 mass shootings in the United States, including the one that caused the deaths of 19 children and two teachers in Uvalde, Texas, on Tuesday. But beyond the statistics is a toll that is harder to quantify: the large swath of people grappling with the psychological effects that stem from the violence.

The mental health toll doesn’t just affect those closest to gun violence. It also ripples through a community and the nation, said Erika Felix, an associate professor of clinical psychology at the University of California, Santa Barbara, who has studied survivors of shootings.

“It’s felt everywhere,” she said. “We really have to look at this as a public mental health crisis.”

For survivors, victims’ families and those who live near the location of a shooting, the psychological effects can be intense and prolonged. They may include post-traumatic stress disorder, substance abuse, self-harm and major depressive disorders.

But even among those who do not frequently experience gun violence or who have never been directly affected by a mass shooting, feelings of fear, anger or helplessness can arise. And studies have found that continually consuming news media after a tragedy can lead to acute stress.

“It affects our perceptions of vulnerability and risk,” Dr. Felix said. 

Could have happened to any of us

In a 2018 survey conducted by the Harris Poll for the American Psychological Association, 75 percent of young people between 15 and 21 said that mass shootings were significant sources of stress for them. Most adults ranging in age from 22 to 72 said the same.

The fact that the shooting in Uvalde could have happened to any of us “is deeply unsettling,” said Dr. Sara Johnson, a professor of pediatrics at Johns Hopkins University School of Medicine who has studied how chronic stress affects child development and behavior.

Some people may develop a sense that the world is not a safe place, that others cannot be trusted “or that they are powerless to change the circumstances in which they’re living,” Dr. Johnson said. “These kinds of mass shootings really tear at the fabric of society.”

But despite the potential for far-reaching psychological effects, there is limited data on what firearm injury does to our collective mental health.


This is in part because agencies like the Centers for Disease Control and Prevention did not fund gun violence research for more than two decades after a provision called the Dickey Amendment prohibited the use of federal money to “advocate or promote gun control.”

What experts have found is that directly after mass violence, most survivors and responders will have stress reactions that gradually decrease over time, according to the National Center for PTSD. But some people — and especially those with specific risk factors — may experience lasting consequences, including PTSD.

PTSD symptoms can be similar in adults and children, said Nicole R. Nugent, an associate professor of psychiatry and human behavior at the Warren Alpert Medical School of Brown University and an expert in PTSD identification and treatment.


Those with PTSD often have trouble sleeping and may become emotionally numb, continuously on edge or easily startled, she said. The world will often feel unsafe to them, and upsetting memories may intrude on their daily thoughts. Some people may try to avoid things that remind them of their trauma. Teens and adults might turn to substance abuse.

Younger children may experience stomachaches or headaches, and lower-grade anxiety that causes them to misbehave or have trouble concentrating. They may also engage in “traumatic play,” acting out the trauma they experienced, Dr. Nugent added. If the behavior persists, she said, “then we start to worry that it could be signaling something significant like PTSD.”

Proximity to violence

Much like those who experience gun violence, those who live near it may also suffer.

Dr. Aditi Vasan, a general pediatrician at Children’s Hospital of Philadelphia, decided to investigate how children in her community were psychologically affected by nearby shootings after speaking with patients who had anxiety, depression or difficulty sleeping.

“When I asked them when these symptoms started, they told me it was after a classmate or a friend or a neighbor was shot,” she said.

The resulting study, published in JAMA Pediatrics in 2021, examined emergency department admissions between 2014 and 2018 and found that children and teenagers in west and southwest Philadelphia who lived within about four to six blocks of where a shooting had occurred were more likely than other children to use an emergency room for mental health reasons during the two months after the shooting.

The odds rose among children who were exposed to multiple shootings and among those who lived closest to a shooting’s location, within two or three blocks. Their symptoms included anxiety, panic attacks, suicidal ideation and self-harm behavior, Dr. Vasan said.

Another study, in California, looked at the effects of police killings on several communities in Los Angeles. It showed decreases in high school students’ academic performance, learning deficiencies related to PTSD, and higher levels of depression and school dropout rates that correlated with how close students lived to where the shootings occurred. These problems were most pronounced among Black and Latino students who lived near the locations of police shootings of Black and Latino people.

“The fear overcomes the need to connect with other people, and that’s the real tragedy of what violence does to communities,” said Dr. Joel Fein, an emergency medicine physician at Children’s Hospital Philadelphia, where he co-directs the Center for Violence Prevention.


Addressing the psychological effects of gun violence

For younger children affected by violence, Dr. Nugent recommended keeping as much structure in place as possible, like regular bedtimes and mealtimes.

“They are looking to us for those subtle signals that things are OK and things are safe,” she said.

It’s also important to allow ourselves to feel grief, rather than to bottle it up, and to allow our children to acknowledge it, too, said Dr. Megan L. Ranney, an emergency physician and the academic dean of the School of Public Health at Brown University.

Finding the things that give us a sense of control can help us cope as well. Plan on disconnecting from the news media from time to time, Dr. Ranney added, so as not to “re-traumatize yourself over and over.” And consider making a positive contribution to your neighborhood, like getting involved in organizations such as the Boys & Girls Clubs of America or planting a community garden.

Shortly after the shooting in a movie theater in Aurora, Colo., in 2012, Ms. Martin and one of her high school friends co-founded the Rebels Project, a nonprofit, nonpartisan peer support group for those directly affected by mass violence. With about 1,700 members, it is one of the largest organizations of its kind, she said.

People will “push down their trauma and their experiences, and it can lead to some really dangerous places,” said Ms. Martin, now 41 and a high school English teacher in Aurora. “It’s really about acknowledging that you are impacted.”


Christina Caron is a reporter for the Well section, covering mental health and the intersection of culture and health care. Previously, she was a parenting reporter, general assignment reporter and copy editor at The Times. @cdcaron


For original post, please visit: 

No comment yet.
Scooped by Roxana Marachi, PhD!

The Starts and Stumbles of Restorative Justice in Education: Where Do We Go from Here? // National Education Policy Center


Anne Gregory and Katherine R. Evans, January 14, 2020

"Schools are implementing Restorative Justice in Education (RJE) initiatives across the United States, often to reduce the use of out-of-school suspension, which is known to increase the risk for dropout and arrest. Many RJE initiatives also aim to strengthen social and emotional competencies, reduce gender and racial disparities in discipline, and increase access to equitable and supportive environments for students from marginalized groups. This policy brief summarizes research on restorative initiatives, with a focus on implementation and outcomes in U.S. schools. After examining the evidence, the authors offer recommendations for comprehensive RJE models and strategic implementation plans to drive more consistently positive outcomes."

No comment yet.
Scooped by Roxana Marachi, PhD!

Sextortion and What Parents Need to Know // Center for Online Safety


"What is sextortion?

Sextortion is when someone coerces you to send explicit videos or photos online, then threatens to share them publicly if you don’t give them what they want: either paying them, giving them more explicit images, or meeting in person.

It can happen to a person of any age, but we’re going to focus on teen victims. Sextortion is a serious crime and a potentially life-threatening issue. Parents need to know how to help their kids avoid it and all of the repercussions that come with it.

The FBI warns that sextortion is hitting US communities hard and is particularly concerned for the teens targeted. The number of sextortion incidents reported to the FBI so far this year is on track to surpass last year's total, reinforcing the need for parents, guardians, and teenagers to be aware of this growing online danger."...



Please visit Center for Online Safety for original published post and resources for prevention and support here: 

No comment yet.
Scooped by Roxana Marachi, PhD!

Child Psychiatrists Warn That The Pandemic May Be Driving Up Kids' Suicide Risk // NPR


By Rhitu Chatterjee

"For ways to help kids at risk, read Part 2 of this story.

If you or someone you know may be considering suicide, contact the National Suicide Prevention Lifeline at 1-800-273-8255 (en español: 1-888-628-9454; deaf and hard of hearing: dial 711, then 1-800-273-8255) or the Crisis Text Line by texting HOME to 741741."

"Anthony Orr was almost done with his high school coursework when the governor of Nevada ordered a statewide shutdown of nonessential businesses on March 17, 2020.

"He was looking forward to all of the senior activities, prom and graduation," says his mother, Pamela Orr. But all he got was a "mini [graduation] ceremony," with only a handful of students walking, wearing masks and at a distance from each other.

"That was the most we could do because of COVID," she says.

Anthony graduated with honors as he had planned to, wearing a white robe and cap and an advanced honors sash, says Pamela. But he decided against going to college.

"Right now ... it's all online, and you just lose the whole college experience," she says.


Instead, he got a job working in construction. His parents thought he was doing fine. "He seemed happy to us," says Pamela. "He seemed happy."

But in August of last year, Anthony died by suicide.

While Pamela and her husband, Marc, struggle to come to terms with their loss, his school district in Las Vegas is trying to come to grips with the troubling statistic his death is part of.

He was one of 19 students who have died by suicide in the district since the shutdown last March. Thirteen of those deaths occurred since July.


"There's a sense of urgency," says Jesus Jara, the superintendent of the Clark County School District. "You know, we have a problem."

Suicide is complex, involving layers of risk factors, including biological and environmental ones. And it's hard to know the exact factors involved in the deaths of these 19 students.

But the sudden rise in deaths has school district officials worried that the coronavirus pandemic may have played a role. And educators and mental health care providers in other parts of the United States have the same concern.

In recent months, many suicidal children have been showing up in hospital emergency departments, and more kids are needing in-patient care after serious suicide attempts.


"Across the country, we're hearing that there are increased numbers of serious suicidal attempts and suicidal deaths," says Dr. Susan Duffy, a professor of pediatrics and emergency medicine at Brown University.

According to the Centers for Disease Control and Prevention, between April and October 2020, hospital emergency departments saw a rise in the share of total visits that were from kids for mental health needs.

Now, there are no nationwide numbers on suicide deaths in 2020 yet, and researchers have yet to clearly link recent suicides to the pandemic. Yet on the ground, there's growing concern.

NPR spoke with providers at hospitals in seven states across the country, and all of them reported a similar trend: More suicidal children are coming to their hospitals — in worse mental states.

"The kids that we are seeing now in the emergency department are really at the stage of maybe even having tried or attempted or have a detailed plan," says Dr. Vera Feuer, director of pediatric emergency psychiatry at Cohen Children's Medical Center of Northwell Health in New York. "And we're admitting to the hospital more kids than usual because of how unwell they are."

She has seen a slight increase in 10-to-11-year-olds attempting, but the majority of kids she sees are teenagers. Other places are seeing a rise in 2020 numbers compared with 2019 as well.


The number of kids with suicide attempts coming to the emergency room at Children's Hospital Oakland, in California, in the fall of 2020 was double the number in the fall of 2019, says Marisol Cruz Romero, a psychologist and the coordinator for the hospital's behavioral emergency response team.

At Riley Hospital for Children in Indianapolis, the number of children and teens hospitalized after suicide attempts went up from 67 in 2019 to 108 in 2020. And October 2020 saw a 250% increase in these numbers over the previous October, says Hillary Blake, a pediatric psychologist at the hospital.

Psychiatrists and other doctors who work with children say the pandemic has created a perfect storm of stressors for kids, increasing the risk of suicide for many. It has exacerbated an ongoing children's mental health crisis — suicide rates had already been going up for almost a decade among children and youth.

The problems brought on by the pandemic, they say, only highlight the weaknesses in the mental health safety net for children — and point to an urgent need for new solutions.

"The stories that we hear day by day in the emergency department really speak to us about the level of difficulties, the layers of traumas and the real problems that families are facing," says Feuer.

Loss of critical in-person support services

Many young people, like Anthony Orr, have no diagnosis or known history of mental illness when they start struggling with thoughts of suicide.

But the children who are most vulnerable right now, says Duffy, are the ones with underlying physical or mental illness, because the pandemic has disrupted in-person services they relied on in communities and at school."...


For full post, please visit: 

No comment yet.
Scooped by Roxana Marachi, PhD!

The Activist Dismantling Racist Police Algorithms // Technology Review


"Hamid Khan is winning his fight for the abolition of surveillance technology used by the LAPD"


By Tate Ryan-Mosley and Jennifer Strong 

"Hamid Khan has been a community organizer in Los Angeles for over 35 years, with a consistent focus on police violence and human rights. He talked to us on April 3, 2020, for a forthcoming podcast episode about artificial intelligence and policing. As the world turns its attention to police brutality and institutional racism, we thought our conversation with him about how he believes technology enables racism in policing should be published now.  

Khan is the founder of the Stop LAPD Spying Coalition, which has won many landmark court cases on behalf of the minority communities it fights for. Its work is perhaps best known for advocacy against predictive policing. On April 21, a few weeks after this interview, the LAPD announced an end to all predictive policing programs.


Khan is a controversial figure who has turned down partnerships with groups like the Electronic Frontier Foundation (EFF) because of its emphasis on reform. He doesn’t believe reform will work. The interview has been edited for length and clarity. 

Tell us about your work. Why do you care about police surveillance?

The work that we do, particularly looking at the Los Angeles Police Department, looks at how surveillance, information gathering, storing, and sharing has historically been used to really cause harm, to trace, track, monitor, stalk particular communities: communities who are poor, who are black and brown, communities who would be considered suspect, and queer trans bodies. So on various levels, surveillance is a process of social control. 

Do you believe there is a role for technology in policing?

The Stop LAPD Spying Coalition has a few guiding values. The first one is that what we are looking at is not a moment in time but a continuation of history. Surveillance has been used for hundreds of years. Some of the earliest surveillance processes go back to lantern laws in New York City in the early 1700s. If you were an enslaved person, a black or an indigenous person, and if you were walking out into the public area without your master, you had to walk with an actual literal lantern, with the candle wick and everything, to basically self-identify yourself as a suspect, as the “other.”


Another guiding value is that there’s always an “other.” Historically speaking, there’s always a “threat to the system.” There's always a body, an individual, or groups of people that are deemed dangerous. They are deemed suspect. 

The third value is that we are always looking to de-sensationalize the rhetoric of national security. To keep it very simple and straightforward, [we try to show] how the information-gathering and information-sharing environment moves and how it’s a process of keeping an eye on everybody.


And one of our last guiding values is that our fight is rooted in human rights. We are fiercely an abolitionist group, so our goal is to dismantle the system. We don’t engage in reformist work. We also consider any policy development around transparency, accountability, and oversight a template for mission creep. Any time surveillance gets legitimized, then it is open to be expanded over time. Right now, we are fighting to keep the drones grounded in Los Angeles, and we were able to keep them grounded for a few years. And in late March, the Chula Vista Police Department in San Diego announced that they are going to equip their drones with loudspeakers to monitor the movement of unhoused people.

Can you explain the work the Stop LAPD Spying Coalition has been doing on predictive policing? What are the issues with it from your perspective?

PredPol was location-based predictive policing, in which a 500-by-500-foot area was identified as a hot spot. Its companion program, Operation Laser, was person-based predictive policing.


In 2010, we looked at the various ways that these [LAPD surveillance] programs were being instituted. Predictive policing was a key program. We formally launched a campaign in 2016 to understand the impact of predictive policing in Los Angeles with the goal to dismantle the program, to bring this information to the community and to fight back.


Person-based predictive policing claimed that for individuals who are called “persons of interest” or “habitual offenders,” who may have had some history in the past, we could use a risk assessment tool to establish that they were going to recidivate. So it was a numbers game. If they had any gun possession in the past, they were assigned five points. If they were on parole or probation, they were assigned five points. If they were gang-affiliated, they were assigned five points. If they’d had interactions with the police like a stop-and-frisk, they would be assigned one point. And this became where individuals who were on parole or probation or minding their own business and rebuilding their lives were then placed in what became known as a Chronic Offender Program, unbeknownst to many people.
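The Chronic Offender scoring Khan describes is simple additive arithmetic. The following is a minimal illustrative sketch of that kind of point system, using only the categories and weights stated in the interview; the function name, field names, and input format are hypothetical, not the LAPD's actual implementation:

```python
# Illustrative sketch of an additive point-based risk score like the one
# described above. Weights follow the interview (5 points each for gun
# history, parole/probation, and gang affiliation; 1 point per stop);
# everything else here is hypothetical.
def chronic_offender_score(person: dict) -> int:
    score = 0
    if person.get("gun_possession_history"):
        score += 5
    if person.get("parole_or_probation"):
        score += 5
    if person.get("gang_affiliated"):
        score += 5
    # Every documented police interaction (e.g., a stop-and-frisk) adds one point.
    score += person.get("police_stops", 0)
    return score

# A person on probation with two prior stops scores 5 + 2 = 7.
print(chronic_offender_score({"parole_or_probation": True, "police_stops": 2}))  # 7
```

The sketch makes Khan's "numbers game" point concrete: the score is driven as much by how often police have already stopped someone as by any underlying conduct, so heavily policed neighborhoods feed the list.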


Then, based on this risk assessment, where Palantir is processing all the data, the LAPD created a list. They started releasing bulletins, which were like a Most Wanted poster with these individuals’ photos, addresses, and history as well, and put them in patrol cars. [They] started deploying license plate readers, the stingray, the IMSI-Catcher, CCTV, and various other tech to track their movements, and then creating conditions on the ground to stop and to harass and intimidate them. We built a lot of grassroots power, and in April 2019 Operation Laser was formally dismantled. It was discontinued.


And right now we are going after PredPol and demanding that PredPol be dismantled as well. [LAPD announced an end to PredPol on April 21, 2020.] Our goal for the abolition and dismantlement of this program is not just rooted in garbage in, garbage out; racist data in and racist data out. Our work is really rooted in that it ultimately serves the whole ideological framework of patriarchy and capitalism and white supremacy and settler colonialism.


We released a report, “Before the Bullet Hits the Body,” in May 2018 on predictive policing in Los Angeles, which led to the city of Los Angeles holding public hearings on data-driven policing, which were the first of their kind in the country. We demanded a forensic audit of PredPol by the inspector general. In March 2019, the inspector general released the audit and it said that we cannot even audit PredPol because it’s just not possible. It’s so, so complicated.


Algorithms have no place in policing. I think it’s crucial that we understand that there are lives at stake. This language of location-based policing is by itself a proxy for racism. They’re not there to police potholes and trees. They are there to police people in the location. So location gets criminalized, people get criminalized, and it’s only a few seconds away before the gun comes out and somebody gets shot and killed.

How do you ensure that the public understands these kinds of policing tactics? 

Public records are a really good tool to get information. What is the origin of this program? We want to know: What was the vision? How was it being articulated? What is the purpose for the funding? What is the vocabulary that they’re using? What are the outcomes that they’re presenting to the funder? 


They [the LAPD] would deem an area, an apartment building, as hot spots and zones. And people were being stopped at a much faster pace [there]. Every time you stop somebody, that information goes into a database. It became a major data collection program. 


We demanded that they release the secret list that they had of these individuals. LAPD fought back, and we did win that public records lawsuit. So now we have a secret list of 679 individuals, which we’re now looking to reach out to. And these are all young individuals, predominantly about 90% to 95% black and brown. 


Redlining the area creates conditions on the ground for more development, more gentrification, more eviction, more displacement of people. So the police became protectors of private property and protectors of privilege.

What do you say to people who believe technology can help mitigate some of these issues in policing, such as biases, because technology can be objective? 

First of all, technology is not operating by itself. From the design to the production to the deployment to the outcome, there is constantly bias built in. It's not just the biases of the people themselves; it's the inherent bias within the system.


There’s so many points of influence that, quite frankly, our fight is not for cleaning up the data. Our fight is not for an unbiased algorithm, because we don’t believe that even mathematically, there could be an unbiased algorithm for policing at all.

What are the human rights considerations when it comes to police technology and surveillance?

The first human right would be to stop being experimented on. I’m a human, and I am not here that you just unpack me and just start experimenting on me and then package me. There’s so much datafication of our lives that has happened. From plantation capitalism to racialized capitalism to now surveillance capitalism as well, we are subject to being bought and sold. Our minds and our thoughts have been commodified. It has a dumbing-down effect as well on our creativity as human beings, as a part of a natural universe. Consent is being manufactured out of us.

With something like coronavirus, we certainly are seeing that some people are willing to give up some of their data and some of their privacy. What do you think about the choice or trade-off between utility and privacy? 

We have to really look at it through a much broader lens.  Going back to one of our guiding values: not a moment in time but a continuation of history. So we have to look at crises in the past, both real and concocted. 


Let's look at the 1984 Olympics in Los Angeles. That led to the most massive expansion of police powers and militarization of the Los Angeles Police Department and the sheriff’s department under the guise of public safety. The thing was “Well, we want to keep everything safe.” But not only [did] it become a permanent feature and the new normal, but tactics were developed as well. Because streets had to be cleaned up, suspect bodies, unhoused folks, were forcibly removed. Gang sweeps supposedly started happening. So young black and brown youth were being arrested en masse. This is like 1983, leading to 1984.


By 1986-1987 in Los Angeles, gang injunctions became a permanent feature. This resulted in massive gang databases, with children as young as nine months old going into them. That became Operation Hammer, where they got tanks and armored vehicles, used by SWAT to serve warrants for low-level drug offenses, breaking down people's homes.


Now we are again at a moment. It’s not just the structural expansion of police powers; we have to look at police now increasingly taking on roles as social workers.  It’s been building over the last 10 years. There’s a lot of health and human services dollars attached to that too. For example, in Los Angeles, the city controller came out with an audit about five years ago, and they looked at $100 million for homeless services that the city provides. Well, guess what? Out of that, $87 million was going to LAPD.  

Can you provide a specific example of how police use of technology is impacting community members?

Intelligence-led policing is a concept that comes out of England, out of the Kent Constabulary, and started about 30 years ago in the US. The central theme of intelligence-led policing is behavioral surveillance.  People’s behavior needs to be monitored, and then be processed, and that information needs to be shared. People need to be traced and tracked.  


One program, called Suspicious Activity Reporting, came out of 9/11; it lists several activities that are completely constitutionally protected as potentially suspicious. For example: taking photographs in public, using video cameras in public, walking into infrastructure and asking about hours of operation. The standard is “observed behavior reasonably indicative of preoperational planning of criminal and/or terrorist activity.” So you're observing somebody's behavior that only “reasonably indicates”; there is no probable cause. It creates not a fact, but a concern. That speculative and hunch-based policing is real.


We were able to get numbers from LAPD’s See Something, Say Something program. And what we found was that there was a 3:1 disparate impact on the black community. About 70% of these See Something, Say Something reports came from predominantly white communities in Los Angeles. So now a program is being weaponized and becomes a license to racially profile.
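The 3:1 disparate impact figure is a per-capita rate ratio: reports naming members of one group, per resident, divided by the same rate for a comparison group. A hedged sketch of that computation follows; the counts and populations are invented purely to illustrate the arithmetic, not the coalition's actual data:

```python
# Sketch of a disparate-impact rate ratio. All numbers below are
# hypothetical, chosen only to illustrate how a 3:1 figure is derived.
def report_rate(reports: int, population: int) -> float:
    """Suspicious-activity reports per capita for a group."""
    return reports / population

black_rate = report_rate(reports=300, population=100_000)
white_rate = report_rate(reports=100, population=100_000)
ratio = black_rate / white_rate
print(f"{ratio:.0f}:1")  # 3:1
```

Comparing per-capita rates rather than raw counts is what makes the disparity visible when the groups differ in size.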


The goal is always to be building power toward abolition of these programs, because you can’t reform them. There is no such thing as kinder, gentler racism, and these programs have to be dismantled.

So, you really think that reform won’t allow for use of these technologies in policing?

I can only speak about my own history of 35 years of organizing in LA. It’s not a matter of getting better, it’s a matter of getting worse. And I think technology is furthering that. When you look at the history of reform, we keep on hitting our head against the wall, and it just keeps on coming back to the same old thing. We can’t really operate under the assumption that hearts and minds can change, particularly when somebody has a license to kill.


I’m not a technologist. Our caution is for the technologists: you know, stay in your lane. Follow the community and follow their guidance."


For original post, please visit:


Photo credit: Damon Casarez

Scooped by Roxana Marachi, PhD!

"Restorative justice brings people together to build community and address harm through community-based circle processes. At the heart of RJ is the belief that strong communities are essential to preventing harm from occurring. When harm or conflict arises, a trauma-informed circle practitioner engages participants in transformational processes that address the needs of all who are affected. These processes emphasize accountability, humanity and community. Restorative practices promote the creation of spaces of trust and respect with housemates, co-workers, and partners for difficult conversations and deep listening."

Scooped by Roxana Marachi, PhD!

When Middle Schoolers Say #MeToo // By Rachel Simmons 


Scooped by Roxana Marachi, PhD!

Anti-Bullying and Harassment Resource Library // National Center for Youth Law


"NCYL is collecting model policies and practices that schools and school districts can adopt and implement to ensure students are safe in school and their dignity respected. This includes model school board resolutions; best practices with respect to investigating reported incidents of bullying, harassment and intimidation; and training materials for teachers and students on cultural competency, growth mindset and implicit bias. There are also public advocacy tools for students and families such as model complaints they can file with their state department of education or the Federal Department of Education, Office of Civil Rights." 

Scooped by Roxana Marachi, PhD!

Stalkerware: What to do if you're the target // CNET


Stalkerware can turn phones into all-seeing surveillance tools.

Brett Pearce/CNET

"This article discusses domestic violence. CNET would like to remind readers that browsing histories, including this story, can be monitored and are impossible to completely clear. If you need help, please call the National Domestic Violence Hotline at 1-800-799-7233.

Things got weird at the end of Allie's relationship with her boyfriend. One night, he seemed to know where she'd been when she was out without him, and another night he started talking about something she'd recently read on her personal computer at home, where she lived alone. 

At the beginning of their relationship, he said he had cyberstalked a past girlfriend, but he assured her that those days were behind him. Now Allie, who asked to use a pseudonym out of concern for her safety, wondered if her soon-to-be-ex boyfriend was spying on her.

"I thought I was going nuts because I was pretty sure I hadn't shared that information," said Allie, who ditched her laptop and phone rather than find out what software her ex might have installed on them. "In hindsight, it was subtle intimidation."

The paranoia that Allie felt is becoming a sadly common experience. It's jaw-droppingly easy for someone to buy and install intrusive apps, known as stalkerware, on someone else's device. The apps are plentiful, according to antivirus software firms that track their prevalence. A recent Harris poll conducted with antivirus firm NortonLifeLock found that one in 10 people admit to using stalkerware to track a partner or ex-partner. The apps are so simple that some people on TikTok have posted 60-second tutorials on how to use them.

The software works on computers but has become especially powerful to use on phones, turning the gadgets into all-seeing surveillance devices that reveal location data as well as emails, web browsing histories and more. Stalkerware on smartphones can lead domestic abusers to partners who may be in hiding. The apps give heightened control to abusers whose partners haven't left, making escape harder to manage. Stalkerware apps have been tied to horrible acts of violence.

There can be legitimate reasons to use tracking apps, such as monitoring children's phones, or monitoring employees (with their consent). However, the distinction between these apps and what's often called stalkerware is blurry. Many apps bill themselves as legitimate monitoring apps but can offer staggering amounts of information from targets' phones and can operate completely undetected. The reality is that these apps get abused by people who spy on adults without their consent, according to law enforcement officials and to domestic-violence and legal experts. 

You might at some point worry you have stalkerware on your phone or laptop. It isn't easy to decide what to do about it, domestic-violence experts say, because your partner or ex might become more dangerous if you delete the software on your device. But there are steps you can take to learn more about the software and whether it's on your device.

What is stalkerware?

Stalkerware refers to a broad group of apps that someone else can install on your device to intercept texts and phone calls, access your location, log your web browsing activity and turn on your camera or microphone. The information gathered by such an app typically gets sent to a portal or companion app accessed by the person who installed the stalkerware. 

The apps can be installed on all kinds of phones, though it's a bit more complex to get stalkerware working on iPhones. The person installing stalkerware typically has to get physical access to the user's phone to install an app. A big exception to this is if the person installing stalkerware has the target's iCloud credentials, allowing them to access backups of the other person's phone.

Is stalkerware illegal?

Surreptitious spying on your devices without your consent is illegal. So is stalking. Additionally, the apps usually violate the policies for apps sold on stores run by Google and Apple, and they're frequently taken down from those stores.

People still install them on other people's phones, though, finding the apps for sale on the app makers' websites instead of an app store, and at times undermining the foundational security of a target's phone by jailbreaking it. The apps are often sold as child or employee monitoring services, but they're ripe for abuse because they can run undetected on a device, say law enforcement officials and domestic-violence experts.

There have been prosecutions of people who used stalkerware, but they're uncommon. 

How do I know if my phone has stalkerware?

That can be hard. The software often disguises itself, either by displaying an innocuous icon (like a battery monitor), or by not displaying an icon at all, says Kevin Roundy, technical director at the NortonLifeLock research group.

While researching stalkerware apps, Roundy identified other categories of apps that often work in concert with the intrusive software. One of these is an app-hiding app, which can remove the icon of a stalkerware app from your screen.

Even if an app's icon is hidden on your phone, it should show up in your settings as an item in the list of applications running on your device. The app still probably won't have a label that immediately identifies it as stalkerware, Roundy says, so look for any app you don't recognize. You can look up any unusual-looking apps online on another device to see if you can find more information about them."...


For full post, please visit:

Scooped by Roxana Marachi, PhD!

The Web of Violence: Exploring Connections Among Different Forms of Interpersonal Violence and Abuse // Hamby and Grych (2013) Springer

