Educational Psychology & Technology
Rescooped by Roxana Marachi, PhD from Leadership Psychology onto Educational Psychology & Technology

Digital storytelling: How to tell a story that stands out in the digital age


By Jasper Visser: "To address the most important issue first: there is no such thing as digital storytelling. There’s only storytelling in the digital age, and frankly speaking this isn’t much different from storytelling in the age of hunters, gatherers, dinosaurs and ICQ" ...


For full post, click on title above or link here:

Via The Digital Rocking Chair, Dr. Pamela Rutledge
ChelseyMarie's curator insight, January 18, 2013 8:10 PM

Watch the video! It's amazing. Book Burn Party!!! YESSS.

Educational Psychology & Technology
This curated collection includes news, resources, and research related to Educational Psychology and/or Technology. The page also serves as a research tool to organize online content. The grey funnel-shaped icon at the top allows for searching by keyword. For research more specific to tech, screen time, and health/safety concerns, to learn about the next wave of privatization involving technology intersections with "Social Impact Bonds", and for additional Educator Resources, please visit the pages linked here [links to external sites].
Rescooped by Roxana Marachi, PhD from Screen Time and Tech Safety Research!

Health and Safety Research Gaps in Policies and Practices Integrating Emerging Technologies for Young Children 

Links are as follows in order of the slides: 


The Silicon Valley Billionaires Remaking America's Schools 


Dr. Catherine Steiner-Adair
Clinical Psychologist and Research Associate at Harvard Medical School 


Video link may be viewed at: 


Carter B, Rees P, Hale L, Bhattacharjee D, Paradkar MS. Association Between Portable Screen-Based Media Device Access or Use and Sleep Outcomes: A Systematic Review and Meta-analysis. JAMA Pediatr. 2016 Oct 31. doi: 10.1001/jamapediatrics.2016.2341. [Epub ahead of print]


Screen Time Hurts More Than Kids' Eyes 


New Media Consortium / Consortium for School Networking Horizon Report 


"American Revolution 2.0: How Education Innovation is Going to Revitalize America and Transform the U.S. Economy" 


"Preschool is Good For Children But It's Expensive So Utah Is Offering It Online" m/local/education/preschool-is-good-for-poor-kids-but-its-expensive-so-utah-is-offering-it-online/2015/10/09/27665e52-5e1d-11e5-b38e-06883aacba64_story.html


Philanthropy Roundtable's "Blended Learning: Wise Givers Guide to Supporting Tech-Assisted Learning" (Formerly chaired by B. DeVos)


CyberCharters Have Overwhelming Negative Impact 


Ma, J., van den Heuvel, M., Maguire, J., Parkin, P., Birken, C. (2017). Is handheld screen time use associated with language delay in infants? Presented at the Pediatric Academic Societies Meeting, San Francisco, CA.  


Jonathan Rochelle’s GSV/ASU PRIMETIME Keynote Speech pitching Google Cardboard for children in schools as proxy for actual field trips: 


Scientists Urge Google to Stop Untested Microwave Radiation of Children's Eyes and Brains with Virtual Reality Devices in Schools //  Asus product manual 


Telecom Industry Liability and Insurance Information 


National Association for Children and Safe Technology - iPad Information 


For infant/pregnancy related safety precautions, please visit 


194 Signatories (physicians, scientists, educators) on Joint Statement on Pregnancy and Wireless Radiation 


Article screenshot from France: "Portables. L'embrouille des ondes électromagnétiques" ("Mobile phones: the electromagnetic-wave muddle")


Wireless Phone Radiation Risks and Public Policy 


"Show The Fine Print" 


Scientist petition calls for greater protective measures for children and pregnant women, cites need for precautionary health warnings, stronger regulation of electromagnetic fields, creation of EMF free zones, and media disclosures of experts’ financial relationships with industry when citing their opinions regarding the safety of EMF-emitting technologies. Published in European Journal of Oncology 


International Agency for Research on Cancer Classifies Radiofrequency Electromagnetic Fields as Possibly Carcinogenic to Humans (2011)


For more on the sources of funding for this research, see:


Maryland State Children’s Environmental Health and Protection Advisory Council // Public Testimony


"Until now, radiation from cell towers has not been considered a risk to children, but a recent study raises new questions about possible long-term, harmful effects." 


For further reading, please see the Captured Agency report published by Harvard’s Center for Ethics.


Updates/posts/safety information on Virtual Reality: 


Environmental Health Trust Virtual Reality Radiation Absorption Slides 


Healthy Kids in a Digital World: 


National Association for Children and Safe Technology 


Doctors’ Letters on Wifi in Schools// 154 page compilation 


Insurance and Liability Disclaimers/Information from Telecom Companies 


Most of the documents and articles embedded within the presentation above are searchable/accessible on the following page:

The document above is a PDF with live links; the links are also listed above for easier access. To download the original file, please click on the title or arrow above. It is a large file, so it may take several minutes to load.

Scooped by Roxana Marachi, PhD!

"Data Exploitation Isn't A Bug At Facebook, It's A Feature."  //


By Dylan Byers

"The real scandal? Data exploitation isn't a bug at Facebook, it's a feature.


Facebook is in the data exploitation business: They make money by harvesting your data and selling it to app developers and advertisers. Indeed, the most alarming aspect of Cambridge Analytica's "breach" of 50 million users' data is that it wasn't a breach at all. It happened almost entirely above board and in line with Facebook policy.


The one rule that Cambridge professor Aleksandr Kogan violated, according to Facebook, was passing user data to third parties, including Cambridge Analytica. But even my Facebook sources acknowledge that it is impossible for the company to completely monitor what developers and advertisers do with the data.


This is why it is so hard to trust Facebook when they say "protecting people's information is at the heart of everything we do." In fact, Facebook's business is providing people's information to outside parties whose ultimate goals are unknowable.


If protecting data were truly at the heart of Facebook's business, Facebook wouldn't be in business."...


For full post, see: 

Scooped by Roxana Marachi, PhD!

Error in UK-wide School Info Management System Mismatched Student and Parent Data // The Register


By Rebecca Hill

"Updated: Capita has admitted a bug in an information management system used by 21,000 UK schools could have incorrectly linked contact details to the wrong pupils – an incident with huge implications for pupils' data protection.


The error, which has been pinned on a December 2017 upgrade to the Schools Information Management System, could have resulted in schools sending out information about pupils to the wrong address.

Capita has apologised to schools for the bug and has issued a patch that will prevent further records from being corrupted – but this will be cold comfort for those who used the system to send out correspondence ahead of the end of year.

According to an email sent out to schools in one UK county, and seen by The Register, the fault was in the SIMS software matching routine for new pupils. It is understood to affect all users nationally, regardless of whether they are locally or centrally hosted.

“The consequence of the corruption is that contact information for the incoming pupil for example, address, telephone number and email address, may have become associated with other pupil’s records, or the new pupil could themselves be linked to the wrong contact details,” the email stated.


“The problem could have impacted pre-admissions, pupils on roll and the records of school leavers.”


The problem, which affects common transfer files (CTF) – which are used to transfer children’s information between primary and secondary schools for moving pupils and other ad-hoc transfers – is explained in more detail in a note on Capita’s website.

“If you have imported a CTF for pupils joining your school, that included parents or other contacts with a name that matched exactly to a contact record already in your database, the applicant may have been linked incorrectly to this person and some data may have changed.”

According to a user on an education technology forum, the software uses a matching routine to check if any of the pupils that are being imported have contacts that already exist in SIMS. If this is the case, that pupil is linked to that contact, with the address being updated if necessary.


“Unfortunately, several aspects of the matching routine [were] accidentally removed so that the matching routine only checks for matching names and gender. The net result is that existing contacts can be given the wrong address and the imported pupil in the admission group can be linked to the wrong contacts,” the user wrote.

“Also where no address for the contact has been included in the [common transfer file], the link from the imported pupil will be to the first existing contact with the same name and gender, but in this case the address for that contact is not changed.”
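The failure mode the forum user describes can be illustrated with a small sketch. This is hypothetical code (invented names and records, not Capita's actual SIMS routine): when a matching routine is reduced to checking only name and gender, an imported pupil can be linked to the first existing contact with those values, even when the address shows it is a different person.

```python
# Hypothetical sketch of the failure mode described above (not Capita's code).
# Two distinct contacts happen to share a name and gender; only the address
# distinguishes them.

existing_contacts = [
    {"id": 1, "name": "Alex Smith", "gender": "M", "address": "1 High St"},
    {"id": 2, "name": "Alex Smith", "gender": "M", "address": "9 Mill Rd"},
]

def match_contact(imported, contacts, check_address=True):
    """Return the first existing contact considered 'the same person'."""
    for c in contacts:
        same = (c["name"] == imported["name"]
                and c["gender"] == imported["gender"])
        if check_address:
            same = same and c["address"] == imported.get("address")
        if same:
            return c
    return None

new_pupil_contact = {"name": "Alex Smith", "gender": "M", "address": "9 Mill Rd"}

# Intended behaviour: the address is part of the match, so the right
# record (id 2) is found.
assert match_contact(new_pupil_contact, existing_contacts)["id"] == 2

# Buggy behaviour: with the address check accidentally removed, the pupil
# is linked to contact 1, whose address would then be silently overwritten.
assert match_contact(new_pupil_contact, existing_contacts,
                     check_address=False)["id"] == 1
```

The sketch also shows why such corruption is hard to undo after the fact: once the wrong contact's address has been overwritten, the record itself no longer reveals which links were mismatched.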


Capita said that there was a patch for the bug, and that upgrading to the Summer 2018 version would prevent any further corruption.

However, the outsourcer appeared to have been immediately unable to confirm which records were affected. In the email, the local council said that it was waiting for Capita to offer “more clarity” and that until then “we cannot identify or fix any records that have already been corrupted."...

Do not use the SIMS DB

In the meantime, schools are told that, in order to avoid a potential data breach, it is “vital” not to use the SIMS database to send out any communications without thorough checks that the contact details are correct for each pupil.


There are also questions of how long the firm has known about the problem and when schools were informed – which is likely more frustrating as many schools will have recently carried out transfers of pupil information for the coming September new school year."...


For full post and current updates, see: 

Scooped by Roxana Marachi, PhD!

Democracy Disrupted? Personal Information and Political Influence // Information Commissioner's Office 

Scooped by Roxana Marachi, PhD!

Education Department Investigating Westchester Square Academy's Online Courses // NYPost


"When Laurenz Santiago showed up in his cap and gown at Westchester Square Academy, he and his family got a humiliating reception.

“Your son may not be attending the graduation,” his stunned dad, Edward Santiago, said a school staffer told him.

The parents were ushered into Principal Yira Salcedo’s office. She explained that Laurenz was among five seniors who could not graduate because a math teacher had failed to sign off on the online courses they had taken. “The principal was in tears,” the dad said.


In fact, two teachers who were asked to give grades for the online work refused to do so because they didn’t know the students – and believed it was wrong to award the credits, insiders said.


After inquiries by The Post, the city Department of Education said it’s investigating whether the Bronx high school violated city rules requiring students who take online courses to have “regular and substantive interaction” with qualified teachers.

The department also said it’s barred 16 additional Westchester Square students who took online math courses from receiving credits, and is looking into others.

Daniel Gardner, who took U.S. history and geometry online, said the courses are quick – and easy for students to fake.

“You can cheat off someone. If you wanted to bring all your friends in to cheat, so be it,” he told The Post."... 

Scooped by Roxana Marachi, PhD!

Two Senators Call for Investigation of Smart TV Industry // The New York Times


By Sapna Maheshwari

"Two Democratic senators have asked federal regulators to investigate the business practices of smart-television manufacturers amid worries that companies are tracking consumers’ viewing behavior without their knowledge.


In a letter on Thursday to Joseph Simons, the chairman of the Federal Trade Commission, Senators Edward J. Markey of Massachusetts and Richard Blumenthal of Connecticut said they were concerned about “consumer privacy issues raised by the proliferation of smart-TV technology.”


Companies are using new tools to identify and log what people are watching as part of an effort to profile consumers and direct ads to other devices in their homes. The letter cited a New York Times article, published last week, that detailed the practices of Samba TV, a San Francisco software company. Privacy advocates have criticized the company for not being transparent with consumers when it seeks permission to track their viewing on internet-connected TVs to sell ads.


“Regrettably,” the senators wrote, “smart-TV users may not be aware of the extent to which their televisions are collecting sensitive information about their viewing habits.” The letter went on to argue that Samba TV “does not provide sufficient information about its privacy practices to ensure users can make truly informed decisions.”...


For full post, see: 

Scooped by Roxana Marachi, PhD!

Loopholes and Flaws in the Student Privacy Pledge // Electronic Frontier Foundation 

By Gennie Gebhart and Sophia Cope

With a new school year underway, concerns about student privacy are at the forefront of parents’ and students’ minds. The Student Privacy Pledge, which recently topped 300 signatories and reached its two-year launch anniversary, is at the center of discussions about how to make sure tech and education companies protect students and their information. A voluntary effort led by the Future of Privacy Forum and the Software and Information Industry Association (SIIA), the Pledge holds the edtech companies who sign it to a set of commitments intended to protect student privacy.


But the Student Privacy Pledge as it stands is flawed. While we praise the Pledge’s effort to create industry norms around privacy, its loopholes prevent it from actually protecting student data.

All in the fine print

The real problems with the Student Privacy Pledge are not in its 12 large, bold commitment statements—which we generally like—but in the fine-print definitions under them.

First, the Pledge’s definition of “student personal information” is enough to call into question the integrity of the entire Pledge. By limiting the definition to data that is “both collected and maintained on an individual level” and “linked to personally identifiable information,” the Pledge seems to permit signatories to collect sensitive and potentially identifying data such as search history, so long as it is not tied to a student’s name. The key problem here is that the term “personally identifiable information” is not defined and is surely meant to be narrowly interpreted, allowing companies to collect and use a significant amount of data outside the strictures of the Pledge. This pool of data potentially available to edtech providers is more revealing than traditional academic records, and can paint a picture of students’ activities and habits that was not available before.


By contrast, the federal definition, found in FERPA and the accompanying regulations, is broad and includes both “direct” and “indirect” identifiers, and any behavioral “metadata” tied to those identifiers. The federal definition also includes “Other information that, alone or in combination, is linked or linkable to a specific student that would allow a reasonable person in the school community, who does not have personal knowledge of the relevant circumstances, to identify the student with reasonable certainty.”

Second, the Pledge’s definition of “school service provider” is limited to providers of applications, online services, or websites that are “designed and marketed” for educational purposes.

A provider of a product that is marketed for and deployed in classrooms, but wasn’t necessarily “designed” for educational purposes, is outside the Pledge. This excludes providers while they’re providing “general audience” apps, online services and websites. We alleged in our FTC complaint against Google that the Pledge does apply to data collection on “general audience” websites when that data collection is only possible by virtue of a student using log-in credentials that were generated for educational purposes. However, SIIA, a principal developer of the Pledge, argued to the contrary and said that the Pledge permits providers to collect data on students on general audience websites even if students are using their school accounts.

The Pledge’s definition also does not include providers of devices like laptops and tablets, who are free to collect and use student data contrary to the Pledge.

Definition changes

Simple changes to the definitions of “student personal information” and “school service provider”—to bring them in line with how we generally understand those plain-English terms—would give the Pledge real bite, especially since the Pledge is intended to be legally enforced by the Federal Trade Commission.


While enforcement only applies to companies who choose to sign on, we think that if the Student Privacy Pledge meant what it said, and if signatories actually committed to the practices outlined under the heading “We Commit To”, it would amount to genuine protection for students. But with the definitions as they stand, the Pledge rings hollow.


Notwithstanding the need to improve the definitions, the Pledge could do some good. Unfortunately, the FTC has yet to take action on our complaint alleging that Google violated the Student Privacy Pledge. We urge the Commission to take this matter seriously so that parents and students can trust that when companies promise to do (or not do) something, they will be held accountable."...


For full post, see: 

Scooped by Roxana Marachi, PhD!

Mapping The Data Infrastructure of Market Reform in Higher Education // Ben Williamson


By Ben Williamson
"A new regulator for Higher Education in England came into legal existence on 1st January 2018. Announced as part of the 2017 Higher Education and Research Act (HERA), the Office for Students is already controversial before it formally begins operations in April. The appointment to its board of Toby Young, the free schools champion and journalist, appalled critics who vocally called for his sacking over previous misogynistic comments in the press and on social media. Despite Conservative Party ministers including Jo Johnson, Boris Johnson and Michael Gove defending his selection, Young resigned within 10 days.


The Toby Young storm, however, has distracted attention from one of the most significant aspects of the HE reforms the Office for Students will preside over. That is the escalation and acceleration in the collection, analysis and use of student data, and the building of a new HE data infrastructure to enact that task. Under the Office for Students, student data is to become a significant source for regulating the HE sector, as universities are put under increasing pressures of market reform, metrics and competition by HERA.


As with all data infrastructure, mapping the HE data infrastructure is a complex task. In this initial attempt to document it (part of a forthcoming paper), I am following Rob Kitchin’s call for case studies that trace out the ‘sociotechnical arrangements’ of people, organizations, policies, discourses and technologies involved in the development, evolution, influence, dead-ends and failures of data infrastructures. It is necessarily a very partial account of a much larger project to follow the development, rollout and upkeep of a new data infrastructure in UK HE, and to chart how big data, learning analytics and adaptive learning technologies are being positioned as part of this program to deliver a reformed ‘smart’ sector for the future."... 

Scooped by Roxana Marachi, PhD!

EduVentures #HigherEd #Tech Landscape 2017


Scooped by Roxana Marachi, PhD!

Don't Buy The Arizona State Report On Digital Learning // Forbes 


By Derek Newton
"Arizona State University has the third largest online enrollment among public schools with some 30,000 students taking at least one ASU class online. And according to an ASU business plan presented to the Arizona Board of Regents, revenue from online programs is expected to reach $230 million this year and grow to nearly half a billion dollars by 2025. The ASU website says they offer “more than 150 highly respected degree programs available 100% online.”


Considering their leading role in online learning, ASU, to their credit, established the Action Lab. In an introductory video, ASU President Michael Crow describes the ASU Lab as something that will, “research the outcomes of the new learning models…” and describes Action Lab as, “our own internal research and development group making assessments about quality, about efficacy, and about outcomes…” regarding online and digital learning.


In April, the Action Lab, along with Boston Consulting Group, and with funding from the Gates Foundation, released a new study of online learning titled, “Making Digital Learning Work.”


To describe the ASU report as sloppy is generous.  Dishonest is more accurate.


For example, in the section titled, “What the Research Base Says About Digital Learning,” the report authors wrote, “…a study published in 2015 concluded that, ‘students in online courses will receive a grade point average that is .39 points (almost 40% of a letter grade) higher than a student taking a face-to-face course.’”


The footnote links to this study from Wright State University and, in literally the very paragraph after the reference to the .39 grade bump, the Wright State report says, “…the majority of this variation was a product of other academic and demographic parameters rather than course delivery mode.” In other words, the higher grades in online courses were not the result of those classes being online, even though ASU flatly said they were.


The Wright State report found, in fact, that grades in online and on-campus classes, “translated into a negligible difference of less than 0.07 GPA points on a 4-point scale” and cited other research showing that online students get lower grades than in-person students do.

And there’s more.


The ASU publication says, “… some studies have shown that institutions that have implemented digital learning have improved their financial outlook” and cites this 2009 study of remedial math classes. Here, the ASU report conflates “digital learning” with online learning, when, ironically, the classes in the cited study required in-person attendance. “At participating colleges, attendance counted between five and 10% of the final grade, which provided sufficient motivation for students to attend class during which they were required to work on their course,” the 2009 report says.

In the cited 2009 study, “digital learning” largely meant replacing lectures or textbooks with course materials presented in a computer lab and overseen by a teaching assistant instead of a professor. Throughout the report, ASU highlights the cost savings of hiring less expensive teachers.

In another example, the ASU report dismisses as “a myth” that “digital learning fails to produce outcomes that are equal to or better than … face-to-face only instruction and that it widens the achievement gap.” The ASU publication blames ignorant faculty for thinking so: “faculty who have never taught a blended or online course … have bolstered this myth.”


But a 2014 study by the Public Policy Institute of California which examined California’s Community Colleges – the largest provider of online classes in the country – found that, “ … online learning does nothing to overcome achievement gaps across racial/ethnic groups—in fact, these gaps are even larger in online classes.”...


For full post, see: 

Rescooped by Roxana Marachi, PhD from Screen Time and Tech Safety Research!

Absorption of Wireless Radiation In The Child Versus Adult Brain and Eye From Cell Phone Radiation or Virtual Reality // Environmental Research 

Fernandez C, de Salles AA, Sears ME, Morris RD, Davis DL. Absorption of wireless radiation in the child versus adult brain and eye from cell phone conversation or virtual reality. Environmental Research. June 5, 2018.

• More cell phone radiation is absorbed by children's inner brain tissues than adults’.
• Children's radio-frequency radiation exposure should be reduced.
• Further research to evaluate the risks to the eye from use of VR is urgently needed.
• It is biologically relevant and feasible to reduce the standards’ averaging volume.
• Current methods to determine wireless device compliance should be revised.

Children's brains are more susceptible to hazardous exposures, and are thought to absorb higher doses of radiation from cell phones in some regions of the brain. Globally the numbers and applications of wireless devices are increasing rapidly, but since 1997 safety testing has relied on a large, homogenous, adult male head phantom to simulate exposures; the “Standard Anthropomorphic Mannequin” (SAM) is used to estimate only whether tissue temperature will be increased by more than 1 Celsius degree in the periphery. The present work employs anatomically based modeling currently used to set standards for surgical and medical devices, that incorporates heterogeneous characteristics of age and anatomy. Modeling of a cell phone held to the ear, or of virtual reality devices in front of the eyes, reveals that young eyes and brains absorb substantially higher local radiation doses than adults’. Age-specific simulations indicate the need to apply refined methods for regulatory compliance testing; and for public education regarding manufacturers' advice to keep phones off the body, and prudent use to limit exposures, particularly to protect the young.
"In summary, compared with adult models, children experience two- to three-fold higher RF doses to: 1) localized areas of the brain when a cell phone is positioned next to the ear; and 2) the eyes and frontal lobe when a cell phone is used to view virtual reality. These findings raise serious questions about the current approach to certify cell phones; particularly the use of the SAM. "
"Our modeling demonstrates clearly that localized psSAR varies significantly for critical components of the brain. Younger models absorb proportionally more radiation in the eyes and brain – grey matter, cerebellum and hippocampus—and the local dose rate varies inversely with age. This reflects the fact that the head is not homogeneous. Indeed, localized heating up to 5 Centigrade degrees has been detected as a result of mobile phone radiation studied ex vivo in cow brain using Nuclear Magnetic Resonance thermometry (Gultekin and Moeller, 2013)." 
"Our findings support reexamination of methods to determine regulatory compliance for wireless devices, and highlight the importance of precautionary advice such as that of American Academy of Pediatrics (2016). The Academy recommends that younger children should not use cell phones, and that prudent measures should be taken to eliminate exposure (e.g. using devices for amusement or education only when all wireless features are turned off – in “airplane mode”) or to minimize exposure (e.g. texting or using speakerphone), and that cell phones should not be kept next to the body. Use of wires/cables in schools and homes circumvents needless exposures of children to radiation from both devices and Wi-Fi routers. There is also an urgent need for research to evaluate the risks to the eye from use of cell phones in virtual reality applications." 
Scooped by Roxana Marachi, PhD!

Predicting Learners’ Emotions in Mobile MOOC Learning via a Multimodal Intelligent Tutor


"Massive Open Online Courses (MOOCs) are a promising approach for scalable knowledge dissemination. However, they also face major challenges such as low engagement, low retention rate, and lack of personalization. We propose AttentiveLearner2, a multimodal intelligent tutor running on unmodified smartphones, to supplement today’s clickstream-based learning analytics for MOOCs. AttentiveLearner2 uses both the front and back cameras of a smartphone as two complementary and fine-grained feedback channels in real time: the back camera monitors learners’ photoplethysmography (PPG) signals and the front camera tracks their facial expressions during MOOC learning. AttentiveLearner2 implicitly infers learners’ affective and cognitive states during learning from their PPG signals and facial expressions. Through a 26-participant user study, we found that: (1) AttentiveLearner2 can detect 6 emotions in mobile MOOC learning reliably with high accuracy (average accuracy = 84.4%); (2) the detected emotions can predict learning outcomes (best R2 = 50.6%); and (3) it is feasible to track both PPG signals and facial expressions in real time in a scalable manner on today’s unmodified smartphones." 
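The fusion idea described in the abstract can be sketched in a few lines. This is a hypothetical illustration (invented function names, toy thresholds, and synthetic data — not the authors' implementation): the back camera yields a PPG signal from which heart rate is estimated, the front camera yields a facial-expression valence score, and the two channels are fused into a coarse affective label.

```python
# Hypothetical sketch of late fusion of two smartphone camera channels,
# loosely following the paper's description (not the authors' code).

def heart_rate_bpm(peak_times_s):
    """Estimate heart rate from PPG peak timestamps (in seconds)."""
    intervals = [b - a for a, b in zip(peak_times_s, peak_times_s[1:])]
    return 60.0 / (sum(intervals) / len(intervals))

def fuse(bpm, valence):
    """Toy decision rule combining arousal (heart rate) and valence
    (facial-expression score, negative to positive)."""
    aroused = bpm > 80          # assumed threshold, for illustration only
    if aroused and valence > 0:
        return "engaged"
    if aroused and valence <= 0:
        return "frustrated"
    return "bored" if valence <= 0 else "relaxed"

# Synthetic PPG peaks 0.7 s apart (~85.7 bpm) plus a mildly positive
# facial valence score.
peaks = [0.0, 0.7, 1.4, 2.1, 2.8]
print(fuse(heart_rate_bpm(peaks), valence=0.4))   # engaged
```

A real system of this kind would replace the toy threshold rule with a trained classifier over many more features, but the two-channel structure — physiological arousal from one camera, expression from the other — is the core of what the abstract describes.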

Scooped by Roxana Marachi, PhD!

The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation // Future of Humanity Institute, Univ. of Oxford, EFF, OpenAI, et al.


"Artificial intelligence and machine learning capabilities are growing at an unprecedented rate. These technologies have many widely beneficial applications, ranging from machine translation to medical image analysis. Countless more such applications are being developed and can be expected over the long term. Less attention has historically been paid to the ways in which artificial intelligence can be used maliciously. This report surveys the landscape of potential security threats from malicious uses of artificial intelligence technologies and proposes ways to better forecast, prevent and mitigate these threats. We analyze, but do not conclusively resolve, the question of what the long-term equilibrium between attackers and defenders will be. We focus instead on what sorts of attacks we are likely to see soon if adequate defenses are not developed."...

Scooped by Roxana Marachi, PhD!

What is Data Exploitation? // Privacy International

To view video on YouTube, see:


For questions related to the potential for Data Exploitation with "Smart Cities" projects, see: 

Scooped by Roxana Marachi, PhD!

Facebook Suspends Analytics Firm on Concerns About Sharing of Public User-Data // Wall Street Journal


By Kirsten Grind

"Facebook Inc. suspended another company that harvested data from its site and said it was investigating whether the analytics firm’s contracts with the U.S. government and a Russian nonprofit tied to the Kremlin violate the platform’s policies.


Crimson Hexagon, based in Boston, has had contracts in recent years to analyze public Facebook data for those and other clients, according to people familiar with the matter and federal procurement data. Crimson Hexagon says it has the largest repository of public social media posts"...


For full post, see:

Rescooped by Roxana Marachi, PhD from Screen Time and Tech Safety Research!

The Inconvenient Truth About Cancer and Mobile Phones // The Guardian 


By Mark Hertsgaard and Mark Dowie

"On 28 March this year, the scientific peer review of a landmark United States government study concluded that there is “clear evidence” that radiation from mobile phones causes cancer, specifically, a heart tissue cancer in rats that is too rare to be explained as random occurrence.

Eleven independent scientists spent three days at Research Triangle Park, North Carolina, discussing the study, which was done by the National Toxicology Program of the US Department of Health and Human Services and ranks among the largest conducted of the health effects of mobile phone radiation. NTP scientists had exposed thousands of rats and mice (whose biological similarities to humans make them useful indicators of human health risks) to doses of radiation equivalent to an average mobile user’s lifetime exposure.

The peer review scientists repeatedly upgraded the confidence levels the NTP’s scientists and staff had attached to the study, fueling critics’ suspicions that the NTP’s leadership had tried to downplay the findings. Thus the peer review also found “some evidence” – one step below “clear evidence” – of cancer in the brain and adrenal glands.

Not one major news organisation in the US or Europe reported this scientific news. But then, news coverage of mobile phone safety has long reflected the outlook of the wireless industry. For a quarter of a century now, the industry has been orchestrating a global PR campaign aimed at misleading not only journalists, but also consumers and policymakers about the actual science concerning mobile phone radiation. Indeed, big wireless has borrowed the very same strategy and tactics big tobacco and big oil pioneered to deceive the public about the risks of smoking and climate change, respectively. And like their tobacco and oil counterparts, wireless industry CEOs lied to the public even after their own scientists privately warned that their products could be dangerous, especially to children."...

For full post, see: 

Scooped by Roxana Marachi, PhD!

Who is Driving The AI Agenda and What Do They Stand To Gain? // The New Statesman 


By Corinne Cath
"From the critical, like law enforcement, healthcare, and humanitarian aid, to the mundane, like dating and shopping, artificial intelligence (AI) seems to be the answer to all our problems. AI is a catch-all phrase for a wide-ranging set of technologies, most of which apply learning techniques from statistics to find patterns in large sets of data and make predictions based on those patterns.

It seems like there are meetings every other week, organised by representatives from industry, government, academia, and civil society to address the perils of AI and formulate solutions to harness its potential.

But who is driving the regulatory agenda and what do they stand to gain? Cui Bono? Who benefits?

This question needs to be answered because letting industry needs drive the AI agenda presents real risks. With so many digital giants like Amazon and Facebook housed in the US, one particular concern regarding AI is its potential to mirror societies in the image of US culture and to the preferences of large US companies, even more than is currently the case.


AI programming does not necessarily require massive resources. Much of its value is derived from the data that is held. As a result, most of the technical innovation is led by a handful of American companies. As these companies are at the forefront of many regulatory initiatives, including those in Europe, it is essential to ensure this particular concern is not exacerbated.


AI systems are presented as very complex and difficult to explain, even for the technically ordained. The merits of those arguments aside, companies and governments alike use this reasoning to justify the deep involvement of the AI industry in policy making and regulation. And it’s not just any industry players that are involved, but the same select group that is leading the business of online marketing and data collection.


This is no coincidence. Companies like Google, Facebook, and Amazon sit on troves of data, which can be turned into the feeding material for new AI-based services. The ‘turn to AI’ thus both further consolidates their market position and provides legitimacy to their inclusion in regulatory processes."...


For full post, visit here: 

Scooped by Roxana Marachi, PhD!

Ocean Protocol: A Decentralized Data Exchange Protocol to Unlock Data for AI 


"What is Ocean Protocol?
Ocean Protocol is an ecosystem for sharing data and associated services. It provides a tokenized service layer that exposes data, storage, compute and algorithms for consumption with a set of deterministic proofs on availability and integrity that serve as verifiable service agreements. There is staking on services to signal quality, reputation and ward against Sybil Attacks.

Ocean helps to unlock data, particularly for AI. It is designed for scale and uses blockchain technology that allows data to be shared and sold in a safe, secure and transparent manner.


How Ocean Protocol Works

The Ocean Protocol is an ecosystem composed of data assets and services, where assets are represented by data and algorithms, and services are represented by integration, processing and persistence mechanisms. Ocean Protocol facilitates discovery by storing and promoting metadata, linking assets and services, and provides a licensing framework that has toolsets for pricing."...
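The excerpt above describes discovery as storing and promoting metadata that links assets (data and algorithms) to services. A toy sketch of that idea might look like the following; the `Registry` class, the `did:op:` identifiers, and the keyword search are illustrative assumptions, not Ocean Protocol's actual API.

```python
from dataclasses import dataclass, field


@dataclass
class Asset:
    # An asset as described in the excerpt: data or an algorithm,
    # made discoverable through its metadata.
    asset_id: str
    kind: str                # "data" or "algorithm"
    metadata: dict
    services: list = field(default_factory=list)  # linked service names


class Registry:
    """Toy metadata store illustrating discovery by keyword."""

    def __init__(self):
        self._assets = {}

    def register(self, asset):
        self._assets[asset.asset_id] = asset

    def discover(self, keyword):
        # Naive search over each asset's metadata text.
        kw = keyword.lower()
        return [a for a in self._assets.values()
                if kw in str(a.metadata).lower()]


reg = Registry()
reg.register(Asset("did:op:1", "data",
                   {"title": "Ocean temperature readings"},
                   ["storage", "compute"]))
reg.register(Asset("did:op:2", "algorithm", {"title": "Anomaly detector"}))
print([a.asset_id for a in reg.discover("temperature")])  # → ['did:op:1']
```

The real protocol layers pricing, licensing, and on-chain service agreements on top of this kind of metadata linkage.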


For original page, see 

Scooped by Roxana Marachi, PhD!

Data Analytics and Decision-Making in Education: Towards the Educational Data Scientist as a Key Actor in Schools and Higher Education Institutions // Agastisi and Bowers, 2017

To download, click on title or arrow above or to access from ResearchGate, visit: 

Scooped by Roxana Marachi, PhD!

IBM Is Using its AI to Predict How Employees Will Perform // TNW


By Tristan Greene

"IBM may have the most forward-thinking employee performance review system around. Rather than simply judge employees on what they’ve already done, the company uses its Watson AI to predict what they’re going to do in the future.

How it works:
 Predicting the future is right inside of Watson’s wheelhouse. In this case it isn’t determining whether you’re going to win the lottery and quit, it’s using company data to make logical projections about individual performance."... 

Scooped by Roxana Marachi, PhD!

Disrupted Childhood: The Cost of Persuasive Design // 5Rights


Scooped by Roxana Marachi, PhD!

Scientists Seek Genetic Data to Personalize Education // Ben Williamson


"Researchers have begun to propose using genetic data from students to personalize education. Bringing genetics into education is highly controversial. It raises significant concerns about biological discrimination and rekindles long debates about eugenics and the genetic inheritance of intelligence.


Current proposals to personalize learning by enabling “educational organisations to create tailor-made curriculum programmes based on a pupil’s DNA profile” demand very close and critical attention. The potential of “the new geneism” to reproduce “dangerous ideas about the genetic heritability of intelligence” has already raised concerns. Scientists may be seeking new technologies to personalize teaching and learning according to students’ genetic data, but we need an informed debate about the implications for educational policy and practice of the emerging era of genetic forecasting.


Educational Genomics

Performing an adequate critique of genetics in education first requires a better engagement with its scientific basis. These ideas are only possible now owing to the sequencing of the human genome a decade ago. The human genome is the entire genetic structure of human DNA, and consequently studies in human genomics have flourished.


As a research field, educational genomics seeks to unpack the genetic factors involved in individual differences in learning ability, behavior, motivation, and achievement. A contentious aspect of educational genomics is identifying links between genetic factors and intelligence. Recent genomics advances have found statistically strong connections between intelligence test scores and specific genes, with over 500 genes identified as having clear-cut influence on intelligence test scores. Cheap DNA kits for IQ testing in schools may not be far away.


Importantly, however, researchers of educational genomics do not assume either that there is any single genetic factor that determines learning ability, or that genetic factors entirely explain the complexity of learning. Identifying an individual’s genotype — the full heritable genetic identity of a person — and its relationship to learning, intelligence or educational outcomes remains complex. The concept of the phenotype captures how genotypes and environments jointly contribute to a person’s physical, behavioral and mental characteristics.


Instead, educational genomics looks for patterns in huge numbers of genetic factors that might explain behaviors and achievements in individuals. It also focuses on the ways that individuals’ genotypes and environments interact, or how other “epigenetic” factors impact on whether and how genes become active. Researchers of “behavioral genetics” study the interaction of genetic and environmental influences on phenotypical behaviours.


Precision Education

Bolstered by these scientific advances, supporters of educational genomics and behavioural genetics increasingly argue that genetic data should be used to individualize teaching and learning.


The concept of “precision education” has begun to circulate among scientists who engage with psychology, neuroscience and genomics to understand learning processes. Precision education is both an interdisciplinary “science of learning” and an idealized model of teaching and learning informed by the sciences of the human mind, brain and genome.


According to one advocate: “We are currently a long way off from having the kinds of information needed to realise precision” but “the groundwork has started” to build the knowledge required for “evidence-based individualized learning.” It would require extensive data gathering from learners and complex analysis to identify patterns across psychological, neural and genetic datasets.


The aspiration behind precision education is to build scientific consensus for an interdisciplinary science of learning that might contribute to new evidence-based policies and practices in education.


A key figure bringing genomics research into education is the behavioral geneticist Robert Plomin. Plomin has extensively studied the links between genes and attainment. He has proposed that DNA analysis devices such as “learning chips” could make reliable genetic predictions of heritable differences in academic achievement.

A key scientific innovation in Plomin’s research is “genome-wide polygenic scoring” (GPS). A polygenic score is produced by analysing a huge number of genetic markers, and their interactions with environmental factors, in order to predict a particular behavioral or psychological trait. As computer processing power, data storage capacity, and data analytics technologies have advanced in recent years, it has become possible to correlate genotypical data with a host of phenotypical traits.
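In outline, a polygenic score is a weighted sum: each genetic marker's effect-allele count is multiplied by an effect size estimated in a genome-wide association study, and the products are added up. A minimal sketch of that arithmetic, with hypothetical marker IDs and weights invented for illustration:

```python
def polygenic_score(genotype, weights):
    """Compute a polygenic score as the weighted sum of allele counts.

    genotype: marker id -> number of effect alleles carried (0, 1, or 2)
    weights:  marker id -> per-allele effect size from an association study
    Markers missing from the genotype contribute nothing.
    """
    return sum(weights[m] * genotype.get(m, 0) for m in weights)


# Hypothetical markers and effect sizes, for illustration only.
weights = {"rs0001": 0.02, "rs0002": -0.01, "rs0003": 0.005}
genotype = {"rs0001": 2, "rs0002": 1, "rs0003": 0}
print(round(polygenic_score(genotype, weights), 4))  # → 0.03
```

Real scores aggregate thousands to millions of markers, each with a tiny effect, which is why they predict population-level variance far better than any individual outcome.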

Driven by the “new genetics of intelligence,” Plomin and colleagues foresee that “precision education” based on GPSs could be used to “customize education.” Using GPS methods, Plomin and colleagues have used polygenic scores to predict academic achievement in schools. The substantial increase in heritability they found “represents a turning point in the social and behavioral sciences because it makes it possible to predict educational achievement for individuals directly from their DNA,” thereby “moving us closer to the possibility of early intervention and personalized learning.”

Although educational genomics remains in its infancy, with very little known about how genetic variants actually work to produce different phenotypical traits or characteristics such as behavior and cognitive ability, it seems likely to advance considerably in coming years. Factoring in environmental and epigenetic influences on the human genome and phenotypical characteristics will be a key part of such research.

Other studies in educational genomics, for example, have highlighted how both social policies and socio-economic circumstances can exert “profound effects on the heritability of educational attainment and achievement” because they can either support or stifle “the expression of educationally relevant genetic propensities.”

As more findings emerge, further support will grow for evidence-based scientific perspectives on learning, and for new models of precision education using genetic data to personalize teaching and learning."


For full post, please see: 

Scooped by Roxana Marachi, PhD!

What If People Were Paid For Their Data? // The Economist 


"“DATA SLAVERY.” Jennifer Lyn Morone, an American artist, thinks this is the state in which most people now live. To get free online services, she laments, they hand over intimate information to technology firms. “Personal data are much more valuable than you think,” she says. To highlight this sorry state of affairs, Ms. Morone has resorted to what she calls “extreme capitalism”: she registered herself as a company in Delaware in an effort to exploit her personal data for financial gain. She created dossiers containing different subsets of data, which she displayed in a London gallery in 2016 and offered for sale, starting at £100 ($135). The entire collection, including her health data and social-security number, can be had for £7,000.


Only a few buyers have taken her up on this offer and she finds “the whole thing really absurd”. Yet if the job of the artist is to anticipate the Zeitgeist, Ms Morone was dead on: this year the world has discovered that something is rotten in the data economy. Since it emerged in March that Cambridge Analytica, a political consultancy, had acquired data on 87m Facebook users in underhand ways, voices calling for a rethink of the handling of online personal data have only grown louder. Even Angela Merkel, Germany’s chancellor, recently called for a price to be put on personal data, asking researchers to come up with solutions.


"Data provided by humans can be seen as a form of labour which powers artificial intelligence"...


For original post, visit: 

Scooped by Roxana Marachi, PhD!

Secret ‘Fusion Centers’ and the Search for the Next School Shooter // EdSurge 

By Jenny Abamu

"Since 2001, the Federal government has provided state and local law enforcement agencies with grants, training and other forms of assistance to create a national network of so-called “fusion centers”—buildings in locations not disclosed to the public where government officials gather, process and disseminate information about terrorism, homeland security and criminal justice on a daily basis.

Oh, and they’re highly secretive.

The idea sprung up after the September 11th terrorist attacks when Americans seemed more open to ceding particular privacy rights in the name of public safety. The concept alone may seem a little troubling to privacy advocates, but it takes on a more dystopian bent in its intersection with school safety.


During the 2017-18 school year, there were more than 23 school shootings, putting both federal and local officials on high alert as they entered into a continuous and heated debate about the best way to respond. While some—including the President—have called for arming teachers, and others for stricter gun control laws, another swath of leaders are calling for increased, high-tech surveillance of students—reflecting something that may sound eerily similar to the 2002 Steven Spielberg film, Minority Report, where a specialized law enforcement unit apprehends criminals before they commit crimes based on pre-knowledge provided by psychics.

But for fusion centers, finding the next school shooter before it happens doesn’t involve futuristic mediums. Rather, they are digging through social media and other more ambiguous forms of data to predict and apprehend students who could potentially become threats in the future, and the work may be spreading. This past May, Texas Governor Greg Abbott called for the creation of more fusion centers as part of a school safety plan specifically to monitor student social media accounts for possible threats.


Fusion center officials say that they have seen some success in using predictive information to apprehend people who pose possible threats to schools, telling EdSurge that their work has kept “students safe and provide[d] first responders with greater insight.” Also in May, local media reports from Corpus Christi, Tex., told the story of three students who were arrested after fusion center officers found a video of them ranting on Instagram about a plan to shoot their peers in a Texas high school.


But the data local officials act on—sometimes collected by algorithms that scan social media for keywords like ‘kill’ and ‘bomb’—have a risk of misidentifying students as threats. People note these words often appear in slang phrases such as ‘I killed that test’ or ‘these shoes are the bomb.’ Some privacy advocates fear it may disproportionately target vulnerable student populations, such as minorities and those with disabilities, who are already known to be subject to harsher school discipline policies than their peers. In June, Oregonian reporter Bethany Barnes chronicled the unfortunate experience of an autistic high school student who eventually dropped out after constant surveillance and questioning from law enforcement officials who believed they had sufficient data to suspect he could be a school shooter—a traumatizing experience for the teen and his family."... 
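The false-positive problem described above is easy to reproduce: a context-free keyword scan flags both of the article's slang examples as threats. The keyword list and function below are illustrative assumptions, not any fusion center's actual tooling.

```python
THREAT_KEYWORDS = {"kill", "bomb", "shoot"}


def naive_flag(post):
    """Flag a post if it contains any threat keyword as a substring.

    This is the kind of context-free matching the article criticizes:
    it cannot distinguish a threat from everyday slang.
    """
    text = post.lower()
    return any(kw in text for kw in THREAT_KEYWORDS)


# Both slang phrases from the article are flagged as threats.
print(naive_flag("I killed that test"))        # → True (false positive)
print(naive_flag("these shoes are the bomb"))  # → True (false positive)
print(naive_flag("see you at lunch"))          # → False
```

Reducing such false positives requires modeling context rather than matching words, and even then error rates fall unevenly across student populations.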


For full post, see: 

Scooped by Roxana Marachi, PhD!

Amazon Needs To Come Clean About Racial Bias In Its Algorithms // The Verge


By Russell Brandom

"Yesterday, Amazon’s quiet Rekognition program became very public, as new documents obtained by the ACLU of Northern California showed the system partnering with the city of Orlando and police camera vendors like Motorola Solutions for an aggressive new real-time facial recognition service. Amazon insists that the service is a simple object-recognition tool and will only be used for legal purposes. But even if we take the company at its word, the project raises serious concerns, particularly around racial bias.


Facial recognition systems have long struggled with higher error rates for women and people of color — error rates that can translate directly into more stops and arrests for marginalized groups. And while some companies have responded with public bias testing, Amazon hasn’t shared any data on the issue, if it’s collected data at all. At the same time, it’s already deploying its software in cities across the US, its growth driven by one of the largest cloud infrastructures in the world. For anyone worried about algorithmic bias, that’s a scary thought.



For the ACLU-NC’s Matt Cagle, who worked on yesterday’s report, the possibility for bias is one of the system’s biggest problems. “We have been shocked at Amazon’s apparent failure to understand the implications of its own product on real people,” Cagle says. “Face recognition is a biased technology. It doesn’t make communities safer. It just powers even greater discriminatory surveillance and policing.”


The most concrete concern is false identifications. Police typically use facial recognition to look for specific suspects, comparing suspect photos against camera feeds or photo arrays. But white subjects are consistently less likely to generate false matches than black subjects, a bias that’s been found across a number of algorithms. In the most basic terms, that means facial recognition systems pose an added threat of wrongful accusation and arrest for non-white people. The bias seems to come from the data used to train the algorithm, which often skews white and male. It’s possible to solve that problem, but there’s no public evidence that Amazon is working on the issue."... 
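The disparity described above is typically quantified as a per-group false match rate: the share of non-matching comparisons the system wrongly reports as matches, computed separately for each demographic group. A sketch over hypothetical audit data (the groups and counts are invented for illustration):

```python
def false_match_rate(results):
    """False matches divided by non-matching comparisons.

    Each result is a pair (predicted_match, true_match) of booleans.
    """
    false_pos = sum(1 for pred, true in results if pred and not true)
    negatives = sum(1 for _pred, true in results if not true)
    return false_pos / negatives if negatives else 0.0


# Hypothetical audit: identical numbers of comparisons per group,
# but group_b suffers far more false matches.
audit = {
    "group_a": [(True, True)] * 5 + [(False, False)] * 95
               + [(True, False)] * 1,
    "group_b": [(True, True)] * 5 + [(False, False)] * 89
               + [(True, False)] * 7,
}
for group, results in audit.items():
    print(group, round(false_match_rate(results), 3))
```

Publishing exactly this kind of per-group breakdown is what public bias testing amounts to, and it is what the article says Amazon has not shared.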



Scooped by Roxana Marachi, PhD!

Assessing the social impact of Emotional AI (Artificial Intelligence) // Professor Andrew McStay

