Educational Psychology & Technology
Scooped by Roxana Marachi, PhD
onto Educational Psychology & Technology

The Structural Consequences of Big Data-Driven Education // Elana Zeide 


Abstract
Educators and commenters who evaluate big data-driven learning environments focus on specific questions: whether automated education platforms improve learning outcomes, invade student privacy, and promote equality. This article puts aside separate unresolved—and perhaps unresolvable—issues regarding the concrete effects of specific technologies. It instead examines how big data-driven tools alter the structure of schools’ pedagogical decision-making, and, in doing so, change fundamental aspects of America’s education enterprise. Technological mediation and data-driven decision-making have a particularly significant impact in learning environments because the education process primarily consists of dynamic information exchange. In this overview, I highlight three significant structural shifts that accompany school reliance on data-driven instructional platforms that perform core school functions: teaching, assessment, and credentialing.

First, virtual learning environments create information technology infrastructures featuring constant data collection, continuous algorithmic assessment, and possibly infinite record retention. This undermines the traditional intellectual privacy and safety of classrooms. Second, these systems displace pedagogical decision-making from educators serving public interests to private, often for-profit, technology providers. They constrain teachers’ academic autonomy, obscure student evaluation, and reduce parents’ and students’ ability to participate or challenge education decision-making. Third, big data-driven tools define what ‘counts’ as education by mapping the concepts, creating the content, determining the metrics, and setting desired learning outcomes of instruction. These shifts cede important decision-making to private entities without public scrutiny or pedagogical examination. In contrast to the public and heated debates that accompany textbook choices, schools often adopt education technologies ad hoc. Given education’s crucial impact on individual and collective success, educators and policymakers must consider the implications of data-driven education proactively and explicitly.

 

Keywords: big data; personalized learning; competency-based education; smart tutors; learning analytics; MOOCs

Suggested Citation:

Zeide, Elana, The Structural Consequences of Big Data-Driven Education (June 23, 2017). Big Data, Vol. 5, No. 2 (2017): 164-172. Available at SSRN: https://ssrn.com/abstract=2991794

 

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2991794 

 

Shortlink to download: http://bit.ly/EdBigData

 

Educational Psychology & Technology
This curated collection includes news, resources, and research related to the intersections of Educational Psychology and Technology. The page also serves as a research tool to organize online content; the grey funnel-shaped icon at the top allows searching by keyword. For research more specific to tech, screen time, and health/safety concerns, please see http://bit.ly/screen_time. To learn about the next wave of privatization involving technology intersections with Pay For Success, Social Impact Bonds, and Results Based Financing (often marketed with language promoting "public-private-partnerships"), see http://bit.ly/sibgamble. For additional Educator Resources, please visit http://EduResearcher.com.
Scooped by Roxana Marachi, PhD

When "Innovation" is Exploitation: Data Ethics, Data Harms and Why We Need to Demand Data Justice // Marachi, 2019, Summer Institute of A Black Education Network at Stanford University, California 


 

For more on the data brokers selling personal information from a variety of platforms, including education, please see: https://www.fastcompany.com/90310803/here-are-the-data-brokers-quietly-buying-and-selling-your-personal-information 

 

Please also visit: Parent Coalition for Student Privacy

https://www.studentprivacymatters.org/

 

See the Data Exploitation page at Privacy International

https://privacyinternational.org/video/1626/video-what-data-exploitation

 

and visit the Data Justice Lab: 

https://datajusticelab.org/

 

Scooped by Roxana Marachi, PhD

Amazon Ring doorbells exposed home Wi-Fi passwords to hackers // TechCrunch 


https://techcrunch.com/2019/11/07/amazon-ring-doorbells-wifi-hackers/ 

Rescooped by Roxana Marachi, PhD from Screen Time, Tech Safety & Harm Prevention Research

The Ethics of Virtual Reality Technology: Social Hazards and Public Policy Recommendations // Spiegel, 2018; Science and Engineering Ethics


Abstract
This article explores four major areas of moral concern regarding virtual reality (VR) technologies. First, VR poses potential mental health risks, including Depersonalization/Derealization Disorder. Second, VR technology raises serious concerns related to personal neglect of users’ own actual bodies and real physical environments. Third, VR technologies may be used to record personal data which could be deployed in ways that threaten personal privacy and present a danger related to manipulation of users’ beliefs, emotions, and behaviors. Finally, there are other moral and social risks associated with the way VR blurs the distinction between the real and illusory. These concerns regarding VR naturally raise questions about public policy. The article makes several recommendations for legal regulations of VR that together address each of the above concerns. It is argued that these regulations would not seriously threaten personal liberty but rather would protect and enhance the autonomy of VR consumers."


For access to article, visit: 

https://link.springer.com/article/10.1007/s11948-017-9979-y 

Scooped by Roxana Marachi, PhD

Yet Analytics, HP, the "Experience Graph" and the Future of Human Capital Analytics


[Note: The re-sharing of this post on the Educational Psychology & Technology collection does not indicate endorsement of its content]

______________________________________________________________________

 

By Shelly Blake-Plock

"Those of you who have followed Yet Analytics will know that to date, we've concentrated on xAPI. We've done so for two reasons. The first being the clear differentiators provided by xAPI as regards data interoperability in matters of learning and human capital data. The second is the ability xAPI provides in helping to capture granular data on-the-ground. We've released the Yet xAPI LRS and see xAPI as core to part of our strategy to revolutionize human capital analytics.

 

But it is only one part of that strategy.

Whereas we could describe the xAPI on-the-ground approach as a microcosmic strategy, it perhaps becomes more clear what our aims are as regards human capital data when we take a look at Yet's developments on the macro side of the table. Earlier this year, I was invited to speak at the Education World Forum in London. On stage, I debuted the collaborative work we have done with our partners at HP, Inc. The short transcript below describes that work and how we came to build the HP Education Data Command Center powered by Yet's new EIDCC platform.

And with the development of the EIDCC — a platform which can take data of any variety and provide the power of machine learning and neural networks to derive insight from it — Yet Analytics is rolling out a total data solution for human capital analytics, particularly as it concerns human development and the factors that investment and activity have on it.

Part of this involves xAPI. But xAPI is one of many factors. And where xAPI is used, it should be done so in the most intelligent and applicable manner. The broader reality is that the Experience API is something that should not be applied just for its own sake as the next big thing, but rather that the power of xAPI should be applied where applicable in the record of human behavior and performance — alongside any other data source — in the development of Experience Intelligence.

In other words — xAPI is a vital piece of the emerging human capital analytics ecosystem. But a meaningful solution in the space must look at the fullness of that very ecosystem in order to leverage it to produce insight. Macro level data — whether we're looking at the impact of social investment on a country's GDP or the relation between investment in employee experience and a Fortune 500 corporation's bottom line — is equally a key part of this understanding. Yet's goal is increasingly to bring these aspects — micro and macro — together and to make the insight of their mingling available on a single platform.

We can not imagine the modern learning experience without xAPI. And we can not imagine the future of human capital analytics without the ability to power AI and derive meaning across the diverse and divergent data assets of the Web. Those two things are not mutually exclusive. Rather, they are complementary values in the build out of the new architecture of human capital analytics. They are complementary and necessary in the build out of the Experience Graph.

The Experience Graph is exactly that — it's a graph of experience. And just as experience is unlimited, the tools we use to leverage experience should be built to leverage it in all its unlimited and contextual facets. That means capturing it — and the macro context that surrounds it — through a matrix of strategies. Yet Analytics is deploying the tools necessary to meet the needs inherent in that project — xAPI databases and analytics engines, automated and interactive experience visualization tools and now a data platform leveraging AI to distill insights across contexts. We are dedicated to this work because as the Experience Graph grows, so too does the ability of organizations and employees to use data to improve their outcomes both in human capital development and in the impact that development has on organizational culture, economics and growth. This is something that improves people's lives.

By bringing together the on-the-ground nature of granular activity data collection about human engagement and learning behaviors with macro econometric data and the predictive capabilities of artificial intelligence, Yet Analytics is turning human capital data into a key piece of business and strategy intelligence. The following transcript describes how Yet and HP have begun applying these practices to the evaluation and forecasting of human capital investment at nation-state scale, beginning by solving problems for ministers of education across the globe. I think that you will quickly realize how this approach to a problem in the global education space as described below may provide a template for human capital investment and development solutions for any company or large organization.

— SBP

Prepared Remarks from the Education World Forum 2017

For too long countries have been mass proliferators of educational laptops and tablets with no real proof of their impact. But increasingly, government leaders are being held accountable for responsible spending. As the transformation and automation of work means that the best jobs will be skilled labor fully immersed in ICT, how can governments demonstrate the return on investment of their human capital technology spending both in fiscal and social-economic terms?

 

In the economic sphere, Dr. Eric Hanushek has been something of the godfather of education data. He is responsible for establishing the quantitative connection between cognitive skills and long-term economic growth. It is in continuing that path of study that Yet Analytics and HP, Inc. have worked together to develop an artificial brain capable of identifying these economic and cognitive ROI connections at scale and in real-time.

 

The synapses of our artificial brain leverage machine learning and are programmed to fire based on the ingestion and querying of big data comprising information on learning, economic and social factors and outcomes gathered by the World Bank, the World Economic Forum, the United Nations and elsewhere. The outcome is the ability to predict multi-year return on investment on a great variety of learning, economic and social measures.

 

We knew that variables including adolescent fertility rates, infant mortality rates and the balance of trade goods all had significant relationships with GDP per capita. The artificial brain now recognizes the trends in these factors alongside educational and cognitive trends and can forecast the effect of such educational, economic and social factors on GDP. For example, the machine computes the trends and finds that in a given country investment in the math literacy of females as measured by PISA when combined with educational gender parity and a sustained level of internet access will yield a significant percentage outcome in the future growth of GDP per capita. The same factors in a different context may result in a different forecast. Perhaps most importantly, the artificial brain also forecasts the timetable of such investment and provides objective guidance by weighting value in the context of dozens of concurrent social and economic factors.

 

Beginning with, but advancing beyond methods of automated multilinear regression analysis, the artificial brain is comprised of neural networks, each trained to identify trends and relationships within and among key variables. Individual data records are treated as observations which together comprise layers of information. Training data is passed through these networks thousands, even hundreds of thousands of times, in order to learn the trend and relationship between the presented patterns and the individual country's GDP per capita. The neural networks learn the differences in variable values year to year in order to forecast GDP.
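
[Note: The post does not publish the EIDCC's architecture. As a rough, generic sketch of the approach described above (a neural network trained on country-year indicators to forecast GDP per capita), here is a minimal Python example; the indicator names and data are synthetic placeholders, not World Bank or PISA figures.]

# Generic sketch: fit a small neural network to country-year indicators and
# forecast GDP per capita. Features and data are synthetic placeholders.
import numpy as np
import pandas as pd
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 500  # synthetic country-year observations
X = pd.DataFrame({
    "adolescent_fertility": rng.uniform(5, 120, n),
    "infant_mortality": rng.uniform(2, 60, n),
    "pisa_math_female": rng.uniform(350, 560, n),
    "internet_access_pct": rng.uniform(10, 100, n),
    "trade_balance_pct_gdp": rng.uniform(-15, 15, n),
})
# Synthetic target; in the scenario described above this would be observed GDP per capita.
y = (20000 + 80 * X["pisa_math_female"] + 150 * X["internet_access_pct"]
     - 120 * X["infant_mortality"] + rng.normal(0, 2000, n))

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0))
model.fit(X, y)

# Forecast for one hypothetical country-year.
example = pd.DataFrame([{"adolescent_fertility": 40, "infant_mortality": 12,
                         "pisa_math_female": 480, "internet_access_pct": 75,
                         "trade_balance_pct_gdp": 2}])
print(round(model.predict(example)[0]))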

 

The result is an artificial brain purpose-built to assist in the identification and forecasting of return on investment in learning, economic, and social endeavors. Expressly built to take into account the temporality of data, the artificial brain and its neural networks can be trained for each and every country — meaning that countries themselves may add their own data in order to attain even more precise and relevant forecasts.

 

Yet Analytics' EIDCC artificial brain powers the HP Education Data Command Center.

 

The interactive analytics and data visualizations provide:

  • A real-time comparison of thousands of data points customizable by country
  • The ability to identify and choose key variables to drill down into micro-components of the time series such as relationship between technology spending, social trends, and education strategies
  • Hypothesis testing on past and future events
  • Real-time computation and visualization of multi-year strategic ROI including social and cognitive measures

 

For government leaders, this means the ability to demonstrate the responsible and strategic fiscal rigor of a government; the visionary education reform leadership of government and educational leaders; and the prediction of future payback and time horizons for economic and social outcomes. It makes a clear case for investing in human capital from early childhood through tertiary education. Drawing from a number of sources and data streams ranging from the internationally comparable to the hyper local, it renders complex data and statistics as elegant and accessible data visualizations. And it proves to international financing organizations that development ROI will be measured and met.

The HP Education Data Command Center, powered by Yet Analytics' EIDCC artificial brain, provides cross-modal quantitative evidence of return-on-investment of technology spending in education as well as predictive insights to maximize a country's economic and social outcomes as a result of investments in human capital."

 

https://www.yetanalytics.com/blog/the-eidcc-the-experience-graph-and-the-future-of-human-capital-analytics 

Rescooped by Roxana Marachi, PhD from Educational Leadership Posts, Videos, Articles, and Resources

Six Ways (And Counting) That Big Data Systems Are Harming Society // The Conversation


By Joanna Redden

"There is growing consensus that with big data comes great opportunity, but also great risk.

 

But these risks are not getting enough political and public attention. One way to better appreciate the risks that come with our big data future is to consider how people are already being negatively affected by uses of it. At Cardiff University’s Data Justice Lab, we decided to record the harms that big data uses have already caused, pulling together concrete examples of harm that have been referenced in previous work so that we might gain a better big picture appreciation of where we are heading.

 

We did so in the hope that such a record will generate more debate and intervention from the public into the kind of big data society, and future we want. The following examples are a condensed version of our recently published Data Harm Record, a running record, to be updated as we learn about more cases.

1. Targeting based on vulnerability

With big data comes new ways to socially sort with increasing precision. By combining multiple forms of data sets, a lot can be learned. This has been called “algorithmic profiling” and raises concerns about how little people know about how their data is collected as they search, communicate, buy, visit sites, travel, and so on.

 

Much of this sorting goes under the radar, although the practices of data brokers have been getting attention. In her testimony to the US Congress, World Privacy Forum’s Pam Dixon reported finding data brokers selling lists of rape victims, addresses of domestic violence shelters, sufferers of genetic diseases, sufferers of addiction and more.

2. Misuse of personal information

Concerns have been raised about how credit card companies are using personal details like where someone shops or whether or not they have paid for marriage counselling to set rates and limits. One study details the case of a man who found his credit rating reduced because American Express determined that others who shopped where he shopped had a poor repayment history.

 

This event, in 2008, was an early big data example of “creditworthiness by association” and is linked to ongoing practices of determining value or trustworthiness by drawing on big data to make predictions about people.

3. Discrimination

As corporations, government bodies and others make use of big data, it is key to know that discrimination can and is happening – both unintentionally and intentionally. This can happen as algorithmically driven systems offer, deny or mediate access to services or opportunities to people differently.

 

Some are raising concerns about how new uses of big data may negatively influence people’s abilities to get housing or insurance – or to access education or get a job. A 2017 investigation by ProPublica and Consumer Reports showed that minority neighbourhoods pay more for car insurance than white neighbourhoods with the same risk levels. ProPublica also shows how new prediction tools used in courtrooms for sentencing and bonds “are biased against blacks”. Others raise concerns about how big data processes make it easier to target particular groups and discriminate against them."...

 

For full post, see:

https://theconversation.com/six-ways-and-counting-that-big-data-systems-are-harming-society-88660 


Via Roxana Marachi, PhD
Scooped by Roxana Marachi, PhD

Dr. Ruha Benjamin Discusses New Book, Race After Technology // Data & Society


"Data & Society welcomes Princeton Professor Ruha Benjamin to discuss the relationship between machine bias and systemic racism, analyzing specific cases of “discriminatory design” and offering tools for a socially-conscious approach to tech development. In Race After Technology: Abolitionist Tools for the New Jim Code, Ruha Benjamin cuts through tech-industry hype, from everyday apps to complex algorithms, to understand how emerging technologies can reinforce White supremacy and deepen social inequity.

 

Presenting the concept of “the new Jim Code” she shows how a range of discriminatory designs encode inequity by explicitly amplifying racial hierarchies; by ignoring but thereby replicating social divisions; or by aiming to fix racial bias but ultimately doing quite the opposite."...

For event link and more information about Data & Society, please visit 
https://datasociety.net/events/databite-no-124-ruha-benjamin/

 

Link to video of the event is at: 

https://www.youtube.com/watch?v=bbcG7e2dBj0

 

Scooped by Roxana Marachi, PhD

Once More With Feeling: 'Anonymized' Data Is Not Really Anonymous // Techdirt


By Karl Bode

"As companies and governments increasingly hoover up our personal data, a common refrain to keep people from worrying is the claim that nothing can go wrong because the data itself is "anonymized" or stripped of personal detail. But time and time again, we've noted how this really is cold comfort; given it takes only a little effort to pretty quickly identify a person based on access to other data sets. Yet most companies (including cell phone companies that sell your location data) act as if "anonymizing" your data is iron-clad protection from having it identified. It's simply not true.

 

The latest case in point: in new research published this week in the journal Nature Communications, data scientists from Imperial College London and UCLouvain found that it wasn't particularly hard for companies (or, anybody else) to identify the person behind "anonymized" data using other data sets. More specifically, the researchers developed a machine learning model that was able to correctly re-identify 99.98% of Americans in any anonymised dataset using just 15 characteristics including age, gender and marital status:

 

"While there might be a lot of people who are in their thirties, male, and living in New York City, far fewer of them were also born on 5 January, are driving a red sports car, and live with two kids (both girls) and one dog,” explained study first author Dr Luc Rocher, from UCLouvain."


And using fifteen characteristics is actually pretty high for this sort of study. One investigation of "anonymized" user credit card data by MIT found that users could be correctly "de-anonymized" 90 percent of the time using just four relatively vague points of information. Another study looking at vehicle data found that 15 minutes’ worth of data from just brake pedal use could lead them to choose the right driver, out of 15 options, 90% of the time.
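
[Note: As a toy illustration of why a handful of attributes can re-identify someone, the Python sketch below joins an "anonymized" table to an identified auxiliary dataset on a few shared quasi-identifiers. All data are invented.]

# Toy re-identification by linkage: match "anonymized" records to an identified
# auxiliary dataset (e.g., a voter roll or marketing list) on quasi-identifiers.
import pandas as pd

anonymized = pd.DataFrame([
    {"zip": "10023", "birth_date": "1985-01-05", "sex": "M", "diagnosis": "asthma"},
    {"zip": "94112", "birth_date": "1990-07-22", "sex": "F", "diagnosis": "diabetes"},
])

auxiliary = pd.DataFrame([
    {"name": "Pat Doe", "zip": "10023", "birth_date": "1985-01-05", "sex": "M"},
    {"name": "Sam Roe", "zip": "94112", "birth_date": "1990-07-22", "sex": "F"},
    {"name": "Lee Poe", "zip": "94112", "birth_date": "1971-03-14", "sex": "F"},
])

quasi_identifiers = ["zip", "birth_date", "sex"]
reidentified = anonymized.merge(auxiliary, on=quasi_identifiers, how="inner")

# If a combination of quasi-identifiers is unique in the population, the
# "anonymous" record now has a name attached.
print(reidentified[["name", "diagnosis"]])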

The problem, of course, comes when multiple leaked data sets are released in the wild and can be cross referenced by attackers (state sponsored or otherwise), de-anonymized, then abused. The researchers in this new study were quick to proclaim how government and industry proclamations of "don't worry, it's anonymized!" are dangerous and inadequate:

 

"Companies and governments have downplayed the risk of re-identification by arguing that the datasets they sell are always incomplete,” said senior author Dr Yves-Alexandre de Montjoye, from Imperial’s Department of Computing, and Data Science Institute. "Our findings contradict this and demonstrate that an attacker could easily and accurately estimate the likelihood that the record they found belongs to the person they are looking for."

 

It's not clear how many studies like this we need before we stop using "anonymized" as some kind of magic word in privacy circles, but it's apparently going to need to be a few dozen more."...

 

For original post, please visit:

https://www.techdirt.com/articles/20190723/08540542637/once-more-with-feeling-anonymized-data-is-not-really-anonymous.shtml

Scooped by Roxana Marachi, PhD

Some Questions for Assessophiles // Inside Higher Education 


https://www.insidehighered.com/views/2018/07/03/professor-questions-current-approaches-assessment-opinion 

Scooped by Roxana Marachi, PhD

The junk science of emotion-recognition technology // The Outline


By Sanjana Varghees

"What do FacebookDisneyAmazon, and Kellogg’s Cornflakes have in common? They want to know how you feel. Whether they’re quietly filing patents or partnering with companies that sell this kind of software to tweak their ads for more of your attention, big corporations are investing heavily in learning how their customers feel.


The basic idea behind emotion-recognition tech is that a specific kind of software or wearable hardware can not only tell what you’re feeling from your face, your voice or the way you walk, but that it can convert that data into money and “insights” for someone else. This is despite the fact that the scientific evidence for emotion recognition isn’t really there. It’s big business, and an increasing number of companies want a piece of the action; the market for “emotion detection and recognition” was valued at $12 billion by the market research firm Mordor Intelligence last year. Mordor estimates that this figure will grow to $92 billion by 2024.


Emotion-recognition technology covers a range of different methods — gait analysis, voice analysis, and the most common iteration, face-tracking technology, where video footage of people’s faces is mined to train algorithms. In face-tracking emotion-recognition technology, your features are mapped, with certain points used as “landmarks” — such as the corners of your mouth, which are theoretically supposed to turn up when you’re happy and down when you’re angry. Those changes — scrunching up your nose, furrowing your brow, a quirk upwards of one corner of your mouth — are then assigned to emotions, sometimes on a scale of one to zero, or a percentage. While human coders used to be the primary way of labelling these differences, developments in technology have made it possible for artificial intelligence to carry out that work instead, with what’s known as computer vision algorithms.
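
[Note: To make concrete what "assigning facial-movement changes to emotions" looks like, here is a deliberately oversimplified Python sketch of the landmark-scoring idea the article describes (and which the research cited later calls into question). The landmark coordinates, thresholds, and emotion mapping are invented for illustration only.]

# Toy landmark-based "emotion scoring". Real systems run computer-vision models
# on video; the geometry and mapping here are invented, and the article's point
# is that such mappings rest on shaky science.
from dataclasses import dataclass

@dataclass
class Landmarks:
    # vertical positions in normalized image coordinates (y grows downward)
    left_mouth_corner_y: float
    right_mouth_corner_y: float
    inner_brow_y: float

NEUTRAL = Landmarks(0.70, 0.70, 0.35)  # assumed neutral pose

def emotion_scores(face: Landmarks, neutral: Landmarks = NEUTRAL) -> dict:
    """Map a few landmark displacements to 0-1 'emotion' scores (toy heuristic)."""
    # Mouth corners raised relative to neutral -> scored as "happiness"
    smile = ((neutral.left_mouth_corner_y - face.left_mouth_corner_y)
             + (neutral.right_mouth_corner_y - face.right_mouth_corner_y)) / 2
    # Inner brows lowered relative to neutral -> scored as "anger"
    frown = face.inner_brow_y - neutral.inner_brow_y
    clamp = lambda v: max(0.0, min(1.0, v * 10))
    return {"happiness": clamp(smile), "anger": clamp(frown)}

print(emotion_scores(Landmarks(0.66, 0.67, 0.36)))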

 


The broad area that emotion-recognition technology falls under is called “affective computing,” which examines the relationship between technology and emotions. Affectiva, born out of the MIT Media Lab, is probably the biggest company working in this area — it’s been around since 2006, and arguably has led the rest of the field. They’ve developed a massive dataset, comprising over 5.3 million faces, mostly collected from their partners, which are companies that have video footage of people watching advertisements, television shows and “viral content.” In the last year, they’ve expanded that dataset to include people from gifs and, crucially, people driving “in the wild”.

Affective computing in cars, often called automotive AI, is a particular area of focus: using video footage of drivers to identify whether they’re drowsy or if they’re distracted, with a stated end goal of a more “intimate in-cab experience” and “safe driving.” One of their other products, called Affdex for Market Research, tracks users’ reactions as they watch advertisements and videos, which are then converted into points and then collated into an easily usable dashboard for the person who has to then try and decide whether a background of millennial pink or salmon pink would keep people’s attention for an eighth of a second more.


If a program doesn’t have access to your face, it will simply use your voice instead. Another emotion-recognition startup that’s received media coverage is Empath, which relies on a proprietary software to track and analyze changes in your voice; tracking your pitch and the way it sounds when you waver, detecting for four emotions, which are joy, anger, calm and sorrow. A range of products fill out Empath’s suite, including the “My Mood” forecast, an app that employees can tap to track changes in their emotion (using their voice) and how those shifts correlate with weather patterns. (Team managers can see both individual employees’ moods and aggregated team moods).

 

Empath’s Web Empath API — which works across Windows, iOS, Android — adds “emotion detection capabilities” to existing apps and services, such as its “smart call centre” suite of products, which are supposed to provide a better experience for customers while reducing employee turnover. But it’s fairly uncertain how tracking your employees’ emotions could produce these outcomes — or potentially, whether employees would even have an incentive to be honest about how they were feeling (such as in the My Mood app) if they knew their bosses would see it. (Affectiva declined a request for comment, and Empath has not yet responded to my inquiry.)

 

It’s hard to tell whether these technologies actually stand up to severe scrutiny, given that the industry is still emerging and there’s not really a lot of research that verifies the claims these companies make. In fact, the opposite tends to be true. A study published in July which reviewed more than 1,000 papers in the field and was co-authored by five professors from different academic backgrounds, found that the basis for emotion-recognition technology that relies on face movement tracking just doesn’t exist. “Companies are mistaken in what they claim,” says the lead author Lisa Feldman Barrett, a psychology professor at Northeastern University who has been working in the area for many years. “Emotions are not located in your face, or in your body, or in any one signal. When you reach an emotional state, there are a cascade of changes.... The meaning of any one signal, like a facial movement, is not stable. So measuring a single signal, or even two or three signals, will be insufficient to infer a person's emotional state."...

 

For full post, please visit:

https://theoutline.com/post/8118/junk-emotion-recognition-technology

Scooped by Roxana Marachi, PhD

Maths and Tech Specialists Need Ethical Pledge, Says Academic // The Guardian


By Ian Sample

"Mathematicians, computer engineers and scientists in related fields should take a Hippocratic oath to protect the public from powerful new technologies under development in laboratories and tech firms, a leading researcher has said.

 

The ethical pledge would commit scientists to think deeply about the possible applications of their work and compel them to pursue only those that, at the least, do no harm to society.

 

Hannah Fry, an associate professor in the mathematics of cities at University College London, said an equivalent of the doctor’s oath was crucial given that mathematicians and computer engineers were building the tech that would shape society’s future.

 


“We need a Hippocratic oath in the same way it exists for medicine,” Fry said. “In medicine, you learn about ethics from day one. In mathematics, it’s a bolt-on at best. It has to be there from day one and at the forefront of your mind in every step you take.”

 

Fry will explore the power and perils of modern mathematics in December when she delivers the 2019 Royal Institution Christmas lectures. Made popular by figures from Michael Faraday to Carl Sagan, the demonstration-filled sessions were launched in 1825 and became the most prestigious public science lectures in Britain.

 

In the three-lecture series, Secrets and Lies: the Hidden Power of Mathematics, Fry will examine the maths of risk and luck, from finding the perfect partner to keeping healthy and happy. She will also explore how algorithms that feast on data have infiltrated every aspect of our lives; what problems maths should be kept away from; and how we must learn when the numbers cannot be trusted."...

 

For full post, please visit: 

https://www.theguardian.com/science/2019/aug/16/mathematicians-need-doctor-style-hippocratic-oath-says-academic-hannah-fry 

Scooped by Roxana Marachi, PhD

Privacy Guide for Protection of Student Data When Taking #ACT or College Board (#PSAT, #SAT, #AP) Exams // Parent Coalition for Student Privacy


From Parent Coalition for Student Privacy:

"Here is a one-page pdf flyer you can distribute to parents or post in your schools.

Both ACT and College Board sell personal student data to colleges and universities, as well as to other non-profit and for-profit organizations to help them recruit students and/or market their products and services.


The College Board makes approximately $100 million per year from its “Student Search” program, for which it charges organizations 47 cents per student name. [1] Last year, ACT was sued via a class action lawsuit, because they allegedly included student disability information in the data they sold to customers. [2]

If your child is taking a College Board exam, and you don’t want any of their personal data sold, which may include their race, ethnicity, self-reported grades, religion and/or test scores within a certain range, as well as other confidential information, urge them NOT to fill out any of the optional questions that are included online or in the Student Questionnaire given before the administration of the exam.  They should also be sure not to check the box that indicates they want to participate in the College Board “Student Search” program.

If your child is taking the ACT, you and your child should also refrain from filling out any of the extraneous information asked for in the ACT Student Profile Section, unless you want that data also sold and/or used for marketing purposes.

In May 2018, the US Department of Education’s Privacy Technical Assistance Center warned schools and districts that have agreements with these companies to administer their exams during the school day that their practice of allowing these companies to gather confidential information directly from students and sell it without parental consent may be illegal under several federal laws. [3] In addition, New York as well as Illinois and 21 other states prohibit school vendors from selling student data under any circumstances. [4] Illinois legislators have now asked the State Attorney General to investigate the College Board’s practices for that reason. [5]

The NY Times has reported that this data often ends up in the hands of unscrupulous for-profit companies that use the information to market dubious products and services to families; in some cases, the information may end up in the hands of data brokers. [6]

Some districts now refrain from giving these voluntary surveys to their students or tell them not to answer any of the questions, because this takes considerable time and can add stress to an already pressure-filled situation.

Districts also should be aware that these companies’ disclosure of personal student data in this way may be illegal.

Here are some questions parents should ask their children’s school or district ahead of time:

  1. Is any survey or voluntary list of questions going to be asked of their children before the administration of these exams?

  2. If so, can they give you a copy of these questions? Prior parental notification of any such survey is required under the Protection of Pupil Rights Amendment (PPRA), passed by Congress in 1978. [7]

  3. If any highly sensitive questions are included, such as those involving religious preferences or affiliations, will the school notify parents of their right to opt their children out of the survey ahead of time, as is required under PPRA?

  4. Does the district have a contract with the testing company that prohibits them from selling any of this personal student data, as is required by NY state law as well as student privacy laws in 21 other states?

  5. If not, why not? And can they share a copy of this contract?


Sources

[1] https://collegeboardsearch.collegeboard.org/pastudentsrch/support/licensing/pricing-payment-policies

[2] https://www.businesswire.com/news/home/20180807005834/en/Students-Disabilities-File-Class-Action-ACT-Test

[3] https://studentprivacy.ed.gov/sites/default/files/resource_document/file/TA%20College%20Admissions%20Examinations.pdf

[4] https://www.studentprivacymatters.org/state-privacy-laws-re-selling-student-data-_act-sat-exceptions/

[5] https://news.wttw.com/2019/10/10/lawmakers-urge-ag-raoul-investigate-college-board-selling-student-data

[6] https://www.nytimes.com/2018/07/29/business/for-sale-survey-data-on-millions-of-high-school-students.html

[7] https://www2.ed.gov/policy/gen/guid/fpco/ppra/parents.html

 

For more information, please email us at info@studentprivacymatters.org"

https://www.studentprivacymatters.org/privacy-warning-re-act-collegeboard-exams/ 

Scooped by Roxana Marachi, PhD

No Data About Us Without Us // Slides for Dignity in Schools Webinar 10/15/19

Scooped by Roxana Marachi, PhD

Epic Games sued for not warning parents 'Fortnite' is allegedly as addictive as cocaine // USA Today

By Edward Baig
Roughly a quarter of a billion mostly obsessed gamers are battling it out in "Fortnite." There’s a darn decent chance kids you know are among them. 

 

A Montreal-based law firm launched a proposed class action in Canada on behalf of two Quebec parents who claim that "Fortnite" publisher Epic Games needs to pay the price for a third-person shooter they allege is as addictive, and potentially harmful, as cocaine. 

 

The firm, Calex Légal, represents plaintiffs who are identified only by their initials, FN and JZ. They are the parents of a 10- and 15-year-old, respectively.

 

Written in French, the legal action alleges that when a person is engaged in "Fortnite" for a long period, the player’s brain releases the “pleasure hormone, dopamine” and that "Fortnite" was developed by psychologists, statisticians and others over four years “to develop the most addictive game possible,” all so Epic could reap lucrative profits. 

 

An Epic spokesperson said the company does not comment on ongoing litigation.

 

 Though "Fortnite" is free to play, kids spend gobs of real money purchasing the in-game currency, V-Bucks, used for dances (which are called “emotes”), skins and custom outfits for their virtual alter-egos.


“The defendants used the same tactics as the creators of slot machines, or variable reward programs, (to ensure) the dependence of its users, (and) the brain being manipulated to always want more,” the suit alleges in a rough translation. “Children are particularly vulnerable to this manipulation since their self-control system in the brain is not developed enough.”...

 

For full post, please visit:

https://www.usatoday.com/story/tech/talkingtech/2019/10/07/fortnite-producer-epic-games-lawsuit-says-addictive-as-cocaine/3900236002/ 

Scooped by Roxana Marachi, PhD

Schools are using AI to track their students // Quartz


By Simone Stolzoff

"Over 50 million k-12 students will go back to school in the US this month. For many of them using a school computer, every word they type will be tracked.

 

Under the Children’s Internet Protection Act (CIPA), any US school that receives federal funding is required to have an internet-safety policy. As school-issued tablets and Chromebook laptops become more commonplace, schools must install technological guardrails to keep their students safe. For some, this simply means blocking inappropriate websites. Others, however, have turned to software companies like Gaggle, Securly, and GoGuardian to surface potentially worrisome communications to school administrators.

These Safety Management Platforms (SMPs) use natural-language processing to scan through the millions of words typed on school computers. If a word or phrase might indicate bullying or self-harm behavior, it gets surfaced for a team of humans to review.
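
[Note: The vendors named above do not publish their models. As a minimal sketch of the general "scan student text, flag possible risk, route to a human reviewer" pattern the article describes, here is a keyword-based Python example; the phrase list, categories, and routing are invented, and real systems use more sophisticated natural-language processing.]

# Minimal sketch of keyword-based flagging for human review. Placeholder patterns only.
import re
from typing import Optional

FLAG_PATTERNS = {
    "self_harm": re.compile(r"\b(hurt myself|kill myself|want to die)\b", re.IGNORECASE),
    "bullying": re.compile(r"\b(everyone hates you|you should disappear)\b", re.IGNORECASE),
}

def review_document(student_id: str, text: str) -> Optional[dict]:
    """Return a flag for human review if any risk pattern matches, else None."""
    for category, pattern in FLAG_PATTERNS.items():
        match = pattern.search(text)
        if match:
            return {"student_id": student_id, "category": category,
                    "excerpt": match.group(0), "action": "route to human review team"}
    return None

print(review_document("s-1042", "Some days I just want to die."))
print(review_document("s-1043", "My essay on the water cycle is done."))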

 

In an age of mass school-shootings and increased student suicides, SMPs can play a vital role in preventing harm before it happens. Each of these companies has case studies where an intercepted message helped save lives. But the software also raises ethical concerns about the line between protecting students’ safety and protecting their privacy. 

“A good-faith effort to monitor students keeps raising the bar until you have a sort of surveillance state in the classroom,” Girard Kelly, the director of privacy review at Common Sense Media, a non-profit that promotes internet-safety education for children, told Quartz. “Not only are there metal detectors and cameras in the schools, but now their learning objectives and emails are being tracked too.”

 

The debate around SMPs sits at the intersection of two topics of national interest—protecting schools and protecting data. As more and more schools go one-to-one, the industry term for assigning every student a device of their own, the need to protect students’ digital lives is only going to increase. Over 50% of teachers say their schools are one-to-one, according to a 2017 survey from Freckle Education, meaning there’s a huge market to tap into.

But even in an age of student suicides and school shootings, when do security precautions start to infringe on students’ freedoms?"

 

For full post, please visit

https://qz.com/1318758/schools-are-using-ai-to-track-what-students-write-on-their-computers/ 

Scooped by Roxana Marachi, PhD

In the Age of AI // Frontline PBS


"Frontline investigates the promise and perils of artificial intelligence, from fears about work and privacy to rivalry between the U.S. and China. The documentary traces a new industrial revolution that will reshape and disrupt our lives, our jobs and our world, and allow the emergence of the surveillance society."...

 

https://www.pbs.org/wgbh/frontline/film/in-the-age-of-ai/

 

 

Scooped by Roxana Marachi, PhD

Predictive Analytics: The Privacy Pickle and Hewlett-Packard’s Prediction of Employee Behavior


By Eric Siegel
Hewlett-Packard (HP) knows there are two sides to every coin. The company has achieved new power by predicting employee behavior, a profitable practice that may raise eyebrows among some of its staff. HP tags its more than 330,000 workers with a so-called Flight Risk score. This simple number foretells whether each individual is likely to leave his or her job.

With the advent of predictive analytics, organizations gain power by predicting potent yet – in some cases – sensitive insights about individuals. These predictions are derived from existing data, almost as if creating new information out of thin air. Examples include HP inferring an employee’s intent to resign, retailer Target deducing a customer’s pregnancy, and law enforcement in Oregon and Pennsylvania foretelling a convict’s future repeat offense.

 

By predicting which of its staff are likely to leave, HP can focus its efforts on retaining them, thereby reducing the high cost associated with finding and training replacements. Managers “drive decisions with the support of the [predictive] report and the story it tells for each of their employees,” says Gitali Halder, who leads this prediction project’s team at HP and holds a master’s in economics from the University of Delhi.

HP credits its capacity to predict with helping decrease its workforce turnover rate for a specialized team that provides support for calculating and managing the compensation of salespeople globally. The roughly 300-member team’s turnover rates, which were a relatively high 20 percent in some regions, have decreased to 15 percent and continue to trend downward.

Flight Risk Promises Big Savings

Beyond this early success, HP’s Flight Risk prediction capability promises $300 million in estimated potential savings with respect to staff replacement and productivity loss globally. The Flight Risk scores adeptly inform managers where risk lurks: the 40 percent of HP employees assigned highest scores includes 75 percent of those who will quit (almost twice that of guessing).
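
[Note: A quick way to read the figures above: a randomly chosen 40 percent of employees would be expected to contain roughly 40 percent of eventual leavers, so capturing 75 percent of them in the top-scored 40 percent is a lift of 0.75 / 0.40 ≈ 1.9, i.e. "almost twice that of guessing."]

# Lift of the top-scored group versus random guessing, using the article's figures.
top_group_share = 0.40    # share of employees given the highest Flight Risk scores
leavers_captured = 0.75   # share of all eventual leavers found in that group
print(round(leavers_captured / top_group_share, 2))  # 1.88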

 

HP, which literally started in the proverbial garage, came in as the 27th largest employer of 2011; $127 billion in revenue places it among the top few technology companies globally.

 

Halder and her teammate Anindya Dey first broke this ground in 2011, mathematically scrutinizing the loyalty of each one of their 330,000 colleagues. The two crackerjack scientists, members of HP’s group of 1,700 analytics workers in Bangalore, built this prognostic capability with predictive analytics, technology that learns from the experience encoded in big data to form predictive scores for individual workers, customers, patients or voters.

 

To prepare learning material, they pulled together two years of employee data such as salaries, raises, job ratings and job rotations. Then they tacked on, for each employee record, whether the person had quit. Compiled in this form, the data served to train a Flight Risk detector that recognizes combinations of factors characteristic to likely HP defectors.
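
[Note: The article does not disclose HP's actual model. As a generic sketch of the workflow it describes (assemble historical employee records, label each with whether the person quit, train a classifier, then score current staff), here is a minimal Python example with invented fields and synthetic data.]

# Generic "flight risk" workflow sketch: train a classifier on labeled historical
# records, then output a probability-of-leaving score per employee. Synthetic data.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 2000
employees = pd.DataFrame({
    "salary": rng.normal(70_000, 15_000, n),
    "raises_last_2y": rng.integers(0, 4, n),
    "performance_rating": rng.integers(1, 6, n),
    "job_rotations": rng.integers(0, 3, n),
})
# Synthetic label; in the project described above this was whether the employee quit.
logit = (-0.00003 * (employees["salary"] - 70_000) - 0.4 * employees["raises_last_2y"]
         - 0.3 * employees["performance_rating"] - 0.2 * employees["job_rotations"])
employees["quit"] = rng.random(n) < 1 / (1 + np.exp(-logit))

X, y = employees.drop(columns="quit"), employees["quit"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# "Flight Risk" score: predicted probability of leaving for each (held-out) employee.
flight_risk = model.predict_proba(X_test)[:, 1]
print(flight_risk[:5].round(2))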

The results surprised Halder and Dey, revealing that promotions are not always a good thing. While promotions decrease Flight Risk across HP as a whole, the effect is reversed within the sales compensation team: Those promoted more times are more likely to quit, unless they have also experienced a more significant pay hike.

Newfound Power, Threats

The analysis confirmed that Flight Risk also depends on things one might expect. Employees with higher salaries, more raises and increased performance ratings are less prone to quit. Job rotations also keep employees on board by introducing change, given the rote, transactional nature of some compensation staff activities.

 

HP’s newfound power brings with it a newfound threat in the eyes of some employees. A novel element is emerging within staff records: speculative data. Beyond standard personal and financial data about employees, this introduces an estimation of future behavior, and so speaks to the heart, mind and intentions of the employee. The concerned ask: What if your Flight Risk score is wrong, unfairly labeling you as disloyal and blemishing your reputation?

 

Most at HP don’t know they are being predicted. “The employees are not aware of this model,” Halder says. “[It] is not designed to penalize the employees but to make necessary adjustments that would result in lower probabilities of them leaving us.”

 

HP deploys this newly synthesized, sensitive isotope with care. The Flight Risk scores are securely delivered to only a select few high-level managers. “We are keeping this data very much confidential,” assures Halder.

 

Despite concerns, predictive analytics does not itself invade privacy. Although sometimes referred to with the broader term data mining, its core process doesn’t “drill down” to inspect individuals’ data. Instead, this analytical learning process “rolls up,” discovering broadly-applicable patterns by way of number crunching across multitudes of individual records.

 

However, after this analysis, the application of what’s been learned to render predictions may divulge unvolunteered truths about an individual. Here workers and consumers fear prediction “snoops” into their private future. It’s not about misusing or leaking data. Rather, it’s the corporate deduction of private information. It’s technology that speculates, sometimes against the wishes of those speculated."

 

...

 

"Civil Liberties an Issue

But embracing predictive analytics challenges the world with an unprecedented dilemma: How do we safely harness a predictive machine that foresees job resignation, pregnancy (as predicted by retailer Target) and crime without putting civil liberties at risk?"...

 

For full post, please visit: 
http://analytics-magazine.org/predictive-analytics-the-privacy-pickle-hewlett-packards-prediction-of-employee-behavior/

Scooped by Roxana Marachi, PhD

Colleges and Universities That Use Behavioral Tracking for Recruiting and Admissions Should Beware Privacy Compliance Pitfalls // Saul Ewing Arnstein & Lehr LLP,  JDSupra


By Alexander Bilus and Jillian Walton, Saul Ewing Arnstein & Lehr LLP

"Colleges and universities, like many other organizations, have incorporated automated data collection and predictive analytics into their business models and decision-making processes. Over the past decade, consulting companies developed predictive analytics tools for higher education institutions to refine their recruiting efforts and aid in their admissions decisions. These tools are particularly attractive as tuition costs rise, sources of funding decline, and competition for top prospects increases. While the use of data in recruiting and admissions efforts is far from new, the quantity and level of detail of the data has transformed higher ed’s marketing strategies. Where colleges and universities used to target certain geographic regions and specific high schools with predictive analytics tools, institutions are now able to customize recruiting to individual prospective students. But institutions can stumble into privacy law compliance issues if these practices are not accurately disclosed in their privacy policies and if the data is not protected when it is shared with consultants. This alert describes these data collection and use practices, summarizes the privacy compliance issues that can arise, and provides practical tools colleges and universities can implement to avoid any compliance pitfalls.

Colleges and Universities’ Data Collection and Use Practices in Recruiting Efforts and Admissions Decisions

For years, institutions have collected large amounts of data on prospective students, but predictive analytics tools now enable institutions to glean insights into prospective students far beyond their basic biographic information and their self-selected program of interest. With the assistance of consulting companies, institutions use tracking technology to capture data about the activity of their website visitors, including, among other things, each visitor’s IP address, unique device identification number, the pages the visitor views, how many times the visitor visits the website, and how long the visitor looks at certain pages. This behavioral data can be linked to prospective students through an institution’s recruiting and marketing emails. When an email recipient clicks a link in an email from the institution to visit the institution’s website, the tracking technology records the email address associated with that IP address, which then connects the email address with the previously collected behavioral data.
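
[Note: As a simplified, concrete picture of the linking step just described: a per-recipient token in the recruiting email's links ties the recipient's email address to the visitor ID (IP address or cookie) under which earlier anonymous browsing was logged. The Python sketch below uses invented table names and data.]

# Simplified illustration: an email-click record de-anonymizes earlier web activity.
import pandas as pd

# Web analytics log collected before the prospect ever identified themselves.
pageviews = pd.DataFrame([
    {"visitor_id": "v-001", "page": "/financial-aid", "seconds_on_page": 210},
    {"visitor_id": "v-001", "page": "/apply", "seconds_on_page": 95},
    {"visitor_id": "v-002", "page": "/athletics", "seconds_on_page": 30},
])

# A click on a tokenized link in a recruiting email reveals who visitor "v-001" is.
email_clicks = pd.DataFrame([
    {"visitor_id": "v-001", "email": "prospect@example.org", "link_token": "t-8842"},
])

profile = pageviews.merge(email_clicks, on="visitor_id", how="inner")

# Formerly anonymous browsing is now attached to a named prospective student and
# can be combined with other datasets, as the alert goes on to describe.
print(profile[["email", "page", "seconds_on_page"]])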

By combining this behavioral data with other datasets for the institution, consulting companies can develop detailed profiles on prospective students. These datasets may be publicly available, such as U.S. Census Bureau household income data by zip code, or purchased from third-party data brokers, which amass consumer data from a multitude of sources. Institutions’ admissions offices have used these profiles and predictive analytic tools to assess a prospective student’s likelihood of applying, accepting an offer of admission, and enrolling, as well as their financial aid needs, and then, in turn, focus their recruiting efforts on prospective students who best fit the admissions offices’ desired profile for the incoming class.

Privacy Law Compliance Issues

Recently, The Washington Post published an investigative report highlighting the above-described practices and finding that the majority of the institutions included in its investigation failed to fully disclose the extent and purpose of these behavioral tracking practices in their website privacy policies. With the increasingly turbulent privacy law landscape and the public’s heightened awareness of privacy issues, institutions need to understand their own data collection and use practices, including how they are collecting data; who they are collecting data about; how this data is used, stored, accessed and secured; and who they share the data with, so that the institutions are well-positioned to evaluate whether their practices are compliant with the existing and new privacy laws. Otherwise, institutions may not adequately (or accurately) disclose their data collection and use, and may not take appropriate steps to protect data when sharing it with third-party consultants and service providers.

This may seem like a daunting undertaking when many departments throughout an institution use and collect personal information; however, there are ways to incorporate data privacy management into their operations, as discussed in the following section.


Recommendations

Institutions should verify that their privacy policies accurately describe their data collection and use practices, consider incorporating a privacy impact assessment into their procurement and vendor contract review process, and in particular ensure their privacy policies are compliant with FERPA and GDPR, where applicable.


Privacy Policies

For institutions that post privacy policies on their websites, these policies may be outdated or may not adequately describe the data collection methods, purpose for the collection, and with whom the information is shared. To the extent data collection practices are addressed in the privacy policy, the institution should take care to accurately describe the data collection, the use of the information, the purpose for the collection and use, and disclosure of that data to service providers and third parties.


Privacy Impact Assessments

Institutions should consider incorporating privacy impact assessments (PIAs) into their procurement and vendor contract review process. A PIA is used to identify what personal information the institution is collecting and/or sharing with a third party, why the personal information is collected and/or shared, and how the personal information will be used, safeguarded, and stored. This tool enables the institution to assess the risk associated with a vendor contract, evaluate whether the institution’s privacy policy accurately reflects the collection and use of personal information required in the vendor contract, and negotiate the inclusion of provisions in the vendor contract that control the vendor’s use and protection of personal data provided by the institution. PIAs also serve as collaborative tools among procurement teams and the institution’s recruiting, marketing, and admissions departments as they engage third-party consultants to provide data analytics or behavioral analytics services so the institution’s lawyers and procurement team are fully informed of the institution’s data collection and use practices.

FERPA Compliance

Under the Family Educational Rights and Privacy Act (FERPA), institutions can only disclose to third parties the education records of current students if the institutions obtain the students’ consent or if one of FERPA’s exceptions to the consent requirement applies. To the extent institutions are collecting personal information about students through the behavioral tracking methods described above, institutions may share this information with consulting companies without the consent of the students under FERPA’s “school official” exception. This exception permits an institution to disclose education records to “school officials”—including consultants who are providing services to the institution—who have been determined by the university to have “legitimate educational interests” in using the records. For this exception to the consent requirement to apply, the institution should ensure that its annual FERPA notification to students properly defines “school officials” and “legitimate educational interests” and that its contract with the service provider contains certain provisions to protect the records.

GDPR Compliance

To the extent the European Union’s General Data Protection Regulation (GDPR) applies, the regulation prescribes specific information that institutions must include in their privacy policies regarding the use and disclosure of personal data. By way of example, although this is not an exhaustive list, the privacy policy must include the identity and contact information of the data controller, the purposes for processing the personal data, the legal basis for processing, the legitimate interests of the controller or a third party where relied upon, the recipients of the personal data, the period for which the personal data will be stored, and the data subject’s privacy rights. The information required by the GDPR is more extensive than what institutions have typically included in the privacy policies posted on their websites in the past. The GDPR also requires institutions to include extensive provisions in their contracts with service providers that are given access to GDPR-protected data."...
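For illustration only, the sketch below turns the disclosure items listed in the excerpt above into a simple checklist that flags which items a draft policy does not appear to mention. The keyword lists are assumptions, and string matching is no substitute for review by counsel.

```python
# Minimal checklist covering the GDPR disclosure items listed above.
# Keyword matching is purely illustrative; it is not a compliance test.
REQUIRED_DISCLOSURES = {
    "controller identity and contact details": ["data controller", "contact"],
    "purposes of processing": ["purpose"],
    "legal basis for processing": ["legal basis"],
    "legitimate interests relied on": ["legitimate interest"],
    "recipients of personal data": ["recipient", "share"],
    "retention period": ["retain", "storage period"],
    "data subject rights": ["right to access", "right to erasure", "your rights"],
}

def missing_disclosures(policy_text: str) -> list:
    """Return the disclosure items a draft privacy policy does not appear to mention."""
    text = policy_text.lower()
    return [item for item, keywords in REQUIRED_DISCLOSURES.items()
            if not any(k in text for k in keywords)]

draft = ("We are the data controller. Contact us at privacy@example.edu. "
         "We process data for recruiting purposes and share it with service providers.")
print(missing_disclosures(draft))
# e.g. ['legal basis for processing', 'legitimate interests relied on',
#       'retention period', 'data subject rights']
```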

 

For full document, please visit: 

https://www.jdsupra.com/legalnews/colleges-and-universities-that-use-57858/

 

Scooped by Roxana Marachi, PhD

How To Recognize AI Snake Oil // Arvind Narayanan, Associate Professor of Computer Science at Princeton University 

By Arvind Narayanan

"Much of what’s being sold as "AI" today is snake oil. It does not and cannot work. In a talk at MIT yesterday, I described why this happening, how we can recognize flawed AI claims, and push back. Here are my annotated slides:

https://www.cs.princeton.edu/~arvindn/talks/MIT-STS-AI-snakeoil.pdf

 

 

Scooped by Roxana Marachi, PhD

The Data Brokers Quietly Buying and Selling Your Personal Information // Fast Company

By Steven Melendez and Alex Pasternack
"It’s no secret that your personal data is routinely bought and sold by dozens, possibly hundreds, of companies. What’s less known is who those companies are, and what exactly they do.
 

Thanks to a new Vermont law requiring companies that buy and sell third-party personal data to register with the Secretary of State, we’ve been able to assemble a list of 121 data brokers operating in the U.S. It’s a rare, rough glimpse into a bustling economy that operates largely in the shadows, and often with few rules.

 

Even Vermont’s first-of-its-kind law, which went into effect last month, doesn’t require data brokers to disclose who’s in their databases, what data they collect, or who buys it. Nor does it require brokers to give consumers access to their own data or to let them opt out of data collection. Brokers are, however, required to provide some information about their opt-out systems under the law, assuming they provide one.

 

If you do want to keep your data out of the hands of these companies, you’ll often have to contact them one by one through whatever opt-out systems they provide; more on that below.

The registry is an expansive alphabet soup of companies, from lesser-known organizations that help landlords research potential tenants or deliver marketing leads to insurance companies, to the quiet giants of data. Those include big names in people search, like Spokeo, ZoomInfo, White Pages, PeopleSmart, Intelius, and PeopleFinders; credit reporting, like Equifax, Experian, and TransUnion; and advertising and marketing, like Acxiom, Oracle, LexisNexis, Innovis, and KBM. Some companies also specialize in “risk mitigation,” which can include credit reporting but also background checks and other identity verification services.

 

Still, these 121 entities represent just a fraction of the broader data economy: The Vermont law only covers third-party data firms–those trafficking in the data of people with whom they have no relationship–as opposed to “first-party” data holders like Amazon, Facebook, or Google, which collect data directly from users."...

 

For full post, please see:

https://www.fastcompany.com/90310803/here-are-the-data-brokers-quietly-buying-and-selling-your-personal-information 

 

 


 

 

Scooped by Roxana Marachi, PhD

AI in 2019: A Year in Review: The Growing Pushback Against Harmful AI // AI Now Institute


Published by the AI Now Institute
"On October 2nd, the AI Now Institute at NYU hosted its fourth annual AI Now Symposium to another packed house at NYU’s Skirball Theatre. The Symposium focused on the growing pushback to harmful forms of AI, inviting organizers, scholars, and lawyers onstage to discuss their work. The first panel examined AI’s use in policing and border control; the second spoke with tenant organizers from Brooklyn who are opposing their landlord’s use of facial recognition in their building; the third centered on the civil rights attorney suing the state of Michigan over its use of broken and biased algorithms, and the final panel focused on blue-collar tech workers, from Amazon warehouses to gig-economy drivers, to speak to their organizing and significant wins over the past year. You can watch the full event here.

 

AI Now co-founders Kate Crawford and Meredith Whittaker opened the Symposium with a short talk summarizing some key moments of opposition over the year, focusing on five themes:
(1) facial and affect recognition;
(2) the movement from “AI bias” to justice;
(3) cities, surveillance, borders;
(4) labor, worker organizing, and AI; and
(5) AI’s climate impact.


For full post with details, please visit:

https://medium.com/@AINowInstitute/ai-in-2019-a-year-in-review-c1eba5107127 

Scooped by Roxana Marachi, PhD

Education, privacy, and big data algorithms: Taking the persons out of personalized learning // Regan and Steeves (2019)


Abstract
"In this paper, we review the literature on philanthropy in education to provide a larger context for the role that technology company foundations, such as the Bill and Melinda Gates Foundation and Chan Zuckerberg Initiative, are playing with respect to the development and implementation of personalized learning. We then analyze the ways that education magazines and tech company foundation outreach discuss personalized learning, paying special attention to issues of privacy. Our findings suggest that competing discourses on personalized learning revolve around contested meanings about the type of expertise needed for twenty-first century learning, what self-directed learning should look like, whether education is about process or content, and the type of evidence that is required to establish whether or not personalized learning leads to better student outcomes. Throughout, privacy issues remain a hot spot of conflict between the desire for more efficient outcomes and a whole child approach that is reminiscent of John Dewey’s insight that public education plays a special role in creating citizens."

Contents

Introduction
Philanthropy in education
Tech foundation activities in K-12 education
Materials and methods
Results and discussion
Implications and conclusions

 

For full post, please visit:

https://journals.uic.edu/ojs/index.php/fm/article/view/10094/8152#23a 

Scooped by Roxana Marachi, PhD

Naviance Data Breach

https://www.montgomeryschoolsmd.org/uploadedFiles/data-privacy-security/Data%20Breach%20Notification%20-%20Naviance_MERGED.pdf 

Scooped by Roxana Marachi, PhD

Maryland Privacy Council Tackles Substandard Student Data Protections // EdScoop

By Ryan Johnston

"Just three months after an audit of the Maryland Education Department’s data-storage practices found it was placing the personal information of 1.4 million students and 233,000 teachers at risk, a statewide council of Maryland education officials is trying to change the state’s data-privacy laws. 

Maryland’s new Student Data Privacy Council met for the first time on Thursday to study the state’s Student Data Privacy Act of 2015 and to identify best practices that other states have started using over the last five years. The idea is for the council to recommend changes to the state’s data-privacy strategy through a report submitted to Gov. Larry Hogan by December 2020. The state has long struggled to maintain effective data privacy protections and received a D+ grade from the Parent Coalition for Student Privacy in 2018.
 

The council is chaired by Carol Williamson, deputy state superintendent for the Maryland Office of Teaching and Learning, and includes two state congressional representatives, as well as a handful of data-privacy experts and state Education Department administrators.

The council will meet monthly to offer fixes to the state’s privacy act, which has proved untenable after an audit revealed that personally identifiable information of teachers and students, including names and Social Security numbers, had been stored in an unencrypted format. The audit also found that the education department failed to implement data-loss prevention software or to confirm that its third-party vendors were using proper data-security practices themselves. It also found that the agency had inadequate malware protection and sometimes relied on old, vulnerable servers."...

 

For full post, please visit:

https://edscoop.com/maryland-student-data-privacy-council-first-meeting/

 

Scooped by Roxana Marachi, PhD

How Photos of Your Kids Are Powering Surveillance Technology // The New York Times


"Millions of Flickr images were sucked into a database called MegaFace. Now some of those faces have the ability to sue."


By Kashmir Hill and Aaron Krolik

"The pictures of Chloe and Jasper Papa as kids are typically goofy fare: grinning with their parents; sticking their tongues out; costumed for Halloween. Their mother, Dominique Allman Papa, uploaded them to Flickr after joining the photo-sharing site in 2005.

None of them could have foreseen that 14 years later, those images would reside in an unprecedentedly huge facial-recognition database called MegaFace. Containing the likenesses of nearly 700,000 individuals, it has been downloaded by dozens of companies to train a new generation of face-identification algorithms, used to track protesters, surveil terrorists, spot problem gamblers and spy on the public at large. The average age of the people in the database, its creators have said, is 16.

 

“It’s gross and uncomfortable,” said Mx. Papa, who is now 19 and attending college in Oregon. “I wish they would have asked me first if I wanted to be part of it. I think artificial intelligence is cool and I want it to be smarter, but generally you ask people to participate in research. I learned that in high school biology.”

 

By law, most Americans in the database don’t need to be asked for their permission — but the Papas should have been. As residents of Illinois, they are protected by one of the strictest state privacy laws on the books: the Biometric Information Privacy Act, a 2008 measure that imposes financial penalties for using an Illinoisan’s fingerprints or face scans without consent.

 

Those who used the database — companies including Google, Amazon, Mitsubishi Electric, Tencent and SenseTime — appear to have been unaware of the law, and as a result may have huge financial liability, according to several lawyers and law professors familiar with the legislation."...

 

 

For full story, please visit: 

https://www.nytimes.com/interactive/2019/10/11/technology/flickr-facial-recognition.html

Scooped by Roxana Marachi, PhD

Children 'interested in' gambling and alcohol, according to Facebook // The Guardian


By Alex Hern and Frederik Hugo Ledegaard

"Facebook has marked hundreds of thousands of children as “interested in” adverts about gambling and alcohol, a joint investigation by the Guardian and the Danish Broadcasting Corporation has found.

 

The social network’s advertising tools reveal 740,000 children under the age of 18 are flagged as being interested in gambling, including 130,000 in the UK. Some 940,000 minors – 150,000 of whom are British – are flagged as being interested in alcoholic beverages.

These “interests” are automatically generated by Facebook, based on what it has learned about a user by monitoring their activity on the social network. Advertisers can then use them to specifically target messages to subgroups who have been flagged as interested in the topic.

 

In a statement, Facebook said: “We don’t allow ads that promote the sale of alcohol or gambling to minors on Facebook and we enforce against this activity when we find it. We also work closely with regulators to provide guidance for marketers to help them reach their audiences effectively and responsibly.”


The company does allow advertisers to specifically target messages to children based on their interest in alcohol or gambling. A Facebook insider gave the example of an anti-gambling service that may want to reach out to children who potentially have a gambling problem and offer them help and support.

But advertisers can target the interests for other purposes as well. The developers of an exploitative video game with profitable “loot box” mechanics, for instance, could target their adverts to children with an interest in gambling without breaching any of Facebook’s regulations.

 


The presence of automated interests also means that alcohol and gambling advertisers who do try to avoid Facebook’s rules about advertising to children have an audience already selected for them by the social network. Facebook relies primarily on automated review for flagging adverts that break its policies. But the automated review is not guaranteed to find breaches before the adverts start to run. Facebook recently settled a lawsuit with the financial expert Martin Lewis over the company’s long-term failure to keep his image out of scam adverts.

Facebook’s automatic categorisation of users has been criticised before. In May 2018, the social network was found to be targeting users it thought were interested in subjects such as homosexuality, Islam or liberalism, despite religion, sexuality and political beliefs being explicitly marked out as “sensitive information” by the EU’s GDPR data protection laws."...

 

For full post, please visit:

https://www.theguardian.com/technology/2019/oct/09/children-interested-in-gambling-and-alcohol-facebook

Scooped by Roxana Marachi, PhD

"Google using dubious tactics to target people with ‘darker skin’ in facial recognition project" // New York Daily News

"Google using dubious tactics to target people with ‘darker skin’ in facial recognition project" // New York Daily News | Educational Psychology & Technology | Scoop.it

 

By Ginger Adams Otis and Nancy Dillon
"Tech giant Google wants your face — especially if you’ve got “darker skin.”

To get it, the company has funded a facial recognition project that’s targeting people of color with dubious tactics, multiple sources with direct knowledge of the project told the Daily News.

 

The company’s goal is to build a massively diverse database, ostensibly so products like the biometric features on its upcoming Pixel 4 smartphone don’t suffer from racial bias. In the past, facial recognition technology has notoriously had a harder time identifying people with darker skin.

 

Google wants to avoid that pitfall — so much so that it paid hired temps to go out and collect face scans from a variety of people on the street, using $5 gift cards as an incentive.

 

A Google spokesperson acknowledged the goal of the data collection."...

 

For full post, please visit: 

https://www.nydailynews.com/news/national/ny-google-darker-skin-tones-facial-recognition-pixel-20191002-5vxpgowknffnvbmy5eg7epsf34-story.html

 

