Data and society
Scooped by heather dawson
June 12, 11:24 AM
Whose Name Comes Up? Auditing LLM-Based Scholar Recommendations
This paper evaluates the performance of six open-weight LLMs (llama3-8b, llama3.1-8b, gemma2-9b, mixtral-8x7b, llama3-70b, llama3.1-70b) in recommending experts in physics across five tasks: top-k experts by field, influential scientists by discipline, epoch, seniority, and scholar counterparts. The evaluation examines consistency, factuality, and biases related to gender, ethnicity, academic popularity, and scholar similarity. Using ground-truth data from the American Physical Society and OpenAlex, we establish scholarly benchmarks by comparing model outputs to real-world academic records. Our analysis reveals inconsistencies and biases across all models. mixtral-8x7b produces the most stable outputs, while llama3.1-70b shows the highest variability. Many models exhibit duplication, and some, particularly gemma2-9b and llama3.1-8b, struggle with formatting errors. LLMs generally recommend real scientists, but accuracy drops in field-, epoch-, and seniority-specific queries, consistently favoring senior scholars. Representation biases persist, replicating gender imbalances (reflecting male predominance), under-representing Asian scientists, and over-representing White scholars. Despite some diversity in institutional and collaboration networks, models favor highly cited and productive scholars, reinforcing the rich-get-richer effect while offering limited geographical representation. These findings highlight the need to improve LLMs for more reliable and equitable scholarly recommendations.
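As a rough illustration of the kind of audit the paper describes (a minimal Python sketch, not the authors' code), one can check what fraction of a model's recommended names match a ground-truth set of real physicists, such as one derived from APS or OpenAlex records, and how often recommendations repeat; all names and values below are invented for illustration.

```python
# Minimal sketch of two checks described above: factuality (does a recommended name
# correspond to a real scholar in a ground-truth set?) and duplication. The names and
# ground-truth set are invented placeholders, not data from the paper.

def factuality_rate(recommended: list[str], ground_truth: set[str]) -> float:
    """Fraction of recommended names that appear in the ground-truth set of real scholars."""
    if not recommended:
        return 0.0
    normalized = [name.strip().lower() for name in recommended]
    return sum(name in ground_truth for name in normalized) / len(normalized)

def duplication_rate(recommended: list[str]) -> float:
    """Fraction of recommendations that repeat an earlier entry in the same list."""
    if not recommended:
        return 0.0
    normalized = [name.strip().lower() for name in recommended]
    return 1 - len(set(normalized)) / len(normalized)

# Hypothetical output for a "top-5 experts in condensed matter physics" prompt.
recs = ["Mildred Dresselhaus", "Philip W. Anderson", "Jane Doe",
        "Philip W. Anderson", "A. Fictional Physicist"]
truth = {"mildred dresselhaus", "philip w. anderson"}

print(factuality_rate(recs, truth))  # 0.6 -> three of five names match real scholars
print(duplication_rate(recs))        # 0.2 -> one of five entries is a repeat
```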
Scooped by heather dawson
June 12, 11:23 AM
Artificial Power: 2025 Landscape Report
In the aftermath of the “AI boom,” this report examines how the push to integrate AI products everywhere grants AI companies - and the tech oligarchs that run them - power that goes far beyond their deep pockets.
Scooped by heather dawson
June 12, 9:11 AM

Generative AI and Jobs: A Refined Global Index of Occupational Exposure

This ILO Working Paper refines the global measurement of occupational exposure to generative AI by combining task-level data, expert input, and AI model predictions. It offers an improved methodological framework to assess how GenAI may impact jobs across countries and sectors.
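As a simple illustration of how task-level scores can be rolled up into an occupation-level exposure index (a hypothetical sketch, not the ILO's methodology or figures):

```python
# Hypothetical sketch: aggregate task-level GenAI exposure scores (0 = no exposure,
# 1 = highly exposed) into an occupation-level index, weighted by each task's share
# of the job. All scores and weights are invented for illustration, not ILO data.

def occupation_exposure(tasks: list[tuple[str, float, float]]) -> float:
    """tasks: (task name, exposure score in [0, 1], weight); returns the weighted mean."""
    total_weight = sum(weight for _, _, weight in tasks)
    return sum(score * weight for _, score, weight in tasks) / total_weight

clerical_tasks = [
    ("draft routine correspondence", 0.8, 0.4),
    ("schedule meetings",            0.6, 0.3),
    ("greet visitors in person",     0.1, 0.3),
]
print(round(occupation_exposure(clerical_tasks), 2))  # 0.53
```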

Scooped by heather dawson
June 11, 2:48 AM
Microsoft 365 Copilot Experiment: Cross-Government Findings Report (HTML)

AI can significantly reduce time spent on government tasks. This report describes a trial involving 20,000 civil servants in the United Kingdom, which found they could each save nearly two weeks a year by using the technology. It suggests AI tools have the potential to transform productivity and public service delivery at scale.

Scooped by heather dawson
June 1, 3:40 PM
Student perceptions of AI 2025

Jisc report. Over the last two years we have spoken to groups of students to gain a broad understanding of how they view artificial intelligence (AI), how they are using it, and what their concerns and hopes are. We have published two reports summarising our findings, in 2023 and 2024.

Scooped by heather dawson
June 1, 3:30 PM
Unveiling the 2025 State of Open Infrastructure Report
Presenting the latest insights and trends in open infrastructure characteristics, funding, policy, and community health.
Scooped by heather dawson
May 20, 7:08 AM

European Union Intellectual Property Office (EUIPO) Releases Study on Generative Artificial Intelligence and Copyright

Scooped by heather dawson
May 15, 3:45 PM
Student Perspectives on the Benefits and Risks of AI in Education
The use of chatbots equipped with artificial intelligence (AI) in educational settings has increased in recent years, showing potential to support teaching and learning. However, the adoption of these technologies has raised concerns about their impact on academic integrity, students' ability to problem-solve independently, and potential underlying biases. To better understand students' perspectives and experiences with these tools, a survey was conducted at a large public university in the United States. Through thematic analysis, 262 undergraduate students' responses regarding their perceived benefits and risks of AI chatbots in education were identified and categorized into themes.
The results highlight several benefits identified by the students, with feedback and study support, instruction capabilities, and access to information being the most cited. Their primary concerns included risks to academic integrity, accuracy of information, loss of critical thinking skills, the potential development of overreliance, and ethical considerations such as data privacy, system bias, environmental impact, and preservation of human elements in education.
While student perceptions align with previously discussed benefits and risks of AI in education, they show heightened concerns about distinguishing between human- and AI-generated work, particularly in cases where authentic work is flagged as AI-generated. To address students' concerns, institutions can establish clear policies regarding AI use and develop curricula around AI literacy. With these in place, practitioners can effectively develop and implement educational systems that leverage AI's potential in areas such as immediate feedback and personalized learning support. This approach can enhance the quality of students' educational experiences while preserving the integrity of the learning process with AI.
Scooped by heather dawson
May 14, 7:13 AM

The Adoption of Artificial Intelligence in Firms: New Evidence for Policymaking (OECD)

Scooped by heather dawson
May 12, 5:41 AM

Revolutionizing health and safety: The role of AI and digitalization at work (ILO)

Scooped by heather dawson
May 8, 3:53 PM

Global Approaches to Gen AI in Higher Education

Watch videos from the LSE-Peking University joint conference on AI

 

On 3 and 4 April, LSE and Peking University jointly hosted a conference on 'Global Approaches to Gen AI in Higher Education'. You can watch a video playlist of highlights from the conference on YouTube, featuring teaching colleagues from several LSE departments, a student panel, and guest keynote speakers.

Scooped by heather dawson
April 29, 6:35 AM
Understanding ORCID adoption among academic researchers | Scientometrics
Just over a decade ago, the ORCID (Open Researcher and Contributor Identifier) was created to provide a unique digital identifier for researchers around the world. The ORCID has proven essential in identifying individual researchers and their publications, both for bibliometric research analyses and for universities and other organizations tracking the research productivity and impact of their personnel. Yet widespread adoption of the ORCID by individual researchers has proved elusive, with previous studies finding adoption rates ranging from 3% to 42%. Using a national survey of U.S. academic researchers at 31 research universities, we investigate why some researchers adopt an ORCID and some do not. We found an overall adoption rate of 72%, with adoption rates across academic disciplines ranging from a low of 17% in the visual and performing arts to a high of 93% in biological and biomedical sciences. Many academic journals require an ORCID to submit a manuscript, and this is the main reason why researchers adopt an ORCID. The top three reasons for not having an ORCID are not seeing the benefits, being far enough along in the academic career not to need it, and working in an academic discipline where it is not needed.
Scooped by heather dawson
April 14, 10:25 AM
Cybersecurity and privacy maturity assessment and strengthening for digital health information systems (WHO/Europe)
https://www.who.int/europe/publications/i/item/WHO-EURO-2025-11827-51599-78854
This guide focuses on cybersecurity and privacy risk assessments in digital health, as tailored to the WHO European Region. It provides a framework for technical audiences to develop risk assessment specifications suited to the unique needs and goals of their organizations and countries in order to comply with country-specific cybersecurity and privacy regulations. The assessment questionnaire that forms part of the assessment methodology is also available in the form of a Microsoft Excel spreadsheet and is published as a separate web annex.
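As a purely illustrative sketch of how a maturity questionnaire can be scored by domain (this is not the WHO assessment methodology, its domains, or its scale; all numbers are invented):

```python
# Illustrative only: average question scores within each domain to get a rough
# per-domain maturity level. The domains, scale, and numbers are invented and do not
# reflect the WHO questionnaire or its scoring.

def domain_maturity(responses: dict[str, list[int]]) -> dict[str, float]:
    """responses: domain -> question scores (e.g. 1 = initial ... 5 = optimized)."""
    return {domain: round(sum(scores) / len(scores), 1)
            for domain, scores in responses.items()}

answers = {
    "access control":    [3, 4, 2],
    "incident response": [2, 2, 3],
    "data privacy":      [4, 3, 3],
}
print(domain_maturity(answers))
# {'access control': 3.0, 'incident response': 2.3, 'data privacy': 3.3}
```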

Scooped by heather dawson
June 12, 11:23 AM
In the Room Where It Happens: Generative AI Policy Creation in Higher Education
To develop a robust policy for generative artificial intelligence use in higher education, institutional leaders must first create "a room" where diverse…
Scooped by heather dawson
June 12, 9:20 AM
Power Hungry: AI and our energy future (MIT Technology Review)
An unprecedented look at the state of AI’s energy and resource usage, where it is now, where it is headed in the years to come, and why we have to get it right.
Scooped by heather dawson
June 12, 3:11 AM
Research Security – ARMA
Research Security Report: 'Stronger cooperation, safer collaboration: driving research security cooperation across Europe'. Following on from the event series held in 2024, the team has released a report based on the findings.
Scooped by heather dawson
June 11, 2:47 AM

Mapping the Potential of Generative AI and Public Sector Work: Using time use data to identify opportunities for AI adoption in Great Britain's public sector

Mapping the potential: Generative AI and public sector work | The Alan Turing Institute | 2 June 2025 | Government, Technology

This report outlines the findings from assessing the extent to which public sector activities in the United Kingdom are suited for Generative AI (GenAI) use. The findings demonstrate the potential supporting role that GenAI could play in freeing up valuable public sector time. However, its potential to support public sector work activities varies across different sectors.
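As a hypothetical sketch of the kind of calculation a time-use mapping implies (invented figures, not the report's data or method), time spent on each activity can be weighted by how suitable that activity is for GenAI support:

```python
# Hypothetical sketch: combine time-use data (hours per week on each activity) with a
# per-activity GenAI suitability score to estimate how much working time the technology
# could support. All figures are invented and are not from the Turing Institute report.

def supportable_hours(time_use: dict[str, float], suitability: dict[str, float]) -> float:
    """Hours per week in which GenAI could plausibly assist, summed across activities."""
    return sum(hours * suitability.get(activity, 0.0)
               for activity, hours in time_use.items())

weekly_hours = {"drafting documents": 10, "data entry": 6, "face-to-face casework": 14}
scores       = {"drafting documents": 0.7, "data entry": 0.8, "face-to-face casework": 0.1}

print(round(supportable_hours(weekly_hours, scores), 1))  # 13.2 of 30 recorded hours
```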

Scooped by heather dawson
June 1, 3:38 PM
Generative AI and Creativity: A Systematic Literature Review and Meta-Analysis
Generative artificial intelligence (GenAI) is increasingly used to support a wide range of human tasks, yet empirical evidence on its effect on creativity remains scattered. Can GenAI generate ideas that are creative? To what extent can it support humans in generating ideas that are both creative and diverse? In this study, we conduct a meta-analysis to evaluate the effect of GenAI on performance in creative tasks. For this, we first perform a systematic literature search, based on which we identify n = 28 relevant studies (m = 8214 participants) for inclusion in our meta-analysis. We then compute standardized effect sizes based on Hedges' g. We compare different outcomes: (i) how creative GenAI is; (ii) how creative humans augmented by GenAI are; and (iii) the diversity of ideas by humans augmented by GenAI. Our results show no significant difference in creative performance between GenAI and humans (g = -0.05), while humans collaborating with GenAI significantly outperform those working without assistance (g = 0.27). However, GenAI has a significant negative effect on the diversity of ideas for such collaborations between humans and GenAI (g = -0.86). We further analyze heterogeneity across different GenAI models (e.g., GPT-3.5, GPT-4), different tasks (e.g., creative writing, ideation, divergent thinking), and different participant populations (e.g., laypeople, business, academia). Overall, our results position GenAI as an augmentative tool that can support, rather than replace, human creativity, particularly in tasks benefiting from ideation support.
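For readers unfamiliar with the effect-size metric used here, the sketch below computes Hedges' g for two independent groups using the standard small-sample-corrected formula; the group means, standard deviations, and sizes are invented, not values from the review.

```python
# Standard formula for Hedges' g: Cohen's d (mean difference over pooled SD) multiplied
# by a small-sample bias correction. The example numbers are invented for illustration
# and are not taken from the meta-analysis.

import math

def hedges_g(m1: float, s1: float, n1: int, m2: float, s2: float, n2: int) -> float:
    pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd
    correction = 1 - 3 / (4 * (n1 + n2) - 9)  # Hedges' small-sample correction
    return d * correction

# Hypothetical creativity ratings: humans working with GenAI vs. humans working alone.
print(round(hedges_g(m1=6.1, s1=1.2, n1=40, m2=5.8, s2=1.3, n2=40), 2))  # ~0.24
```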
Scooped by heather dawson
May 20, 7:10 AM
Copyright and Artificial Intelligence, Part 3 | U.S. Copyright Office
U.S. Copyright Office Releases Prepublication Version of Report on Copyright and Artificial Intelligence Part 3: Generative AI Training
Scooped by heather dawson
May 20, 7:04 AM

Evidence of a social evaluation penalty for using AI

J.A. Reif, R.P. Larrick, & J.B. Soll, Evidence of a social evaluation penalty for using AI, Proc. Natl. Acad. Sci. U.S.A. 122 (19) e2426766122, https://doi.org/10.1073/pnas.2426766122 (2025).

Scooped by heather dawson
May 14, 7:14 AM

Regulating image-based abuse: an examination of Australia’s reporting and removal scheme

Scooped by heather dawson
May 12, 5:47 AM

Human Development Report 2025: A matter of choice: people and possibilities in the age of Artificial Intelligence (AI) (UNDP)

Scooped by heather dawson
May 8, 3:56 PM

LSE-PKU Student Manifesto on the use of Generative AI in education

Scooped by heather dawson
April 29, 6:37 AM

Where there’s a will there’s a way: ChatGPT is used more for science in countries where it is prohibited. 

Honglin Bao, Mengyi Sun, Misha Teplitskiy; Where there’s a will there’s a way: ChatGPT is used more for science in countries where it is prohibited. Quantitative Science Studies 2025; doi: https://doi.org/10.1162/qss_a_00368

Scooped by heather dawson
April 28, 3:17 AM
The direction of AI innovation in the UK: Insights from a new database and a roadmap for reform (IPPR)

With the latest models achieving top scores in scientific and diagnostic reasoning tests, they could usher in a new era of growth. In our previous report we…