The plan sets out how the Australian Public Service will harness artificial intelligence (AI) to deliver better services. It provides the platform for every public servant to have the foundational training, capability support, access and guidance needed to use AI tools safely and responsibly. It sets out 15 initiatives across 3 pillars.
The advent of LLMs has given rise to a new type of web search: generative search, where LLMs retrieve web pages related to a query and generate a single, coherent text as a response. This output modality stands in stark contrast to traditional web search, where results are returned as a ranked list of independent web pages. In this paper, we ask: Along what dimensions do generative search outputs differ from traditional web search? We compare Google, a traditional web search engine, with four generative search engines from two providers (Google and OpenAI) across queries from four domains. Our analysis reveals intriguing differences. Most generative search engines cover a wider range of sources compared to web search. Generative search engines vary in the degree to which they rely on internal knowledge contained within the model parameters vs. external knowledge retrieved from the web. Generative search engines surface varying sets of concepts, creating new opportunities for enhancing search diversity and serendipity. Our results also highlight the need for revisiting evaluation criteria for web search in the age of generative AI.
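The source-coverage comparison described in this abstract can be sketched as extracting the set of domains each engine cites and measuring their overlap. The result lists below are illustrative assumptions, not data from the paper:

```python
from urllib.parse import urlparse

def domains(urls):
    """Extract the set of hostnames (minus a leading 'www.') from result URLs."""
    return {urlparse(u).netloc.lower().removeprefix("www.") for u in urls}

def jaccard(a, b):
    """Jaccard similarity between two sets (1.0 = identical source coverage)."""
    return len(a & b) / len(a | b) if a | b else 1.0

# Hypothetical result lists for one query on two engines.
web_results = [
    "https://www.example.org/page1",
    "https://news.example.com/story",
    "https://docs.example.net/guide",
]
generative_citations = [
    "https://example.org/page2",
    "https://blog.example.io/post",
]

web_d = domains(web_results)
gen_d = domains(generative_citations)
print(sorted(gen_d - web_d))   # domains cited only by the generative engine
print(jaccard(web_d, gen_d))   # overlap in source coverage
```

A low Jaccard score across many queries would indicate the wider, largely non-overlapping source coverage the abstract reports for generative engines.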
The growing ubiquity of conversational AI highlights the need for frameworks that capture not only users' instrumental goals but also the situated, adaptive, and social practices through which they achieve them. Existing taxonomies of conversational behavior either overgeneralize, remain domain-specific, or reduce interactions to narrow dialogue functions. To address this gap, we introduce the Taxonomy of User Needs and Actions (TUNA), an empirically grounded framework developed through iterative qualitative analysis of 1193 human-AI conversations, supplemented by theoretical review and validation across diverse contexts. TUNA organizes user actions into a three-level hierarchy encompassing behaviors associated with information seeking, synthesis, procedural guidance, content creation, social interaction, and meta-conversation. By centering user agency and appropriation practices, TUNA enables multi-scale evaluation, supports policy harmonization across products, and provides a backbone for layering domain-specific taxonomies. This work contributes a systematic vocabulary for describing AI use, advancing both scholarly understanding and practical design of safer, more responsive, and more accountable conversational systems.
Read the report by Robin van Kessel, Jelena Schmidt, Stephanie Winitsky, George Wharton and Elias Mossialos, which introduces a new taxonomy and eight policy recommendations to guide evidence-based adoption.
AI Tools Radar (https://radar.ircai.org/en/). Created by UNESCO, this portal provides a catalogue of innovative and impactful AI tools from around the world, focusing on applications in the public sector, media and the judiciary. It is not intended to be exhaustive; tools are selected for usefulness, accessibility and ethics. Items can be browsed by source and country of origin, and all have descriptions.
A framework addressing four fundamental issues facing artificial intelligence (AI): how behavioural science can augment AI’s capabilities; why individuals adopt or resist AI; how we can align AI design with human psychology; and how society must adapt to the impacts of AI.
This study investigates how Large Language Models (LLMs) are influencing the language of academic papers by tracking 12 LLM-associated terms across six major scholarly databases (Scopus, Web of Science, PubMed, PubMed Central (PMC), Dimensions, and OpenAlex) from 2015 to 2024. Using over 2.4 million PMC open-access publications (2021-July 2025), we also analysed full texts to assess changes in the frequency and co-occurrence of these terms before and after ChatGPT's initial public release. Across databases, delve (+1,500%), underscore (+1,000%), and intricate (+700%) had the largest increases between 2022 and 2024. Growth in LLM-term usage was much higher in STEM fields than in social sciences and arts and humanities. In PMC full texts, the proportion of papers using underscore six or more times increased by over 10,000% from 2022 to 2025, followed by intricate (+5,400%) and meticulous (+2,800%). Nearly half of all 2024 PMC papers using any LLM term also included underscore, compared with only 3%-14% of papers before ChatGPT in 2022. Papers using one LLM term are now much more likely to include other terms. For example, in 2024, underscore strongly correlated with pivotal (0.449) and delve (0.311), compared with very weak associations in 2022 (0.032 and 0.018, respectively). These findings provide the first large-scale evidence based on full-text publications and multiple databases that some LLM-related terms are now being used much more frequently and together. The rapid uptake of LLMs to support scholarly publishing is a welcome development reducing the language barrier to academic publishing for non-English speakers.
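The co-occurrence analysis this abstract describes can be sketched as counting per-paper occurrences of candidate terms and correlating their presence across a corpus. The term list and toy corpus below are illustrative assumptions, not the study's actual data:

```python
import re
from math import sqrt

TERMS = ["delve", "underscore", "intricate", "meticulous", "pivotal"]

def term_counts(text):
    """Count whole-word occurrences of each candidate term (case-insensitive)."""
    words = re.findall(r"[a-z]+", text.lower())
    return {t: words.count(t) for t in TERMS}

def pearson(xs, ys):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def cooccurrence(papers, a, b):
    """Correlate the presence of term a with the presence of term b across papers."""
    xs = [1 if term_counts(p)[a] > 0 else 0 for p in papers]
    ys = [1 if term_counts(p)[b] > 0 else 0 for p in papers]
    return pearson(xs, ys)

# Toy corpus standing in for the PMC full texts.
papers = [
    "We delve into the pivotal role of intricate pathways.",
    "Results underscore the pivotal importance of this mechanism.",
    "A meticulous analysis of the dataset.",
    "The findings underscore and delve into pivotal questions.",
]
print(cooccurrence(papers, "underscore", "pivotal"))  # ~0.577 on this toy corpus
```

Run at scale over yearly slices of a corpus, rising correlations of this kind are what the study reports for pairs such as underscore and pivotal between 2022 and 2024.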
This paper provides an overview of the technical, ethical and environmental factors to consider when preparing scientific data for artificial intelligence (AI), and how these factors align with the ‘Open Science’ movement. The information presented is relevant to researchers, data practitioners, scientific bodies and policy-makers for science.
This study proposes a conceptual framework for understanding how generative artificial intelligence (GenAI) affects university students’ learning journey.
With the advent of large language models (LLMs), there is a growing interest in applying LLMs to scientific tasks. In this work, we conduct an experimental study to explore the applicability of LLMs for configuring, annotating, translating, explaining, and generating scientific workflows. We use five different workflow-specific experiments and evaluate several open- and closed-source language models using state-of-the-art workflow systems. Our studies reveal that LLMs often struggle with workflow-related tasks due to their lack of knowledge of scientific workflows. We further observe that the performance of LLMs varies across experiments and workflow systems. Our findings can help workflow developers and users understand LLMs' capabilities in scientific workflows, and motivate further research on applying LLMs to workflows.
AI Competencies for Academic Library Workers, approved by the ACRL Board of Directors, October 2025. Contents: Foreword; Guiding Mindsets; Competencies.
1. Evidence synthesists are ultimately responsible for their evidence synthesis, including the decision to use artificial intelligence (AI) and automation, and to ensure adherence to legal and ethical standards.
2. Cochrane, the Campbell Collaboration, JBI and the Collaboration for Environmental Ev …
Generative AI and news report 2025: How people think about AI's role in journalism and society, by Dr Felix Simon and colleagues.
This paper examines the environmental implications of applying artificial intelligence (AI) in scientific research. It serves as a primer for scientists, research institutions and science policy-makers who seek to understand various approaches to addressing the environmental impact of AI in science. In addition, it offers guidance on how reducing environmental costs can contribute to the broader goals of sustainability and ethical AI use in research environments.
AI Risk & Accountability: AI has risks and all actors must be accountable.
AI, Data & Privacy: Data and privacy are primary policy issues for AI.
Generative AI: Managing the risks and benefits of generative AI.
Library Trends, Volume 73, Number 4, May 2025: Generative AI and Libraries: Applications and Ethics, Part II, edited by Melissa A. Wong. Library Trends is an essential tool for librarians and educators alike.