|
Scooped by
Stéphane Cottin
onto Bonnes pratiques en documentation June 4, 2024 2:01 PM
|
|
Bonnes pratiques en documentation
Latest news on good practices in documentary research, documentation analysis, search engines, ... Curated by Stéphane Cottin |
|
Scooped by
Stéphane Cottin
April 16, 11:46 AM
|
The Kaizen approach is a concrete lever for engaging in a regenerative transition within organizations, while reconciling economic performance with progressive transformation.
|
Scooped by
Stéphane Cottin
April 16, 3:05 AM
|
Questions of academic freedom, and of free speech on university campuses, have arisen in a variety of specific contexts, all of which this Essay ignores. Instead…
|
Scooped by
Stéphane Cottin
April 16, 2:25 AM
|
Webpage entity extraction is a fundamental natural language processing task in both research and applications. Nowadays, the majority of webpage entity extraction…
|
Scooped by
Stéphane Cottin
April 15, 8:37 AM
|
Submitting authors can now request community feedback directly from PREreview with a single click on Preprints.org
|
Scooped by
Stéphane Cottin
April 11, 3:19 AM
|
|
Scooped by
Stéphane Cottin
April 10, 3:26 AM
|
|
Scooped by
Stéphane Cottin
April 9, 1:42 PM
|
The robots.txt file, introduced as part of the Robots Exclusion Protocol in 1994, provides webmasters with a mechanism to communicate access permissions to automated…
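The protocol the excerpt describes can be exercised with Python's standard-library parser; the rules and paths below are a hypothetical example, not from the cited paper:

```python
# A minimal sketch of the Robots Exclusion Protocol, using Python's
# standard-library parser (assumed example rules and paths).
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: *
Allow: /private/public-note.html
Disallow: /private/
""".splitlines()

rp = RobotFileParser()
rp.parse(rules)

# Crawlers honouring the protocol skip the disallowed prefix...
print(rp.can_fetch("*", "https://example.com/private/report.html"))      # False
# ...except the explicitly allowed page; everything else is permitted.
print(rp.can_fetch("*", "https://example.com/private/public-note.html")) # True
print(rp.can_fetch("*", "https://example.com/index.html"))               # True
```

Note that Python's parser applies rules in file order, so the more specific `Allow` line must precede the broader `Disallow`.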
|
Scooped by
Stéphane Cottin
April 9, 8:58 AM
|
Classical methods for mapping domain knowledge structures, namely bibliographic coupling (BC) and co-citation (CC) analyses, rely on co-reference or CC counts, which may lack precision and…
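The classical BC count the abstract refers to is simple to state: two documents are coupled by the number of references they share. A minimal sketch, with hypothetical document ids and reference sets:

```python
# Bibliographic coupling (BC): count shared references between document pairs.
# Document ids and reference sets below are hypothetical.
from itertools import combinations

def bibliographic_coupling(refs):
    """refs: {doc_id: set of cited reference ids} -> {(a, b): shared count}."""
    coupling = {}
    for a, b in combinations(sorted(refs), 2):
        shared = len(refs[a] & refs[b])
        if shared:
            coupling[(a, b)] = shared
    return coupling

refs = {
    "D1": {"r1", "r2", "r3"},
    "D2": {"r2", "r3", "r4"},
    "D3": {"r5"},
}
print(bibliographic_coupling(refs))  # {('D1', 'D2'): 2}
```

Co-citation is the dual measure: instead of counting references shared by two citing documents, it counts how often two references are cited together by the same documents.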
|
Scooped by
Stéphane Cottin
April 8, 7:03 PM
|
|
Scooped by
Stéphane Cottin
April 8, 12:51 PM
|
This paper explores the challenges posed by Generative AI applications, particularly their potential to produce fake news, misinformation, and disinformation…
|
Scooped by
Stéphane Cottin
April 8, 5:18 AM
|
GPT-4.5 passed a Turing test, being mistaken for a human in 73% of cases. The researchers are troubled.
|
Scooped by
Stéphane Cottin
April 8, 5:17 AM
|
At a time when the bill « pour une République numérique » (Digital Republic bill) proposes inserting provisions on open access into the French Research Code, the Centre national de la recherche scientifique (CNRS), alongside its partners in the ISTEX project as well as a large number of...
|
Scooped by
Stéphane Cottin
April 8, 5:16 AM
|
Inferred data, such as predictions and emotional states, may have limited accuracy or may even be incorrect from the perspective of data subjects. According to…
|
Scooped by
Stéphane Cottin
March 27, 4:23 AM
|
|
Scooped by
Stéphane Cottin
March 26, 7:13 AM
|
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
The scientific paper "Attention Is All You Need" introduced the Transformer, a new neural network architecture that makes language understanding easier. Before the Transformer, machines were not very good at understanding long sentences because they could not capture the relationships between words far apart from one another. The Transformer changed that, becoming the cornerstone of today's most impressive language-understanding and generative AI systems. Translation, text summarization, question answering, and even image generation and robotics: the Transformer has revolutionized how machines perform all of these tasks.
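The mechanism at the heart of the paper can be sketched in a few lines; this is the scaled dot-product attention formula only, with toy shapes and without the multi-head projections or masking described in the full architecture:

```python
# Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V.
# Toy random inputs; no multi-head projection or masking.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V, weights                     # weighted mix of the values

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape, w.shape)  # (4, 8) (4, 4)
```

Each output row is a convex combination of the value rows, which is what lets every position attend directly to every other position regardless of distance.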
|
Scooped by
Stéphane Cottin
March 26, 2:58 AM
|
|
Scooped by
Stéphane Cottin
March 17, 1:16 PM
|
By Antony Belin. Note to the reader: the legal provisions cited here refer to the French legal context. Different cases are possible when setting up an electronic archiving system (SAE): the legal authority (entity, organization, local government…) that owns the deposited data may act as Archiving Authority (AA), or a third-party archiving provider may act as Third-Party Archiving Authority (ATA), all of which…
|
Scooped by
Stéphane Cottin
March 14, 2:34 AM
|
This chapter examines the European Union’s Artificial Intelligence Act (AI Act) through the framework of digital constitutionalism. It explores how the AI…
|
Scooped by
Stéphane Cottin
March 13, 2:36 PM
|
The ontology engineering process is complex, time-consuming, and error-prone, even for experienced ontology engineers. In this work, we investigate the potential of Large Language Models (LLMs) to provide effective OWL ontology drafts directly from ontological requirements described using user stories and competency questions. Our main contribution is the presentation and evaluation of two new prompting techniques for automated ontology development: Memoryless CQbyCQ and Ontogenia. We also emphasize the importance of three structural criteria for ontology assessment, alongside expert qualitative evaluation, highlighting the need for a multi-dimensional evaluation in order to capture the quality and usability of the generated ontologies. Our experiments, conducted on a benchmark dataset of ten ontologies with 100 distinct CQs and 29 different user stories, compare the performance of three LLMs using the two prompting techniques. The results demonstrate improvements over the current state-of-the-art in LLM-supported ontology engineering. More specifically, the model OpenAI o1-preview with Ontogenia produces ontologies of sufficient quality to meet the requirements of ontology engineers, significantly outperforming novice ontology engineers in modelling ability. However, we still note some common mistakes and variability of result quality, which is important to take into account when using LLMs for ontology authoring support. We discuss these limitations and propose directions for future research.
|
Scooped by
Stéphane Cottin
March 12, 4:25 PM
|
Economics Job Market Rumors (EJMR) is an online forum and clearinghouse for information on the academic job market for economists. It also includes content that…
|
Scooped by
Stéphane Cottin
March 12, 3:40 AM
|
|
Scooped by
Stéphane Cottin
March 11, 3:52 AM
|
|
Scooped by
Stéphane Cottin
March 6, 11:23 AM
|
Cram, Lawrence; Docampo, Domingo; Safón, Vicente, "Screening articles by citation reputation" (July 22, 2024). Quantitative Science Studies (DOI: 10.1162/qss_a_00355). Available at SSRN: https://ssrn.com/abstract=5129969