information analyst
KM, EDM / EDMS, workflow, collaboration
Rescooped by michel verstrepen from Media, Business & Tech

Which configuration to choose, from LLM training to inference
The infrastructure to put in place varies greatly depending on the use case. An overview of the configurations suited to each.

Via Bruno Renkin
Rescooped by michel verstrepen from Renseignements Stratégiques, Investigations & Intelligence Economique

AI: the CNIL publishes its first recommendations on the development of artificial intelligence systems | CNIL
Reconciling the development of AI systems with privacy-protection concerns: many stakeholders have raised questions with the CNIL about how the General Data Protection Regulation (GDPR) applies to artificial intelligence (AI), particularly since the emergence of generative AI systems.

Via Intelligence Economique, Investigations Numériques et Veille Informationnelle
Rescooped by michel verstrepen from Time to Learn

ResearchPal | Best AI Tool For Research
Use AI to accelerate your academic writing. Generate references, write literature reviews, read PDFs, improve writing tone, and integrate with Zotero.

Via Frédéric DEBAILLEUL
Rescooped by michel verstrepen from Renseignements Stratégiques, Investigations & Intelligence Economique

Hackers are stealing Gmail and Microsoft 365 accounts with this new phishing technique
A sophisticated new phishing-as-a-service platform, dubbed "Tycoon 2FA", is gaining popularity among cybercriminals because of its ability to bypass multi-factor authentication and steal login credentials for Microsoft 365 and Gmail accounts.

Via Intelligence Economique, Investigations Numériques et Veille Informationnelle
Rescooped by michel verstrepen from Veille CDI

Can AI reduce belief in certain conspiracy theories?
Conspiracy theories are notoriously persistent and resist fact-checking particularly well. But three American researchers have shown, in a study, that a conversation with a chatbot using the GPT-4 model can reduce adherence to these theories.

Via cdi
Rescooped by michel verstrepen from Renseignements Stratégiques, Investigations & Intelligence Economique

The cybersecurity world prepares for the NIS 2 upheaval
2024 is a busy year for the cybersecurity sector. While the Olympic Games and the prospect of massive cyberattacks hold the public's attention, professionals have another date circled in their calendars: in October, France will transpose the Network and Information Security 2 (NIS 2) directive into national law. This European text, adopted at the end of 2022, is meant to improve the resilience of several thousand organizations.

Via Intelligence Economique, Investigations Numériques et Veille Informationnelle
Rescooped by michel verstrepen from Renseignements Stratégiques, Investigations & Intelligence Economique

The rise in cybercrime is no fantasy: malware attacks have increased sevenfold since 2020
The cybercriminal threat has risen sharply since 2020, as have attacks by data-stealing malware, which have increased sevenfold and are now reaching alarming levels.

Via Intelligence Economique, Investigations Numériques et Veille Informationnelle
Rescooped by michel verstrepen from Renseignements Stratégiques, Investigations & Intelligence Economique

Facebook again accused of letting Netflix access its users' private messages
In the United States, Meta has faced a class action since 2020 over the sharing of certain sensitive data with other companies, including Netflix. Recently published documents have revived accusations about the access granted to private conversations.

Via Intelligence Economique, Investigations Numériques et Veille Informationnelle
Scooped by michel verstrepen

Scamio: an anti-fraud tool from Bitdefender, powered by GenAI - L'Informaticien
The detection and remediation tool combines an artificial-intelligence chatbot with Bitdefender's protection, prevention, and detection technologies. The cybersecurity company Bitdefender has unveiled its new scam-detection tool, named Scamio.
Rescooped by michel verstrepen from Media, Business & Tech

Anthropic researchers wear down AI ethics with repeated questions
How do you get an AI to answer a question it's not supposed to? There are many such "jailbreak" techniques, and Anthropic researchers just found a new one.

Via Bruno Renkin
Rescooped by michel verstrepen from Educational Psychology & Emerging Technologies: Critical Perspectives and Updates

AI: Two reports reveal a massive enterprise pause over security and ethics // Diginomica

by Chris Middleton

"No one doubts that artificial intelligence is a strategic boardroom issue, though diginomica revealed last year that much of the initial buzz was individuals using free cloud tools as shadow IT, while many business leaders talked up AI in their earnings calls just to keep investors happy. 

In 2024, those caveats remain amidst the hype. As one of my stories from KubeCon + CloudNativeCon last week showed, the reality for many software engineering teams is the C-suite demanding an AI ‘hammer’ with little idea of what business nail they want to hit with it. 

Or, as Intel Vice President and General Manager for Open Ecosystem Arun Gupta put it: 

"When we go into a CIO discussion, it’s ‘How can I use Gen-AI?’ And I’m like, ‘I don’t know. What do you want to do with it?’ And the answer is, ‘I don’t know, you figure it out!’"


So, now that AI Spring is in full bloom, what is the reality of enterprise adoption? Two reports this week unveil some surprising new findings, many of which show that the hype cycle is ending more quickly than the industry would like.

First up is a white paper from $2 billion cloud incident-response provider PagerDuty. According to its survey of 100 Fortune 1000 IT leaders, 100% are concerned about the security risks of the technology, and 98% have paused Gen-AI projects as a result.

Those are extraordinary figures. However, the perceived threats are not solely about cybersecurity (with phishing, deep fakes, complex fraud, and automated attacks on the rise), but are rooted in what PagerDuty calls the “moral implications”. These include worries over copyright theft in training data and any legal exposure that may arise from it.

As previously reported (see diginomica, passim), multiple IP infringement lawsuits are ongoing in the US, while in the UK, the House of Lords’ Communications and Digital Committee was clear, in its inquiry into Large Language Models, that copyright theft had taken place, a conclusion that peers arrived at after interviewing expert witnesses from all sides of the debate, including vendors and lawyers.

According to PagerDuty, unease over these issues keeps more than half of respondents (51%) awake at night, with nearly as many concerned about the disclosure of sensitive information (48%), data privacy violations (47%), and social engineering attacks (46%). They are right to be cautious: last year, diginomica reported that source code is the most common form of privileged data disclosed to cloud-based AI tools.

The white paper adds:
"Any of these security risks could damage the company’s public image, which explains why Gen-AI’s risk to the organization’s reputation tops the list of concerns for 50% of respondents. More than two in five also worry about the ethics of the technology (42%). Among the executives with these moral concerns, inherent societal biases of training data (26%) and lack of regulation (26%) top the list."

Despite this, only 25% of IT leaders actively mistrust the technology, adds the white paper – cold comfort for vendors, perhaps. Even so, it is hard to avoid the implication that, while some providers might have first- or big-mover advantage in generative AI, any that trained their systems unethically may have stored up a world of problems for themselves.

However, with nearly all Fortune 1000 companies pausing their AI programmes until clear guidelines can be put in place – though the figure of 98% seems implausibly high – the white paper adds:

"Executives value these policies, so much so that a majority (51%) believe they should adopt Gen-AI only after they have the right guidelines in place. [But] others believe they risk falling behind if they don’t adopt Gen-AI as quickly as possible, regardless of parameters (46%)."

Those figures suggest a familiar pattern in enterprise tech adoption: early movers stepping back from their decisions, while the pack of followers is just getting started.

Yet the report continues:
"Despite the emphasis and clear need, only 29% of companies have established formal guidelines. Instead, 66% are currently setting up these policies, which means leaders may need to keep pausing Gen-AI until they roll out a course of action."

That said, the white paper’s findings are inconsistent in some respects, and thus present a confusing picture – conceivably, one of customers confirming a security researcher’s line of questioning. Imagine that: confirmation bias in a Gen-AI report!

For example, if 98% of IT leaders say they have paused enterprise AI programmes until organizational guidelines are put in place, how are 64% of the same survey base able to report that Gen-AI is still being used in “some or all” of their departments?

One answer may be that, as diginomica found last year, ‘departmental’ use may in fact be individuals experimenting with cloud-based tools as shadow IT. That aside, the white paper confirms that early enterprise adopters may be reconsidering their incautious rush."...

For full post, please visit: 

https://diginomica.com/ai-two-reports-reveal-massive-enterprise-pause-over-security-and-ethics

Via Roxana Marachi, PhD
Rescooped by michel verstrepen from Time to Learn

Figr: AI-Driven UI Design & Workflow
Brainstorm, ideate, and design product UI faster than ever with Figr. Use Figr to find UI inspiration, generate app flows, create wireframes, and build design systems and component libraries with AI.

Via Frédéric DEBAILLEUL
Rescooped by michel verstrepen from Educational Technology News

Americans increasingly using ChatGPT, but few trust its 2024 election information
About one-in-five U.S. adults have used ChatGPT to learn something new (17%) or for entertainment (17%).

Via EDTECH@UTRGV
EDTECH@UTRGV's curator insight, April 2, 11:43 AM
"Most Americans still haven’t used the chatbot, despite the uptick since our July 2023 survey on this topic. But some groups remain far more likely to have used it than others."

Rescooped by michel verstrepen from Media, Business & Tech

AI – gender biases that send a chill down the spine!
A UNESCO study reveals the shocking gender biases present in the most popular AI language models: harmful stereotypes that raise serious ethical questions and call for a deep overhaul of AI development.

Via Bruno Renkin
Rescooped by michel verstrepen from Educational Psychology & Emerging Technologies: Critical Perspectives and Updates

What's in a Name? Auditing Large Language Models for Race and Gender Bias // (Haim, Salinas, & Nyarko, 2024) // Stanford Law School via arxiv.org
By Amit Haim, Alejandro Salinas, Julian Nyarko

ABSTRACT

"We employ an audit design to investigate biases in state-of-the-art large language models, including GPT-4. In our study, we prompt the models for advice involving a named individual across a variety of scenarios, such as during car purchase negotiations or election outcome predictions. We find that the advice systematically disadvantages names that are commonly associated with racial minorities and women. Names associated with Black women receive the least advantageous outcomes. The biases are consistent across 42 prompt templates and several models, indicating a systemic issue rather than isolated incidents. While providing numerical, decision-relevant anchors in the prompt can successfully counteract the biases, qualitative details have inconsistent effects and may even increase disparities. Our findings underscore the importance of conducting audits at the point of LLM deployment and implementation to mitigate their potential for harm against marginalized communities."

Please visit the following for the abstract on arxiv.org and a link to download:
https://arxiv.org/abs/2402.14875

Via Roxana Marachi, PhD
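The audit design the abstract describes (crossing many prompt templates with demographically associated names, then comparing the advice each variant receives) can be sketched as a simple prompt sweep. The snippet below is a minimal illustration only: the templates, names, and the `build_audit_prompts` helper are made-up placeholders, not the paper's actual 42 templates, name lists, or code.

```python
from itertools import product

# Hypothetical prompt templates and name lists, standing in for the
# paper's 42 templates and its race/gender-associated names.
TEMPLATES = [
    "I am selling a bicycle to {name}. What opening price should I quote?",
    "{name} is negotiating a salary. What counteroffer should {name} make?",
]
NAMES = {
    ("white", "male"): ["Hunter"],
    ("black", "female"): ["Lakisha"],
}

def build_audit_prompts(templates, names_by_group):
    """Cross every template with every name, tagging each prompt with
    the demographic group its name is associated with."""
    prompts = []
    for template, (group, name_list) in product(templates, names_by_group.items()):
        for name in name_list:
            prompts.append({
                "group": group,
                "name": name,
                "prompt": template.format(name=name),
            })
    return prompts

prompts = build_audit_prompts(TEMPLATES, NAMES)
# Each prompt would then be sent to the model under audit, and the numeric
# advice (e.g. the suggested price) compared across demographic groups.
```

Because every group sees the identical templates, any systematic difference in the model's answers can be attributed to the name, which is the core idea of this kind of audit.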
Rescooped by michel verstrepen from Renseignements Stratégiques, Investigations & Intelligence Economique

Developing AI systems: the CNIL's recommendations for complying with the GDPR | CNIL
The CNIL has published its first recommendations on applying the GDPR to the development of artificial intelligence systems, to help professionals reconcile innovation with respect for people's rights. Here is what to take away from them.

Via Intelligence Economique, Investigations Numériques et Veille Informationnelle
Rescooped by michel verstrepen from E-Learning-Inclusivo (Mashup)

Generative artificial intelligence
Generative artificial intelligence (GenAI) enables the creation or generation of new data, content, or original works. But how do these systems work? How can they be included in teaching practices? What are their advantages and limits?

The University of Bordeaux is making available to the teaching community a series of educational resources for understanding how generative AIs such as ChatGPT work, their advantages and their limits, as well as the various uses that a student, teacher, or tutor can make of them.

Among other things, this kit offers ideas for teaching and assessment activities for different scenarios, example prompts, good practices, and a list of AI tools tested and recommended by MAPI's team of instructional engineers.

The goal? To enable each teacher to identify the impact of these technologies on their learning or teaching activities and to adopt the strategies best suited to their sensibility, context, or practices.

Via CECI Jean-François, juandoming
CECI Jean-François's curator insight, April 6, 12:43 PM
Well-designed, condensed resources for quickly grasping the essentials of AIED (artificial intelligence in education).
Rescooped by michel verstrepen from Media, Business & Tech

Google Books Is Indexing AI-Generated Garbage
Google said it will continue to evaluate its approach “as the world of book publishing evolves.”

Via Bruno Renkin
Rescooped by michel verstrepen from Educational Technology News

Can Using a Grammar Checker Set Off AI-Detection Software?
A college student says she was falsely accused of cheating, and her story has gone viral. Where is the line between acceptable help and cheating with AI?

Via EDTECH@UTRGV
EDTECH@UTRGV's curator insight, April 4, 10:03 AM
"Stevens says that a professor in a criminal justice course she took last year gave her a zero on a paper because he said that the AI-detection system in Turnitin flagged it as robot-written. Stevens insists the work is entirely her own and that she did not use ChatGPT or any other chatbot to compose any part of her paper."

Rescooped by michel verstrepen from Renseignements Stratégiques, Investigations & Intelligence Economique

Beware of "quishing": the new method used by seasoned scammers!
"Quishing" is a new form of scam inspired by phishing. The objective is almost the same: get you to follow a fraudulent link in order to harvest your personal data, notably passwords or bank details. With quishing, however, the scammers hide the link behind a QR code, hence the name "quishing", a combination of "QR code" and "phishing".

Via Intelligence Economique, Investigations Numériques et Veille Informationnelle
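Since quishing works by hiding the destination URL inside a QR code, one practical defence is to vet the decoded payload before visiting it. The sketch below is a minimal illustration in Python, assuming the QR code has already been decoded to a string; the `vet_qr_url` function, its heuristics, and the allow-list are illustrative assumptions, not a production detector.

```python
from urllib.parse import urlparse

# Illustrative allow-list; a real deployment would use threat-intelligence
# feeds and a far richer set of checks.
TRUSTED_DOMAINS = {"example.com", "example.org"}

def vet_qr_url(payload: str) -> list:
    """Return a list of phishing warning signs for a URL decoded from a
    QR code. An empty list means no obvious red flag, not proof of safety."""
    warnings = []
    parsed = urlparse(payload)
    host = (parsed.hostname or "").lower()
    if parsed.scheme != "https":
        warnings.append("not HTTPS")
    if host.startswith("xn--") or ".xn--" in host:
        warnings.append("punycode (possible lookalike) domain")
    if host.replace(".", "").isdigit():
        warnings.append("raw IP address instead of a domain name")
    if "@" in parsed.netloc:
        warnings.append("userinfo trick ('@' before the real host)")
    if host and not any(host == d or host.endswith("." + d)
                        for d in TRUSTED_DOMAINS):
        warnings.append("domain not on the allow-list")
    return warnings
```

For example, `vet_qr_url("https://example.com/login")` returns no warnings under these assumptions, while an HTTP link to an unknown domain triggers several. The point of the design is that the check runs on the decoded string, exactly where quishing bypasses a user's usual habit of eyeballing the link.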
Rescooped by michel verstrepen from Time to Learn

Jenni AI: Supercharge Your Next Research Paper
Jenni is your AI assistant for all things in your academic journey. We specialise in developing AI that helps you make your writing more efficient, while still keeping you in control.

Via Frédéric DEBAILLEUL
Rescooped by michel verstrepen from Educational Technology News

State Dept-backed report provides action plan to avoid catastrophic AI risks
"A report commissioned by the U.S. State Department suggests practical measures to prevent the emerging threats of advanced artificial intelligence, including the weaponization of AI and the threat of losing control over the technology."

Via EDTECH@UTRGV
EDTECH@UTRGV's curator insight, April 3, 11:26 AM
"While providing technical details on the risks of AI, the action plan also introduces policy proposals that can help the U.S. and its allies mitigate these risks."
Rescooped by michel verstrepen from Time to Learn

Digital Defense: the ultimate personal security checklist to secure your digital life

Via Frédéric DEBAILLEUL
Rescooped by michel verstrepen from Time to Learn

Awesome Privacy: your guide to finding and comparing privacy-respecting alternatives to popular software and services

Via Frédéric DEBAILLEUL
Rescooped by michel verstrepen from Educational Technology News

This free AI bot lets you chat privately with any document
"People often ask me what to actually do with AI. Well, here’s an idea: give it a product manual, a document, or something else you’re interested in and chat about it."

Via EDTECH@UTRGV
EDTECH@UTRGV's curator insight, April 2, 11:38 AM
"Unlike other similar options, ChatPDF has a convenient free tier that doesn’t need any accounts or configuration and isn’t directly tied to any other platform. You can upload up to two documents each day, and they can be up to 120 pages long and up to 10 MB in size. That’s quite impressive in comparison to similar services’ free plans."