|
Scooped by
Gust MEES
December 23, 2024 9:13 AM
|
Cybersecurity researchers have found that it's possible to use large language models (LLMs) to generate new variants of malicious JavaScript code at scale in a manner that can better evade detection.
"Although LLMs struggle to create malware from scratch, criminals can easily use them to rewrite or obfuscate existing malware, making it harder to detect," Palo Alto Networks Unit 42 researchers said in a new analysis. "Criminals can prompt LLMs to perform transformations that are much more natural-looking, which makes detecting this malware more challenging. Learn more / En savoir plus / Mehr erfahren: https://www.scoop.it/topic/securite-pc-et-internet/?&tag=AI
|
Scooped by
Gust MEES
September 30, 2024 11:27 AM
|
A recent cyber vulnerability in ChatGPT’s long-term memory feature was exposed, showing how hackers could use this AI tool to steal user data. Security researcher Johann Rehberger demonstrated this issue through a concept he named “SpAIware,” which exploited a weakness in ChatGPT’s macOS app, allowing it to act as spyware.
|
Scooped by
Gust MEES
July 26, 2024 1:48 PM
|
X uses your data to train its Grok AI assistant, but if you'd like to opt out of that, you can do so right from your settings menu. The setting is accessible on the web, or you can find it yourself by clicking the three-dot menu, then "Settings and privacy," then "Privacy and safety," and then "Grok."
|
Scooped by
Gust MEES
January 12, 2024 4:26 PM
|
An AI threat guide, outlining cyberattacks that target or leverage machine learning models, was published by the National Institute of Standards and Technology (NIST) on Jan. 4.
The nearly 100-page paper, titled “Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations,” provides a comprehensive overview of the cybersecurity and privacy risks that come with the rapid development of both predictive and generative AI tools over the last few years.
|
Scooped by
Gust MEES
August 3, 2023 12:57 PM
|
Russian hackers and cybercrime forums are notorious for exploiting critical infrastructure. Last month, Hackread.com exclusively reported that a Russian-speaking threat actor was selling access to a US military satellite. Now, researchers have identified macOS malware being sold for $60,000.
|
Scooped by
Gust MEES
June 20, 2023 8:47 AM
|
More than 101,000 ChatGPT user accounts have been stolen by information-stealing malware over the past year, according to dark web marketplace data.
Cyberintelligence firm Group-IB reports having identified over a hundred thousand info-stealer logs on various underground websites containing ChatGPT accounts, with the peak observed in May 2023, when threat actors posted 26,800 new ChatGPT credential pairs.
|
Scooped by
Gust MEES
March 28, 2023 12:43 PM
|
Criminals love ChatGPT, and Europol has found concrete examples of it
What observers predicted has come to pass: ChatGPT is already being used in criminal schemes, Europol warns, even if only to research crimes. But the potential danger goes far beyond that.
|
Scooped by
Gust MEES
March 24, 2023 3:54 PM
|
A ChatGPT bug found earlier this week also revealed users' payment information, says OpenAI.
The AI chatbot was shut down on March 20 due to a bug that exposed titles and the first message of new conversations from active users' chat history to other users.
Now, OpenAI has shared that even more private data from a small number of users was exposed.
"In the hours before we took ChatGPT offline on Monday, it was possible for some users to see another active user's first and last name, email address, payment address, the last four digits (only) of a credit card number, and credit card expiration date," said OpenAI. "Full credit card numbers were not exposed at any time."
|
Scooped by
Gust MEES
February 1, 2023 11:20 AM
|
As 2022 ended, OpenAI made ChatGPT available to the world. It is an artificially intelligent chatbot that interacts through text using realistic, human-like responses. Its deep learning techniques can generate conversations convincing enough to make anyone believe they are interacting with an actual human.
Like opening the bottle and releasing the genie, its impact is still largely unknown, but it has been surrounded by intense intrigue and curiosity. How will it be used; how does it work; is it for good or evil? No, this is not the next Terminator sequel...
It is certainly intended for positive use, and its articulate responses have led many to call it the best chatbot yet released. In a short period, however, ChatGPT has already been linked to cyber threats as cyber-criminals leverage its advanced capabilities for nefarious ends.
Learn more / En savoir plus / Mehr erfahren: https://www.scoop.it/topic/securite-pc-et-internet/?&tag=ChatGPT
|
Scooped by
Gust MEES
July 4, 2021 6:47 AM
|
If you don't have enough to worry about already, consider a world where AIs are hackers.
Hacking is as old as humanity. We are creative problem solvers. We exploit loopholes, manipulate systems, and strive for more influence, power, and wealth. To date, hacking has exclusively been a human activity. Not for long.
As I lay out in a report I just published, artificial intelligence will eventually find vulnerabilities in all sorts of social, economic, and political systems, and then exploit them at unprecedented speed, scale, and scope. After hacking humanity, AI systems will then hack other AI systems, and humans will be little more than collateral damage.
Learn more / En savoir plus / Mehr erfahren: https://www.scoop.it/t/securite-pc-et-internet/?&tag=AI
|
Scooped by
Gust MEES
November 7, 2018 11:22 AM
|
Artificial intelligence has the potential to bring a select set of advanced techniques to the table when it comes to cyber offense, researchers say.
On Thursday, researchers from Darktrace said that the current threat landscape is full of everything from script kiddies and opportunistic attacks to advanced, state-sponsored assaults, and in the latter sense, attacks continue to evolve.
However, for each sophisticated attack currently in use, there is the potential for further development through the future use of AI.
Within the report, the cybersecurity firm documented three active threats in the wild which have been detected within the past 12 months. Analysis of these attacks -- and a little imagination -- has led the team to create scenarios using AI which could one day become reality.
"We expect AI-driven malware to start mimicking behavior that is usually attributed to human operators by leveraging contextualization," said Max Heinemeyer, Director of Threat Hunting at Darktrace. "But we also anticipate the opposite; advanced human attacker groups utilizing AI-driven implants to improve their attacks and enable them to scale better. Learn more / En svoir plus / Mehr erfahren: https://www.scoop.it/t/securite-pc-et-internet/?&tag=AI
|
Scooped by
Gust MEES
February 9, 12:51 PM
|
Cybercriminals claim to have stolen private data from millions of OpenAI accounts. Researchers are skeptical, and the ChatGPT maker is investigating the case.
|
Scooped by
Gust MEES
October 15, 2024 2:31 PM
|
A new 'super-realistic' AI scam could get your Gmail account hacked
A Microsoft security expert warns Gmail users of a new, convincing social engineering attack.
Warning signs of a scam attempt
The advent of generative AI has opened up all kinds of opportunities, but it has also ramped up various risks and dangers.
We've previously seen hackers use AI-generated code, phishing emails, and even deepfakes to make ever more realistic fraud attempts, ones that even security experts can easily fall for.
|
Scooped by
Gust MEES
September 25, 2024 10:31 AM
|
Software developers have embraced “artificial intelligence” language models for code generation in a big way, with huge gains in productivity but also some predictably dubious developments. It’s no surprise that hackers and malware writers are doing the same.
According to recent reports, there have been several active malware attacks spotted with code that’s at least partially generated by AI.
|
Scooped by
Gust MEES
July 5, 2024 1:00 PM
|
The New York Times reported on July 4, 2024, that OpenAI suffered an undisclosed breach in early 2023.
The NYT notes that the attacker did not access the systems housing and building the AI, but did steal discussions from an employee forum. OpenAI did not publicly disclose the incident or inform the FBI because, it says, no information about customers or partners was stolen and the breach was not considered a threat to national security. The firm concluded that the attack was the work of a single individual with no known ties to any foreign government.
Nevertheless, the incident led to internal staff discussions over how seriously OpenAI was addressing security concerns.
|
Scooped by
Gust MEES
December 26, 2023 11:54 AM
|
ChatGPT has been hit by a new security flaw. By exploiting this breach, it is possible to extract sensitive data about individuals simply by talking to OpenAI's chatbot.
|
Scooped by
Gust MEES
July 27, 2023 5:39 PM
|
Researchers jailbreak AI chatbots, including ChatGPT
Like a magic wand that turns chatbots evil.
|
Scooped by
Gust MEES
May 17, 2023 9:54 AM
|
Subscription malware: Google's and Apple's stores flooded with expensive ChatGPT fakes
Sophos warns of ChatGPT copycat apps in Apple's and Google's app stores that rip off unsuspecting users with hidden fees.
|
Scooped by
Gust MEES
March 25, 2023 5:26 PM
|
In the hours before ChatGPT was taken offline, it was reportedly possible for some users to see another active user's first and last name, email and payment address, the last four digits of their credit card number, and the card's expiration date. Full credit card numbers were reportedly never exposed.
|
Scooped by
Gust MEES
January 10, 2023 11:30 AM
|
Attackers Are Already Exploiting ChatGPT to Write Malicious Code
The AI-based chatbot is allowing bad actors with absolutely no coding experience to develop malware.
Since OpenAI released ChatGPT in late November, many security experts have predicted it would only be a matter of time before cybercriminals began using the AI chatbot for writing malware and enabling other nefarious activities. Just weeks later, it looks like that time is already here.
In fact, researchers at Check Point Research (CPR) have reported spotting at least three instances where black hat hackers demonstrated, in underground forums, how they had leveraged ChatGPT's AI smarts for malicious purposes.
Learn more / En savoir plus / Mehr erfahren: https://www.scoop.it/topic/securite-pc-et-internet/?&tag=ChatGPT
|
Scooped by
Gust MEES
March 2, 2020 12:00 PM
|
New York (CNN Business): Clearview AI, a startup that compiles billions of photos for facial recognition technology, said it lost its entire client list to hackers.
The company said it has patched the unspecified flaw that allowed the breach to happen. In a statement, Clearview AI's attorney Tor Ekeland said that while security is the company's top priority, "unfortunately, data breaches are a part of life. Our servers were never accessed." He added that the company continues to work to strengthen its security procedures.
In a notification sent to customers, obtained by the Daily Beast, Clearview AI said that an intruder "gained unauthorized access" to its customer list, which includes police forces, law enforcement agencies and banks. The company said that the person did not obtain any search histories conducted by customers, which include some police forces.
Learn more / En savoir plus / Mehr erfahren:
https://www.scoop.it/t/securite-pc-et-internet/?&tag=Facial+Recognition
https://www.scoop.it/topic/securite-pc-et-internet/?&tag=Clearview
|
Scooped by
Gust MEES
January 20, 2020 7:09 PM
|
Until now the startup Clearview AI was virtually unknown to the public, and that was entirely by design. The small company has sold a facial recognition technology that apparently works well, but is problematic on numerous levels, to hundreds of police agencies in the US. Now the "New York Times" has made Clearview the embodiment of every fear associated with facial recognition. The paper writes of a potential "end of privacy as we know it."
There is actually nothing revolutionary about the technology; it is built from components that already exist elsewhere. But together they work so well that investigators like using the software and, by their own account, use it successfully.
All they have to do is upload a single picture of a wanted person to Clearview, regardless of whether the picture was taken head-on or whether the person is wearing, say, sunglasses or a hat. The picture is converted into a mathematical model of the face, much as Apple's Face ID does in principle. This model is then matched against a database, and that database is remarkable: Clearview allegedly holds a collection of three billion photos that it is said to have scraped without permission from Facebook, Instagram, YouTube "and millions of other websites."
These photos are likewise converted into mathematical models and, if they are sufficiently similar to the uploaded picture, displayed as possible matches, together with links to their respective sources. That enables rapid identification.
Learn more / En savoir plus / Mehr erfahren: http://www.scoop.it/t/securite-pc-et-internet/?&tag=Facial+Recognition
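To make the matching step described in this entry concrete, here is a minimal Python sketch. It assumes the photos have already been reduced to embedding vectors ("mathematical models") by some face-embedding model; the model itself, the 128-dimensional vectors, the example.com URLs, and the 0.8 similarity threshold are illustrative assumptions, not details of Clearview's actual system. The database maps each source link to the embedding of the face scraped from it, and every stored face sufficiently similar to the query is returned as a possible match together with its source link.

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Similarity of two face embeddings; values near 1.0 mean near-identical faces.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_matches(query: np.ndarray, database: dict, threshold: float = 0.8):
    # database maps a source URL to the embedding of the face found there.
    # Return (url, similarity) pairs above the threshold, best match first.
    hits = [(url, cosine_similarity(query, emb)) for url, emb in database.items()]
    return sorted([h for h in hits if h[1] >= threshold], key=lambda h: h[1], reverse=True)

# Illustrative usage: random vectors stand in for embeddings of scraped photos.
rng = np.random.default_rng(0)
database = {f"https://example.com/photo/{i}": rng.normal(size=128) for i in range(1000)}
query = database["https://example.com/photo/42"] + rng.normal(scale=0.05, size=128)
print(find_matches(query, database)[:3])  # expected: photo/42 with similarity near 1.0

At a claimed three billion photos, a real system would replace this linear scan with an approximate nearest-neighbour index, but the principle the article describes, embed, compare, threshold, stays the same.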
|
Amazon's AI-enhanced Alexa assistant is going to need all your voice recordings, and there's nothing you can do about it. An email sent to Alexa users notes that the online retail giant is ending one of its few privacy provisions for recorded voice data in the lead-up to Alexa+. The only way to make sure Amazon doesn't get hold of any of your voice recordings may be to quit using Alexa entirely.
Learn more / En savoir plus / Mehr erfahren:
https://www.scoop.it/t/securite-pc-et-internet/?&tag=Privacy
https://www.scoop.it/topic/securite-pc-et-internet/?&tag=Alexa