GenAI in the Battle of Security: Attacks, Defenses, and the Laws Shaping AI's Future (god2024)

28:56
 
Content provided by CCC media team. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by CCC media team or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://uk.player.fm/legal.

The presentation explores the security challenges and opportunities posed by Generative AI (GenAI). While GenAI offers tremendous potential, it also has a darker side, such as its use in creating deepfakes that can spread misinformation, manipulate political events, or facilitate fraud, as demonstrated in a live deepfake example. Malicious variants of GenAI are used in phishing attacks, social engineering schemes, and the creation of malware. Additionally, GenAI enables more intelligent network attacks through autonomous botnets that reduce the risk of exposure.

Despite these risks, GenAI also provides defensive advantages by enhancing security measures, such as improving threat detection, strengthening access control, and identifying code vulnerabilities. This is exemplified in a live demo showcasing deepfake and AI-based content detection.

The presentation also examines the different types of attacks that AI models, including GenAI, are susceptible to, across any task, model, or modality. These include adversarial attacks, where inputs are specifically crafted to deceive AI systems, as well as attacks such as Prompt Injection and Visual Prompt Injection, which manipulate inputs to mislead models.

Navigating the complex landscape of AI compliance is equally essential. Organizations must adhere to regulations such as the EU AI Act and standards such as ISO 27090, while also following guidelines from bodies like OWASP to ensure the security, transparency, and ethical use of AI systems. The OWASP AI Exchange plays a key role in modeling threats to GenAI, addressing risks, and pointing out solutions. To defend against these threats, various detection and mitigation techniques have been developed and will be briefly presented. An illustrative sketch of one such attack follows below.

Licensed to the public under https://creativecommons.org/licenses/by-sa/4.0/
About this event: https://c3voc.de
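For readers unfamiliar with the adversarial attacks mentioned in the abstract, the following minimal sketch (not part of the talk) illustrates the idea with the classic Fast Gradient Sign Method: a small perturbation is added to an image in the direction that increases the classifier's loss, so the model misclassifies an input that looks unchanged to a human. The model, input tensors, and epsilon value are illustrative placeholders, not material from the presentation.

    # Illustrative FGSM sketch: craft an input that deceives an image classifier.
    # `model`, `image`, `label`, and `epsilon` are placeholders for this example.
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, image, label, epsilon=0.03):
        """Return a copy of `image` perturbed to push the classifier toward an error."""
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)   # loss w.r.t. the true label
        loss.backward()                               # gradient of the loss w.r.t. pixels
        # Step each pixel by +/- epsilon in the direction that increases the loss.
        perturbed = image + epsilon * image.grad.sign()
        return perturbed.clamp(0, 1).detach()         # keep pixel values in a valid range

The same principle (optimizing an input against the model's own gradients or outputs) underlies many of the attack classes the talk surveys, including prompt-injection variants for text and image inputs.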