Lectures
Advanced Threat Hunting: Staying One Step Ahead of the Adversary
As cybersecurity defenders, our job is not just to react but to stay ahead of attackers. Yet, adversaries continue to evolve, refining their techniques to bypass defenses and infiltrate critical systems. To effectively hunt threats, we must understand how these attackers think and operate.
This session will explore real-world techniques used by malicious actors to breach security controls. We will examine how stolen data, such as compromised session tokens and credentials, is weaponized to gain unauthorized access to systems and supply chains. We will uncover how attackers bypass restricted registration requirements by exploiting gaps in verification and automation processes, and we will analyze how logic flaws in authentication mechanisms allow threat actors to circumvent security controls and gain entry where they shouldn’t. And much more.
By breaking down these attack strategies, you will learn how to identify, track, and neutralize emerging threats before they cause damage. This session will equip you with practical threat-hunting insights, showing you how to turn an attacker’s own methods against them before they strike.
Adversary Emulation: Simulating APTs, Ransomware, and Emerging Threats
While threat reports document advanced persistent threat (APT) activity, most red team simulations fail to capture the conditions, tool chains, and environmental assumptions adversaries relied upon—creating defensive gaps. This presentation demonstrates how to extract operational intent from cyber threat intelligence and translate it into authentic, repeatable simulations using frameworks like Atomic Red Team and CALDERA.
Using APT29 as a case study, we’ll walk through building actor-specific profiles and implementing tactics that reflect actual adversary constraints. Attendees will receive a threat actor profile template and framework configurations ready to customize for their specific threat landscapes.
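As a rough illustration of what an actor-specific profile can capture, the minimal Python sketch below maps CTI observations to ATT&CK technique IDs and emulation notes. The field names and the APT29 entries shown are illustrative assumptions, not the template distributed in the session.

```python
# Hypothetical sketch of an actor-specific emulation profile; field names and the
# APT29 technique mapping are illustrative, not the template handed out in the talk.
from dataclasses import dataclass, field

@dataclass
class ThreatActorProfile:
    name: str                        # actor or intrusion-set name
    objectives: list[str]            # what the actor is typically after
    constraints: list[str]           # environmental assumptions the actor relies on
    techniques: dict[str, str] = field(default_factory=dict)  # ATT&CK ID -> emulation note

apt29 = ThreatActorProfile(
    name="APT29",
    objectives=["credential access", "long-term espionage"],
    constraints=["assumes cloud-heavy estate", "prefers living-off-the-land tooling"],
    techniques={
        "T1059.001": "PowerShell execution (map to an Atomic Red Team test)",
        "T1078": "Valid accounts (replay as part of a CALDERA operation)",
    },
)

# Emit the profile as a checklist the red team can replay technique by technique.
for ttp, note in apt29.techniques.items():
    print(f"{apt29.name} {ttp}: {note}")
```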
AI BYPASS: How to gain physical access in 15 seconds
Despite the widespread adoption of AI-based security solutions, physical attacks on network infrastructure remain fast, effective, and dangerously underestimated.
The speaker will deliver a live demonstration showing how network security can be bypassed in as little as 15 seconds using a simple hardware tool. The presentation focuses on Layer 2 and Layer 3 attacks, revealing how physical access combined with low-level network exploitation can lead to immediate unauthorized entry.
The session will highlight why AI-driven security systems often fail to detect L2/L3 attacks, and will discuss practical ways to reduce the risk of physical breaches through improved monitoring, segmentation, and defensive controls.
By combining real-time exploitation with defensive insights, this talk demonstrates why physical access and low-level network attacks still play a critical role in modern cybersecurity, even in the age of AI.
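To make the defensive angle concrete, here is a minimal monitoring sketch, assuming Scapy is installed and the script runs with packet-capture privileges: it alerts when an IP address suddenly answers from a new MAC, one common symptom of the Layer 2 manipulation the talk demonstrates. A real deployment would feed such events into switch port security and SIEM alerting rather than printing them.

```python
# Minimal defensive sketch (assumes Scapy and capture privileges): alert when an IP
# suddenly answers from a new MAC address, a common symptom of ARP spoofing after
# someone gains physical access to a network port.
from scapy.all import sniff, ARP

ip_to_mac: dict[str, str] = {}

def inspect(pkt):
    if pkt.haslayer(ARP) and pkt[ARP].op == 2:    # op 2 = "is-at" (ARP reply)
        ip, mac = pkt[ARP].psrc, pkt[ARP].hwsrc
        known = ip_to_mac.get(ip)
        if known and known != mac:
            print(f"[!] {ip} changed from {known} to {mac} - possible L2 spoofing")
        ip_to_mac[ip] = mac

sniff(filter="arp", prn=inspect, store=False)
```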
Artificial Intelligence for Hacking
* Automated Vulnerability Scanning and Exploitation: AI detects vulnerabilities and autonomously selects or creates the appropriate exploit to validate them.
* Self-Updating Exploit Arsenal: AI retrieves, adapts, and standardizes public exploits from online sources without human input, maintaining an up-to-date library.
* Fuzzing and Injection Testing: AI performs intelligent fuzzing and injection (e.g., SQLi, XSS) to uncover and verify application vulnerabilities; a minimal hand-rolled example follows this list.
* Exploit Reprogramming: AI modifies and sanitizes exploit scripts to ensure safe execution and compatibility with the platform.
* Multi-Agent Orchestration: Multiple AI agents collaborate to coordinate scanning, exploitation, and refinement loops for more effective penetration tests.
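For orientation, the sketch below shows the fuzzing and injection idea from the list above in its simplest, non-AI form: a fixed payload set sent to a single parameter, with naive reflection and error checks. An AI-driven pipeline of the kind described would generate and mutate payloads and triage responses automatically. The target URL and parameter are placeholders; only test systems you are authorized to assess.

```python
# Minimal, non-AI baseline for the fuzzing/injection idea: send a small payload set
# to one query parameter and flag suspicious responses. Target and parameter names
# are hypothetical; only run against systems you own or are authorized to test.
import requests

TARGET = "http://localhost:8000/search"   # hypothetical test target
PARAM = "q"
PAYLOADS = ["' OR '1'='1", "<script>alert(1)</script>", "'\"><svg/onload=alert(1)>"]
SQL_ERRORS = ("sql syntax", "sqlite error", "unterminated quoted string")

for payload in PAYLOADS:
    resp = requests.get(TARGET, params={PARAM: payload}, timeout=5)
    body = resp.text.lower()
    if payload.lower() in body:
        print(f"[reflected] {payload!r} echoed back unencoded - possible XSS")
    if any(err in body for err in SQL_ERRORS):
        print(f"[sql error] {payload!r} triggered a database error - possible SQLi")
```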
Attack of the Clones: 80+ AI Agents Walk Into a SOC
What happens when you stop waiting for the next "AI SOC revolution" and just build your own clone army instead?
This talk tells the story of how one small SecOps team turned years of internal playbooks, tribal knowledge, and automation scripts into an Agentic Threat Management Framework — a swarm of 80+ AI agents that think, correlate, and report like seasoned analysts (with the added benefit of no coffee breaks).
We'll dive into the why behind building an in-house AI SOC — the frustration with black-box "AI security" hype, the need for transparency, and the joy of making something that actually works in a real, human-led environment, with all its inherent flaws and inconsistencies. We will share our own hard-won lessons:
- how to agentify your own security knowledge,
- orchestrate your agents on the battlefield,
- keep your AI explainable and traceable,
- and, most importantly, transform the human SOC analyst into an AI developer/prompt engineer.
By the end, you'll see how building your own AI SOC is about AI empowering humans and not the other way around.
CSRF attacks in modern Web applications
Cross-Site Request Forgery (CSRF) has long been a high-severity threat to web applications, enabling attackers to execute unauthorized actions on behalf of authenticated users. While traditional CSRF mitigation techniques, such as anti-CSRF tokens and SameSite cookies, have improved web security, newer application architectures and recent community research have introduced challenges that can lead to overlooked vulnerabilities.
This talk explores the evolution of CSRF attacks in the context of modern web technologies, such as Single Page Applications. Additionally, the talk will assess how browser security mechanisms protect their users against CSRF attacks and how to potentially bypass them.
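As a baseline reference for the mitigations mentioned above, here is a minimal sketch combining a per-session anti-CSRF token with SameSite session cookies, using Flask purely as an illustrative framework; the route names and form fields are hypothetical. Single Page Applications typically send the same token in a request header rather than a form field.

```python
# Minimal sketch of two classic CSRF mitigations: a per-session anti-CSRF token plus
# SameSite session cookies. Flask is used only for illustration; routes are hypothetical.
import hmac, secrets
from flask import Flask, abort, request, session

app = Flask(__name__)
app.secret_key = secrets.token_bytes(32)
app.config.update(SESSION_COOKIE_SAMESITE="Lax", SESSION_COOKIE_SECURE=True)

def issue_csrf_token() -> str:
    # One token per session, embedded in forms (or sent as a header by an SPA).
    session.setdefault("csrf_token", secrets.token_urlsafe(32))
    return session["csrf_token"]

@app.route("/transfer-form")
def transfer_form():
    token = issue_csrf_token()
    return (f'<form method="post" action="/transfer">'
            f'<input type="hidden" name="csrf_token" value="{token}">'
            f'<input type="submit" value="Transfer"></form>')

@app.route("/transfer", methods=["POST"])
def transfer():
    sent = request.form.get("csrf_token", "")
    if not hmac.compare_digest(sent, session.get("csrf_token", "")):
        abort(403)   # reject cross-site forged requests
    return "transfer accepted"
```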
Exploiting Digital Energy at Level 0
The convergence of the digital and physical worlds has opened a physics-based attack surface that traditional cybersecurity does not address, particularly at the foundational Purdue Level 0. We define this new vulnerability through digital energy: the physical manifestation of computation. Our core argument is that manipulating this energy—through electromagnetic interference or mechanical force—allows attackers to side-step software defenses and compromise operational technology. Because advanced threats may exploit the physical environment to disrupt vital sensors and actuators, security must undergo a fundamental shift. The way forward is the urgent integration of physical layer security monitoring to protect critical infrastructure at its deepest level.
From Ghosts in the Code to Phantoms in the Machine: GenAI Inside Our Cars, Factories, and Cities
We increasingly live inside “soft” infrastructure. Modern vehicles, factories, energy systems, and cities are orchestrated by layers of software: cars are software platforms, buses and trains run on digital control systems, and factories and entire cities depend on interconnected networks of programmable machines that are now influenced or even generated by AI. In such environments, a single malfunctioning component, a hidden dependency, an unexpected interaction, or a remote shutdown, the "ghosts in the code" if you like, can act as a kill switch, halting production lines, immobilizing fleets, or shutting down essential services. Previously, these kill-switch scenarios came from bugs, misconfigurations, or deliberate sabotage. Now Generative AI adds a new layer of complexity: it can write code, design configurations, synthesize sensor data, or autonomously make operational decisions. As such, GenAI can be the guardian that detects anomalies faster than humans, or it can unintentionally embed vulnerabilities that only surface once deployed into the physical world. As physical infrastructure becomes more autonomous, the line between accident, malfunction, and attack becomes dangerously thin. Understanding how GenAI reshapes the kill-switch risk is essential for safety, security, and trust in modern digital infrastructure.
Key steps for ensuring the secure use of artificial intelligence in an organization
Artificial intelligence (AI) brings numerous opportunities, but it also introduces new security risks that organizations must not overlook. This talk will present key challenges and solutions for the secure use of AI, from strategic governance to technical controls, explaining why AI is not secure by default, how to defend against attacks, and how to prevent the misuse of models and data. Participants will be introduced to testing and monitoring tools, practical attack examples, and best practices for integrating security mechanisms.
(Note: Presentation in Slovenian)
Privacy in the Age of Telemetry, Cloud Services, and AI Monitoring
This talk explores how modern operating systems — including Windows, macOS, and in some cases even Linux — collect extensive telemetry and usage data, often without users being fully aware. We examine how cloud platforms such as OneDrive and iCloud impact data sovereignty and what happens to our files once they leave our devices.
A special focus is placed on the rise of AI-driven scanning, where automated systems analyze documents, photos, communications, and other private data stored in the cloud. We look at how these systems work, the associated risks, and what it means for individual privacy.
The talk also covers current regulatory challenges in the EU, including Chat Control and mandatory age verification, and how these initiatives could reshape privacy, encryption, and digital freedoms.
Participants will gain a clear understanding of the risks, the broader societal implications, and practical steps to protect their privacy in an increasingly monitored digital world.
Quantum-Proofing Images: Stopping Fake News in a Synthetic Media Age
The emergence of quantum computing threatens to invalidate current cryptographic mechanisms, creating urgent challenges for maintaining digital authenticity. Concurrently, deepfakes and manipulated imagery continue to erode public trust. We introduce Post-Quantum VerITAS, a provenance-preserving system engineered to remain secure in both classical and post-quantum threat models. Leveraging lattice-based hash constructions, post-quantum zero-knowledge proofs, and CRYSTALS-Dilithium signatures, the system maintains verifiable provenance even under quantum-capable adversaries.
In contrast to existing standards such as C2PA—which lack robustness against both image transformations and quantum cryptanalysis—Post-Quantum VerITAS offers a decentralized, quantum-resistant framework capable of verifying images after common edits. This presentation details the system’s cryptographic design, security guarantees, and resistance to quantum attacks, and discusses pathways for deploying quantum-secure provenance verification at scale.
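To show only the shape of a signed provenance record, and not the lattice-based hashing or zero-knowledge edit proofs the system actually relies on, here is a toy sketch that uses SHA-256 and Ed25519 as classical stand-ins; in Post-Quantum VerITAS the signature step would use CRYSTALS-Dilithium instead.

```python
# Toy sketch of a signed image provenance record. SHA-256 and Ed25519 are classical
# stand-ins so the example runs anywhere; the system described in the talk would use
# lattice-based hashing and CRYSTALS-Dilithium signatures in their place.
import hashlib, json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def provenance_record(image_bytes: bytes, edits: list[str], signer: Ed25519PrivateKey) -> dict:
    record = {
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "declared_edits": edits,                  # e.g. ["resize", "crop"]
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = signer.sign(payload).hex()
    return record

key = Ed25519PrivateKey.generate()
rec = provenance_record(b"\x89PNG...raw image bytes...", ["resize"], key)

# Verification recomputes the signed payload and checks it against the public key.
payload = json.dumps({k: rec[k] for k in ("image_sha256", "declared_edits")},
                     sort_keys=True).encode()
key.public_key().verify(bytes.fromhex(rec["signature"]), payload)   # raises if tampered
print("provenance record verified")
```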
Quishing Without Compromise: Scoping, Tools, Tricks, and Lessons Learned
Red teaming can be challenging, especially when simulating real-world attacks like QR code phishing (“quishing”) within a tightly defined scope. How do you credibly launch a phishing campaign without needing to know the specific targets, exposing sensitive information, or putting unintended users at risk? This session offers a behind-the-scenes look at how our team tackled these constraints. We will dig into some open-source tools that can be used, the custom tweaks we made to keep the setup secure and believable, and the pitfalls you can hopefully avoid. We will walk you through our attack chain:
(1) A redirector and how to filter the bots away (a minimal sketch follows this list),
(2) Using a customized EvilGinx instance to verify the scope,
(3) Creating a believable landing page for our targets,
(4) Lessons learned and possible automated attacks.
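As a taste of step (1), the sketch below shows a redirector that quietly drops obvious scanners and link-preview bots based on their User-Agent before forwarding real visitors. The landing page URL, token handling, and bot markers are illustrative placeholders, and anything like this belongs only inside an authorized, scoped engagement.

```python
# Minimal sketch of a quishing redirector that filters obvious bots before forwarding
# real visitors. URL, token check, and user-agent markers are placeholders; use only
# within an authorized, scoped red team engagement.
from flask import Flask, redirect, request

app = Flask(__name__)
LANDING_PAGE = "https://landing.example.test/login"   # hypothetical EvilGinx front-end
BOT_MARKERS = ("curl", "python-requests", "headlesschrome", "slackbot", "bot")

@app.route("/r/<token>")
def redirector(token: str):
    ua = request.headers.get("User-Agent", "").lower()
    if any(marker in ua for marker in BOT_MARKERS):
        return redirect("https://example.com")        # send bots to a harmless decoy
    # In the real chain the token would also be validated against the agreed scope.
    return redirect(f"{LANDING_PAGE}?t={token}")

if __name__ == "__main__":
    app.run(port=8443)
```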
Secure-by-design: Building cyber-resilient products that meet UX, security, and emerging compliance standards
Security engineering isn’t enough anymore—products must now satisfy complex UX needs, evolving threat landscapes, and tightening compliance regimes. This talk unpacks how product managers and security teams can jointly build secure-by-design systems while aligning with frameworks like GDPR, the EU Cyber Resilience Act, and the upcoming EU AI Act.
We’ll cover secure defaults, data-minimization patterns, auditability requirements, model risk controls, and how to design security features that remain compliant as regulations shift, without slowing delivery or harming usability.
Securing Cloud-Native Supply Chains: Strategies for Fast, Resilient DevOps
This presentation addresses modern supply chain security in cloud-native engineering organizations, focusing on preventing incidents similar to recent supply-chain compromise campaigns (e.g., “SHA1-Hulud”). Drawing from practical deployment experience with large PaaS providers, it outlines actionable mechanisms to ensure code integrity, artifact authenticity, and rapid detection and mitigation of malicious changes. Attendees will gain insights into securing CI/CD pipelines and maintaining rapid response capabilities without compromising development velocity. Emphasis is placed on aligning security practices with modern DevOps workflows to minimize risk while sustaining fast release cycles.
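One concrete example of this kind of integrity control, sketched under the assumption of a simple JSON lockfile of pinned digests: a deployment gate that refuses any artifact whose SHA-256 does not match the value recorded at build time. The file and lockfile names are hypothetical.

```python
# Minimal sketch of an artifact integrity gate: block deployment if an artifact's
# SHA-256 digest differs from the value pinned at build time. The lockfile format
# and file names are hypothetical.
import hashlib, json, sys

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path: str, lockfile: str = "artifacts.lock.json") -> None:
    with open(lockfile) as f:
        pinned = json.load(f)                    # {"service.tar.gz": "<sha256>", ...}
    expected = pinned.get(path)
    if expected is None or sha256_of(path) != expected:
        sys.exit(f"integrity check failed for {path}: blocking deployment")
    print(f"{path}: digest matches pinned value")
```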
Smart Security: How Adaptive Authentication Is Changing the Game
As digital ecosystems grow increasingly complex and cyber threats continue to rise, traditional authentication methods are becoming less effective. Passwords, multi-factor authentication, and static security mechanisms often fall short against advanced attacks such as identity theft, credential stuffing, and social engineering.
In this session, we will explore the concept of adaptive authentication, which dynamically adjusts security requirements based on user context, risk level, and behavioral patterns. We will analyze the key components of adaptive authentication, including real-time risk assessment, the use of machine learning for anomaly detection, and the integration of biometric and contextual data.
We will also present examples of attacks that can be prevented through an adaptive approach and discuss the challenges involved in implementation.
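To ground the idea, the sketch below shows a toy risk-scoring and step-up decision; the signals, weights, and thresholds are invented for illustration and are not a recommended production policy.

```python
# Illustrative adaptive-authentication sketch: combine context signals into a risk
# score and decide whether to allow, step up, or block. Weights are made up.
from dataclasses import dataclass

@dataclass
class LoginContext:
    new_device: bool
    new_country: bool
    impossible_travel: bool
    failed_attempts_last_hour: int
    behavioral_anomaly_score: float   # 0.0 (typical) .. 1.0 (highly unusual)

def risk_score(ctx: LoginContext) -> float:
    score = 0.0
    score += 0.3 if ctx.new_device else 0.0
    score += 0.2 if ctx.new_country else 0.0
    score += 0.4 if ctx.impossible_travel else 0.0
    score += min(ctx.failed_attempts_last_hour, 5) * 0.05
    score += 0.3 * ctx.behavioral_anomaly_score
    return min(score, 1.0)

def required_step(ctx: LoginContext) -> str:
    r = risk_score(ctx)
    if r < 0.3:
        return "allow"                 # low risk: password or passkey alone
    if r < 0.7:
        return "step-up-mfa"           # medium risk: prompt for a second factor
    return "deny-and-review"           # high risk: block and alert the SOC
```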
(Note: Presentation in Slovenian)
TBA
TBA
The Onion: Layered cyber security for corporations
Supply-chain attacks, red teaming, cyber resilience—these aren't buzzwords, they're your daily reality when your vendor's compromised server becomes your problem. In this talk, we'll dissect the real threats facing modern organizations, from sophisticated supply-chain infiltrations to the social engineering that bypasses your million-dollar security stack. You'll learn how to plan red team engagements that actually test your defenses against real-world attack scenarios, not just check compliance boxes. This isn't about passing audits—it's about building security that makes attackers move on to easier targets. Get ready for a rapid-fire dive into the mindset and methods that turn corporate networks from soft targets into hardened fortresses.
Token Takeover: Anatomy of an Authentication System Collapse — Real-World Password Reset Misbinding (IDOR) & Multi-Domain XSS Token Theft Case Study
This presentation analyzes two real-world, high-impact vulnerabilities that led to full authentication system compromise: a Password Reset Token Misbinding flaw (IDOR) resulting in a $55,000 CEO account takeover, and a multi-domain XSS attack on the OneID authentication platform that enabled cross-origin token theft with physical safety implications.
The first case study demonstrates how a single unvalidated parameter (email) in the password reset flow allowed attackers to hijack any user account by re-binding a valid token to a victim’s email. The vulnerability required no MFA bypass and exposed financial assets at scale.
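A simplified sketch of the root cause and its fix, with hypothetical storage and function names: the reset token must be bound server-side to the account that requested it, and the confirmation step must ignore any client-supplied email.

```python
# Illustrative sketch of reset-token misbinding and its fix; storage and helper
# names are hypothetical stand-ins for the real user store.
import secrets

RESET_TOKENS: dict[str, str] = {}           # token -> account email, set server-side

def issue_reset_token(account_email: str) -> str:
    token = secrets.token_urlsafe(32)
    RESET_TOKENS[token] = account_email      # binding happens here, once
    return token

# Vulnerable pattern: trusting the email sent alongside the token.
def reset_password_vulnerable(token: str, email_from_request: str, new_password: str):
    if token in RESET_TOKENS:
        set_password(email_from_request, new_password)   # attacker re-binds the token

# Fixed pattern: the token alone determines whose password changes.
def reset_password_fixed(token: str, new_password: str):
    email = RESET_TOKENS.pop(token, None)    # single-use lookup
    if email is None:
        raise PermissionError("invalid or expired token")
    set_password(email, new_password)

def set_password(email: str, new_password: str) -> None:
    print(f"password updated for {email}")   # stand-in for the real user store
```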
The second case study covers how an unsanitized parameter (originalUrl), combined with permitted javascript: scheme execution, enabled remote script loading, token exfiltration, and full takeover across multiple global domains, including access to live location, vehicle lock/unlock functions, and user identity data.
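A minimal sketch of the missing control in this second case, assuming a simple allow-list of trusted origins: redirect targets are restricted to same-site relative paths or explicitly allow-listed https hosts, which rejects javascript: and data: URIs outright. The host names are placeholders.

```python
# Minimal redirect-target validator: allow same-site relative paths or allow-listed
# http(s) origins only, rejecting javascript:, data:, and unknown hosts.
from urllib.parse import urlparse

ALLOWED_HOSTS = {"app.example.com", "portal.example.com"}   # hypothetical trusted domains

def safe_redirect_target(original_url: str, default: str = "/") -> str:
    parsed = urlparse(original_url)
    if parsed.scheme == "" and parsed.netloc == "" and original_url.startswith("/"):
        return original_url                  # same-site relative path
    if parsed.scheme in ("http", "https") and parsed.hostname in ALLOWED_HOSTS:
        return original_url                  # explicitly allow-listed origin
    return default                           # javascript:, data:, //evil.com, etc.

assert safe_redirect_target("javascript:alert(document.cookie)") == "/"
assert safe_redirect_target("/dashboard") == "/dashboard"
```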
The talk breaks down exploitation methodology, root-cause analysis, weak architectural patterns, and defensive strategies that could have prevented the collapse of both authentication systems. Attendees will gain practical insights into validating parameters, enforcing strict token binding, eliminating javascript: injection vectors, and hardening storage of authentication tokens.
Unmasking the Shadows: Advanced Techniques for Dark Web Domain Deanonymization
The Dark Web’s promise of anonymity through technologies like Tor has long been considered its most defining characteristic—and its greatest shield for malicious actors. However, sophisticated adversaries, law enforcement agencies, and security researchers have developed increasingly advanced methodologies to pierce this veil of anonymity. This presentation will provide a comprehensive technical deep-dive into the operational techniques, methods, and procedures (TTPs) used to deanonymize Dark Web domains and their operators.
Drawing from real-world case operations, OSINT investigations, and cutting-edge research, this talk will explore the full spectrum of deanonymization vectors—from passive traffic analysis and timing correlation attacks to active fingerprinting techniques and operational security failures. Attendees will gain insight into how seemingly minor OPSEC mistakes, infrastructure misconfigurations, and behavioral patterns can cascade into complete identity exposure. We’ll examine the technical architecture of anonymity networks, identifying inherent weaknesses and attack surfaces that can be exploited.
This session is designed for penetration testers, threat intelligence analysts, red teamers, and security researchers who need to understand both offensive deanonymization capabilities and defensive countermeasures. By understanding how anonymity fails, defenders can better architect resilient infrastructure, while investigators can develop more effective methodologies for tracking threat actors. Attendees will leave with actionable knowledge, practical tools, and a realistic understanding of Dark Web anonymity’s true boundaries in 2025.
Updates in the field of legal regulation and the challenges of privacy protection and artificial intelligence
With the adoption of the Act implementing Regulation (EU) laying down harmonised rules on artificial intelligence (ZIUDHPUI), the Information Commissioner, acting as the market surveillance authority, will be responsible for overseeing prohibited AI systems and certain high-risk artificial intelligence systems. At the EU level, changes in both the field of personal data protection and artificial intelligence are being introduced through the so-called digital omnibus. What, then, can we expect in the near future – more or less regulation, and what will it look like?
(Note: Presentation in Slovenian)
Content in preparation
Zero Trust in the Era of AI: Why "Verify" is Broken
By 2026, "Never Trust, Always Verify" is no longer just a mantra; it's the baseline requirement for NIS2 and DORA compliance. But in an age of hyper-realistic AI deepfakes and autonomous agents, verification has become the hardest part of the equation.
How do you "verify explicitly" when a video call from your CEO might be a deepfake? How do you apply "least privilege" to an AI model that needs massive data access to function?
This session upgrades your Zero Trust architecture for the realities of 2026. We move beyond basic MFA and segmentation to explore the next frontier: Identity Assurance and Privacy Enhancing Technologies (PETs). We will break down how tools like Homomorphic Encryption and Zero-Knowledge Proofs are moving from academic theory to practical necessity, allowing you to verify data and users without ever exposing the raw secrets.
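Of the PETs mentioned above, additive homomorphic encryption is the easiest to show in miniature. The classroom-sized sketch below implements the core of a Paillier-style scheme with tiny hard-coded primes, purely to demonstrate that multiplying ciphertexts adds the underlying plaintexts without ever decrypting them; it is not usable cryptography, and fully homomorphic encryption and zero-knowledge proofs go far beyond this.

```python
# Toy Paillier-style scheme with demo-sized primes: multiplying two ciphertexts
# yields an encryption of the sum of the plaintexts. Classroom illustration only.
import math, random

p, q = 293, 433                          # demo primes; real keys are thousands of bits
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)             # Carmichael function of n
mu = pow(lam, -1, n)                     # inverse of lambda mod n (generator g = n + 1)

def encrypt(m: int) -> int:
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    x = pow(c, lam, n2)
    return (((x - 1) // n) * mu) % n

a, b = 21, 34
c = (encrypt(a) * encrypt(b)) % n2       # multiply ciphertexts ...
assert decrypt(c) == a + b               # ... and the hidden plaintexts add up
print("homomorphic sum of encrypted values:", decrypt(c))
```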
Join this session to learn how to shock-proof your Zero Trust strategy against AI-driven identity attacks and build a security model that protects not just access, but the data itself while it’s being used.