Keynote Speakers


Adversary Emulation in the Age of Generative AI

Cybersecurity threat actors have evolved into complex organizations with the technical and financial means to deliver powerful attacks, with significant impact on economies and infrastructures. These threat actors are also looking with great interest at recent advances in Generative AI for malicious purposes. At the same time, Generative AI is a valuable opportunity to enhance cybersecurity. This presentation will look at emerging applications of Generative AI for Adversary Emulation, that is, the emulation of attack techniques for assessment purposes. In particular, we will discuss the role of Large Language Models (LLMs) in supporting cybersecurity analysts by automatically generating malicious code that mimics threat actors.


Crossing the Border: How Adversarial Attacks Can Compromise Your Artificial Intelligence Model

You’re probably familiar with terms like Artificial Intelligence, Deep Learning, and Neural Networks: technologies driving disruptive innovations across many sectors today. What may be less well known, however, are the implications these technologies can have for the cybersecurity of your applications.

One of the main challenges is the Adversarial Attack, a form of attack that targets artificial intelligence models. Such attacks exploit vulnerabilities in AI systems, often using methods that evade human perception yet can cause considerable damage.

A classic example occurs in image recognition. Here, an adversary can introduce tiny, nearly imperceptible alterations to an image, causing the AI model to misclassify it. For instance, an image of a cat could be manipulated in such a way that the model identifies it as a dog, even though to a human observer the image appears identical.
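The cat-versus-dog example can be sketched with a toy linear model. Everything below is illustrative (a hypothetical 64-value "image" and invented weights, not a real image classifier), but the same idea of stepping along the gradient sign underlies attacks such as FGSM:

```python
import numpy as np

# Toy linear "image classifier": score > 0 -> "dog", otherwise "cat".
# w stands in for the model's learned weights; x is a flattened image.
rng = np.random.default_rng(0)
w = rng.normal(size=64)        # hypothetical model weights
x = -0.1 * w                   # an input the model confidently labels "cat"

def classify(v):
    return "dog" if w @ v > 0 else "cat"

# FGSM-style perturbation: a small step (bounded by eps per pixel) in the
# direction that most increases the score, so each pixel barely changes.
eps = 0.2
x_adv = x + eps * np.sign(w)

print(classify(x))      # "cat"
print(classify(x_adv))  # "dog": the label flips despite the tiny change
```

No single pixel changes by more than `eps`, yet the many small changes add up in the score, which is exactly why such perturbations stay nearly imperceptible to a human observer while fooling the model.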

In this talk, we will provide a brief introduction to adversarial attacks, along with examples and suggestions to help mitigate such threats.


Performability Assessment: Methods and Tools for System Design and Tuning

Evaluating Performability is crucial for systems whose performance degrades due to failures and repair activities. We will begin by discussing the foundational concepts of Performability, including definitions of key performance metrics, such as utilization and response time, and dependability attributes, such as availability, reliability, safety, security, confidentiality, integrity, and maintainability. This talk aims to provide a broad understanding of the significance, methods, and benefits of Performability evaluation, considering the complexity and representativeness of the models.

Several evaluation strategies will be examined, including analytical solutions, numerical-based methods, and simulations. We will discuss the complexity and modeling power of techniques such as reliability block diagrams (RBD), fault trees (FT), Markov chains (DTMC and CTMC), and stochastic Petri nets (SPN). Additionally, the importance of hierarchical and heterogeneous modeling methodologies, sensitivity analysis, phase-type evaluation methods, and the development of user-friendly tools will be highlighted.
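As a minimal illustration of the CTMC technique mentioned above, the steady-state availability of a single repairable component can be computed from its infinitesimal generator. The failure and repair rates below are invented for the example and are not tied to any particular system:

```python
import numpy as np

# Two-state CTMC availability model: state 0 = up, state 1 = down.
# lam = failure rate (1/MTTF), mu = repair rate (1/MTTR); illustrative values.
lam, mu = 1 / 1000.0, 1 / 8.0    # failures per hour, repairs per hour

# Infinitesimal generator Q; each row sums to zero.
Q = np.array([[-lam,  lam],
              [  mu,  -mu]])

# The steady-state distribution pi solves pi @ Q = 0 with pi summing to 1.
A = np.vstack([Q.T, np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

steady_state_availability = pi[0]
print(steady_state_availability)   # equals mu / (lam + mu)
```

For this two-state chain the closed form mu / (lam + mu), i.e., MTTF / (MTTF + MTTR), matches the numerical solution; larger CTMC, SPN, RBD, and FT models follow the same principle but are solved by tools rather than by hand.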

Furthermore, we will introduce the Mercury tool, which supports Performability evaluation using models like SPN, CTMC, DTMC, RBD, and FT.

Through this presentation, we highlight the critical role of Performability evaluation in ensuring that systems meet their performance and dependability requirements, thereby contributing to the development of more robust and reliable system designs.


Leveraging LLMs for Secure and Trustworthy Software: Insights and Future Perspectives

Large Language Models (LLMs) are transforming software engineering, offering new possibilities for developing secure and trustworthy software. This keynote will explore the integration of LLMs into software development workflows, particularly their role in code generation. Supported by empirical evidence, we will discuss the capabilities of LLMs in vulnerability detection and mitigation, and delve into the importance of assessing the trustworthiness of code, including the role of LLMs in verifying code quality and adherence to best practices. We will conclude with a discussion on future directions, outlining emerging opportunities for LLMs in software engineering.


Industry Perspectives on AI Performance and Reliability

Artificial intelligence (AI) is changing the game in many areas of our lives, much like the internet did years ago. With AI models like GPT-4, we’re seeing big changes in the workplace, schools, and how we interact with each other. This talk will give you a straightforward look at how big AI systems are built to be safe, fast, and reliable. We’ll focus on three industries where AI is making a real difference: mining, agriculture, and the legal field. You’ll get to see real-world examples of AI at work, like how it’s helping decide where to mine, grow better crops, and sift through legal documents more efficiently. We’ll also discuss what’s being done to make sure these AI systems can be trusted to handle important jobs without a hitch. Join us to find out how AI is not just a buzzword but a powerful tool that’s reshaping industries and improving the way we tackle tough challenges.


Proving probabilistic worst-case reasoning: when functional and non-functional properties must meet

The problem of identifying and proving worst-case time behavior of real-time programs on processors first appeared in critical industries such as avionics and space. Rapidly adopted by the real-time scheduling community, worst-case execution time estimates of programs or tasks are mandatory to understand the time behavior of a real-time system. Analyzing such time behavior is often done with substantial pessimism, due to the consideration of worst-case scenarios, especially on multicore processors. This pessimism has been reduced by recognizing that large execution times of a program have a low probability of appearance. The notion of probabilistic (worst-case) execution time has thus been proposed, and current approaches are often built on statistical estimators based on Extreme Value Theory or concentration inequalities. Recent results revisiting these definitions underline the maturity of their applicability, but also the need for correct and proven reasoning. In this talk, we discuss a possible dilemma: a proven probabilistic (or statistical) worst-case reasoning may impose a joint analysis of both functional and non-functional properties. Nevertheless, these properties are analyzed separately when certified execution of programs is required in critical industries like avionics or space, and this separation seems to be a mandatory key toward a successful certification process.
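A minimal sketch of the Extreme Value Theory approach looks as follows. The timing data is synthetic, the Gumbel fit uses a simple method-of-moments estimator, and the exceedance probability is an arbitrary example value; real pWCET analyses involve far more care about measurement protocols and estimator validity:

```python
import numpy as np

# Block-maxima sketch of a probabilistic WCET estimate. Execution times are
# drawn from a hypothetical distribution standing in for measured timings.
rng = np.random.default_rng(1)
times = rng.gamma(shape=4.0, scale=5.0, size=100_000)  # microseconds (synthetic)

# 1. Take maxima over blocks of measurements (the EVT block-maxima setting).
block = 100
maxima = times[: len(times) // block * block].reshape(-1, block).max(axis=1)

# 2. Fit a Gumbel distribution to the maxima by the method of moments.
beta = maxima.std() * np.sqrt(6) / np.pi
mu = maxima.mean() - 0.5772 * beta          # 0.5772 ~ Euler-Mascheroni constant

# 3. Read off the execution time exceeded with probability p per block,
#    using the Gumbel quantile function.
p = 1e-9
pwcet = mu - beta * np.log(-np.log(1 - p))

print(f"pWCET estimate at exceedance {p}: {pwcet:.1f} us")
```

The resulting bound sits well above every observed execution time, which is the point of the probabilistic framing: instead of an absolute worst case, one states a time exceeded with a quantified, very small probability.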


Artificial Intelligence in Cyber-Physical Systems: Friend or Foe

The most recent advancements in Artificial Intelligence (AI) have provided unprecedented opportunities for real-time understanding of the overall behavior and health of complex systems. The talk will present the latest advancements in AI through real-world case studies. We will investigate the capabilities of cutting-edge AI techniques for sequence modeling, demonstrating their ability to capture complex patterns and characterize complex systems with high accuracy.
Furthermore, we will look at modern approaches and tools for explaining AI decisions. In an environment where quick decision-making is critical, the collaboration of AI and human expertise becomes essential. We will emphasize the importance of explainability and interactive visualizations in encouraging effective cooperation between humans and AI, particularly when faced with massive volumes of data and little time for informed decision-making. In addition, we will investigate incorporating human knowledge and physics into AI systems to compensate for missing data, as well as facilitating effective knowledge transfer between anomaly detection models designed for different systems.
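As a small, self-contained illustration of the kind of anomaly detection discussed here, a rolling-statistics detector can flag points that deviate sharply from recent behavior. This is a deliberately simple stand-in (synthetic signal, z-score threshold chosen for the example), not any specific model from the talk:

```python
import numpy as np

# Synthetic sensor signal with one injected fault at index 300.
rng = np.random.default_rng(7)
signal = rng.normal(0.0, 1.0, size=500)
signal[300] += 8.0                     # injected anomaly

window = 50
anomalies = []
for t in range(window, len(signal)):
    ref = signal[t - window:t]         # recent history only
    z = (signal[t] - ref.mean()) / ref.std()
    if abs(z) > 4.0:                   # flag points far outside recent behavior
        anomalies.append(t)

print(anomalies)                       # expected to include index 300
```

More capable sequence models replace the rolling statistics with learned representations, but the underlying task, scoring each new observation against what the system has recently looked like, is the same.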
The talk will conclude with a brief overview of IEEE Industrial Electronics Society activities and opportunities for volunteer engagement.