Keynote Speakers
Roberto Natella
Università degli Studi di Napoli Federico II, Italy
Short Bio
Roberto Natella is an Associate Professor in Computer Engineering at the Federico II University of Naples, Italy. In 2022, Roberto received the DSN Rising Star in Dependability Award from the IEEE Technical Committee on Dependable Computing and Fault Tolerance (TCFT) and the IFIP Working Group 10.4 on Dependable Computing and Fault Tolerance, for research achievements within 10 years after PhD graduation.
His research interests are in the field of software security and dependability. The main recurring theme of his research activity is the experimental injection of faults, attacks, and stressful conditions.
Adversary Emulation in the Age of Generative AI
Cybersecurity threat actors have evolved into complex organizations with the technical and financial means to deliver powerful attacks that significantly impact economies and infrastructures. These threat actors are also taking a keen interest in recent advances in Generative AI for malicious purposes. At the same time, Generative AI presents a valuable opportunity to enhance cybersecurity. This presentation will look at emerging applications of Generative AI for Adversary Emulation, that is, the emulation of attack techniques for assessment purposes. In particular, we will discuss the role of Large Language Models (LLMs) in supporting cybersecurity analysts by automatically generating malicious code that mimics threat actors.
Henrique Arcoverde
Tempest, Brazil
Short Bio
Henrique Arcoverde is a cybersecurity expert with over 15 years of experience in the field. Currently, he is the Technical Director at Tempest Security Intelligence, the largest cybersecurity company in Brazil, and he previously worked at global companies such as Matasano and NCC Group. He holds bachelor’s and master’s degrees and is pursuing a PhD in computer science at UFPE. Additionally, he is a professor at CESAR School, recognized as the top private college in the North/Northeast region of Brazil. His expertise in offensive security and management roles makes him a highly respected professional who continues to lead and innovate in the field of cybersecurity.
Crossing the Border: How Adversarial Attacks Can Compromise Your Artificial Intelligence Model
You’re probably familiar with terms like Artificial Intelligence, Deep Learning, and Neural Networks, technologies driving disruptive innovations across various sectors today. However, what may not be as well known are the implications these technologies can have for the cybersecurity of your applications.
One of the main challenges is Adversarial Attacks, a form of attack that targets artificial intelligence models. These attacks exploit vulnerabilities in AI systems, often using methods that evade human perception yet can cause considerable damage.
A classic example occurs in image recognition. Here, an adversary can introduce tiny, nearly imperceptible alterations to an image, causing the AI model to misclassify it. For instance, an image of a cat could be manipulated in such a way that the model identifies it as a dog, even though to a human observer the image appears identical to the original.
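The intuition behind such perturbations can be sketched in a few lines. The following toy example is purely illustrative (the weights, "pixels," and labels are invented, not taken from any real model): it applies the idea behind the well-known Fast Gradient Sign Method to a linear classifier, where a perturbation of at most 0.05 per pixel flips the predicted label.

```python
import numpy as np

# Toy linear classifier over a 4-pixel "image": score > 0 -> "dog", else "cat".
# Weights and input values are invented for illustration only.
w = np.array([0.5, -0.3, 0.8, -0.2])      # model weights, known to the attacker
x = np.array([-0.02, 0.02, -0.02, 0.02])  # original image, classified as "cat"

def label(img):
    return "dog" if float(w @ img) > 0 else "cat"

# FGSM idea: for a linear score w.x, the gradient w.r.t. the input is w,
# so the strongest perturbation within an L-infinity budget eps is
# eps * sign(w) -- tiny per-pixel changes, but a large shift in the score.
eps = 0.05
x_adv = x + eps * np.sign(w)

print(label(x), label(x_adv))  # the tiny perturbation flips the label
```

Real attacks follow the same recipe on deep networks, using the gradient of the loss with respect to the input pixels instead of a fixed weight vector.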
In this talk, we will provide a brief introduction to adversarial attacks, along with examples and suggestions to help mitigate such threats.
Paulo Maciel
Federal University of Pernambuco, Brazil
Short Bio
Paulo Maciel holds a Ph.D. in Electronic Engineering and Computer Science from the Federal University of Pernambuco (UFPE), Brazil. During his doctorate, he completed a “sandwich internship” at Eberhard-Karls-Universität Tübingen, Germany, from 1996 to 1997. In 2011, he took a sabbatical year at the Department of Electrical and Computer Engineering, Edmund T. Pratt School of Engineering, Duke University, USA.
Paulo is a full professor in the Computer Center at UFPE. He also serves as a member of the National Council for Scientific and Technological Development – Brazil (CNPq). His research interests include performance, reliability, availability, capacity planning, and stochastic models, with applications in cloud computing, sustainable data centers, manufacturing, integration, and communication systems.
Performability Assessment: Methods and Tools for System Design and Tuning
Evaluating Performability is crucial for systems experiencing degraded performance due to failures and repair activities. We will begin by discussing the foundational concepts of Performability, including definitions of key performance metrics such as utilization and response time, and dependability attributes such as availability, reliability, safety, security, confidentiality, integrity, and maintainability. This talk aims to provide a broad understanding of the significance, methods, and benefits of Performability evaluation, considering the complexity and representativeness of the models.
Several evaluation strategies will be examined, including analytical solutions, numerical-based methods, and simulations. We will discuss the complexity and modeling power of techniques such as reliability block diagrams (RBD), fault trees (FT), Markov chains (DTMC and CTMC), and stochastic Petri nets (SPN). Additionally, the importance of hierarchical and heterogeneous modeling methodologies, sensitivity analysis, phase-type evaluation methods, and the development of user-friendly tools will be highlighted.
Furthermore, we will introduce the Mercury tool, which supports Performability evaluation using models like SPN, CTMC, DTMC, RBD, and FT.
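To give a flavor of the kind of model such tools solve, here is a minimal sketch of the classic two-state availability CTMC (the failure and repair rates are illustrative, and plain NumPy is used here rather than Mercury): solving the steady-state balance equations recovers the textbook result that availability equals mu/(lam + mu).

```python
import numpy as np

# Two-state availability CTMC: state 0 = up, state 1 = down.
# lam = failure rate (up -> down), mu = repair rate (down -> up).
# The rates below are illustrative (MTTF = 1000 h, MTTR = 10 h).
lam, mu = 0.001, 0.1  # per hour

# Infinitesimal generator matrix Q; each row sums to zero.
Q = np.array([[-lam, lam],
              [mu, -mu]])

# Steady-state vector pi satisfies pi @ Q = 0 with sum(pi) = 1.
# Stack the normalization constraint onto the balance equations and solve.
A = np.vstack([Q.T, np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

availability = pi[0]  # probability of being in the "up" state
print(round(availability, 6), round(mu / (lam + mu), 6))
```

Hierarchical models combine many such submodels (RBDs or FTs on top, CTMCs or SPNs below), but the steady-state solution step looks essentially like this.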
Through this presentation, we highlight the critical role of Performability evaluation in ensuring systems meet their performance and dependability requirements, thereby contributing to developing more robust and reliable system designs.
Marco Vieira
University of North Carolina at Charlotte, USA
Short Bio
Marco Vieira was born in Ponte de Lima, Portugal. He earned his Ph.D. in Informatics Engineering from the University of Coimbra, Portugal. Marco Vieira is a Professor in the College of Computing and Informatics at the University of North Carolina at Charlotte. Before joining UNC Charlotte in 2023, he was a Professor at the University of Coimbra. His research interests include dependable computing, dependability and security assessment and benchmarking, software security, fault and vulnerability injection, failure prediction, static analysis, and software testing, topics on which he has authored or co-authored numerous works in refereed conferences and journals.
Marco is Chair of the IFIP WG 10.4 on Dependable Computing and Fault Tolerance, Associate Editor of the IEEE Transactions on Dependable and Secure Computing, Steering Committee Vice-Chair of the IEEE/IFIP International Conference on Dependable Systems and Networks, and member of the Steering Committee of the IEEE International Symposium on Software Reliability Engineering. He has served as Program Chair for the major conferences in the dependable computing area.
Leveraging LLMs for Secure and Trustworthy Software: Insights and Future Perspectives
Large Language Models (LLMs) are transforming software engineering, offering new possibilities for developing secure and trustworthy software. This keynote will explore the integration of LLMs into software development workflows, particularly their role in code generation. Supported by empirical evidence, we will discuss the capabilities of LLMs in vulnerability detection and mitigation, and delve into the importance of assessing the trustworthiness of code, including the role of LLMs in verifying code quality and adherence to best practices. We will conclude with a discussion on future directions, outlining emerging opportunities for LLMs in software engineering.
Bruno Silva
Microsoft, USA
Short Bio
Bruno Silva is a Senior Research Software Engineer in the Research for Industries team at Microsoft Research (Redmond). He holds a bachelor’s degree (2008), a master’s degree (2011), and a Ph.D. in Computer Science (2016) from UFPE – Federal University of Pernambuco. In 2013, he conducted part of his Ph.D. research at the Technical University of Ilmenau, Germany.
Bruno has been working with high-performance computing and artificial intelligence for agriculture and mining. He also works with computational steering, design of experiments, and sensitivity analysis for parametric applications. His research interests also include high-performance computing for artificial intelligence, capacity planning, dependability, survivability, and performance evaluation of distributed systems.
Industry Perspectives on AI Performance and Reliability
Artificial intelligence (AI) is changing the game in many areas of our lives, much like the internet did years ago. With AI models like GPT-4, we’re seeing big changes in the workplace, schools, and how we interact with each other. This talk will give you a straightforward look at how big AI systems are built to be safe, fast, and reliable. We’ll focus on three industries where AI is making a real difference: mining, agriculture, and the legal field. You’ll get to see real-world examples of AI at work, like how it’s helping to pick better mining sites, grow better crops, and sift through legal documents more efficiently. We’ll also discuss what’s being done to make sure these AI systems can be trusted to handle important jobs without a hitch. Join us to find out how AI is not just a buzzword but a powerful tool that’s reshaping industries and improving the way we tackle tough challenges.
Liliana Cucu-Grosjean
Inria, France
Short Bio
Liliana Cucu-Grosjean is a Research Director at the French National Institute in Computer Science and Automation (Inria) in Paris, France, where she leads the Kopernic research team. Her research interests include real-time, embedded, and cyber-physical systems, with a focus on the use of probabilistic and statistical methods for analyzing the schedulability of programs and estimating worst-case execution times of those programs. Co-author of several seminal papers on probabilistic and statistical methods for real-time systems, Liliana has published more than 60 papers in top TCRTS conferences and journals.
Her contributions to the correct utilization of statistical approaches for the worst-case execution time estimation problem have been transferred from Inria to the start-up StatInf, an Inria spin-off that she co-founded in 2019. This patented technology has received numerous industry distinctions: it was named the most innovative French technology in 2022 at the prestigious Assises de l’Embarqué, cited among the top 100 innovations expected to change everyday life by the newspaper Le Point (July 2023), and listed among the top 100 most influential Romanians in 2023. Last, but not least, StatInf was named among the top 15 France Analytics Startups in 2023 by the online media outlet EU Startup News (August 2023).
Proving probabilistic worst-case reasoning: when functional and non-functional properties must meet
The problem of identifying and proving the worst-case time behavior of real-time programs on processors emerged in the context of critical industries like avionics and space. Rapidly adopted by the real-time scheduling community, worst-case execution time estimates of programs or tasks are mandatory to understand the time behavior of a real-time system. Analyzing such time behavior is often done with significant pessimism, due to the consideration of worst-case scenarios, especially on multicore processors. This pessimism has been reduced by recognizing that large execution times of a program have a low probability of appearance. The notion of probabilistic (worst-case) execution time has been proposed, and current approaches are often built on statistical estimators based on Extreme Value Theory or concentration inequalities. Recent results revisiting these definitions underline the maturity of their applicability, but also the need for correct and proven reasoning. In this talk, we discuss a possible dilemma: proven probabilistic (or statistical) worst-case reasoning may impose a joint analysis of both functional and non-functional properties. Nevertheless, these properties are analyzed separately when certified executions of programs are required in critical industries like avionics and space, and this separation seems to be a mandatory key to a successful certification process.
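As a small illustration of the statistical tools mentioned above, the sketch below applies a one-sided concentration inequality (Cantelli's) to synthetic execution-time measurements. The data is invented, and no real program, platform, or estimator from the talk is implied; it merely shows how a distribution-free probabilistic bound on execution time can be stated.

```python
import numpy as np

# Synthetic "measured" execution times in milliseconds (gamma-distributed,
# purely for illustration -- not real measurements of any program).
rng = np.random.default_rng(42)
times = rng.gamma(shape=5.0, scale=2.0, size=10_000)

mean, std = times.mean(), times.std()

# One-sided Chebyshev (Cantelli) inequality: for ANY distribution,
#   P(T >= mean + k*std) <= 1 / (1 + k**2).
# Using sample mean/std here is an approximation of the true moments.
k = 5.0
bound = mean + k * std           # candidate probabilistic execution-time bound
prob_bound = 1.0 / (1.0 + k**2)  # guaranteed exceedance probability

print(f"P(T >= {bound:.1f} ms) <= {prob_bound:.4f}")
```

Such inequalities are deliberately conservative; Extreme Value Theory approaches instead fit the tail of the measured distribution to obtain tighter probabilistic worst-case estimates, at the cost of stronger hypotheses that must be verified.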
Milos Manic
Virginia Commonwealth University, USA
Short Bio
Dr. Manic is a Professor with the Computer Science Department and Director of the VCU Cybersecurity Center at Virginia Commonwealth University. He has completed over 50 research grants in AI/ML for cyber, energy, and intelligent controls. He has authored over 200 refereed articles in international journals, books, and conferences, given over 50 invited talks around the world, and holds several U.S. patents. He won the 2018 R&D 100 Award for the Autonomic Intelligent Cyber Sensor (AICS), one of the top 100 worldwide science and technology innovations of 2018, and is a recipient of the 2023 FBI Director’s Community Leadership Award (DCLA) for innovative research in AI & cybersecurity.
He is an inductee of the US National Academy of Inventors (senior class of 2023, member class of 2019) and a Fellow of the Commonwealth Cyber Initiative (specialty in AI & Cybersecurity). He holds a Joint Appointment with Idaho National Laboratory. He is the IEEE IES President (2024-2025), after serving in multiple IES officer positions, and an IEEE Fellow (for contributions to machine learning based cybersecurity in critical infrastructures). He is a recipient of the IEEE IES 2019 Anthony J. Hornfeck Service Award, the 2012 J. David Irwin Early Career Award, and the 2017 IEM Best Paper Award, an associate editor of the IEEE Transactions on Industrial Informatics and the IEEE Open Journal of the Industrial Electronics Society, and an IEEE IES Senior Life AdCom member. He served as AE of the Transactions on Industrial Electronics, was a founding chair of the IEEE IES Technical Committee on Resilience and Security in Industry, and was General Chair of IEEE ICIT 2023, IEEE IECON 2018 (record-breaking, over 1,100 participants), and IEEE HSI 2019.
Artificial Intelligence in Cyber-Physical Systems, Friend or a Foe
The most recent advancements in Artificial Intelligence (AI) have provided unprecedented opportunities for real-time understanding of the overall behavior and health of complex systems. The talk will present the latest advancements in AI through real-world case studies. We will investigate the capabilities of cutting-edge AI techniques for sequence modeling, demonstrating their ability to capture complex patterns and understand complex systems with high accuracy.
Furthermore, we will look at modern tools and approaches for explaining AI decisions. In an environment where quick decision-making is critical, the collaboration of AI and human expertise becomes essential. We will emphasize the importance of explainability and interactive visualizations in encouraging effective cooperation between humans and AI, particularly when faced with massive volumes of data and little time for informed decision-making. In addition, we will investigate incorporating human knowledge and physics into AI systems to fill the gap of missing data, as well as to facilitate effective knowledge transfer between anomaly detection models designed for different systems.
The talk will conclude with a brief overview of IEEE Industrial Electronics Society activities and opportunities for volunteer engagement.