Invited Speaker 1

Dr. Mohammed Abdur Rahman

Chairman, Cyber Security and Forensic Computing Department, C3S, UPM

Title: Multi-Agentic AI Augmented Cybersecurity for Industrial Control Systems and Smart Infrastructure

Abstract:


The convergence of multi-agentic AI and cyber-physical infrastructure through IT/OT integration is reshaping the security landscape of Industrial Control Systems (ICS) and driving the evolution toward Industry 5.0. This talk explores how autonomous, memory-enabled AI agents, coordinated through multi-agentic agent-to-agent (A2A) planning and the Model Context Protocol (MCP), can augment threat detection, adaptive response, and self-healing capabilities across PLC, RTU, DCS, and SCADA environments.


While multi-agentic systems (MAS) hold significant potential to address key Industry 5.0 challenges such as resilience, hyper-personalization, and decentralized autonomous control, they also introduce new and poorly understood attack surfaces. Without proper defensive mechanisms and governance guardrails, MAS deployments risk unsafe coordination, adversarial behaviors, and cascading system failures. Much as Safety Instrumented Systems (SIS) safeguarded Industry 4.0 environments, MAS in Industry 5.0 will require secure coordination frameworks, trust boundaries, and real-time monitoring as foundational control layers.


This session features hands-on demonstrations showing how real-world MAS should be designed, deployed, and guardrailed within mission-critical infrastructures. Threat vectors such as adversarial task delegation, prompt-based manipulation, and agent hijacking are illustrated using the MAESTRO threat model and the OWASP MAS Top 10. A secure-by-design MAS architecture is proposed, aligned with global standards including ISO/IEC 42001 (AI Management Systems), ISA/IEC 62443 (Industrial Automation and Control Systems Security), and ISO/IEC 27001 (Information Security).


Finally, we reflect on how secure MAS deployments can contribute to the emerging Purdue 2.0 model, serving as intelligent, resilient control layers within future Industry 5.0 ecosystems.


Bio:

Dr. Mohammed Abdur Rahman is a Full Professor and the founding and current Head of the Department of Cyber Security and Forensic Computing, College of Computer and Cyber Science, University of Prince Muqrin (UPM), Madinah, KSA. He has received the Best Researcher Award from UPM for five consecutive years, is ranked among the world’s top 2% of scientists, and has secured more than USD 15M in research grants from funding bodies in the UK and KSA. His research interests include self-explainable and trustworthy AI; security of cloud, fog, and edge computing; adversarial attacks on and defense mechanisms for AI models; IoT and 6G security; AI-assisted cybersecurity operations; security and privacy for smart city services; and large/small language models for system automation. Prof. Rahman was recently granted Saudi nationality for his contributions to cybersecurity for AI and AI for cybersecurity across several highly impactful Saudi national projects.

Invited Speaker 2

Prof. Dr. Syed Mohammed Shamsul Islam

Visiting Professor, Daffodil International University, and Associate Professor (Senior Lecturer), Edith Cowan University, Australia

Title: AI and Climate Change: Impact, Opportunities and Challenges

Abstract:

Climate change is a major global concern that threatens human health, economies, and security. This talk explores how Artificial Intelligence (AI) can help combat climate change while minimizing its own impact on the environment.

Bio:

Dr Islam is an Australian researcher with expertise in artificial intelligence, machine vision, and image processing. After graduating from the University of Western Australia (UWA) with a PhD with Distinction in 2011, Dr Islam worked at UWA and Curtin University before joining Edith Cowan University (ECU), where he is currently an Associate Professor (Senior Lecturer) and a founding member of the Centre for AI & Machine Learning (CAIML). He has developed innovative AI tools and techniques for environmental monitoring (e.g., seagrass and peatland), healthcare, and autonomous navigation. Dr Islam has secured 16 external grants totalling over AUD 3.5M, including a $1.3M grant for protecting peatland ecosystems. To date, he has supervised 11 masters and PhD students to completion, examined 25 theses, and published over 100 scholarly articles, including 41 journal articles (31 in WoS Quartile 1). He has attracted 34 public media releases, 4,476 citations (Google Scholar), and multiple awards, including the ECU High Achieving Researcher Award 2021 and four best conference paper awards. He serves as an Associate Editor of IEEE Access; a Guest Editor for Springer Nature Computer Science, Electronics, and Applied Sciences; a regular reviewer for the CVPR, ECCV, and WACV conferences and over 50 Q1 journals; a committee member of over 35 conferences; and a Senior Member of both IEEE and the Australian Computer Society. He also chaired the IEEE WA Signal Processing Society Chapter (2021-2024), and served as General Chair of the Digital Image Computing: Techniques and Applications (DICTA) conference in 2024 and Co-General Chair of IEEE Future Machine Learning and Data Science (FMLDS) 2025.

Invited Speaker 3

Anindya Bijoy Das

Assistant Professor, Electrical and Computer Engineering, The University of Akron, OH, USA

Title: Inside LLM Vulnerabilities: Bias, Misinformation, and the Path to Safer AI

Abstract:

Large Language Models (LLMs) are reshaping how we search, learn, and make decisions, yet they remain vulnerable in ways that directly affect trust and real-world deployment. This talk provides an overview of key weaknesses that emerge in modern LLMs, with a focus on fairness challenges in recommendation settings and the susceptibility of models to manipulated context, reframing, and misinformation. I will highlight how these vulnerabilities surface in practice, why they persist despite increasing model scale, and what emerging research suggests about building more transparent, equitable, and robust AI systems.

Bio:

Anindya Bijoy Das is an Assistant Professor with the Department of Electrical and Computer Engineering, The University of Akron, Akron, OH, USA. He received his Ph.D. degree from the Department of Electrical and Computer Engineering, Iowa State University, Ames, IA, USA, in 2022. His research interests include different aspects of federated learning, large language models, coding theory, information theory, and ML/AI applications. He was a recipient of the Karas Award from Iowa State University in 2022 for his outstanding dissertation in the area of mathematical and physical sciences and engineering.

Invited Speaker 4

Md Tauhidul Islam

Assistant Professor, Department of Radiation Oncology, Stanford University

Talk Title: High-Performance and Interpretable Artificial Intelligence for Biomedical Applications

Abstract: Artificial intelligence (AI) is rapidly transforming biomedical research and clinical care, with applications spanning medical imaging, genomics, radiation oncology, and precision medicine. Despite remarkable gains in predictive performance, the widespread adoption of AI in healthcare remains limited by a fundamental challenge: the lack of interpretability and trustworthiness of modern deep learning models. Clinicians and scientists must understand not only what an AI model predicts, but why it succeeds or fails.
In this talk, I will present recent advances from our lab aimed at bridging the gap between AI performance and interpretability for biomedical applications. I will introduce a unified feature space analysis framework based on manifold discovery that reveals how deep neural networks learn and evolve during training, enabling insight into decision-making across classification, segmentation, and survival prediction tasks. I will also discuss how biologically informed image representations—such as genomaps—can transform high-dimensional omics data into interpretable visual structures that improve both accuracy and biological insight. Finally, I will highlight translational applications in radiation therapy planning and multi-modal biomedical data analysis, demonstrating how interpretable AI can enhance reliability, performance, and clinical impact.

Speaker Bio: Dr. Md Tauhidul Islam is an Assistant Professor at Stanford University whose research lies at the intersection of artificial intelligence, machine learning, and representation learning for complex, high-dimensional data. His work focuses on designing interpretable, data-efficient, and robust learning algorithms that enable principled reasoning over large-scale multimodal datasets, with biomedical and scientific data serving as key application domains.
Dr. Islam has introduced novel graph-to-image and manifold-based representation learning frameworks that transform high-dimensional, structured data—such as omics profiles, biological networks, and multimodal embeddings—into semantically meaningful and computationally tractable representations. His contributions include scalable eigenmapping and optimal-transport–based algorithms for graph-structured data, feature-space manifold discovery methods that reveal deep network learning dynamics, and multimodal fusion architectures that integrate heterogeneous data sources while preserving interpretability. He has also developed generative and self-supervised models for longitudinal modeling and temporal prediction in complex systems.
His research appears in leading venues including Nature Biomedical Engineering, Nature Computational Science, and Nature Communications, and is supported by competitive awards such as the NIH K99/R00 Pathway to Independence Award. At Stanford, Dr. Islam collaborates across computer science and applied domains to translate advances in interpretable and multimodal AI into scalable computational frameworks. He is a frequent speaker on interpretable machine learning, representation learning, and multimodal AI systems, and his work contributes to the broader foundations of trustworthy artificial intelligence.