AI & ETHICS

The 6th Workshop on

“Artificial Intelligence and Ethics”
(AI & ETHICS)

Information Security in Generative AI

INTERACTIVE WORKSHOP

Led by Professor John MacIntyre,
Chancellor of the University of the Commonwealth of the Caribbean and Visiting Professor, Democritus University of Thrace,
Co-Editor-in-Chief, AI and Ethics

Short Bio: Professor John MacIntyre has been working in Artificial Intelligence for more than 30 years, having received his PhD in 1996. He recently retired as Pro Vice Chancellor of the University of Sunderland, UK, where he worked for over 30 years and was Professor of Applied Artificial Intelligence. He is now Visiting Professor of Artificial Intelligence at universities in the UK, Egypt, Greece and New Zealand, and works as a consultant for organisations around the world, helping them with their AI strategies and plans. He has published more than 150 peer-reviewed papers and has given many invited keynote speeches and presentations at conferences, seminars and workshops globally. Since 1996 he has been the Editor-in-Chief of Neural Computing & Applications, published by Springer Nature, one of the world’s leading applied AI journals, and in 2020 he established a new journal, AI and Ethics, with his long-standing friend and collaborator Professor Larry Medsker in the United States. Professor MacIntyre is a passionate advocate for ethical standards in the development of AI. He is a member of the International Neural Network Society, a Fellow of the Royal Society of Arts and Commerce, a Chartered Engineer, and a Member of the British Computer Society.

Panel Members

Professor Larry Medsker

University of Vermont, USA, and Co-Editor-in-Chief, AI and Ethics

Short Bio: Larry Medsker is past Chair of the ACM US Technology Policy Committee and Research Professor of Physics at George Washington University (GWU) and at the University of Vermont. He is also Research Professor in the Human-Technology Collaboration Lab at GWU, and Founding Director of the university’s Master’s Program in Data Science. Medsker serves as Co-Editor-in-Chief of AI and Ethics, Associate Editor of Neural Computing and Applications, and Policy Officer for ACM’s Special Interest Group on Artificial Intelligence (SIGAI). He is the author of four books and over 100 publications on neural networks, and the editor of the forthcoming AI and Ethics Handbook, to be published by Springer Nature in 2026.

Jack Fisher

NASSTAR, UK

Short Bio: Jack is a Solutions Architect at NASSTAR, working with organisations across the UK and, as a Microsoft partner, implementing MS Copilot and supporting organisations with their adoption and roll-out of Generative AI technology.

Romit Choudhury

Questa-AI, Luxembourg

Short Bio: Romit is a serial entrepreneur in Health-Tech and AI Agents and Entrepreneur-in-Residence at the University of Luxembourg for Parallel Computing. He is part of the senior team at Questa-AI, which is developing technology for the anonymisation of data in AI applications.

Professor David Leslie

Director of Ethics and Responsible Innovation Research at The Alan Turing Institute, UK
Professor of Ethics, Technology and Society, Queen Mary University of London

Short Bio: TBA

SYNOPSIS:

This interactive workshop explores the profound ethical dilemmas emerging from information security challenges in Generative AI adoption. As GenAI tools become ubiquitous in research, education, and enterprise, we face urgent questions about responsibility, trust, and the moral obligations of institutions and individuals deploying these technologies.

Professor John MacIntyre will give a presentation examining critical ethical dimensions underlying information security in GenAI:

Consent and Transparency: When users unknowingly expose sensitive data to AI systems, who bears ethical responsibility? What obligations do institutions have to inform students, staff, and clients about data risks in AI tools they recommend or require?

Equity and Access: Do robust security measures create a two-tier system where only well-resourced organisations can safely use AI, while others face impossible choices between innovation and security? What are our ethical duties to those who cannot afford secure AI infrastructure?

Trust and Accountability: When AI systems leak confidential research, student data, or proprietary information, where does moral culpability lie—with users, institutions, or AI providers? How do we rebuild trust after breaches?

Professional Ethics: Are professionals who use insecure GenAI tools violating duties of confidentiality to clients, patients, or students? What ethical frameworks should guide AI use in sensitive contexts?

Surveillance and Control: Does institutional monitoring of AI usage to ensure security create new ethical problems around academic freedom, privacy, and surveillance?

Power and Vulnerability: How do security risks disproportionately affect vulnerable populations whose data appears in training sets without consent?

A distinguished panel of experts will debate these ethical tensions through real-world cases. The workshop concludes with audience Q&A, encouraging vigorous discussion about navigating the moral complexities of securing GenAI in an increasingly AI-dependent world.
