Introduction
Artificial intelligence (AI) is a topic that’s becoming increasingly hard to avoid, and one we should all be aware of. It’s not a new idea, but rapid technological advancement is bringing it ever further into our work and lives.
Our Ethics Committee and Youth Advisory Panel have come together to help cyber security professionals understand how to act ethically with AI. Below, you’ll find a primer on the current state of AI in cyber security, ethics, regulation and industry practice, along with practical advice for cyber security professionals and leaders navigating this new territory.
AI & Cyber Security
AI is not a new phenomenon in the world of computing; however, it has made significant and publicised advancements in recent years, with generative AI becoming accessible to the mainstream due to platforms such as ChatGPT, Google’s Gemini (formerly Bard) and others. These advancements come with a whole host of use cases across all industries including cyber security.
AI is a catch-all term for a variety of areas within the domain, including automation, Machine Learning (ML), cognitive computing and deep learning. The differences in maturity between these can be stark. Automation is usually a repetitive task scheduled to run at specified times of day using a piece of code or an algorithm. ML is often leveraged when looking at baselines and anomalous activity: it learns from training data sets and is then applied to live data to identify trends and patterns. The current ‘new’ advancement, Generative AI, moves away from analysing data and towards more human-like thinking with content generation. Most Generative AI models are trained on extensive datasets and learn continually from interactions with users.
Automation has been leveraged for a while to help combat the very common occurrence of burnout among cyber security professionals; according to Forrester, 66% have experienced significant stress. One use case that combines automation with AI is implementing runbooks within Security Operations Centres (SOCs). SOCs often involve highly repetitive, time-critical activities that add strain onto cyber professionals. Runbooks help to alleviate some of this by codifying the criteria for a specific incident and only requiring human input where necessary, rather than throughout the end-to-end process. This has enabled some incidents in SOCs to be remediated with no human intervention at all.
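To make this concrete, the sketch below shows how a simple runbook might be expressed in code: automate the well-understood steps and only hand over to a human when the criteria for automatic remediation are not met. It is a minimal illustration only; the alert fields, thresholds and helper functions are hypothetical and not taken from any particular SOC platform.

```python
# Minimal runbook sketch for a phishing-style incident. All field names and
# helper functions here are hypothetical, for illustration only.

def contain_host(host: str) -> None:
    print(f"[auto] isolating host {host} from the network")

def escalate_to_analyst(alert: dict, reason: str) -> None:
    print(f"[manual] escalating alert {alert['id']} to an analyst: {reason}")

def phishing_runbook(alert: dict) -> None:
    # Step 1: enrich the alert with context gathered automatically.
    known_bad = alert["sender_domain"] in {"malicious.example", "phish.example"}

    # Step 2: apply the runbook criteria.
    if known_bad and alert["severity"] == "low" and not alert["credentials_entered"]:
        # Step 3a: low-risk, well-understood case - remediate without human input.
        contain_host(alert["host"])
        print(f"[auto] alert {alert['id']} closed automatically")
    else:
        # Step 3b: anything outside the criteria still needs human judgement.
        escalate_to_analyst(alert, "criteria for automatic remediation not met")

phishing_runbook({
    "id": "INC-1024",
    "sender_domain": "phish.example",
    "severity": "low",
    "credentials_entered": False,
    "host": "LAPTOP-42",
})
```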
Often, alongside automation and runbooks, there is a need to efficiently identify anomalous activity that deviates from the expected behaviour of a corporate digital environment, referred to as a baseline. Machine Learning (ML) is a good fit for this requirement, given the need to quickly spot unusual actions on a corporate network. ML can be used to model the usual day-to-day activity on the network and then alert humans to anything that falls outside it.
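As an illustration of this baseline-and-alert pattern, the sketch below trains an anomaly detector on synthetic "normal" activity and then flags live events that fall outside it. It assumes scikit-learn, and the two features (data transferred and logins per hour) are invented for the example; a real deployment would use much richer telemetry.

```python
# Minimal sketch of ML-based anomaly detection against a corporate baseline,
# using scikit-learn's IsolationForest. The feature set is invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Training data: a baseline of "normal" activity collected over time.
baseline = np.column_stack([
    rng.normal(500, 50, 1000),   # MB transferred per hour
    rng.normal(3, 1, 1000),      # logins per hour
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# Live data: mostly normal, plus one clearly unusual burst of activity.
live = np.array([
    [510, 3],     # looks like the baseline
    [495, 2],     # looks like the baseline
    [5000, 40],   # large transfer and many logins: outside the baseline
])

# predict() returns -1 for anomalies and 1 for inliers.
for event, label in zip(live, detector.predict(live)):
    if label == -1:
        print(f"ALERT: unusual activity {event} - escalate to an analyst")
```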
Moving away from proactive cyber defence, Generative AI is also being used within the adversarial arena of cyberspace. With its ability to create new content, Generative AI can be used to build malicious tooling that adversaries use to improve their capabilities. In early 2023, it was noted that a user could repeatedly prompt ChatGPT to generate pieces of code which, in turn, can make malware extremely hard to detect and defend against.
Ethical Issues in AI
AI (and the ethical considerations around it) is not a new concern. AI as a field of study has been researched since the 1950s, rising and falling in popularity (and funding). Diligent research properly considers ethical issues, so what are some of the key ethical issues surrounding AI?
A great concern relating to AI is privacy. Since the days of the early internet there have been concerns over who is collecting data and why. Whether for the optimisation of services or more targeted advertising, these questions have led to data regulations like the UK General Data Protection Regulation (GDPR). These same concerns exist surrounding AI, as the demands of AI have raised new ethical questions about data collection and usage.
AI uses the information provided by users and its own dataset to inform its decision making. When one starts a conversation with a chatbot, all of the information is stored: everything used in prompts and the conversation history is recorded, including any file uploads. This information is often linked to one’s email address, and details of device data, usage data and log data are stored alongside it. Furthermore, the integration of AI into everyday devices, phones, cars and media streaming services intensifies concerns about constant digital surveillance.
So there are privacy concerns about the storage of information given to AI; however, service users can be considered to consent to this by using the service. What about information where consent is questionable and not explicit? AI systems scraping data from the internet take information posted on one platform and repurpose it as training data. One may not want their social media posts or forum discussions to be used to train an AI. What about confidential information posted in data breaches? Reputable AI developers are unlikely to use this due to legality concerns, but a malicious actor may use any information available to them.
Issues are also raised regarding intellectual property rights and transparency. If it is not transparent how data is being used, organisations are held less accountable for possible unethical actions, undermining user autonomy.
Another concern is bias. A well-known example of this is Microsoft’s Tay chatbot, which went rogue after 24 hours of access to Twitter, generating racist and misogynistic tweets. That was very obviously technology gone wrong, but what happens when it’s not so obvious? In 2018, Amazon scrapped an AI recruiting tool after an investigation found it inadvertently favoured male candidates. The ‘black box’ nature of AI can obscure the ‘thought’ process that occurs: general users cannot see the dataset used or the logic behind it, and so cannot determine whether it is trustworthy and fair.
Generative AI tools, when asked to produce images, can create stereotypical examples relating to race and gender. Additionally, medical systems using AI have been shown to give lower accuracy results for black patients than for white patients. This is largely due to the underrepresentation of groups in data collection, leading to inaccurate or inconclusive responses from the AI. Other types of bias can arise in the algorithms themselves when they are not designed to account for fairness.
Finally, how do we balance the need for unbiased data against privacy concerns? These ethical topics are all considerations to weigh in the context of regulation and the actions of the large companies developing these systems.
Regulation
Modern AI is a nascent opportunity, and regulation continues to develop at pace. There are now signs of convergence on AI regulation globally, in key focus areas that cover:
- Accountability & Oversight
- Transparency & Interpretability
- Data Privacy
- Bias & Discrimination
- Security & Integrity
Here we look a little closer at how the UK, EU and US have outlined their respective approaches to AI regulation.
UK AI Regulation
The UK intends to pursue its ambition to be a global leader in AI development. It has published its Pro-Innovation AI Framework, based on the National AI Strategy. The framework is underpinned by 5 guiding principles for responsible AI development and usage:
- Safety, security and robustness
- Appropriate transparency and explainability
- Fairness
- Accountability and governance
- Contestability and redress
The UK intends to avoid restrictive AI regulation, in favour of addressing uncertainty and gaps in existing regulatory remits with a proportionate approach of assurance techniques, guidance and technical standards. Entities such as the UK AI Standards Hub will help maintain and encourage responsible AI innovation.
EU AI Regulation
The EU has approved the EU AI Act that defines a risk-based classification system to apply regulatory requirements for AI systems based on application and potential risk to users:
- Unacceptable Risk: AI systems that could cause significant harm, such as those influencing human behaviour adversely, are prohibited.
- High Risk: AI systems that impact areas like education and law enforcement must abide by strict regulations that cover risk management and data governance, preventing biases and discrimination.
- Limited Risk: AI systems with low harm potential, such as chatbots, face transparency requirements: individuals must be made aware that they are interacting with AI, and consent to that interaction.
- Minimal Risk: Low-risk systems, like spam filters and AI-driven games, are largely exempt from regulation; however, they are still encouraged to follow ethical guidelines.
The enforcement of this Act will be managed by national authorities, overseen by the European AI Office. Penalties of up to €35m or 7% of a company’s global revenue can be levied for breaches.
US AI Regulation
The US has published the “Blueprint for an AI Bill of Rights”, which sets out key principles for AI systems:
- Safe and Effective Systems: This highlights developer testing and risk control, achieved through information governance, identification, monitoring and testing, which are the most critical components of building any safe and effective system.
- Algorithmic Discrimination Protection: AI should avoid discrimination, respect civil rights, operate on fair and representative data, and be checked continually for bias.
- Data Privacy: Users are entitled to information on how their data is used in AI, and that use should depend on their consent.
- Notice and Explanation: Users need to be aware of when AI systems are used and of their implications, which requires transparency about the systems involved.
- Human Alternatives and Fallback: Users should be able to reject contentious AI determinations and be provided with a human alternative.
Industry Best Practice
Some of the biggest concerns surrounding AI ethics are privacy issues, data bias and discrimination, and transparency. Some large companies are taking measures to address these concerns.
Microsoft Responsible AI Standards
Microsoft's initiative, known as Microsoft Responsible AI Standards, defines six key principles for responsible AI development:
- Accountability: This principle includes reviewing AI systems to identify those with significant adverse impacts on individuals, organisations, and society. It ensures systems are fit for purpose, comply with data governance policies, and support informed human oversight. These objectives are achieved by adhering to guidelines for human-AI interactions during system design.
- Transparency: This involves three sub-goals: ensuring system intelligibility for decision-making, clarifying AI capabilities and limitations for informed stakeholder choices, and disclosing AI interactions to inform users when they are engaging with AI.
- Fairness: Fairness ensures AI systems provide equal quality service to all demographic groups, including marginalised ones. It mandates that resource allocation in essential domains minimises disparities in outcomes and reduces potential stereotyping or erasing of identified groups.
- Reliability and Safety: This involves creating and adhering to guidelines reviewed by safety experts and relevant literature. It includes interviewing customers to understand operational factors and designing systems to minimise failure remediation time through failure mode and effects analysis.
- Privacy and Security: This is ensured through Microsoft's existing compliance measures.
- Inclusiveness: This is achieved through Microsoft’s compliance initiatives.
Rolls-Royce Aletheia Framework
Rolls-Royce has developed the Aletheia Framework, a comprehensive guide for ethical AI development and deployment. It outlines 32 principles divided into three sections: governance, accuracy/trust, and social impact.
- Governance: This emphasises the need to review and analyse algorithms for biases, ensuring algorithm origins are documented. Training data must be high quality, free of unethical biases, and verified with proof.
- Accuracy/Trust: This mandates continuous monitoring, including comparing actual results with likely outcomes. A continuous automated monitor can be built into the system to test the AI against existing data with approved results (a minimal sketch of this idea follows the list below).
- Social Impact: This requires businesses to disclose the use of personal data, specify the reasons, and obtain consent. System architecture must protect data from unauthorised access and allow updates in line with user rights.
- Ethical Integration: This addresses employee-AI interactions, detailing potential positive and negative impacts to ensure responsible management.
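One way to picture the continuous automated monitor described under accuracy/trust is as a scheduled regression check: periodically re-run the model on a fixed set of reference inputs whose approved outputs are already known, and raise an alert if agreement drops. The sketch below is a minimal illustration under that assumption; the predict() interface, the reference data and the 95% threshold are all invented for the example rather than taken from the Aletheia Framework itself.

```python
# Minimal sketch of a continuous automated monitor: re-run the model on
# reference inputs with approved results and alert if agreement drops below
# an acceptance threshold. The interface and threshold are hypothetical.
from typing import Callable, Sequence

def monitor_model(predict: Callable[[Sequence[float]], int],
                  reference_inputs: list,
                  approved_outputs: list,
                  threshold: float = 0.95) -> bool:
    matches = sum(
        1 for x, approved in zip(reference_inputs, approved_outputs)
        if predict(x) == approved
    )
    agreement = matches / len(approved_outputs)
    if agreement < threshold:
        print(f"ALERT: agreement {agreement:.0%} below {threshold:.0%}; "
              "pause the system and trigger a human review")
        return False
    print(f"OK: agreement {agreement:.0%} against approved results")
    return True

# Example run with a toy model that thresholds a single score.
toy_model = lambda x: int(x[0] > 0.5)
monitor_model(toy_model,
              reference_inputs=[[0.2], [0.9], [0.7], [0.1]],
              approved_outputs=[0, 1, 1, 0])
```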
Advice
We asked our Ethics Committee four questions, and they’ve provided both actionable advice and general principles.
How can cyber security professionals encourage the adoption of strong ethical frameworks in SMEs?
We know large technology firms often have robust frameworks, and the staff capacity to work within them. But how can professionals in smaller firms, especially those with stretched budgets, encourage good behaviour?
Our Ethics Committee members highlighted that, while AI is new, ethics is not.
"Cyber security professionals should be able to implement principles and skills from throughout their career to encourage good behaviour."
Dr Ayman El Hajjar advises that, “It is well known that the use of AI by cyber security professionals brings additional risks that are not too far from the security risks in a conventional system, such as vulnerabilities in systems and having security as an afterthought while developing such a system.
"Security professionals should be encouraged to focus their work on developing AI systems that can overcome ethical concerns by ensuring that those systems use trusted models and trusted external entities..."
...such as an external API, a model provider or a supply chain, in the same way that we would conduct a security assessment to identify an external entity’s security posture in a conventional environment.”
Joe Fogarty, currently studying for a Master’s Degree in Professional Ethics, offered a perspective based on the ethics of public protection professions, highlighting five principles which also apply to AI: "Its use must be lawful (which includes compliance with data protection requirements); necessary to achieve its valid purpose; proportionate to the issue at hand (for public protection this means proportionate to the risk being faced by the public); likely to be effective (and reviewed post-action); and likely to be efficient (and reviewed post-action).” GCHQ, which houses the National Technical Authority on cyber security in the form of the National Cyber Security Centre, has written about the ethics of AI in "Pioneering a New National Security: The Ethics of Artificial Intelligence", and these principles are reflected in that discussion.
Manto Lourantaki reminds cyber security professionals not to blindly trust AI, urging them to...
“...engage in thoughtful ethical reflection, helping to cultivate and maintain their ethical cognitive skills, rather than simply accepting actions and decisions made by the AI technologies.”
She also advises that “complexity in algorithm design and data collection methods should be minimised to facilitate understanding, avoid the 'Black Box Problem,' and support the understanding of data selection and correlation”, which is especially good advice in a fast-moving SME environment where testing periods might be shorter.
Manto also highlights the value of knowledge at all levels of the organisation, an area where a professional willing to teach peers can have real impact. “Education, Training, and Awareness:
"Cybersecurity professionals should engage in and promote continuous training and awareness of how AI technologies are used, ensuring they are prepared to use AI based on their roles, whether they are directly involved in design and implementation or acting as end users."
"They should understand how AI works, the ethical issues involved, and the processes in place to evaluate its appropriateness."
"It is imperative to be prepared and to prepare their workforce to use AI responsibly, tailored to their roles.”
How can cyber security leaders create a culture where people are willing to challenge the ethics of AI usage?
Cyber security leaders define the culture of their teams and may be held accountable when AI goes wrong. As a developing field, there isn’t yet an intuitive understanding of how to use AI, and well-meaning people may step over ethical lines. How do leaders ensure those mistakes are quickly noticed and corrected, without impeding innovation?
Policies are an important part of the equation and one cyber leaders can often control. Elaine Luck gave us some insight into how CREST are addressing this emergent question, caveating that CREST are “Conducting further research with a view to embracing the technology”. She stated “We ask that line managers review AI output prior to anything being used. We also state that AI output should not be reviewed by an AI tool... Our policy provides clear guidance for staff but notes that failure to comply with that may result in action. CREST has licences for specific AI tools which protect the content that it generates with the aim of avoiding it being provided for similar services to third parties. This is particularly important and our policy reminds users to a) only use the licenced product and b) be very mindful of not quoting any personal, CREST confidential or regulatory sensitive data. The policy suggests questions staff should ask themselves before asking for AI input.”
Manto recommends:
“Leading by example and creating the appropriate circumstances, culture, and environment where everyone has the appropriate knowledge and skills, and feels safe to use them to challenge the ethics of AI usage.”
She also suggests “establishing ways, e.g. through job role design and good and frequent internal communications, to influence and shape an ethical climate, encourage, and possibly reward people to challenge the ethics of AI usage.”
Manto also highlights that who you have on your team is important:
“Given concerns that AI tools may mirror the biases and limitations of their creators, leaders should ensure diversity among developers to prevent monoculturalism and bias in AI algorithms.”
How can cyber security professionals communicate ethical risks from AI without fearmongering?
Cyber security can be scary, and even more so when we talk about AI, with high profile failures in the news and terrifying fiction in the back of our minds. How can cyber security professionals appropriately emphasise ethics without feeding into our worst HAL 9000 fears?
Manto advises that professionals “use simple, clear language so that even a child can understand the ethical risks of AI. Minimise technical jargon, but ensure the information is appropriate and accurate. Be transparent and honest. We have a moral obligation to present matters truthfully and not conceal key information. It is not about fearmongering; we need to be upfront about the uncertainties and limits of our current knowledge on AI. A good lesson learned from the recent pandemic was the importance of providing regular updates as new information became available. This approach helped build trust and kept people informed, avoiding unnecessary panic.”
How can we as cyber security professionals effect change when we see a colleague acting unethically with AI?
Once again, we are reminded that, while the technology is new, ethics are not, and most workplaces will have processes that can be used even with emerging technology. Manto advises “As a cybersecurity professional, you should be able to recognise when a colleague is acting unethically with AI and have the skills to appropriately assess each situation and understand its impact. Ethical behaviour with AI should not be treated as a separate issue requiring a distinct process, as additional processes can make it bureaucratically heavy, adding further complexity which may deter parties from acting toward positive change. When a colleague acts unethically with AI, use the established processes and communication channels available within and outside the working environment (e.g., the Speak Up process) to properly identify and report the behaviour. Engage the appropriate stakeholders who can influence change and identify any process improvements through this process. For example, during the development of any respective AI legal and ethical frameworks it is imperative for a cyber security professional, if circumstances allow, to participate in their creation, voice concerns, and offer feedback. Anticipate resistance to the changes you advocate for, but implement appropriate strategies to encourage a positive and initiative-taking approach toward change.
"It is essential not to ignore or overlook unethical behaviour. Behaviours you bypass are behaviours you accept.”
Contributors
Josh Callicott-Oelmann is a member of our Youth Advisory Panel and a Cyber Threat Intelligence Team Lead at NCC Group. He wrote the “AI & Cyber Security” section.
Rania Hindy is a member of our Youth Advisory Panel, a student of Space Science & Robotics, and works at the National Cyber Resilience Centre Group delivering security awareness training and technical services. She wrote the “Ethical Issues in AI” section.
Joseph Roberts is a member of our Youth Advisory Panel and a Cyber Compliance Support Officer at the UK Health Security Agency. He wrote the “Regulation” section.
Hannah Ogilvie is a member of our Youth Advisory Panel and a Mathematics student at the University of Glasgow. She wrote the “Industry Best Practice” section.
Manto Lourantaki is the Chair of our Ethics Committee.
Dr Ayman El Hajjar is a member of our Ethics Committee, a senior lecturer and the head of the Cyber Security Research Group at the University of Westminster.
Elaine Luck is a member of our Ethics Committee and Head of Governance & Legal at CREST.
Joe Fogarty is a member of our Ethics Committee, a Chartered Security Professional and a Master’s student in Professional Ethics, focusing on public protection professions.
This is the first in a series, where we hope to keep providing actionable ethics advice to cyber security professionals. If you have any topics that you would like us to cover in the future, or any feedback, please email standards@ukcybersecuritycouncil.org.uk.
Further reading
https://www.forrester.com/blogs/we-need-to-talk-more-about-burnout-in-cybersecurity/
https://blogs.microsoft.com/blog/2016/03/25/learning-tays-introduction/
https://www.bbc.co.uk/news/technology-45809919
https://nihcm.org/publications/artificial-intelligences-racial-bias-in-health-care
https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper
https://www.gov.uk/government/publications/national-ai-strategy/national-ai-strategy-html-version
https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138-FNL-COR01_EN.pdf
https://digital-strategy.ec.europa.eu/en/policies/ai-office
https://www.whitehouse.gov/ostp/ai-bill-of-rights/
https://www.ox.ac.uk/news/2024-02-22-removing-bias-healthcare-ai-tools-0
https://www.ibm.com/impact/ai-ethics
https://www.ibm.com/policy/mitigating-ai-bias/
https://www.microsoft.com/en-gb/ai/principles-and-approach/
https://www.rolls-royce.com/innovation/the-aletheia-framework.aspx
https://umdearborn.edu/news/ais-mysterious-black-box-problem-explained