October is cybersecurity month – and it’s going global

Appdrawn Team | Published 25th October 2023
Cybersecurity events are taking place across the globe this month. The agendas may differ, but one key theme unites them: the need for a global approach to tackling cybersecurity. We look at what these events promise and how AI is being wielded as a weapon.

Could you outsmart a cybercriminal? The European Union Agency for Cybersecurity (ENISA) thinks you can. European Cybersecurity Month is a joint initiative of ENISA, the European Commission and EU Member States, and the theme for this year is ‘Be smarter than a hacker.’ The month involves a range of webinars, training sessions, workshops and other in-person and online events, with the aim of ‘promoting digital security and cyber hygiene.’ Via these events – as well as some slightly bizarre videos on its website! – it’s hoped that more people will get wise(r) to the tactics and tools used by cybercriminals.

A global army

This new army of smart IT users will not be confined to the continent. October is also Cybersecurity Awareness Month in the US, a partnership between the Cybersecurity and Infrastructure Security Agency (CISA) and the National Cybersecurity Alliance, along with government and the private sector. The theme, ‘Secure Our World’, may sound generic, but it has similarities with its European counterpart in advocating practical actions and behaviours. These include using strong passwords and a password manager, turning on multi-factor authentication, recognising and reporting phishing, and updating software. ENISA outlines similar steps individuals can take, such as verifying links in emails, being alert to ‘too good to be true’ offers, and backing up important information to reduce the impact of a hack.
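One of those checks – verifying that a link’s visible text matches where it actually points – can even be automated. The sketch below is a simplified illustration only (the function names are our own, and real mail filters do far more), showing the basic idea of comparing the domain a link claims to be with the domain its href really targets:

```python
from urllib.parse import urlparse


def shown_domain(text: str) -> str:
    """Pull a hostname out of a link's visible text, if it looks like a URL or domain."""
    text = text.strip()
    if " " in text or "." not in text:
        return ""  # plain words like "Click here": nothing to compare
    return urlparse(text if "//" in text else "//" + text).hostname or ""


def link_looks_suspicious(visible_text: str, href: str) -> bool:
    """Flag links whose visible text names one domain but whose href
    actually points somewhere else, a common phishing tell."""
    actual = urlparse(href).hostname or ""
    shown = shown_domain(visible_text)
    if not shown or not actual:
        return False
    # Treat "www.mybank.com" and "mybank.com" as the same site.
    return not (actual == shown
                or actual.endswith("." + shown)
                or shown.endswith("." + actual))


print(link_looks_suspicious("mybank.com", "https://www.mybank.com/login"))   # False: domains match
print(link_looks_suspicious("mybank.com", "http://mybank.secure-login.ru"))  # True: href goes elsewhere
```

The same comparison is what a careful reader does by hovering over a link before clicking – the habit both CISA and ENISA are trying to instil.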

These may seem like relatively weak weapons against cybercriminals, but multiply these behaviours by hundreds, thousands and millions of users, and we’re well on the way to building an effective barrier against crime. This attitude is shared by both initiatives, demonstrated by the US’s ‘Secure Our World’ theme and the EU’s slogan – running since the event’s inception in 2012 – ‘Cybersecurity is a shared responsibility.’

A third event taking place this month further highlights the need for an international approach to cybercrime. European standards organisation ETSI is holding its flagship Security Conference from 16th to 19th October, with this year’s summit promising a focus on ‘Security Research and Global Security Standards in action.’

Where in the world are victims and hackers?

Geographical borders pose no problem to hackers, who have easy access to a worldwide base of victims. The origin of criminal activity is not always easy to identify; however, in 2021 Lindy Cameron, CEO of the National Cyber Security Centre (NCSC), said that “cybercriminals based in Russia and neighbouring countries are responsible for most of the devastating ransomware attacks against UK targets.” This type of attack, she said, “presents the most immediate danger” of all cybersecurity threats. As for victims, a report last year by the FBI found that the US far outranked other countries in its number of cyberattack victims, followed by the UK, Canada, India, Australia and France.

While cybercriminals can exploit the lack of physical borders in the digital environment, their victims are not so lucky. Many businesses – which, like their criminal counterparts, also operate globally – can be constrained by having to navigate and adhere to the various guidelines and regulations of the different states in which they do business.

Standards without borders

Drawing up and adhering to a globally recognised set of standards can therefore play a key role in mitigating cybercrime. ETSI was founded in 1988 to support European regulations and legislation, but now has a ‘global perspective’, with its standards in use by countries across the world. The group recognised the importance of this approach early on; in 1990, for instance, it launched its Global Standards Collaboration to enhance standards cooperation on an international scale. Considering the cybercrime victim/perpetrator statistics cited earlier, it’s also notable that another of ETSI’s aims – via its ‘Seconded European Standardization Expert in China’ project, created in 2006 – is to increase cooperation between China and Europe on standardisation.

A global, joined-up approach to cybercrime – both in terms of encouraging individual behaviours and international standards – may be a theme for October, but it is one long recognised in the cybersecurity ecosystem. Something that’s not quite so established is a toolbox and approach for managing the latest tactics employed by hackers.

Artificial intelligence, real risk

What are these tactics? Again, we need only look at – yet another! – IT security event on the horizon to find the answer. The UK government will host its first ever AI Safety Summit in November, focusing on ‘risks created or significantly exacerbated by the most powerful systems.’ The summit will consider two categories of risk. First is ‘misuse’, whereby ‘a bad actor is aided by new AI capabilities in biological or cyber-attacks, development of dangerous technologies, or critical system interference. Unchecked, this could create significant harm, including the loss of life.’ The second is ‘Loss of control risks that could emerge from advanced systems that we would seek to be aligned with our values and intentions.’

Significant harm? Loss of control? Loss of life!? We’ve heard a lot about the risk of AI upending the employment ecosystem by replacing tasks traditionally done by humans. We’ve also seen legal proceedings relating to AI and copyright in the arts world, as well as strikes by those working in the field, which also focussed on the risks posed by artificial intelligence. Loss of life at the (purely metaphorical) hands of AI, however, remains a fear rather than a reality. But for how long?

It can be argued that, without robust regulation, there’s a danger that AI employed in fields like defence and healthcare could be hacked, malfunction, or simply achieve its aim without the kind of ethical checks a human would instinctively apply. Earlier this year, for example, a US Air Force colonel described a simulated scenario in which an AI-powered drone programmed to destroy enemy air defence systems ended up ‘killing’ its operator. “The system started realising that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat,” the colonel said. As a result, “It killed the operator because that person was keeping it from accomplishing its objective.”

In healthcare, meanwhile, despite the benefits of AI for tasks like diagnostics and developing new treatments, there is also significant potential for harm. A report in the journal BMJ Global Health, for instance, outlines three potential dangers to public health that may arise from the increased use, and lack of regulation, of AI. The harm to human health – whether immediate or as a result of its impacts filtering through to communities – may come about via ‘the control and manipulation of people, use of lethal autonomous weapons and the effects on work and employment.’

The big problem of large language models

These threats may seem a world away from those we experience at our desks, such as the phishing emails CISA and ENISA warn of. However, hackers are already deploying AI in this space in an attempt to avoid detection. In March, Europol released a report on how large language models, such as ChatGPT, can be used to generate the realistic, accurate text needed for convincing phishing messages. A writing style inconsistent with the purported sender of an email, along with grammatical and spelling errors, has traditionally been a red flag alerting us to potential scams. Armed with a tool like ChatGPT, however, cybercriminals can easily avoid these pitfalls.

Cybercriminals will always come up with new tools and approaches to exploit IT vulnerabilities or take advantage of human users. With AI wielded as a weapon, it’s likely that even the most digitally savvy among us will fall victim to a phishing scam or data breach. As criminals evolve their tactics, so must we: that means investing in security software, educating employees and peers, and keeping abreast of developments in the industry. October’s security events highlight the need for a joined-up, global approach – behavioural change and awareness on a mass scale, as well as shared knowledge, resources and guiding principles.

Appdrawn Team | Updated 25th October 2023

Follow us on social media for more tech brain dumps.