Per Lagerström
Article   March 27 2023

AI puts focus on the human element in cybersecurity

Using AI, threat actors can scale, automate and make their attacks even more sophisticated. At the same time, more and more cybersecurity efforts also rely on AI. This escalating 'AI war' puts even more focus on human judgment and the ability to stay accountable in a digitalized work environment. Offering affordable awareness training over time becomes one of the best solutions for any organization. These are some of the key findings from the Junglemap webinar 'How does AI impact cybersecurity?'


Patrick Couch, business developer at silo.ai, one of Europe's leading private AI labs, and with more than ten years as AI spokesperson for IBM, is very clear: the AI arms race makes the human firewall even more important. 
- With all the recent breakthroughs, it is becoming almost impossible to tell whether an incoming digital message comes from a human or from an artificial intelligence. That's why we need to train our ability to put things into context and to control our immediate impulses. 

Joakim Hvalby agrees. With a background in cybersecurity training at the Norwegian finance group DNB, and now content lead for Junglemap's flagship courses on information security awareness, he sees a continued need to hold us humans accountable. 
- We can't let our guard down and think that AI will solve the problems for us. With AI in play, this becomes even more important. 

When it comes to the implications for awareness training, Joakim Hvalby sees a continuous need to keep beating the forgetting curve by offering learning “in a steady flow”. 
- AI will increase the threats and attacks. We need to be even more vigilant. 

Ann-Therese Enarsson is CEO of Futurion, a think tank focusing on the future of work, where how AI impacts the labor market is constantly on the radar. She sees AI and applications like ChatGPT as a 'colleague that never sleeps', and as just one example of how AI is fundamentally changing the way we work. 
- We need to meet AI with more HI, human intelligence, she says. It's kind of a paradox, but with more AI in our work environment, we need to focus more on specifically human skills, such as understanding context and knowing your audience. 

AI in cybersecurity has been called a double agent, with the same techniques used both by threat actors and in cybersecurity defenses. When OpenAI's ChatGPT is asked about the cybersecurity impacts of AI, it presents the following list: 

 

Pros of AI in Cybersecurity:  

  1. Automation: AI can automate cybersecurity processes such as threat detection and response, reducing the need for human intervention, and allowing for faster detection and remediation of security incidents. 
  2. Detection: AI can detect potential threats that may go unnoticed by traditional security tools. It can analyze large volumes of data from various sources in real-time, providing more accurate and timely threat detection. 
  3. Prediction: AI can predict potential threats before they happen, allowing organizations to take proactive measures to prevent them. 
  4. Learning: AI can learn and adapt to new threats and attack patterns, allowing for better and more effective threat detection and response. 
  5. Efficiency: AI can improve the efficiency of cybersecurity operations by reducing false positives and minimizing the time and effort required to investigate and remediate security incidents.

 

Cons of AI in Cybersecurity: 

  1. Complexity: The use of AI in cybersecurity can add complexity to security operations, requiring additional training and expertise. 
  2. Dependence: Organizations may become overly dependent on AI for security, potentially neglecting other important security measures such as human oversight and basic cybersecurity hygiene practices. 
  3. False alarms: AI can generate false positives, which can lead to wasted time and resources investigating non-existent threats. 
  4. Vulnerability: AI systems themselves can be vulnerable to cyberattacks, and if compromised, they can become a tool for attackers to carry out their attacks. 
  5. Ethics: The use of AI in cybersecurity raises ethical concerns around privacy, data protection, and human rights. There is a risk that AI may be used to carry out cyberattacks or to violate the privacy and rights of individuals. 
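To make the 'Detection' and 'False alarms' points above a bit more concrete, here is a minimal, hedged sketch of how unsupervised anomaly detection is often used to flag suspicious activity. It is purely illustrative and not something demonstrated in the webinar: the login-event features, the simulated data and the thresholds are assumptions, and scikit-learn's IsolationForest is used only as one common example of such a model.

```python
# Illustrative sketch only: AI-assisted threat detection on login events.
# Features, data and thresholds are invented for this example; a real
# deployment would use the organization's own telemetry and tuning.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" login events: [hour of day, failed attempts, MB downloaded]
normal = np.column_stack([
    rng.normal(13, 3, 500),      # mostly office hours
    rng.poisson(0.2, 500),       # few failed attempts
    rng.normal(50, 15, 500),     # typical data volume
])

# A couple of suspicious events: night-time logins, many failures, large transfers
suspicious = np.array([
    [3, 8, 900],
    [2, 12, 1500],
])

events = np.vstack([normal, suspicious])

# Unsupervised anomaly detection: the model learns what "normal" looks like
# and flags outliers, without needing labelled attack data.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(events)

flags = model.predict(events)            # -1 = anomaly, 1 = normal
scores = model.decision_function(events)

for idx in np.where(flags == -1)[0]:
    # Every flag is only a lead: a human analyst still has to judge the context,
    # which is exactly the "false alarms" caveat in the list above.
    print(f"Event {idx} flagged for review (score {scores[idx]:.3f})")
```

The point of the sketch is the last loop: even a well-tuned model only produces leads, and it is still a human who decides whether a flagged event is an incident or a false alarm.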

Patrick Couch agrees with ChatGPT on these points, but also highlights that the pros can become cons, depending on who the user is. AI being a double-edged sword once again puts the focus on us humans. We can't become too reliant on AI. Instead, we need to rely on the two things that create learning: experience and education. If AI can compose an email from your grandmother containing specific and delicate information about you, maybe the right question to ask is whether your grandmother would write this in an email in the first place. 
- This is a game we can't win. We simply need to embrace new habits, and I think NanoLearning is a very good way to create them. 

Will AI bring inclusion or division?

Futurion publishes a yearly Future Proof Index, and the latest figures show that a growing share of the Swedish workforce doesn't feel attractive on the labor market. 'Sweden needs to step up,' says Ann-Therese Enarsson. She also warns that the AI we have today is mainly built on data from North America and Europe. 
- Three billion people are not on the internet today, she says. With AI playing a bigger role, this can mean that we see an even greater polarization in the near future. 

Both Patrick Couch and Joakim Hvalby agree, and see cybersecurity implications in this. People who feel left out and don't understand tend not to participate. This goes for cybersecurity as well as for politics.  
- If you don't take part in your organization's cybersecurity work, you easily become a security risk, says Patrick Couch. 

Junglemap offers awareness training to both large and small organizations, and Joakim Hvalby also sees a risk for SMEs, and for the whole economy. With more sophisticated AI-driven attacks, keeping up might become both more expensive and more challenging for SMEs. 
- Offering affordable and accessible awareness training for all is something that can contribute to more inclusive and secure organizations.

With AI rapidly being implemented in a wide range of software applications, individual integrity becomes even more important. As long as AI relies on data from user behavior, all organizations need to strike a balance between AI efficiency and human integrity. 
- This balance needs to be in focus, says Joakim Hvalby. Now and going forward. 

 

Watch the webinar 'How does AI impact cybersecurity?' here:
