DigiCert analyzes the impact of Artificial Intelligence on digital security

According to DigiCert, artificial intelligence (AI) tools are playing an ever larger role in both everyday life and the workplace

With a market size estimated to reach $102 billion by 2032, it is no secret that artificial intelligence (AI) is sweeping across every industry. But AI runs on data, and where that data comes from, how it is processed, and what results from that processing will all require identity and security. Understandably, many people are concerned about the security of such data: a 2023 survey found that 81% of respondents were concerned about the security risks associated with ChatGPT and generative AI, while only 7% were optimistic that AI tools would improve Internet security. Strict cybersecurity measures will therefore be even more critical as artificial intelligence technologies spread.

“There are countless opportunities to apply AI in cybersecurity to improve threat detection, prevention and incident response. Therefore, companies must understand the opportunities and weaknesses of AI in cybersecurity to anticipate the next threats,” said Avesta Hojjati – Vice President of Engineering and Head of R&D at DigiCert.

Using AI

On the bright side, AI can help transform cybersecurity with more effective, accurate and rapid responses. Some of the ways AI can be applied to cybersecurity include:

  • Pattern recognition to reduce false positives: AI excels at pattern recognition, so it can detect anomalies, analyze behavior, and flag threats in real time (see the sketch after this list). A 2022 study by the Ponemon Institute found that organizations using AI-powered intrusion detection systems saw a 43% reduction in false positives, allowing security teams to focus on genuine threats, and AI-based email security solutions have been shown to reduce false positives by up to 70%.
  • Enable scale by enhancing human capabilities: AI can augment human capabilities, shorten response times, and deliver scalability; the only limit to that scale is the availability of data. AI chatbots can also serve as virtual assistants that provide security support and relieve some of the burden on human agents.
  • Accelerate incident response and recovery: AI can automate routine actions and tasks based on previous training and multipoint data collection, delivering faster response times and narrowing detection gaps. It can also automate reporting, answer natural language queries, simplify security systems, and recommend improvements to future cybersecurity strategies.
  • Sandbox phishing training: Generative AI can create realistic phishing scenarios for hands-on cybersecurity training, fostering a culture of vigilance among employees and preparing them for real-world threats.
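To make the pattern-recognition point concrete, here is a minimal sketch of anomaly-based threat detection using an Isolation Forest from scikit-learn. The feature names, numbers, and thresholds are invented for illustration; they are not drawn from the studies or products mentioned above.

```python
# Illustrative anomaly detection over session telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Simulated baseline sessions: [requests/min, failed logins, KB sent out]
normal = rng.normal(loc=[60.0, 1.0, 500.0], scale=[10.0, 1.0, 100.0],
                    size=(1000, 3))

# Two suspicious sessions: bursty traffic and many failed logins.
suspicious = np.array([[300.0, 25.0, 4000.0], [250.0, 30.0, 3500.0]])

# Train only on normal behavior; contamination is the expected outlier rate.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

for session in suspicious:
    score = model.score_samples([session])[0]   # lower = more anomalous
    verdict = "ALERT" if model.predict([session])[0] == -1 else "ok"
    print(f"session={session.tolist()} score={score:.3f} -> {verdict}")
```

Because the model learns what "normal" looks like instead of matching fixed signatures, rare but legitimate behavior scores close to the baseline, which is the mechanism behind the false-positive reductions cited above.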

Red Alert

Cybercriminals are already using AI in their attacks. Here are three examples:

  1. Automated malware campaigns with AI: Cybercriminals can employ generative AI to create sophisticated malware that adjusts its behavior to evade detection. These “smart” malware strains are harder to predict and contain, increasing the risk of widespread system outages and massive data breaches.
  2. Advanced phishing attacks: Generative AI can learn and mimic a user’s writing style and personal details, making phishing attacks considerably more persuasive. Personalized phishing emails that appear to come from trusted contacts or reputable institutions can trick people into disclosing confidential information, posing a substantial threat to personal and corporate cybersecurity.
  3. Realistic deepfakes: Thanks to generative AI, malicious actors can now create deepfakes: highly convincing forged images, audio, and video. Deepfakes pose a significant risk of disinformation campaigns, fraud, and phishing. Imagine a remarkably realistic video of a chief executive announcing bankruptcy, or a fabricated audio recording of a world leader declaring war. These scenarios are no longer confined to science fiction and have the potential to cause significant disruption.

In addition, AI requires large amounts of data, and companies must limit exactly what is shared with it, since every AI service is another third party where data could be exposed. Even ChatGPT itself suffered a data breach: a vulnerability in the Redis library allowed some users to see the chat history of others. OpenAI fixed the problem quickly, but the incident highlighted the potential risks for chatbots and their users. Some companies have banned the use of ChatGPT outright to protect sensitive data, while others are implementing AI policies that limit the data that can be shared with AI tools.

How to build digital trust in AI with PKI?

Technologies such as Public Key Infrastructure (PKI) can play a key role in protecting against emerging AI-related threats, such as deepfakes, and in maintaining the integrity of digital communications.

For example, a consortium of industry leaders including Adobe, Microsoft, and DigiCert, known as the Coalition for Content Provenance and Authenticity (C2PA), has introduced an open standard designed to address the challenge of verifying and confirming the legitimacy of digital files.

C2PA leverages PKI to generate a tamper-evident trail that lets users distinguish genuine media from counterfeits. The specification gives users the ability to determine the source, creator, creation date, location, and any modifications of a digital file. The standard’s main objective is to promote transparency and trust in digital media files, which matters all the more as AI-generated content becomes increasingly hard to tell from reality.
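As a rough illustration of the PKI primitive underneath a provenance scheme like this, the sketch below signs a file’s bytes with a private key and verifies them with the matching public key, using Python’s cryptography package. This is not the C2PA manifest format itself, which binds signatures to identities via X.509 certificates and embeds rich provenance metadata; it only shows why a signed file becomes tamper-evident.

```python
# Minimal PKI sketch: sign media bytes, then verify them. Key handling is
# simplified; a real deployment would use CA-issued certificates.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Creator's key pair (in practice the public key is distributed inside
# an X.509 certificate that names the creator).
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

media = b"...bytes of an image, audio clip, or video..."
signature = private_key.sign(media, ec.ECDSA(hashes.SHA256()))

# A consumer verifies the file against the creator's public key.
try:
    public_key.verify(signature, media, ec.ECDSA(hashes.SHA256()))
    print("verified: file matches what the creator signed")
except InvalidSignature:
    print("rejected: file altered or signature invalid")

# Changing a single byte makes verification fail (tamper evidence).
tampered = b"X" + media[1:]
try:
    public_key.verify(signature, tampered, ec.ECDSA(hashes.SHA256()))
except InvalidSignature:
    print("tampered copy rejected")
```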

“In short, AI will open up many opportunities in cybersecurity, and we have barely scratched the surface of what it can do. AI will be used as both an offensive and a defensive tool, to provoke cyberattacks as well as to prevent them. The key is for companies to be aware of the risks and start implementing solutions now, given that AI cannot completely replace humans,” concludes Hojjati.