“Hey AI bot, write me a blog post about the biggest generative AI security trends in 2024, and make it sound human.”
Would you be able to tell if this was written by a human? Be honest.
Okay, okay — it was written by a human (the proof is in the inevitable typo). But one of the risks in generative AI and cybersecurity is that many people can’t tell, and that gives bad actors an opportunity to attack.
As much as AI has eased the burden on some businesses, it has also opened a slew of new security risks and trends, including risks to your network security. It has also prompted some security companies to test, and leverage, AI in their own security tools.
In this post, we’ll take a look at a few of the biggest generative AI security trends, for both adversaries and security professionals, and the impact they might have on small and medium-sized businesses (SMBs) in the future.
When you think about generative AI, it’s likely your mind goes straight to a few platforms in particular: ChatGPT or Microsoft Copilot. However, ChatGPT is only one of many generative AI platforms developed in the last decade or so, and Microsoft Copilot, while useful for generating code, can produce code riddled with security flaws. Beyond these two, other generative AI tools may already be woven into your daily business systems, such as IT ticketing software, helpdesk chatbots, and even your email drafts.
As noted in the Gartner 2024 Hype Cycle for Emerging Technologies: “Generative AI (GenAI) is over the Peak of Inflated Expectations as business focus continues to shift from excitement around foundation models to use cases that drive ROI. This is accelerating autonomous AI. While the current generation of AI models lack agency, AI research labs are rapidly releasing agents which can dynamically interact with their environment to achieve goals, although this development will be a gradual process.”
This gold rush has caused cybersecurity companies to take notice and put generative AI on their respective roadmaps. They are challenged both to stay aware of the new and unique threats to privacy and to implement AI in their security products, because that is what customers now expect.
The idea behind generative AI in cybersecurity is to augment human efforts by rapidly surfacing anomalies and patterns, allowing human security professionals to identify and evaluate threats more quickly.
At the same time, the boom in generative AI has given adversaries new tools to create and quickly scale attacks.
On the defensive side, AI has compelling use cases in security as well. For example, organizations may find AI helpful in creating and modeling scenarios that mimic a variety of cyberthreats, allowing the security team to practice responding to those threats in a controlled environment and be better prepared for real attacks.
While all of this is happening in cybersecurity, generative AI technologies themselves must tighten their own security to prevent malicious use. Techniques like model validation, data integrity checks, and adversarial training are needed to keep people from extracting private information or receiving skewed or corrupted outputs.
Generative AI must be treated as an iterative process: continually refining results, but also continually refining the security protections around both the end users and those who create the information in the first place.
In a realistic scenario, generative AI in cybersecurity might look like a cybersecurity vendor incorporating AI into their product to help monitor threats and generate threat reporting or predictions. Or it may look like building modeling scenarios to test security professionals’ preparedness in a closed environment. At the same time, that same vendor may need to look at which types of data breaches are being traced back to generative AI to find new ways to protect their clients from similar attacks.
For example, vendors are now tasked with more operational technology risks arising from generative AI, which calls for deeper monitoring; that monitoring, in turn, may be made easier through the use of predictive AI. It’s a double-edged sword that keeps security vendors and security professionals on alert.
There are a few security concerns, however, that security vendors and users alike should keep in mind. Generative AI expands a company’s attack surface in several ways.
Generative AI does not come without security risks. In fact, the Cisco 2024 Data Privacy Benchmark Study found that at least one in four companies has banned the use of generative AI entirely, at least for now. That’s because the same study found that at least 48% of working adults admit to entering non-public company information into GenAI tools.
The generative AI security trends don’t stop there. According to recent research from ESG, 76% of 370 respondents surveyed believe cyber-adversaries gain the biggest advantage from GenAI technology. The study found that common adversary use cases for GenAI include “using large language models to crack passwords, increasing the intensity and cadence of previously successful attacks, and circumventing existing cybersecurity defenses, among others…GenAI makes it easier for unskilled adversaries to create more advanced attacks and increase their volume of assaults.”
There are a few ways to safeguard your internal generative AI software and processes while also staying alert to potential threats from outside use of generative AI.
During AI model training, data from a variety of sources is used. If that data is corrupt, infected, or poisoned, it can create chaos through repeated outputs. Generative AI is especially susceptible to these attacks because adversaries can inject poisoned data that surfaces in outputs, and as the generative AI continues to use its own data to create new answers, the corruption can spread even further.
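The data integrity checks mentioned earlier can start very simply: verify each training record against a manifest of known-good hashes before it ever reaches the training pipeline. The sketch below is a minimal illustration of that idea; the manifest file and record format are assumptions, not a prescription.

```python
import hashlib
import json

def sha256_of(record: str) -> str:
    """Return the SHA-256 digest of a single training record."""
    return hashlib.sha256(record.encode("utf-8")).hexdigest()

def load_trusted_manifest(path: str) -> set[str]:
    """Load the set of known-good record hashes (assumed to be a JSON list)."""
    with open(path) as f:
        return set(json.load(f))

def filter_poisoned(records: list[str], manifest: set[str]) -> list[str]:
    """Keep only records whose hashes appear in the trusted manifest."""
    clean = []
    for record in records:
        if sha256_of(record) in manifest:
            clean.append(record)
        else:
            # Quarantine or log anything unexpected instead of training on it.
            print(f"WARNING: untrusted record skipped: {record[:40]!r}")
    return clean

# Example usage with a hypothetical manifest file:
# trusted = load_trusted_manifest("trusted_hashes.json")
# training_set = filter_poisoned(raw_records, trusted)
```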
Poisoned training data can also contribute to AI model inversion, a machine learning security threat in which an attacker uses a model’s outputs to infer details about its training data or inner workings. You can see where this might become a problem, especially where infected input was used; it’s a bit like a snake eating its own tail.
Perhaps the most common security threat when it comes to AI is not actually due to the AI at all; it is due to the people using it. If your employees are entering confidential data or information into a public generative AI tool like ChatGPT, you run the risk of that data being surfaced in future answers to users who should never see it, and of that data being stored in an unsecured place.
This kind of misuse raises data privacy issues as well as regulatory compliance challenges. Your employees and team must understand how to use AI responsibly and confidentially. And if the AI tool is part of your existing security tech stack, that doesn’t mean you’re off the hook either: you may need to review all of the available documentation about the tool to make sure your data will not be stored and will not become part of the generative AI model’s training or output.
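One lightweight guardrail against the misuse described above is to screen prompts for obviously sensitive patterns before they ever leave your network. The patterns below are illustrative assumptions only; a real deployment would pair a check like this with formal DLP tooling and your own data classification policy.

```python
import re

# Illustrative patterns only -- tune these to your own data classification policy.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]*?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "internal_marker": re.compile(r"\b(CONFIDENTIAL|INTERNAL ONLY)\b", re.IGNORECASE),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

def allow_prompt(prompt: str) -> bool:
    """Block the prompt (and log why) if it appears to contain sensitive data."""
    hits = screen_prompt(prompt)
    if hits:
        print(f"Prompt blocked; matched sensitive patterns: {hits}")
        return False
    return True

# Example:
# allow_prompt("Summarize this INTERNAL ONLY revenue forecast...")  # -> False
```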
Your customers may also be wary of their own data being compromised because of your company’s use of AI. In the Cisco study we mentioned earlier, 91% of businesses recognize they need to do more to reassure customers that their sensitive data is used only for intended and legitimate purposes in AI. Make sure your customers know how you are using their data and are comfortable with the level of risk involved.
Generative AI comes with inherent biases, even when created with the best of intentions. It may generate content that violates ethical standards or is outright offensive. That’s why it is so important to make sure that the prompts being used are crafted with prompt safety in mind.
For example, if you are using generative AI to calculate security or customer risk in a business that deals primarily with insurance, is the data skewing toward rating a specific demographic of your customers as riskier? That can cause major problems down the line and reinforce the very biases you are likely hoping to squash.
You will need to make sure that the inputs and controlling prompts used to train your generative AI model align with your organization's ethical standards and values.
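To make the insurance example above concrete, a simple first check is to compare average model risk scores across demographic groups and flag large gaps for human review. The column names and threshold below are assumptions for illustration, not a complete fairness audit.

```python
from collections import defaultdict

def average_scores_by_group(records, group_key="demographic", score_key="risk_score"):
    """Compute the mean model risk score for each demographic group."""
    totals, counts = defaultdict(float), defaultdict(int)
    for row in records:
        totals[row[group_key]] += row[score_key]
        counts[row[group_key]] += 1
    return {group: totals[group] / counts[group] for group in totals}

def flag_skew(averages, max_gap=0.10):
    """Flag the model for review if group averages differ by more than max_gap."""
    gap = max(averages.values()) - min(averages.values())
    if gap > max_gap:
        print(f"Review needed: risk-score gap of {gap:.2f} across groups {averages}")
        return True
    return False

# Example with hypothetical scored records:
# records = [{"demographic": "A", "risk_score": 0.42}, {"demographic": "B", "risk_score": 0.61}]
# flag_skew(average_scores_by_group(records))
```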
Where does this all leave generative AI in future years? Let’s take a look at some of the biggest generative AI security trends.
The growth of generative AI has brought a rise in adversarial attacks that target generative AI models themselves, including the data poisoning and model inversion techniques covered above.
There are ongoing advancements in generative AI security solutions to counter these evolving threats, but the single most important way to combat them is to monitor logs and changes in your AI software for anomalies.
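Monitoring for anomalies doesn’t have to start with a complex platform; even a simple baseline-and-threshold check over your AI service logs can surface unusual spikes worth investigating. The log format, timestamp slicing, and threshold below are assumptions for the sake of a minimal sketch.

```python
import statistics

def hourly_request_counts(log_lines):
    """Count requests per hour from log lines assumed to start with an ISO timestamp."""
    counts = {}
    for line in log_lines:
        hour = line[:13]  # e.g. "2024-06-01T14"
        counts[hour] = counts.get(hour, 0) + 1
    return counts

def flag_anomalous_hours(counts, z_threshold=3.0):
    """Flag hours whose request volume sits far above the historical mean."""
    values = list(counts.values())
    if len(values) < 2:
        return []
    mean, stdev = statistics.mean(values), statistics.stdev(values)
    if stdev == 0:
        return []
    return [hour for hour, count in counts.items() if (count - mean) / stdev > z_threshold]

# Example:
# anomalies = flag_anomalous_hours(hourly_request_counts(open("genai_requests.log")))
```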
As mentioned earlier, data privacy remains critical in the use of any form of generative AI. You will need to make sure private information is not becoming part of the generative AI machine learning process and being shared with other users, or accessed by bad actors.
Privacy can be preserved using techniques like federated learning (FL), which trains models across devices or organizations without centralizing the raw data, and secure multi-party computation (MPC), which lets parties jointly compute a result without revealing their individual inputs.
Implementing privacy-preserving techniques like FL and MPC is crucial for organizations aiming to adhere to data protection regulations while benefiting from generative AI.
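At its core, federated learning keeps raw data on each participant’s side and shares only model updates, which a coordinator averages. The sketch below is a bare-bones illustration of that averaging step (in the style of FedAvg), not a production FL framework; the weight format and the stand-in local training step are assumptions.

```python
import numpy as np

def local_update(global_weights: np.ndarray, local_data) -> np.ndarray:
    """Placeholder for a participant's local training step.

    In a real system each participant trains on its own private data and
    returns updated weights; the raw data never leaves the participant.
    """
    # Hypothetical: pretend local training nudges the weights slightly.
    return global_weights + np.random.normal(scale=0.01, size=global_weights.shape)

def federated_average(updates: list[np.ndarray]) -> np.ndarray:
    """Coordinator step: average the weight updates from all participants."""
    return np.mean(np.stack(updates), axis=0)

# One round with three participants, each holding its own private dataset:
global_weights = np.zeros(10)
updates = [local_update(global_weights, local_data=None) for _ in range(3)]
global_weights = federated_average(updates)
```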
Additionally, if your organization wants to block employees from using insecure AI tools entirely, solutions like OpenVPN’s Cyber Shield with CloudConnexa can help you block specific sites without routing them through a VPN tunnel or slowing any network speeds.
Explainable AI (XAI) is becoming a key component of generative AI security as the demand for model transparency grows. XAI methodologies are gaining traction because they make AI decisions more understandable to human operators.
Because XAI enhances interpretability and transparency, which are critical to detecting and mitigating potential security threats, this trend only stands to grow.
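One widely used, model-agnostic interpretability technique is permutation feature importance: shuffle one feature at a time and measure how much the model’s score drops. The sketch below uses scikit-learn’s built-in helper on a toy dataset purely to illustrate the idea; the dataset and model are stand-ins, not a recommendation.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Toy dataset standing in for, say, security-event features.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and see how much held-out accuracy degrades.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {importance:.3f}")
```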
Generative AI security is increasingly being integrated into DevSecOps (Development, Security, and Operations) practices. This integration is vital for maintaining security throughout the AI lifecycle, from model training to deployment.
Trends in this area include automated model validation and security testing tools tailored specifically for AI. By embedding security into DevSecOps, organizations can accelerate their time to market while ensuring that generative AI systems are secure by design.
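Embedding model validation into the pipeline can be as simple as a gate that fails the build when a candidate model slips below an agreed quality floor. The accuracy threshold, stand-in model, and evaluation set below are assumptions for illustration only.

```python
import sys

ACCURACY_FLOOR = 0.90  # Assumed quality bar agreed between dev and security teams.

def evaluate_accuracy(model, eval_set) -> float:
    """Fraction of evaluation examples the candidate model gets right."""
    correct = sum(1 for prompt, expected in eval_set if model(prompt) == expected)
    return correct / len(eval_set)

def validation_gate(model, eval_set) -> int:
    """Return a non-zero exit code so the CI job fails if the model is below the floor."""
    accuracy = evaluate_accuracy(model, eval_set)
    print(f"candidate model accuracy: {accuracy:.2%}")
    return 0 if accuracy >= ACCURACY_FLOOR else 1

if __name__ == "__main__":
    # Stand-in model and evaluation set; swap in your real artifacts here.
    dummy_model = lambda prompt: "benign"
    dummy_eval_set = [("sample input", "benign")] * 10
    sys.exit(validation_gate(dummy_model, dummy_eval_set))
```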
The growing complexity of generative AI security has led to increased collaboration among industry stakeholders for threat intelligence sharing. Threat intelligence sharing might look like an IT admin flagging a threat to the DevSecOps team, or the marketing team flagging a threat to the engineering and security teams. This collaboration is critical to staying ahead of emerging threats, but it can’t be done through word-of-mouth alone.
Organizations may find it best to leverage threat intelligence platforms, like Expel.io, and participate in collaborative research to foster proactive defense strategies. These platforms aggregate intelligence across many companies, and if an issue is detected at one company, that information can quickly be put to work for the benefit of the other participants.
You may find it helpful to get started by monitoring trusted resources and keeping an eye out for threat and vulnerability announcements. By pooling knowledge and resources, industries can improve their ability to defend against adversarial attacks and other security risks.
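Even a simple habit of pulling a shared indicator feed and checking it against your own logs is a useful starting point. The feed URL and JSON format below are hypothetical stand-ins; real platforms expose their own APIs and formats, so treat this only as a sketch of the workflow.

```python
import json
import urllib.request

# Hypothetical shared feed of malicious indicators (IPs, domains, hashes).
FEED_URL = "https://example.com/shared-threat-feed.json"

def fetch_indicators(url: str = FEED_URL) -> set[str]:
    """Download the shared indicator list (assumed to be a JSON array of strings)."""
    with urllib.request.urlopen(url) as response:
        return set(json.load(response))

def matches_in_logs(log_lines, indicators: set[str]) -> list[str]:
    """Return log lines that mention any shared indicator of compromise."""
    return [line for line in log_lines if any(ind in line for ind in indicators)]

# Example:
# hits = matches_in_logs(open("access.log"), fetch_indicators())
# if hits:
#     print(f"{len(hits)} log lines matched shared threat intelligence")
```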
As generative AI becomes more ubiquitous, the regulatory landscape is evolving to address its unique security challenges. Compliance with regulations like GDPR, CCPA, and emerging AI-specific frameworks has become a priority for organizations.
To streamline compliance, companies are turning to compliance automation tools, which simplify regulatory reporting and auditing processes through the use of generative AI. But this requires continuous attention to legal and ethical considerations, as mentioned earlier in the post.
The focus on ethical AI governance is intensifying, with a growing emphasis on frameworks that promote fairness, accountability, and transparency in AI systems. One trend is the adoption of bias detection strategies to ensure that generative AI models do not perpetuate discrimination or inequality. We touched on this briefly earlier in the post, but it is worth mentioning that it is not just a necessity to protect your customers, but a growing trend to watch.
Organizations are embedding ethical considerations into their AI development processes, recognizing that ethical AI governance is not just a regulatory requirement but also a competitive differentiator in a socially conscious market.
As security threats evolve, so do techniques for enhancing the robustness of generative AI models. Trends include adversarial training, where models are exposed to adversarial examples during training to improve their ability to withstand attacks.
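A common way to implement the adversarial training mentioned above is the fast gradient sign method (FGSM): perturb each training batch in the direction that most increases the loss, then train on the perturbed inputs. This sketch assumes a PyTorch classifier and is a simplified illustration of one training step, not a hardened recipe.

```python
import torch
import torch.nn as nn

def adversarial_training_step(model, optimizer, inputs, labels, epsilon=0.03):
    """One FGSM-style adversarial training step on a single batch."""
    loss_fn = nn.CrossEntropyLoss()

    # 1. Find the perturbation direction that most increases the loss.
    inputs = inputs.clone().detach().requires_grad_(True)
    loss = loss_fn(model(inputs), labels)
    loss.backward()
    adversarial_inputs = (inputs + epsilon * inputs.grad.sign()).detach()

    # 2. Train the model on the perturbed (adversarial) examples.
    optimizer.zero_grad()
    adv_loss = loss_fn(model(adversarial_inputs), labels)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()
```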
In addition, research in model verification and explainable AI-driven anomaly detection is helping organizations build more resilient generative AI systems. These efforts aim to harden models against both known and emerging threats, ensuring that they remain secure in increasingly hostile environments.
Finally, several technological innovations are shaping the future of generative AI security.
These innovations promise to improve the security, privacy, and trustworthiness of generative AI systems, ensuring that they remain safe and reliable even as the technology advances.
As generative AI continues to evolve, balancing innovation with security is crucial. Organizations must weigh the trade-offs between usability and risk mitigation, all while remaining within the boundaries of a security framework like zero trust. Pursuing cybersecurity in the era of AI evolution requires a proactive approach.
When it comes down to it, to beat the threats, stay on top of generative AI security trends, and use AI to your advantage, you must think like a human. In the end, generative AI is based on the behavior, input, and goals of humans.
Strategies like threat modeling, secure-by-design principles, and human-centric security approaches will be essential in staying ahead of emerging threats. By promoting interdisciplinary collaboration, industry partnerships, and knowledge sharing, organizations can build a more resilient digital infrastructure that is capable of withstanding the challenges posed by the future of AI.
To see how OpenVPN can help protect your business from some of the threats of AI, check out our interactive demo library.