AI Assistant, Friend, Foe, or Demigod?

When the telephone was first marketed, many predicted it to be a failure as it was assumed that people wouldn’t want to give others a way to bypass their front door and invade the privacy of their home. Despite the plethora of time-saving appliances and gadgets introduced and acquired over the years; people still complain they don’t have enough time during the day. When Hiram Maxim invented the machine gun in 1884, the prevailing thought was that it would prevent wars from happening. After all, what army would possibly attack a defensive position with that type of weapon at its disposal? And then there were the autonomous taxis that were supposed to drive us everywhere, and yet still can’t be found.

It seems that the legacy of invention is hard to predict. People claim to value privacy, then share every private moment on social media. We look to technology to make us more efficient, then spend our limited free time patching, updating, and upgrading it all. And somehow, we still haven’t learned, that the sword and the cannon can be used by both friend and foe.

The Unveiling of ChatGPT

To say that the deployment of the latest version of ChatGPT by OpenAI has generated a lot of buzz and interest across the world would be an understatement. It’s one of those inventions that seems destined to create a major impact on the world. Until its debut, the average person’s exposure to AI was limited to ordering Alexa to turn the lights off downstairs while enduring the frustration of trying to convince the virtual chat bot on your online banking site to forward you to a human support agent. OpenAI is now opening the minds of people across the world to the true potential of AI.

The Promise of AI Integrated Security

Few industries are banking on the power of AI more than cybersecurity. There’s no doubt that companies need a magic bullet to combat the escalating number of attacks originating from a threat landscape that continues to grow exponentially. The promise of AI-enabled security is simple. Dashboard security controls are insufficient to protect the dynamic nature of today’s complex hybrid networks and the cloudification of networks is making visibility challenging at best. Couple this with the widening talent gap for cybersecurity professionals and you begin to understand why AI is bringing so much promise to the cybersecurity community. Staffing shortages are contributing to the increased prevalence of system misconfigurations that Gartner says are the cause of 80% of all data security breaches. While overstretched IT staffs must protect every aspect of their expanding attack surfaces, cybercriminals need only exploit the weakest link to achieve their malicious objectives.

We Must Accept the Good and the Bad

There is no doubt that AI can have a significant positive impact on cybersecurity. We decided to ask ChatGPT itself how AI virtual assistants will improve cybersecurity for businesses in the future. Here is what it had to say:

“AI virtual assistants can help companies improve their cybersecurity by using natural language processing (NLP) to detect any suspicious activity by quickly detecting phishing attempts, malware, and other malicious threats. They can also be used to automate and streamline tedious, manual security tasks, such as patching, updating, and monitoring systems. This can free up IT teams to focus on more important tasks such as responding to cyber threats and developing more secure systems.”

All of this sounds wonderful, but it’s also important to remember that for every Yin, there is a Yang. One could argue that no other tool has contributed more to office productivity than the PC. However, the PC also represents a significant point of vulnerability that external threat actors regularly exploit using malware or phishing emails to gain access to our connected networks and online accounts. While the scalability of the cloud gives businesses the ability to serve a globally expansive customer base, it also magnifies the damage that a single attacker can inflict. Despite the assurances of the AI community, AI will also bring new threats and vulnerabilities as well.

How AI Can Be Manipulated

Handing over the reins of control to any one person, system, or entity holds an element of risk. While utilizing coding assistants such as GitHub Copilot, Facebook InCoder and ChatGPT has its benefits, these tools introduce new vulnerabilities. According to Ziv Nachman, a cyber threat intelligence analyst at CYREBRO:

“From an intelligence standpoint, using tools like GitHub Copilot, Facebook InCoder, and even the rising star ChatGPT can not only result in unintentional code theft but also create a situation where the training model is tricked by attackers to cause vulnerable code to be produced. Your co-pilot could be the cause of your plane crashing.”

As noted earlier, a weapon can serve two masters. This appears to be the case for intelligent tools as well. Trend Micro researchers demonstrated how GitHub Codespaces can easily be configured as a distributor of malicious content while potentially avoiding detection since the traffic would appear to come from Microsoft. In another study, researchers from Microsoft and various universities have devised a poisoning attack that can trick AI-based coding assistants into suggesting code that can be used for malicious intent. It’s common practice for developers to expose AI coding assistants to public code repositories to train them. By depositing malicious code on these public sites, cybercriminals hope to influence the coding practices of these AI assistants.

OpenAI, the company that manages ChatGPT, has insisted that the generative platform is designed not to create dangerous code, however, a team from Check Point Research reported that they were able to coax ChatGPT into writing a phishing email. The team evidently enticed it to do so under a “hypothetical” situation. CPR also states that participants in cybercrime forums with minimal coding experience are using the AI platform to create ransomware, malicious spam, and other malware code.

The Proper Balance

Unfortunately, there is no magic bullet that is going to eliminate threat actors or malicious code from attacking your network. What businesses need is the proper balance of comprehensive tool sets supported by competent and experienced cybersecurity professionals who know how to leverage security data and remediate threats. That’s why many businesses are turning to the services of a security operations center (SOC) that has the ability to monitor your network on a 24/7 basis while leveraging the existing security tools you currently have to their full potential.

Conclusion

Just as no one could have predicted the many ways that the PC impacted the average business, any predictions, or insights into the long-term impact that AI-assisted coding platforms will have on the world is speculative at best. These platforms are still in their early infancy. Not only has AI got a lot to learn, so do its handlers and its users. There’s no doubt we need intelligent analysis to aid our human security teams, but it is not time to hand over the keys to the kingdom, nor place all our hopes on this budding technology because with technology, always comes unforeseen disappointment.

Sign Up for Updates