ChatGPT is the internet's newest darling, a chatbot that offers a stunningly plausible imitation of human language. The tool, built on GPT-3.5 and launched in November 2022 by the US AI research lab OpenAI, is free for everyone via a public beta. GPT-4, the next generation of OpenAI's large language models (LLMs), promises more realistic and coherent text generation, and when paired with live web search (as in Microsoft's Bing integration) it can draw on current information to deliver more thorough and up-to-date answers. Its applications in practice range from students asking it to do their homework to programmers using it to write scripts for their apps.
Microsoft is reportedly investing $10 billion in OpenAI, the company behind ChatGPT. Microsoft also recently announced that it will add ChatGPT to its Azure OpenAI Service, which will allow businesses to integrate AI tools such as DALL-E and Codex into their technology stacks. (DALL-E is an AI tool that generates images from text descriptions, while Codex translates natural language into code.)
New tools bring new cybersecurity risks. One of the biggest concerns for cybersecurity practitioners is that ChatGPT could be used to generate malicious code, putting malware creation within reach of many more people and leading to more attacks and breaches. Other concerns are that ChatGPT could be used to generate phishing emails and other social engineering lures, and that it could be used to create deepfakes and other forms of disinformation.
Researchers at CyberArk recently showed how they were able to bypass ChatGPT's content filters to generate malicious code. They produced code that could inject a DLL into explorer.exe, polymorphic code that is difficult for antimalware software to detect, and code that could find the file types of interest to ransomware criminals and encrypt them.
While CyberArk's work was only a proof of concept, criminals already appear to be using ChatGPT to create malware. In January 2023, Check Point researchers reported finding users of cybercrime forums doing exactly that. In one hacking-forum thread, the poster shared screenshots of Python code, allegedly generated by ChatGPT, that searches for common file types such as Office documents and PDFs, copies them to a random folder, zips the folder, and uploads it to an FTP server.
Security researchers are also concerned about "deepfake" technology that convincingly simulates both video and audio of real people. Scripting the impersonation with words generated by an AI chatbot adds another layer of deception, and could be used to build extremely convincing impersonations for targeted social engineering. It is not difficult to imagine a deepfake version of a kidnap-and-ransom demand.
ChatGPT is a powerful tool that can be abused to generate both malicious code and convincing lures, which could drive a rise in targeted phishing attacks and polymorphic malware. This is a serious concern for cybersecurity professionals, who will need to be more vigilant than ever in protecting their organizations.
Staying informed about the cybersecurity threat landscape is a full-time job, and the emergence of ChatGPT has made this task even more challenging for cybersecurity professionals.
Addressing ChatGPT: Strategies for the Cybersecurity Community
Threat Assessment:
Evaluate potential risks and threats associated with malicious use of ChatGPT, focusing on content generation, misinformation, and social engineering.
Keeping on top of patching users' devices is essential to protect against known vulnerabilities. However, it is also important to be aware of zero-day vulnerabilities: flaws that are being exploited before the vendor has released a patch.
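One way to stay on top of known flaws systematically is to watch public vulnerability feeds. Below is a minimal sketch, assuming the requests package and NIST's public NVD API; the product keyword is an arbitrary illustration.

```python
# Minimal sketch: query NIST's public NVD API (v2) for CVEs matching a
# product keyword and list the most recently published ones, as one input
# into patch prioritization. The keyword below is illustrative only.
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def latest_cves(keyword: str, count: int = 5) -> list[dict]:
    resp = requests.get(NVD_URL, params={"keywordSearch": keyword}, timeout=30)
    resp.raise_for_status()
    entries = [item["cve"] for item in resp.json().get("vulnerabilities", [])]
    # The API does not guarantee ordering, so sort by publication date ourselves.
    entries.sort(key=lambda cve: cve["published"], reverse=True)
    return entries[:count]

for cve in latest_cves("Microsoft Exchange"):
    print(cve["id"], cve["published"], cve["descriptions"][0]["value"][:100])
```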
Detection Mechanisms:
Develop and implement robust mechanisms, such as machine learning models and natural language processing, to detect and mitigate harmful uses of AI-generated content.
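As a rough illustration of the machine-learning side, the sketch below trains a tiny TF-IDF plus logistic-regression classifier (scikit-learn) to score how phishing-like a piece of text is. The inline training examples are invented for the demo; a real detector needs a large, curated corpus and continuous retraining.

```python
# Toy sketch of an NLP-based detector: TF-IDF features feeding a logistic
# regression classifier that scores phishing-style text. The four training
# examples are invented; production systems need far larger datasets.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "Your account is locked. Verify your password here immediately.",
    "Invoice overdue: click this link to avoid service suspension.",
    "Minutes from Tuesday's project meeting are attached.",
    "Reminder: the all-hands meeting moves to 3pm this Thursday.",
]
train_labels = [1, 1, 0, 0]  # 1 = phishing-style, 0 = benign

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

suspect = "Urgent: confirm your credentials or your mailbox will be deleted."
# Probability that the text belongs to class 1 (phishing-style).
print(model.predict_proba([suspect])[0][1])
```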
ChatGPT can also be put to defensive use: have it generate reports and analysis of cybersecurity threats, then turn those reports into content that can be shared more widely across the organization in language that non-specialists will understand. This can help raise awareness of the threats among the entire workforce.
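A minimal sketch of that workflow, assuming the openai Python package, an OPENAI_API_KEY set in the environment, and an illustrative report file; the model name is also an assumption:

```python
# Sketch: rewrite a technical threat report in plain language for
# non-specialist staff. The file name and model choice are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

threat_report = open("weekly_threat_report.txt", encoding="utf-8").read()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "system",
            "content": "Rewrite security reports in plain language for "
                       "non-technical staff. Keep it under 200 words.",
        },
        {"role": "user", "content": threat_report},
    ],
)
print(response.choices[0].message.content)
```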
User Education:
Educate users about the capabilities and limitations of AI models like ChatGPT, emphasizing caution and awareness of potential manipulation.
Security professionals need to communicate clearly that each individual bears personal responsibility for the security of the organization. User training therefore needs to cover not just the mechanics of, say, reporting a suspicious email, but also why that suspicious email is a threat to the entire organization.
Content Moderation:
Implement content moderation techniques, combining automated tools and human moderation, to filter out inappropriate or malicious content generated by AI models.
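A minimal sketch of such a layered pipeline, assuming the openai Python package and an API key; the plain list stands in for whatever human-review queue an organization actually runs:

```python
# Sketch of layered moderation: an automated first pass using OpenAI's
# moderation endpoint, with flagged items escalated to human reviewers.
from openai import OpenAI

client = OpenAI()
human_review_queue: list[str] = []  # stand-in for a real ticketing system

def safe_to_publish(text: str) -> bool:
    """Return True if the automated check passes; otherwise queue for review."""
    result = client.moderations.create(input=text).results[0]
    if result.flagged:
        human_review_queue.append(text)  # a human moderator makes the final call
        return False
    return True

print(safe_to_publish("Here is the agenda for next week's security briefing."))
```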
Cyber professionals should revisit their existing phishing-recognition training and highlight to users the kind of very detailed and credible emails they might now receive. Training could include custom examples, generated by ChatGPT itself, of emails they should be alert to; a sketch of how such examples might be produced follows below. It is also important to keep this training up to date, as phishing attackers are constantly changing their tactics.
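As a hedged sketch of how such training material might be produced, the code below asks the API for a clearly labeled simulated phishing email. The scenario text is invented, and any output should be reviewed by the security team before it reaches a training campaign.

```python
# Sketch: generate a labeled simulated phishing email for authorized,
# internal security-awareness training only. The scenario and model are
# illustrative; review all output before use.
from openai import OpenAI

client = OpenAI()

scenario = (
    "A simulated phishing email, for employee awareness training only, that "
    "impersonates an IT helpdesk asking staff to revalidate VPN credentials."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "system",
            "content": "You create simulated phishing examples for authorized "
                       "security-awareness training. Prefix every example "
                       "with '[TRAINING SIMULATION]'.",
        },
        {"role": "user", "content": scenario},
    ],
)
print(response.choices[0].message.content)
```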
Collaboration and Policy:
Foster collaboration among cybersecurity experts, AI researchers, and policymakers to establish ethical guidelines, industry standards, and regulations for the responsible use of AI technologies.