Cybercriminals Bypass OpenAI's Restrictions on Malicious Use

Hackers Use API to Bypass Barriers and Restrictions

Cybercriminals have found a way to circumvent OpenAI's restrictions on using its natural language artificial intelligence models for malicious purposes, say researchers who have already spotted low-level hackers using the firm's ChatGPT chatbot for a machine learning assist in creating malicious scripts.


Security researchers at Check Point say the natural language ChatGPT interface shuts down explicit prompts asking it to do bad things, such as writing a phishing email impersonating a bank or creating malware.

They say that's not the case for the application programming interface to OpenAI's GPT-3 natural language models. The current version of OpenAI's GPT-3 API "has very few if any anti-abuse measures in place," Check Point says. One way criminals exploit that is to integrate the API into Telegram bots.
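To illustrate the distinction the researchers draw, below is a minimal sketch of a direct GPT-3 API call using the pre-1.0 openai Python package. The model name, package version and the benign placeholder prompt are illustrative assumptions, not details from the Check Point report; the point is only that the raw API returns completions without the front-end safeguards of the ChatGPT web interface.

```python
import openai

# Placeholder key for illustration only - not a real credential.
openai.api_key = "YOUR_API_KEY"

# Direct request to a GPT-3 completion model. Unlike the ChatGPT web
# interface, the response comes straight from the API without the
# chat UI's front-end restrictions, per Check Point's findings.
response = openai.Completion.create(
    model="text-davinci-003",
    prompt="Write a short note reminding staff to renew their passwords.",
    max_tokens=150,
)

print(response["choices"][0]["text"].strip())
```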

Researchers say they found a cybercriminal advertising a Telegram bot offering unfettered access to the OpenAI API and tested its abilities by tasking it with creating a bank phishing email and a script for uploading PDF documents to an FTP server.

The cybercriminal offers 20 queries for free and then charges $5.50 for every subsequent 100 queries. Another cybercriminal claims to have posted to GitHub a basic script that uses the OpenAI API to bypass abuse restrictions.

OpenAI did not respond to Information Security Media Group's request for information.

Just weeks ago, Check Point also revealed how members of a low-level hacking community used ChatGPT's coding abilities to create a Python script that could be used for ransomware extortion and a Java snippet for surreptitiously downloading Windows applications. They also used the AI model to build code for an info stealer that looks for common file types, copies them to a random folder, compresses them and uploads them to a hardcoded FTP server (see: ChatGPT Showcases Promise of AI in Developing Malware).


About the Author

Jayant Chakravarti

Senior Editor, APAC

Chakravarti covers cybersecurity developments in the Asia-Pacific region. He has been writing about technology since 2014, including for Ziff Davis.
