
What to do with a cloud intrusion toolkit in 2023? Slap a chat assistant on it, duh

Don't worry, this half-baked Python script is for educational purposes onl-hahaha


Infosec bods have detailed an underground cybersecurity tool dubbed Predator AI that can not only be used to compromise poorly secured cloud services and web apps, but also has an optional chat-bot assistant that only kinda works.

The multi-purpose software is not entirely novel – there are others out there like it – and while it's supposedly offered for educational purposes only, it can be used by miscreants to attack other people's deployments. It could also be used by IT types to test their infrastructure for holes.

Predator AI is apparently programmed to be able to exploit 30 kinds of misconfigured or poorly set up web-based services and technologies, ranging from Amazon Web Services' Simple Email Service, Twilio, and WordPress to OpenCart, Magento, OneSignal, Stripe, and PayPal, SentinelLabs boffin Alex Delamotte explained on Wednesday.

Its optional chat-bot assistant – partially powered by OpenAI's ChatGPT – is likely only "somewhat functional" at the moment, Delamotte added. This particular feature, which allows you to ask the tool questions about its operation and potentially have it perform actions, is not yet advertised on the tool's primary Telegram channel. That said, it's under active development, we're told, with its makers posting videos of it in action and taking feature requests.

It wouldn't hurt to take a look over the software's capabilities and ensure your web apps and cloud infrastructure are fully secured against the tool's techniques. It may be that Predator uses code and methods found in other toolkits.

"Predator's web application attacks look for common weaknesses, misconfigurations or vulnerabilities in Cross Origin Resource Sharing (CORS), exposed Git configuration, PHPUnit Remote Code Execution (RCE), Structured Query Language (SQL), and Cross-Site Scripting (XSS)," Delamotte wrote.

Predator, written in Python, runs to more than 11,000 lines of code and provides a Tkinter-based graphical user interface that requires several JSON configuration files. The script defines 13 classes that correspond to the malware's various side features as well as its core malicious functionality.
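For a feel of that shape – a purely illustrative sketch, not Predator's source, and the config file name is made up – a Tkinter front end that refuses to start without its JSON configuration looks roughly like this.

import json
import tkinter as tk
from pathlib import Path

CONFIG_PATH = Path("config.json")  # hypothetical file name

def load_config() -> dict:
    """Bail out if the JSON configuration the GUI depends on is missing."""
    if not CONFIG_PATH.exists():
        raise SystemExit("config.json not found -- the GUI cannot run without it")
    return json.loads(CONFIG_PATH.read_text())

def main() -> None:
    config = load_config()
    root = tk.Tk()
    root.title(config.get("title", "demo"))
    tk.Label(root, text=f"Loaded {len(config)} settings").pack(padx=20, pady=20)
    root.mainloop()

if __name__ == "__main__":
    main()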

Those side features include building information-stealing Windows malware executables; using Windows commands to check whether the current user is running as an administrator; crafting fake error messages for testing XSS exploitation on Windows systems; and translating dialog boxes and menu items into Arabic, English, Japanese, Russian, and Spanish.
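The admin check, at least, is bog-standard stuff. Here's a hedged illustration – our own sketch rather than anything copied from Predator – of two common ways a Python script on Windows works out whether it has administrator rights.

import ctypes
import subprocess

def is_admin_via_api() -> bool:
    """Ask the Windows shell API directly."""
    try:
        return bool(ctypes.windll.shell32.IsUserAnAdmin())
    except (AttributeError, OSError):
        return False  # not on Windows, or the call is unavailable

def is_admin_via_command() -> bool:
    """'net session' only succeeds from an elevated prompt."""
    try:
        result = subprocess.run(["net", "session"], capture_output=True)
    except FileNotFoundError:
        return False  # 'net' only exists on Windows
    return result.returncode == 0

if __name__ == "__main__":
    print("Admin (API):", is_admin_via_api())
    print("Admin (command):", is_admin_via_command())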

The configurable data-harvesting malware that Predator can build is able to use Discord or Telegram for command-and-control purposes, and a video posted by its developer last month claimed the code is "fully undetectable."

SentinelLabs, however, said it was "unable to successfully use this feature as the required configuration files were not available."

Then there's the GPTj class, which uses ChatGPT to provide a text-based assistant: queries are typed into the GUI, which displays the response. The script first tries to handle a user-submitted request internally – it recognizes more than 100 use cases it can carry out itself or via a third-party service – before turning to the remote ChatGPT API to interpret the request.

This class contains "several partially implemented utilities related to AWS SES and Twilio, as well as utilities to get information about IP addresses and phone numbers," according to SentinelLabs. The extent of GPTj's capabilities is not entirely clear: it looks as though ChatGPT is used to handle basic questions about the tool, and actions are actually handled by the script itself when it recognizes requests from a hardcoded list.

We could be wrong; this might change over time anyway. You'd think OpenAI would have guardrails in place to stop the thing from doing or saying anything problematic. At least it gives Predator's developers a reason to slap AI on the name.

"The actor designed Predator AI to try to find a local solution first before querying the OpenAI API, which reduces the API consumption," Delamotte explained. "This class searches the user's input for strings associated with a known use case centered around one of Predator's web application and cloud service hacking tools."

To keep up the appearance of it being legitimate software, the code "has a disclaimer saying the tool is for educational purposes and the author does not condone any illegal use," Delamotte said. That'll do the trick in court. ®
