Procedures
We’ve talked about the tactics phishing scammers use and the methods they often employ. But we’ve yet to dive into the actual mechanics of a scam.
When it comes to gaining trust, operating at scale, and transferring money, phishing scammers have a number of tools at their disposal. Some are relatively old, while others sit on the bleeding edge of technology.
Cryptocurrency

Digital currencies are designed with decentralization in mind. Inevitably, this means there are fewer protections than you’d find in the traditional financial system, like the ability to block or reverse a payment, or to claw back funds stolen from your account.
It’s for this reason that phishing scammers often target cryptocurrency users and services. Scammers also use cryptocurrencies themselves, both to transfer funds across borders and to obfuscate the identities of those receiving the proceeds.
An attacker may use something called a cryptocurrency tumbler to cover their tracks. These systems pool, mix, and shuffle funds in a way that makes it almost impossible to identify their origin. In essence, they mix “clean” cash with “dirty” cash in a way that prevents an observer from determining which is which.
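To illustrate the idea (and only the idea), here is a toy sketch in Python. The wallet names and amounts are invented, and a real tumbler operates on-chain with far more sophistication; the point is simply that pooled funds leave in fresh, randomized chunks that can’t be mapped back to any particular deposit.

```python
import random
import secrets

def mix(deposits: dict[str, float]) -> list[tuple[str, float]]:
    """Pool every deposit, then pay out the same total to fresh
    addresses in randomized chunks, severing the one-to-one
    mapping between inputs and outputs."""
    pool = sum(deposits.values())
    payouts = []
    while pool > 0:
        # Withdraw a random-sized chunk, never more than what remains.
        chunk = min(pool, round(random.uniform(0.1, 1.0), 2))
        fresh_address = "addr_" + secrets.token_hex(4)  # new, unlinked address
        payouts.append((fresh_address, chunk))
        pool = round(pool - chunk, 2)
    random.shuffle(payouts)  # withdrawal order reveals nothing about deposit order
    return payouts

# "Dirty" and "clean" funds enter the same pool and become indistinguishable.
print(mix({"scammer_wallet": 2.5, "honest_wallet_1": 1.0, "honest_wallet_2": 3.2}))
```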
According to a 2022 report from Chainalysis, “illicit” cryptocurrency addresses were responsible for 23% of all funds sent to mixers that year. Many of these addresses belong to known criminal actors who use phishing tactics as a routine part of their business.
An example is the Lazarus Group, which the US State Department believes to be controlled by the North Korean government. The group has used phishing extensively when targeting individual cryptocurrency owners, as well as businesses in the cryptocurrency sector.
Generative AI

The term “generative AI” describes a type of artificial intelligence in which computers produce bespoke creative works that, until recently, could only have been created by a human. Harnessing deep learning and large language models (LLMs), along with large datasets and powerful training hardware, these AI models can create complex written, audio, and visual works in a matter of seconds.
Text: Generative AI models like OpenAI’s GPT-4 and Alphabet’s LaMDA can respond to written requests and produce blog posts, explainers, and even poems (see the sketch after this list).
Code: Similarly, some models, including GPT-4, have proven themselves capable of writing software.
Audio: Generative AI can be used to synthesize speech, and models like VALL-E can even create sound recordings that mimic a specific human speaker, from their accent to their intonation. Deepfake audio still struggles with the “uncanny valley,” where something just feels “off,” but it can already fool some listeners, New Scientist reports.
Images: Models like DALL-E 2 can generate photorealistic images based on a single written prompt.
Video: Some generative AI models, such as Meta’s Make-A-Video system, can produce GIF-like moving pictures from a simple written prompt. Others, including Runway’s Gen-1, have the ability to generate videos from sample images or written text.
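To make this concrete, here is a minimal sketch of what “responding to a written request” looks like in practice. It assumes the official openai Python package (v1 or later) and an OPENAI_API_KEY set in the environment; the prompt is invented for illustration.

```python
from openai import OpenAI  # official OpenAI client, v1+

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Ask a text model to draft a short piece of writing from a single prompt.
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "user", "content": "Write a three-sentence explainer on phishing."}
    ],
)
print(response.choices[0].message.content)
```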
Generative AI has the potential to revolutionize countless industries, improving productivity and quality. But in the wrong hands, it can be a tool for harm.
Malicious actors could, for example, use a generative AI system to produce customized phishing emails at incredible scale and velocity, increasing their probability of success. Text-based generative AI systems aren’t just fast; they’re also designed to write like a human. Unless told otherwise, they use standard spelling and grammar, and so they have the potential to be more convincing than a traditional human-written phishing email, even to the most cautious reader.
An attacker could even synthesize the voice of a CEO and use it to trick employees into sharing proprietary company information with a third party. The VALL-E model can create a replica of a person’s voice from just three seconds of recorded speech, and the quality of the facsimile improves drastically as the attacker supplies more (particularly high-quality) audio training data.
As with any technology, generative AI systems can be misused. And so, the companies building them are implementing safeguards.
These protections often restrict the kinds of requests that can be made. If a generative AI system, like OpenAI’s ChatGPT, detects that a person is trying to craft a malicious email, it’ll refuse the request.
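Real safeguards are layered, combining training-time alignment with runtime policy classifiers. As a crude, hypothetical sketch of the runtime idea, a pre-screening step might look something like this (the patterns and function names are invented for illustration; production systems use trained classifiers, not keyword lists):

```python
import re

# Hypothetical, highly simplified screening rules.
SUSPICIOUS_PATTERNS = [
    r"phishing email",
    r"steal (credentials|passwords)",
    r"impersonate .* (bank|CEO|support)",
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the request looks malicious and should be refused."""
    return any(re.search(p, prompt, re.IGNORECASE) for p in SUSPICIOUS_PATTERNS)

if screen_prompt("Write a phishing email pretending to be my victim's bank"):
    print("Request refused: this looks like a request for harmful content.")
```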
These safeguards are helpful, but they aren’t perfect. Some LLMs can be run locally with their safeguards removed, and security researchers have exposed ways in which a generative AI chatbot can be tricked into writing potentially harmful content. Over the past few months, we’ve seen numerous examples of generative AI-driven phishing campaigns.
Phishing has always been a problem for consumers, governments, and businesses alike. Generative AI didn’t create this problem, but it has the potential to exacerbate it.
Everyone — employees and private individuals alike — needs to be increasingly vigilant going forward. Even the most sophisticated phishing attack can be defeated by a skeptical and cool-headed mind.
Data breaches

Since its inception, the website Have I Been Pwned has tracked breaches across 678 websites, totaling over 12.5 billion accounts. These leaked credentials often provide an attacker with the information they need to conduct further phishing attacks.
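The same data can be used defensively. Have I Been Pwned’s Pwned Passwords range API lets you check whether a password appears in known breaches without ever transmitting the password itself: only the first five characters of its SHA-1 hash leave your machine. A minimal sketch:

```python
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    """Query the Pwned Passwords range API using k-anonymity: only the
    first five hex characters of the SHA-1 hash are sent over the wire."""
    sha1 = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    req = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "breach-check-example"},  # courtesy identifier
    )
    with urllib.request.urlopen(req) as resp:
        # Each response line is "HASH_SUFFIX:COUNT".
        for line in resp.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0

print(pwned_count("password123"))  # a breached password returns a large count
```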
If a website fails to properly protect a person’s password by salting and hashing it, an attacker can simply use the victim’s credentials to impersonate them. But even without working passwords, leaked data can be useful to an attacker.
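For context, here is a minimal sketch of what “salting and hashing” means in practice, using only Python’s standard library. PBKDF2 is used here for portability; dedicated algorithms like bcrypt or Argon2 are common production choices.

```python
import hashlib
import secrets

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Hash a password with a unique random salt, so identical passwords
    produce different digests and precomputed rainbow tables are useless."""
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return secrets.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```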
By looking at a data breach, an attacker can identify where a person has accounts, and then deliver phishing emails that are both targeted and personalized. This threat isn’t hypothetical. After a rogue insider leaked the customer database of the Desjardins Group — a large Canadian financial services organization — the number of phishing URLs associated with that particular brand jumped by 1,680.4%, according to Vade Secure.
Professionalized operations

It’s a common misconception that threat actors universally operate as shadowy underground enterprises, free of the formalities inherent to a legitimate business. The image of a group working from a murky room, illuminated only by the glare of laptop screens, feels natural and obvious.
Reality is a lot more complex. Criminal organizations — including many phishing threat actors — are increasingly well organized. Despite their criminal intentions, they often try to cloak themselves in the veneer of a legitimate business. They’ll sometimes rent office space, have payroll and HR systems, and recruit from the wider public, rather than from the denizens of shadowy Dark Web forums.
This trend, observed by the New York Times as well as diligent citizen journalists like Jim Browning, is especially true of call center scams. These often fall into the category of phishing (specifically “vishing”) or are, at the very least, phishing-adjacent.
Although portraying themselves as legitimate businesses exposes these organizations, and their leadership, to greater external scrutiny, it has some major advantages. For phishing attacks that are only effective at scale, it gives threat actors a larger recruitment pool to draw from, allowing them to rapidly expand their operations, or simply sustain them in the face of worker attrition.