
AI and social engineering

And the “truth”?

Before the emergence of sophisticated AI technologies like Large Language Models (LLMs), we were already grappling with a post-truth era. 

This landscape was shaped by various factors: the impact of social networks on political opinions (including the social media bubble effect), the media's role in guiding public conversation, and the influence of censorship in muting certain topics. 

These elements combined to obscure objective truth to the point where the very concepts of 'truth' and 'facts' began to lose some of their meaning.

The introduction of AI into this mix only intensifies these challenges, further blurring the lines between fact and fiction. AI's capabilities go beyond merely tailoring messages for enhanced persuasion; it enables the creation of content at scale and with a precision that was impossible before. 

This advancement poses a significant risk to the integrity of a shared, objective reality, particularly as AI-generated material becomes virtually indistinguishable from content created by humans.

However, the concerns around AI extend beyond the production of false information. AI algorithms, especially those that govern content curation on social media platforms, tend to reinforce existing biases. They often create echo chambers, amplifying pre-existing views rather than presenting challenging or divergent perspectives. This phenomenon can exacerbate divisions in an already-polarised society.

What’s more, AI's capacity to produce credible fake news, deep fakes, and synthetic media casts doubt on the reliability of legitimate news sources. 

This escalation presents a serious challenge in a digital age in which verifying authenticity is increasingly complex. And, as we’ll explore later, AI also has serious consequences for social engineering.


Deep fakes

Advancements in AI have heightened the risks associated with deep fake technology. Today, not just the written word but also human faces, body language, and voices can be synthesised with great accuracy, marking a new era in digital impersonation threats.

While some older scams were easy to identify due to the attackers' limited English proficiency, ChatGPT has changed all of that. Crafting convincingly authentic scam messages has become a trivial task.

You can train your virtual assistant over time to communicate in a particular tone of voice, after all. Attackers targeting customers of a bank might ask generative AI to craft a message in a particular style. They could also ask AI to write in the style of an experienced PR professional. 
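
As a rough illustration of how low the barrier has become, here’s a minimal sketch of the kind of script a security-awareness team might use to generate a style-matched lure for an internal phishing simulation. It assumes the OpenAI Python SDK and an API key in the environment; the model name and the brief are purely illustrative.

```python
# pip install openai
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical brief for an internal phishing-simulation exercise.
brief = (
    "Write a short email to a retail banking customer, in the calm, polished "
    "tone of an experienced PR professional, asking them to re-confirm their "
    "contact details via a link."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": "You draft realistic emails for security awareness training."},
        {"role": "user", "content": brief},
    ],
)

print(response.choices[0].message.content)
```

The specific API matters less than the point: a few lines of code and a one-sentence brief now produce copy that would once have required a skilled writer.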

Such has been the rapid development in AI capabilities in recent years that there are now AI Instagram models and musicians. So, it’s not only the authenticity of an individual's communication that can be difficult to ascertain these days, but their very existence.


AI and social engineering

AI is already playing a big role in the increasing sophistication of social engineering attacks.

An impersonator, armed with AI, can now initiate a phone call to a target individual, seemingly from a trusted source such as a family member. Scams could, in the coming years, exploit synthesised voices indistinguishable from the real person. It’s a lot harder to ignore a scam call when it sounds like a loved one is at the other end of the line. 

The bad actor impersonating that loved one could convey a sense of urgency and authenticity, instructing the victim to transfer money immediately while playing background sounds such as sirens or screams to intensify the scenario's realism.

The use of AI has also significantly streamlined Open Source Intelligence (OSINT). Gathering and analysing information about potential victims, once a time-consuming and manual endeavour, can now be efficiently conducted with simple prompts to AI tools like ChatGPT with internet browsing capabilities.

The ease with which impersonators can now acquire personal details to convince victims of their identity has never been greater. 
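
To make the OSINT point concrete, the analysis step might now be as simple as the sketch below, which asks an LLM to condense already-collected public posts into a profile. Everything here is an assumption for illustration (the OpenAI SDK, the model name, and the sample posts are placeholders); it’s the sort of exercise a red team, or an individual auditing their own public footprint, might run.

```python
# pip install openai
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder data: public posts gathered beforehand from an open profile.
public_posts = [
    "Great week at the Lisbon fintech conference, back in the office on Monday!",
    "Proud to be celebrating five years on the payments team at ExampleBank.",
    "Can anyone recommend a good dog sitter near the riverside district?",
]

prompt = (
    "Summarise what these posts reveal about the author's employer, schedule, "
    "and personal life:\n" + "\n".join(public_posts)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

What once took hours of manual cross-referencing reduces to a single prompt over whatever material is already public.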

Clearly it’s going to become harder in the future to distinguish genuine communications from those designed to trick us. But a lot of the suggestions we outlined in a previous article about boosting your social engineering awareness will remain valid. 

These include something as simple as disconnecting and calling back the number you know to be the genuine one to verify the request's legitimacy. 

Remember that the attacker is relying on you making an impulsive decision - preferably while anxious and flustered. And so sometimes simply taking a breath and deciding to call back can be an incredibly effective approach in guarding against these advanced impersonation tactics.


Fabricating corporate entities

Consider a scenario where someone who has been away on business for a few days returns home and encounters a sudden infestation of ants in their apartment. 

After deciding that over-the-counter solutions are too toxic, they opt to search for professional help, googling 'city_name pest control service'. The top search results show several options, each boasting excellent reviews. So, the individual calls a few, chooses one based on pricing, and schedules a visit. 

Upon returning home after the pest control specialist’s visit, they are met with a shocking discovery: their safe has been broken into, and valuable family belongings are missing. 

The victim calls the police, only to uncover a more complex web of deceit. Investigations reveal that the pest control service was an elaborate facade; the infestation was artificially created by the attacker, who also engineered a few counterfeit websites replete with fabricated reviews. 

The websites had been strategically SEO-optimised to hijack traffic, ensuring that the top four search results for pest control services were fraudulent. Additionally, AI was employed to handle the phone interactions, leaving no traceable human evidence behind. 

This story might sound a little bit like an episode of Black Mirror. But is it really that unbelievable? 

After all, thanks to AI, producing a convincing fake company website now requires minimal budget and no development team. These websites can also be easily populated with reviews from AI bots pretending to be real people. Each with their own social media profiles, interests, and posts. And all of it AI generated.

Such is the potential for AI and social engineering to form a toxic mix that we might arrive at a point, several years from now, where people only feel confident about the authenticity of the person they’re talking to if that person is physically standing in front of them.


Wider societal manipulation

Beyond targeting individuals, AI's potential for misuse might also be exploited for corporate sabotage or unethical competitive strategies. 

This could manifest in various forms, from diverting traffic from rival companies to tarnishing their image with artificial, negative reviews on platforms like Glassdoor. In 2023, there’s no need to manually enter fake reviews on Glassdoor. AI has seen to that. 

We also read recently about a conference that attempted to enhance its appeal by creating a fictitious female speaker persona. This effort was aimed at presenting an image of diversity and inclusivity in its speaker lineup. Might this become a trend given how easily fake online profiles can be created? 

What is clear is that AI's role in shaping our social interactions and behaviours is expanding at pace. And whether it’s recommendation systems guiding our choices or AI-driven search engines and dialogue agents, it’s worth noting that these tools can exhibit subtle biases.

The capacity of AI to subtly manipulate content represents a significant concern in the digital age. This technology, particularly when integrated into spam bots, has the potential to shape beliefs and exacerbate societal polarisation. 

AI's ability to generate vast amounts of information far exceeds what was previously possible with real commenters or journalists. Consequently, bot farms, powered by sophisticated AI, can wield serious influence. And this too can have an impact on cybersecurity readiness.

In an online environment where a single opinion dominates a discussion thread, a discerning individual might typically grow suspicious. But LLM-powered spam bots can simulate nuanced and intelligent debates, employing varied tones, styles, and emotional nuances while subtly steering the conversation towards a desired narrative. This sophisticated mimicry can sway even the most sceptical minds towards the beliefs promoted by the bot farm operators. Again, this approach might be employed to trick someone into clicking on a link that they shouldn’t or handing over sensitive information. 

It’s also worth remembering that the human capacity to process vast amounts of information is inherently limited. And in an era when AI can generate overwhelming quantities of content, this limitation becomes increasingly problematic. Not least as it challenges our ability to discern truth and authenticity in a sea of AI-generated information.

As we’ve said in previous articles, the more inundated we are with information, the more likely we are to be distracted and click on a bogus link. 


Are we going full circle?

We suggested earlier that AI’s impact on social engineering might lead to so much paranoia about digital interactions that we’ll begin to prefer face-to-face meetings.

This shift - if it happened - could redefine communication norms, making in-person communication a luxury while the average person relies on AI-mediated interactions.

It feels important at this stage to acknowledge that AI has incredible potential for positive impact. Like any technology, AI's utility is subject to the intentions of its users, with the possibility of misuse by bad actors always present. And as a cybersecurity company, we’d be remiss not to explore this darker side of artificial intelligence.

Historical precedents like the printing press and the Industrial Revolution, with their profound social impacts, remind us that technological progress often reshapes societies and challenges the status quo. These changes have led to significant societal shifts, such as urbanisation, class restructuring, and even revolutions.

The advent of AI might lead to similarly transformative changes. While the specific nature of these changes remains uncertain, one thing is clear: the world as we know it will evolve. And in this changing landscape, agility, lifelong learning, and a keen awareness of ongoing developments are all going to be crucial. As social beings, our ability to adapt and support one another will be key to navigating these novel cybersecurity challenges successfully.

For developers, this means not only being tech-savvy but also understanding and educating others about the potential risks and benefits of AI. As creators of software used by many, developers have a responsibility to help users reap the benefits of progress while safeguarding them against its pitfalls.

AI and social engineering can be a dangerous combination. But together we can guard against it.
