
When Trust Becomes the Attack Surface

Why modern fraud succeeds not through ignorance, but through pressure, context, and manipulated decisions

For a long time, scams were seen as a failure of knowledge. If someone was deceived, it was assumed they had missed something obvious: an unusual email address, a poorly worded message, or a detail that didn’t quite fit.

But that explanation no longer holds.

Today, many of the most effective attacks succeed not because people lack awareness, but because they are required to make high-stakes trust decisions inside systems that provide limited context, under conditions shaped by urgency, pressure, and manipulation.

A few months ago, a friend of ours was contacted about what appeared to be a legitimate job opportunity. After several rounds of credible email exchanges and a convincing video interview, she was invited to a final-stage assessment. As part of the process, she was asked to log in via a familiar identity provider and approve a notification on her phone to continue.

Moments after approving the request, the call ended and attempts to contact the company went unanswered.

Several anxious days passed before our friend received an email and a notification from her credit score provider: an application for a bank account and loan in her name had been approved, and she now owed a substantial sum for recent credit card transactions. Her fears in the days since the call had been confirmed; what had appeared to be a routine step in a structured hiring process had, in reality, been a carefully constructed attack designed to trigger a single, time-sensitive action.

From the outside, this might seem like a simple mistake. But viewed within the context of months of job searching, a seemingly credible process, a familiar interface, and a real-time request framed as standard procedure, it becomes something else entirely:

A decision made under pressure, with incomplete information, inside a system that offered few reliable signals of risk.

This isn’t an edge case; it’s part of a broader shift in how fraud works today. And it only becomes visible when you stop looking at systems in isolation and start examining the conditions under which people are asked to trust them.

The exploitation of trust under pressure

A pattern keeps repeating: scams are succeeding largely because they align with our own moral reflexes and with how we’re expected to behave in modern systems.

Many of today’s attacks are the digital equivalent of holding the door open for someone behind you who appears legitimate. A reflex, shaped by centuries of social norms and system design, is exploited by someone with malicious intent.

In the digital world, decisions are compressed into seconds, often without the context needed to evaluate them properly. Our friend made a split-second decision in the job interview that, in isolation, looked routine. Only later did it become clear that the situation itself had been manipulated.

Scams don’t only happen in workplace environments; people assume a baseline of legitimacy in any digital platform, interface, or communication that appears familiar. The reality is that we all rely on signals that are increasingly easy to spoof. People also tend to assume reciprocity: that other people navigating the digital world are like them and operate in good faith.

There is an argument that victims who place their trust in scammers are simply behaving like human beings: the way we’ve been conditioned to behave for thousands of years. It takes a mindset shift to understand that social engineering works not because people are careless, but because they are responding to situations that have been carefully engineered to appear legitimate. Yet the more you think about it, the harder it is to deny.

All of us are operating in a digital economy where trust and integrity have become an attack surface. Where the very behaviors modern digital systems are designed to encourage – trust, urgency, and cooperation – are exploited. But we’re not going to stop acting with integrity and trusting one another, so what’s the solution?         

The mobile device: the new trust anchor

We need to look at where so many of our decisions now live: our go-to, always-on companion, the place where our professional and personal lives converge.

There was a time when our digital lives were spread more evenly across a number of devices, but the increased power of the smartphone, and a prioritization of convenience above all else, have concentrated almost all of our daily activities on mobile. From identity to banking, from authentication and approval to messaging, it all converges there.

The mobile device has quietly become the identity token of the digital economy.

We use the mobile device every day in a way that might lead you to assume it’s completely trustworthy, but the mobile channel is anything but – it’s one of the most hostile (and most targeted) environments. The mobile channel is where OTPs are intercepted, where attackers carry out account takeovers and eKYC fraud, and where malware exploits accessibility services and screen overlays to capture credentials.

It’s an environment where the systems we have created increasingly rely on user participation in security decisions, yet users lack the context to make those decisions safely.
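One design response to this gap is to force context into the approval itself. In number matching, a pattern used by some push-authentication systems, a blind “Approve” tap is impossible: the user must type a short code that is displayed only on the screen that initiated the request. The sketch below is illustrative, not a real product’s API; the names `create_challenge` and `approve` are hypothetical.

```python
import secrets
from dataclasses import dataclass

@dataclass
class Challenge:
    origin: str   # who or what initiated the request
    action: str   # what is actually being approved
    code: str     # short code shown only on the initiating screen

def create_challenge(origin: str, action: str) -> Challenge:
    # The code is displayed at the origin (e.g. the login page), never in
    # the push prompt itself, so the user must be looking at the real
    # initiating screen to complete the approval.
    return Challenge(origin=origin, action=action, code=f"{secrets.randbelow(100):02d}")

def approve(challenge: Challenge, typed_code: str) -> bool:
    # Approval succeeds only when the code typed on the phone matches the
    # one displayed at the origin; a one-tap blind approval cannot exist.
    return typed_code == challenge.code

ch = create_challenge("careers.example.com", "Log in to assessment portal")
print(f"To approve '{ch.action}', enter the code shown on {ch.origin}")
```

The design choice matters for exactly the scenario described above: an attacker who triggers a push from their own session can pressure a victim to tap “Approve”, but cannot easily relay a code the victim never sees, and the prompt itself names the origin and the action being authorized.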

The industrialization of deception

Good people operate on trust, while attackers operate on optimization.

And mobile channel trends – combined with rapid advances in AI – have enabled bad actors to optimize and scale fraud with increasing efficiency. Automation, AI-generated phishing kits, voice cloning, and deepfakes have improved both reach and plausibility.

At the same time, the systems themselves have become more complex. There are now more steps, more approvals, and more signals, but not necessarily more clarity for the user. This creates the conditions for what might be called verification fatigue: users become accustomed to approving requests without scrutiny.

Layer on top of this broader social pressures like job insecurity, economic uncertainty, and the need to remain relevant in the age of AI, and you get a population more likely to engage with opportunities that promise improvement, even when signals are imperfect.

Think for a second about the current direction of travel:

  • AI-powered social engineering with perfect tone, timing, and personalization, indistinguishable from legitimate communications
  • Increasing system complexity with more approvals, more signals, and more noise (and less clarity for users)
  • Verification fatigue, with users increasingly conditioned to approve without thinking

Now, add to that people’s desperation to be relevant: to matter, to be noticed, and to secure a ticket onto a train they fear they may not be able to board in time.

It’s the perfect storm.

And the winds can blow even more wildly in times of global instability. In times of crisis, when flights are grounded, financial markets fluctuate, or conflicts escalate, attackers move quickly to impersonate authorities, service providers, or aid organizations.

In recent weeks, the war raging in the Middle East has left thousands stranded as flights were grounded. Within hours of this particular crisis, scammers began targeting the desperate. Fake airline social media accounts were created, and people looking for a way out of the region began receiving direct messages from these accounts promising flights in exchange for personal information, along with links to money transfer apps to “purchase tickets.”

Urgency, information gaps, and heightened emotional states can all degrade decision quality. Global instability doesn’t create fraud, but it amplifies the conditions under which it thrives.

Protecting trust in an adversarial world

The world in which digital decisions are made has changed.

People are now routinely asked to make security-critical choices in environments defined by urgency, incomplete information, and increasingly sophisticated deception. At the same time, the signals they rely on to judge legitimacy – interfaces, messages, identities – are becoming easier to replicate and harder to verify.

This is creating a structural tension that all of us are beginning to feel.

Trust is essential to how digital systems function, but what if the conditions under which that trust is exercised are no longer stable?

The response cannot be to expect perfect judgment from users operating under this pressure. Nor can it be to remove trust entirely. Instead, a shift is required in how systems are designed: away from models that depend on momentary user decisions, and toward models that remain resilient even when those decisions are manipulated.

This is particularly important in the mobile channel, where identity, authentication, and approval increasingly converge on a single device. As we explored in our recent article, The Mobile Authentication Illusion, adding more steps does not necessarily create more security, especially when those steps rely on the same environment that may already be compromised.

The challenge ahead is not simply to detect fraud more effectively, or to educate users more thoroughly. It is to rethink where trust resides in digital systems, and how it can be protected when both human behavior and technical signals are subject to increasingly sophisticated and convincing forms of manipulation.

In a world where deception is industrialized, trust cannot disappear, but it can no longer be left unprotected either.

Future-proofing digital trust

Human advancement has relied on trust. That has not changed and will not change. What has changed in recent years is the environment in which trust is exercised.

Scammers are scaling deception by exploiting system design, human psychology, and contextual pressure. The response cannot be to abandon trust, but to build systems that are resilient to its exploitation.

The digital economy will not function if trust collapses. But it also cannot rely on users making perfect decisions in imperfect conditions.

This responsibility rests with all of us, and here at Licel we’re committed to playing our part.

Our work has always focused on defending the integrity of the mobile channel, because we know that is where modern-day, high-stakes decisions are made – and where they must be protected.