Why trusted signals are the key to closing the widening mobile trust gap

A look back at the biggest mobile channel security trends of 2025 and what to keep an eye on in 2026.

From the bank’s perspective, it was a routine onboarding session – the kind designed to pass through the system without human attention.

The new user had opened the bank’s mobile app and started the registration process, granting camera permissions and following the prompts one by one. The liveness check passed, and there were no issues with the requested selfie, which matched the uploaded passport photo. The device profile landed comfortably in the bank’s “low risk” bracket, too – the handset was mainstream, it was running the latest OS, and there were no obvious tampering indicators or malware flags.

In other words, there were green ticks all the way down the fraud console. Nothing about the session triggered a manual review. There was no need. Modern fraud systems aren’t built to scrutinize individual events – they’re built to recognize patterns, and this one looked reassuringly familiar.

Two hours later, the same brand-new account initiated a high-value transfer. Then another one. And another. The timing was measured, the amounts sitting just beneath internal thresholds that demand immediate investigation. From the point of view of the backend, this was still a legitimate customer behaving cautiously.

It was only later, once the activity was correlated, flagged retrospectively, and examined during investigation, that the digital mask slipped off and the story changed.

When analysts replayed the onboarding session end-to-end, the illusion became clear. The selfie wasn’t actually a selfie at all, but a deepfake streamed via a virtual camera. The device wasn’t as healthy as it seemed, either. It had just been staged to look healthy. Every signal the system had relied on was technically valid but fundamentally misleading.
As a mobile banking client of ours described it to us afterwards:

“The backend didn’t fail in this instance. It simply believed what it was shown.”

If there’s a lesson from the world of mobile security in 2025, it’s that there’s a big difference between what the backend believes is happening and what is actually happening inside the device and the application at runtime.

A battle for trust is now taking place on the mobile device, and we’re collectively depending on client-side signals like camera input, device state, and runtime behavior at the exact moment attackers have learned how to fake these signals with calm precision.

The result is that a trust gap is widening between what the backend assumes is true and what is actually happening inside the app itself. And it’s from within this growing gap that the most dangerous forms of mobile fraud are being born.

This article is about that gap, the trends that have helped to widen it in 2025, and why Trusted Signals are the key to closing it in 2026.

Closing the Mobile Trust Gap

The striking thing about the story above is that it wasn’t strange signals that caught the attention of the bank’s fraud team, but rather clean, perfect-looking ones. This last year has proven to us that mobile fraud can announce itself via precision, be that deepfakes matching expected liveness checks, emulator farms mimicking real devices flawlessly, or malware that drives a device in the same way that a human would. 

Typically, there has been an assumption that certain signals like geolocation, camera input, biometric checks, device model and OS versions, and network characteristics were inherently reliable and trustworthy. But this assumption is flawed, because these signals can be spoofed, staged, emulated, and tampered with – often with flawless precision and, increasingly, at scale. 

If we don’t want the trust gap to widen even further, then a philosophical rethink is required. We must question these signals and validate the integrity of the environments they originate from.

What is the value of a signal, after all, if it cannot be verified and trusted?   

A trusted signal can inform us about the trustworthiness of the runtime environment and the authenticity of the device. It can tell us about the integrity of the application, and about the presence of malware or other forms of manipulation on the device.

At Licel, we believe that 2026 will be the year that we stop assuming trust and start engineering it instead.

Authentication Overload

Once upon a time, you would log into your online bank on your laptop with a login and password, and then receive a one-time-password (OTP) on an isolated environment – a hardware token, for example, or even an SMS on your mobile device. The second factor really was a second factor, because you received it somewhere else.

These days, because the mobile device is stuck to our fingertips at all times and is used for more or less every digital interaction and operation we carry out, the dynamic is much changed and a lot more complex. We’ve layered on a dizzying array of authentication measures to try to make sure that the person behind the screen is the genuine user and not an imposter. These include:

  • SMS OTPs and push notifications
  • Email codes and magic links
  • Authenticator apps
  • Passkeys
  • OS-level biometrics
  • Facial recognition and liveness checks
  • Callbacks and human verification 
  • NFC scans

The fact that this is by no means an exhaustive list tells you something about the level of complexity that exists in the world of authentication right now. We’re adding more and more locks to a door, but somehow it doesn’t feel any more secure than it did when there was just one lock attached to it. Why is that?    

Take the last of the authentication measures on this list, for example. You might assume that an NFC scan – say of your government-issued passport – is the pinnacle of security, but sadly this isn’t the case. Malware is capable of replaying data without the passport even being required. Attackers have also taken to redirecting payment flows to fraudulent endpoints, including other mobile devices. This is known as an NFC Relay (or NFC Proxy Malware) Attack.   

The key flaw with almost all of these factors of authentication is that they land on the same device that you’re using to carry out the sensitive operation in the first place: the mobile phone. The upshot of this is that instead of three distinct factors of authentication, you effectively have one potentially compromised environment wearing three different costumes. You end up with something performative – an authentication illusion rather than reliable authentication. Push notifications and passkeys are designed to be phishing-resistant, but if the device or app is under an attacker’s control, then it’s all quite pointless.

Part of how we got here is the perpetual tug of war between user experience and security. It makes sense that product and marketing teams push for smoother, quicker onboarding flows and that engineers and security teams might want a little bit more friction if the payoff is higher assurance. What typically happens is that the UX argument wins by default and then another security method is patched on later. Another lock is added to the door, but it doesn’t make it any more secure. 

The sense is that something will need to change in 2026. It isn’t sustainable to keep stacking authentication factors on top of a device that you can never fully trust. 

The answer is to make sure that we can trust the signals that we receive from the device environment in the first place. Are we sure that the app hasn’t been tampered with or repackaged? Is the session behavior human? And is it really the user’s profile, or a well-crafted emulator imitating it?

Forward-thinking companies aren’t attempting to add even more authentication. They understand that three factors of authentication (something you know, something you have, and something you are) will suffice if they have a clear sense of what is happening on the device and are confident they can trust the signals emerging from it.

Deepfakes and Virtual Cameras: when the person signal fails

The mobile banking story at the beginning of this article revealed that attackers are increasingly targeting the interface between the human and the device. They are tricking the app into believing that a person is present when they’re not.

The steep rise in the number of camera injection attacks is particularly alarming. By bypassing the physical camera altogether and intercepting the video feed to inject synthetic or pre-recorded imagery into the app’s input stream, bad actors can fool the eKYC checks, making the system think that it is looking at a real human.  

Hybrid deepfakes are also gaining in popularity. This is where an attacker will start with a real person’s image from social media or a stolen document and, with the help of AI, produce realistic identities that look natural and stand a better chance of passing liveness checks. 

This trend exposes an uncomfortable truth: camera data has historically been trusted because it originates from the device’s camera API, but these days that isn’t a guarantee that the data can be trusted – not if you can’t confirm the integrity of the environment that is generating those images. The answer is to integrity-verify camera sessions so the stream is genuine, carry out runtime checks to detect virtual camera modules, and protect the execution of biometric operations.
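To make the session-integrity idea concrete, here is a minimal Python sketch of frame tagging. It assumes a protected capture component that shares a per-session key with the verifier; the keying model is deliberately simplified for illustration, and real products bind this to hardware-backed keys rather than a shared HMAC secret.

```python
import hashlib
import hmac
import os

# Hypothetical sketch: the protected capture component tags every frame it
# produces with a per-session key. A virtual camera injecting frames from
# outside that component has no key, so its frames fail verification.

def tag_frame(session_key: bytes, session_nonce: bytes, frame: bytes) -> bytes:
    """Tag a frame so it is bound to this capture session."""
    return hmac.new(session_key, session_nonce + frame, hashlib.sha256).digest()

def verify_frame(session_key: bytes, session_nonce: bytes,
                 frame: bytes, tag: bytes) -> bool:
    """Accept a frame only if its tag matches this session's key and nonce."""
    expected = tag_frame(session_key, session_nonce, frame)
    return hmac.compare_digest(expected, tag)

key, nonce = os.urandom(32), os.urandom(16)
genuine_frame = b"raw-sensor-bytes"      # stand-in for real frame data
tag = tag_frame(key, nonce, genuine_frame)

print(verify_frame(key, nonce, genuine_frame, tag))      # True: genuine frame
print(verify_frame(key, nonce, b"injected-frame", tag))  # False: injected frame
```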

If you can’t trust the signal, then you can’t trust the biometric. And if you can’t trust the environment, then you can’t trust the signal. Pivoting from asking whether you’re looking at a real face to asking whether you can trust the system that claims it’s a real face is likely to be a key shift in 2026 in the fight against eKYC fraud.  

Emulators and Anti-Detect Platforms: when the device identity becomes a lie

If this last year has shown us that human identity can be convincingly spoofed, it has also been the year that attackers perfected the art of impersonating a device. The rise of industrialized device evasion has shown us that today’s bogus environments don’t only hide “bad signals” – they also fabricate good ones.

Instead of manually modifying a rooted or jailbroken phone, attackers can now:

  • spin up dozens (or even hundreds) of mobile devices
  • randomize their fingerprints to match “clean” hardware
  • spoof GPS, carrier data, manufacturer info, sensor outputs, and other IDs
  • automate virtually every interaction in the session
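One partial defense against the capabilities listed above is that fabricated profiles tend to leak inconsistencies: each field may look plausible in isolation, but keeping every field mutually consistent is much harder. The toy Python heuristic below illustrates the idea; the profile fields, expected-sensor table, and checks are all invented for the sketch and are nothing like a real detection product.

```python
# Illustrative consistency heuristic: cross-check claimed device telemetry
# for combinations that genuine hardware rarely produces. All model names
# and fields are hypothetical examples.

EXPECTED_SENSORS = {
    "ExamplePhone 8": {"accelerometer", "gyroscope", "barometer", "proximity"},
}

def profile_inconsistencies(profile: dict) -> list:
    """Return human-readable reasons this device profile looks fabricated."""
    issues = []
    model = profile.get("model")
    sensors = set(profile.get("sensors", []))
    expected = EXPECTED_SENSORS.get(model)
    if expected and not expected <= sensors:
        issues.append(f"sensors missing for {model}: {sorted(expected - sensors)}")
    # Emulator farms often leave tell-tale combinations behind:
    if profile.get("battery_level") == 100 and profile.get("uptime_hours", 0) > 72:
        issues.append("battery pinned at 100% despite days of uptime")
    if profile.get("carrier") and not profile.get("has_telephony", True):
        issues.append("carrier reported on a device with no telephony hardware")
    return issues

suspect = {"model": "ExamplePhone 8", "sensors": ["accelerometer"],
           "battery_level": 100, "uptime_hours": 96,
           "carrier": "ExampleTel", "has_telephony": False}
print(profile_inconsistencies(suspect))                   # three inconsistencies
print(profile_inconsistencies({"model": "Other", "sensors": []}))  # []
```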

The issue – as with other types of attacks covered in this piece – is that backend fraud analytics are easier to fool than a hardened mobile application. If the fraudulent request is coming from a familiar device model with the right OS version and a matching device ID, then the backend doesn’t have a reliable way of knowing that it’s being lied to. 

The barrier to entry for attackers has also been lowered, with fraudulent mobile ecosystems now available on underground marketplaces. That means these types of attacks are much easier to carry out (and scale) than they used to be. 

The key to stopping emulators and anti-detect platforms is to make sure that protection is focused on the application’s runtime environment, as backend systems are blind to what is happening there. Mobile apps need to enforce tamper detection, emulator detection, protections against hooking, runtime application self-protection (RASP), and integrity checks.

This brings us back to the concept of trusted signals. In this case, that means signals that are tied to the integrity of the environment that produced them; it isn’t enough to accept device-level telemetry without first questioning and then verifying the trustworthiness of the environment. 

A single attacker can now operate hundreds of bogus user devices at the same time, each of them with a carefully forged identity. The bare minimum we should be doing is making those devices prove themselves cryptographically. 
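One minimal sketch of what “proving itself cryptographically” could mean is a challenge-response flow against a key fixed at enrollment. The Python below uses a per-device shared secret purely to stay self-contained; real schemes such as Android Key Attestation or Apple App Attest use asymmetric, hardware-backed keys, and all names here are illustrative.

```python
import hashlib
import hmac
import os

# Sketch of challenge-response device attestation. A fresh nonce per attempt
# defeats replay; a device without the enrolled key cannot answer.

class AttestationBackend:
    def __init__(self, enrolled_keys: dict):
        self.enrolled_keys = enrolled_keys  # device_id -> key set at enrollment
        self.pending = {}                   # device_id -> outstanding nonce

    def issue_challenge(self, device_id: str) -> bytes:
        nonce = os.urandom(16)
        self.pending[device_id] = nonce
        return nonce

    def verify(self, device_id: str, response: bytes) -> bool:
        key = self.enrolled_keys.get(device_id)
        nonce = self.pending.pop(device_id, None)   # single use
        if key is None or nonce is None:
            return False
        expected = hmac.new(key, nonce, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

def device_respond(device_key: bytes, nonce: bytes) -> bytes:
    """What a genuine device computes with its enrolled key."""
    return hmac.new(device_key, nonce, hashlib.sha256).digest()

genuine_key = os.urandom(32)
backend = AttestationBackend({"device-1": genuine_key})

nonce = backend.issue_challenge("device-1")
print(backend.verify("device-1", device_respond(genuine_key, nonce)))  # True

nonce = backend.issue_challenge("device-1")
forged_key = os.urandom(32)              # emulator without the real key
print(backend.verify("device-1", device_respond(forged_key, nonce)))   # False
```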

Mobile Malware’s Evolution and Hybrid Scams: when control falls into the wrong hands

Mobile malware has now reached an industrial scale. Last year we highlighted a particular strain that spread rapidly across India in a manner reminiscent of a biological virus – indeed, the map we created for that piece of content instantly brought back memories of the Covid visualizations we all became so used to seeing several years ago. 

Malware, much like a biological virus, is fast, adaptive, and opportunistic. Its sophistication has advanced in the last year to the point where it’s now legitimate to ask the question: 

“Who is actually controlling this user session?” 

On-device remote access trojans (mobile RATs) enable attackers to stream the victim’s device in real time, interact with the banking app directly, input passwords, PINs, or OTPs that the user is tricked into providing, move funds from inside the authenticated session, and hide activity behind fake screens or UI locks. 

From the perspective of the backend, everything looks normal – something of a running theme in this article. The victim’s device is treated as a remote-controlled terminal, and so every fraudulent action is performed on what appears to be a legitimate device, through a legitimate app, over a legitimate network, using legitimate credentials. 

But while RATs and trojans have become more sophisticated, malware attacks still rely on psychology to succeed. Human mental traps related to emotions like fear, reassurance, and desire are routinely exploited with a view to making people act quickly and unthinkingly, and to click on something they shouldn’t. The most successful scams combine the technical and the human; social engineering primes the victim, and then malware executes the fraud.   

In one scenario, a victim might search for the bank’s support number online and come across a malicious ad placed by an attacker. The victim calls what they assume to be the legitimate number, and a scammer on the other end of the line convinces them to install a bogus app that will give the attackers remote control to resolve an urgent issue. 

The upshot of an attack like this is the attacker gaining the user’s trust, attention, and device in one go. Victims often comply with attackers’ requests not because they are naive, or stupid, or because they aren’t tech savvy enough, but because bad actors carry out their attacks with professional-style skill and choreography. It’s a dance where they know all the steps inside out and for which we – the victims – are completely unprepared. It’s no surprise that it’s so easy for us to trip up. It can happen to any of us, too; even those of us who work in cybersecurity and know the signs to look out for.

What is more, AI has also helped to make social engineering even more convincing, adding an extra layer of difficulty when it comes to avoiding becoming a victim of malware attacks. 

This is why malware is such a huge challenge right now for a range of organizations, but especially for those in the banking and payments sector. Humans, like devices, are breachable, and it’s a lot harder to protect humans than it is to protect a device or application. You can (and should) educate your customers to give them a better chance of spotting bogus communications, but in the modern digital world you cannot train away distraction. 

And so other anti-malware mechanisms take on even greater importance. Closing the growing trust gap will require runtime integrity checks to make sure the app hasn’t been tampered with, checks for malware and environment signal changes, and checks for behavioral signals to help determine whether input appears to be human or automated. 
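As a rough illustration of the behavioral angle, a backend might flag input whose timing is implausibly uniform. The toy Python heuristic below is nowhere near a real behavioral engine (which would use touch curves, pressure, and sensor data), and its 5% threshold is invented, but it shows the shape of a human-vs-automated check.

```python
import statistics

# Toy heuristic: scripted input tends to have near-constant gaps between
# events, while humans are noisy. The threshold is illustrative only.

def looks_automated(tap_timestamps_ms: list) -> bool:
    """Flag input whose inter-tap timing is suspiciously uniform."""
    if len(tap_timestamps_ms) < 4:
        return False  # not enough evidence either way
    gaps = [b - a for a, b in zip(tap_timestamps_ms, tap_timestamps_ms[1:])]
    return statistics.pstdev(gaps) < 0.05 * statistics.mean(gaps)

print(looks_automated([0, 100, 200, 300, 400]))  # True: metronome-like
print(looks_automated([0, 130, 210, 395, 450]))  # False: human jitter
```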

Also of increasing importance are tamper-proofed, trusted signals in the form of threat intelligence that not only detects the presence of malware on a device, but also gives banks the ability to make nuanced decisions rather than blanket blocking. For example, high-value transactions can be delayed by several hours and communications can be initiated with the owner of the device to inform them of the suspected manipulation. 
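Such graded responses can be expressed as a simple policy function. The sketch below is hypothetical – the threshold amount and action names are invented – but it shows how tamper-proofed signals enable decisions beyond a blanket allow/block.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical policy sketch: tamper-proofed threat intelligence feeds a
# graded response rather than a blanket block.

class Action(Enum):
    ALLOW = "allow"
    DELAY_AND_NOTIFY = "delay_and_notify"  # hold the transfer, contact the user
    BLOCK = "block"

@dataclass
class DeviceSignal:
    malware_detected: bool
    signal_integrity_ok: bool  # was the signal itself tamper-proofed?

def decide(signal: DeviceSignal, amount: float) -> Action:
    if not signal.signal_integrity_ok:
        return Action.BLOCK                # nothing this device reports can be trusted
    if signal.malware_detected and amount >= 1000:
        return Action.DELAY_AND_NOTIFY     # e.g. delay hours, message the owner
    return Action.ALLOW

print(decide(DeviceSignal(True, True), 5000))  # Action.DELAY_AND_NOTIFY
print(decide(DeviceSignal(True, True), 50))    # Action.ALLOW
print(decide(DeviceSignal(False, False), 50))  # Action.BLOCK
```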

The Stakes Are Getting Higher and Higher

A big reason why the trust gap has widened in 2025 is that, at the same time that the attack trends we’ve covered up to now – deepfakes, emulators, malware, and social engineering – have been increasing in both quantity and sophistication, the importance of mobile trust has also skyrocketed. 

In the past year, the mobile device has remained central to user identity and transactions, but has also become the gateway for national digital identity systems, regulated payment ecosystems, and new forms of authentication and authorization. 

That means that the cost of trusting the wrong signal has never been higher.  

Digital ID ecosystems gain traction

In October this year, the UK government announced its intentions to build and roll out a digital identity scheme in the coming years. It joined other countries around the world whose governments are currently working on Digital ID Wallets, Digital Travel Credentials (DTC), Mobile Driving Licenses (mDL), eIDs and Residence Permits, and Digital Health Records – in some cases all at once. 

These systems will all rely on everyday mobile devices, they will allow citizens to authenticate into services with high legal and financial impact, and they will become a kind of master key for accessing banking, healthcare, travel, and payments. 

It goes without saying that there’s a lot of risk involved with hosting such a vital system – one that will essentially become a hub of citizens’ digital lives – on the mobile device: a completely untrusted environment that might be compromised via some of the vulnerabilities we’ve already covered in this piece. As we hope you’ve gathered if you’ve read this far, trust cannot be assumed on the mobile device.

A compromised Digital ID Wallet is more than an example of mobile fraud. It could completely destabilize trust in government services, border control, voting systems, welfare payments, and more besides.

We’ve written a lot about Digital ID in recent months. If you’re interested in finding out more, you can read our vision for how to protect digital identity ecosystems in the coming months and years.

Integrity becomes a procurement language

The shift to standardized digital credentials is particularly important in the digital identity space. Key international initiatives like the EU Digital Identity Wallet (EUDIW) demand interoperable, user-centric, and secure digital identity systems. 

Across a range of verticals, assurance is becoming mandatory. Industry standards such as EMVCo SBMP, PCI MPoC, ISO/IEC mobile security baselines, and Digital ID trust frameworks like eIDAS and ICAO have all converged around similar security expectations. These include protected and trusted execution for sensitive operations, strong runtime integrity verifications, resistance to malware and dynamic tampering, and robust attestation. 

During this past year, several of our clients have used our security components to help them achieve EMVCo SBMP approval. This is cause for celebration for them and is a sign of their maturity and the way they think about security. But it also points to where procurement is heading and how important compliance is in narrowing the trust gap.

For buyers, high-level claims like “we obfuscate the app” don’t cut it anymore. They increasingly expect evidence of third-party evaluation and approval from respected global bodies. From 2026, compliance with standards is set to be seen as the minimum expectation.

Post-Quantum Cryptography: planning for long-term security

Post-Quantum Cryptography (PQC) has been on the agenda for security teams for years now, but there has been little urgency to implement it. The stakes are much higher now, however. Multiple nations have set out clearer pathways to accelerate migration, and there’s more clarity around PQC threat models; it no longer feels like an academic issue.

PQC is particularly important for mobile security because transactions increasingly use long-lived public keys which are often stored and managed on mobile devices. Also, going back to the planned implementation of digital identity solutions, it’s important to keep in mind that Digital ID credentials might live for decades. Compromised devices could potentially leak credentials, keys, and other secrets that will be vulnerable not only today but in the years to come, too. Even if attackers can’t break a cryptographic algorithm today, they can harvest encrypted data now and decrypt it further down the line. Sensitive personal information – think medical test results or intimate photos – could even be stolen for future blackmailing purposes.

Again, this is why trusted signals are so important. You need to ensure the integrity of cryptographic operations and the environments where they occur (protected and trusted execution, and tamper-resistant enclaves). This matters as much as the cryptographic algorithm itself, and explains why we’re so invested in this protection capability here at Licel. All of our security solutions are post-quantum ready. 

Mobile APIs are a big target

In recent months, we’ve seen a significant uptake in the deployment of our DexProtector Mobile API Protection mechanism to mitigate critical security and business threats. These include botnets, bonus program abuse, and fraudulent communications with hardware NFC digital wallets.

Unsecured APIs aren’t just an Android problem anymore, which is why we’ve recently extended this mechanism to iOS. Indeed, Apple’s introduction of app side-loading capabilities in the EU earlier this year has resulted in the iOS platform becoming an increasingly attractive target for attackers.

The ability to safeguard the communication channel between mobile applications and their backends is therefore now fundamental on both major platforms. Mobile API Protection enables client backend servers to check the JWT token; if the verification of that token fails, the request is not coming from the original, authentic app but from a tampered version of it, a clone, or a bot, for example. In other words, it helps to make sure that only authentic and integrity-verified app instances can access protected APIs.
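As an illustration of that backend-side check, here is a minimal HS256 JWT round-trip in stdlib Python. The claim names and shared secret are assumptions for the sketch; a production service would use a maintained JWT library and, typically, asymmetric keys.

```python
import base64
import hashlib
import hmac
import json
import time

# Minimal HS256 JWT mint/verify pair, for illustration only.

def _b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def _b64url_decode(part: str) -> bytes:
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

def mint_token(claims: dict, secret: bytes) -> str:
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps(claims).encode())
    sig = hmac.new(secret, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    return f"{header}.{payload}.{_b64url(sig)}"

def verify_app_token(token: str, secret: bytes):
    """Return the claims if signature and expiry check out, else None."""
    try:
        header_b64, payload_b64, sig_b64 = token.split(".")
    except ValueError:
        return None
    expected = hmac.new(secret, f"{header_b64}.{payload_b64}".encode(),
                        hashlib.sha256).digest()
    if not hmac.compare_digest(expected, _b64url_decode(sig_b64)):
        return None                       # tampered app, clone, or bot
    claims = json.loads(_b64url_decode(payload_b64))
    if claims.get("exp", 0) < time.time():
        return None                       # stale token
    return claims

secret = b"demo-shared-secret"
token = mint_token({"app_id": "com.example.bank", "exp": time.time() + 300}, secret)
print(verify_app_token(token, secret) is not None)   # True: authentic instance
print(verify_app_token(token + "x", secret) is None) # True: tampered token rejected
```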

This security is more important than ever because, as we’ve said a few times already in this piece, a lot of requests are now arriving at the backend that at first glance appear completely legitimate. Without trusted signals, the line between what is genuine and what is wearing a convincing costume is incredibly blurry.

Agentic AI swarms: the new DDOS

Mobile API Protection is particularly important because the next iteration of API abuse is likely to be an army of autonomous AI-driven fraud bots capable of multi-session testing, credential stuffing, and synthetic persona creation, all the while learning in real time from failed attempts.

Think of it as thousands of coordinated software agents behaving like legitimate human users. Right now, it’s extremely difficult for most backend fraud models to tell synthetic humans apart from real ones – that is unless the environment itself provides trusted signals. 

Without the ability to verify the safety of environments, protect execution paths, attest runtimes, and trust how inputs and threat intelligence data are generated, these agentic AI swarms will be able to cause havoc. They could open accounts, make transfers, and carry out wallet activations. All at scale and with frightening precision.

Our Wildcard Trend: Neuro-Metric Trust

When we set out to write our annual trends article, there’s normally one that we spot quite far out in the distance, barely perceptible at first, but moving at a pace that makes it worthy of an honorable mention.

This year we want to talk about the rapid rise of consumer neuro technology and biologically-linked wearables. It might not be entrenched in the next year, but as we approach the end of this decade, it’s likely that authentication will lean increasingly on biometric signals: attention patterns, focus levels, stress responses, and emotional states. Security researchers are exploring whether these could be used for passive authentication, replacements for behavioral biometrics, or an additional trust factor that can be layered onto high-risk actions.

It isn’t as far-fetched as it sounds. And the worry is that, as we’ve explained in this article, if a signal can be collected, then it can probably be spoofed. After all, two or three years ago, the idea of someone’s face or voice being spoofed almost certainly would have sounded like a scene from a Black Mirror episode, but here we are. 

AI models might learn to replicate certain cognitive-linked patterns, attackers could replay captured neuro-signals, compromised wearables could leak biometric telemetry, and synthetic neuro-behavioral profiles could even be generated on command. 

Might there be a time in the coming year or two when analysts call for some form of neuro-encryption to make sure that raw neurological data never leaves the device? 

Every new identity signal we create eventually becomes a new class of attack surface. The same will be true of neuro-metric authentication and, as is the case for the more current attacks we’ve reported in this article, the only durable defense is to make sure that the environment producing the signal is trustworthy. 

What it all Means for Mobile Channel Protection in 2026

Integrity, Integrity, Integrity

In 2025 we witnessed clean devices being emulated, intact apps being repackaged, real camera feeds being hijacked, legitimate biometrics being performed by malware, and trusted sessions being remote-controlled. All of these issues stem from the same problem: that the system cannot prove whether an environment can be trusted or not. 

Integrity checks are a vital mechanism for deciding whether an application or environment has been tampered with or modified illegally, and, therefore, whether it can still be trusted. 

Here at Licel, DexProtector carries out integrity checks and RASP checks for our clients’ applications, verifying integrity and trustworthiness at a variety of different levels. It is able to detect and disallow rooted and jailbroken devices; dynamic instrumentation tools (like Frida) and hooking frameworks; tampered and repackaged apps; and more.

It also tamper-proofs the sensors and signals that it relays to our threat intelligence solution, Alice.

Integrity matters across most of the attack trends we’ve covered in this piece. It can help us decide whether a device can prove its trustworthiness, determine whether camera streams are genuine and whether sessions are being injected, spot (and stop) emulated devices pretending to be real ones, and prevent increasingly sophisticated mobile API abuse.

Remember that without integrity checks, signals are just noise. With integrity checks, they become trusted evidence that enables you to make smarter, better-informed business decisions. We think that this is going to be a massive value proposition and differentiator in 2026.

See the unseen and act with confidence

The whole point of articles like this one is for us to take a step back, consider the threats that have mattered most to our clients this year - the ones that we have worked so hard to mitigate and prevent - and then share the big picture of what it all means with you, the reader. It’s one more element of a continuous security strategy that guides the work we do at Licel. We know that attacks are constantly evolving and that standing still isn’t an option.

Our Threat Intelligence solution, Alice, takes DexProtector’s tamper-proofed, trusted signals and turns them into insights that mean something to our clients. These signals tell a coherent story about the ever-evolving threat landscape that surrounds their application and empower analysts and SOC teams to make important decisions based on real, reliable inputs.

Trusted threat intelligence like this helps to turn a static control room into something more dynamic. It provides ongoing insights about new malware signatures and behaviors, emerging evasion techniques, device-spoofing tactics, evolving injection and accessibility attacks, and abnormal app usage patterns, among other vital data.

Trusted execution for the operations that matter most

In this article, we’ve hopefully convinced you that the mobile channel in 2025 is an inherently untrustworthy environment with a variety of malicious threats floating in the ether. How, then, can we hope to rely on applications in 2026 that operate in this murky world? How can we bring security to an environment that cannot be trusted by default? 

Introducing trusted execution: its role is to divide the execution environment into two worlds – the trusted world and the untrusted world. The trusted world is a protected space where private keys can be stored and where sensitive operations and transactions can take place (encryption, decryption, authentication, and secure storage of tokens and keys).

A few weeks ago we published an article that looked even further ahead - to 2035 - and imagined how we can best secure identities (of both humans and the AI agents we’re going to be relying on to carry out a lot of our digital tasks) between now and then. The concept of trusted execution played a massive role in our vision for achieving this goal. 

Here at Licel, our virtual Trusted Execution Environment (vTEE) is already helping our clients in the wallet and SoftPOS space to protect their sensitive logic even if the device around it cannot be fully trusted. 

From 2026, you’ll be much more likely to hear security teams saying: “if it matters, run it in a trusted execution environment.”


None of these three vital pillars of security in 2026 will work independently. The real value of integrity, trusted intelligence, and trusted execution emerges when they work together to produce signals that can’t be tampered with, can’t be spoofed by malware, can’t be forged, and can’t be emulated at scale.

That way you end up with signals that are based on reality rather than assumptions. Signals that give SOC teams the power to say “I know what is happening on this device; I can act with confidence.”


2025 was the year the mobile trust gap became impossible to ignore.

We’ve seen how convincingly identity can be faked, how easily devices can be impersonated, and how quietly sessions can be taken over, while backend systems continue to behave exactly as they were designed to.

The answer isn’t to add more checks or more complexity. It’s to stop working on assumptions that attackers no longer respect. Trust must be engineered, not inferred.

As we move into 2026, the organizations that succeed will be those that anchor every critical decision that they make in signals that are verifiable, resilient, and grounded in reality.  

In other words, they will be the ones who know - rather than guess - which signals they can really trust.

Explore our use cases to discover how Licel solutions are future-proofing the security of applications across Mobile Banking, Mobile Wallets, SoftPOS, and Digital Identity.

Read our use cases