At Licel, we’ve been providing security and anti-fraud solutions for mobile apps since the introduction of DexProtector in 2013.
Over those 12 years and more, organizations have become increasingly aware of the importance of building security into their apps on both Android and iOS.
From obfuscation and encryption to runtime application self-protection, anti-tampering, and anti-hooking, through to root detection, emulator detection, and a whole range of device attestation mechanisms: these have gone from specialized technologies to essential items on mobile security checklists.
Meanwhile, backend and infrastructure teams have built up their own world of protections: web application firewalls, API gateways, rate limiting, DDoS mitigations, hardened identity and access layers.
And when it comes to the interfaces and interactions between the mobile app and the backend, since TLS became the standard for securing app-to-backend network connections, security teams have primarily been tasked with solving one particular problem: how does the app know it is really communicating with the genuine backend, and not with an attacker in the middle?
For this, public key pinning became the de facto standard method for the app to authenticate the server.
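As a rough sketch of what that looks like in practice on Android (the host and pin value here are placeholders, not real ones), pinning is commonly configured through something like OkHttp’s CertificatePinner:

```kotlin
import okhttp3.CertificatePinner
import okhttp3.OkHttpClient

// Placeholder host and pin; a real pin is the base64-encoded SHA-256
// hash of the backend's public key.
val pinner = CertificatePinner.Builder()
    .add("api.example-bank.com", "sha256/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=")
    .build()

// Connections to the pinned host now fail unless the server's
// certificate chain contains a key matching one of the pins.
val client = OkHttpClient.Builder()
    .certificatePinner(pinner)
    .build()
```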
But there is a mirror image of this problem that initially received less attention, although in many ways it constitutes at least half of the problem of trust in the mobile channel: how does the server authenticate the app?
Most organizations have tried various solutions over time: API keys, shared secrets used for request signing, client certificates presented for mutual TLS authentication, and more.
In principle, these controls look like the natural counterpart to pinning, because they seem to complete the symmetry: the app authenticates the server, and the server authenticates the app.
In practice, though, these mechanisms have a fundamental limitation: what the server is authenticating is not an app but a credential: a string, or a signature or certificate produced with a key that the app holds.
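To see why, consider a minimal, purely illustrative request-signing scheme of the kind often layered on top of TLS (the secret and the signing scheme here are hypothetical). Everything the server ends up checking derives from a key that lives inside the app:

```kotlin
import java.util.Base64
import javax.crypto.Mac
import javax.crypto.spec.SecretKeySpec

// Hypothetical shared secret shipped inside the app binary. Anyone who
// extracts it can sign requests exactly as the genuine app would.
private const val SHARED_SECRET = "s3cr3t-embedded-in-the-app"

// Produces a signature for the request. The backend recomputes the HMAC
// and compares: it is authenticating the secret, not the app.
fun signRequest(method: String, path: String, timestamp: Long, body: String): String {
    val mac = Mac.getInstance("HmacSHA256")
    mac.init(SecretKeySpec(SHARED_SECRET.toByteArray(), "HmacSHA256"))
    val payload = "$method\n$path\n$timestamp\n$body"
    return Base64.getEncoder().encodeToString(mac.doFinal(payload.toByteArray()))
}
```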
This is a particularly significant issue in mobile security, because attackers in the mobile channel have endless opportunities to observe, analyze, and manipulate apps, and to extract data from them.
That means that if an API key or shared secret can be extracted even once, whether through dynamic analysis, memory dumps, man-in-the-middle attacks, or simply theft from somewhere it was handled unsafely, it stops being a trust signal and becomes a free passport to the server side, something any arbitrary client can present just as effectively as the genuine app.
In other words, these approaches offer no high-confidence proof that a request is coming from a legitimate, untampered, up-to-date version of the mobile app, as opposed to a bot, script, emulator, or impostor that has copied or been given that credential.
And there’s also a second, related concern: in many cases, outdated app versions continue to be served by the backend long after they should.
Since in-app security and anti-fraud controls are continuously refined and extended, older app versions are easier targets, often carrying weaknesses or vulnerabilities that have been mitigated in subsequent updates.
Attackers and fraudsters, unsurprisingly, don’t insist on fighting the latest, strongest release if the backend continues to accept traffic from an older, weaker one. The result is that the trust model is quietly undermined: if the backend still serves their requests, attackers will simply exploit the weakest viable version.
And in many organizations, the problem is compounded because the ‘mobile API’ isn’t even a distinct surface: the mobile app sometimes shares the same public endpoints as the web client, or reaches into shared internal services. That makes mobile-specific enforcement much harder.
Real fraud campaigns exploit these gaps.
The following are some concrete examples of how they do so, based on what we have seen and heard from organizations around the world. Some details have been changed to preserve anonymity.
Case 1: ‘Sleeper mode’ trojan targets Middle Eastern bank
A bank in the Middle East launched a new mobile app with a major marketing push. Customers were encouraged to download it, enroll, and start using it for everyday transactions.
Fraudsters noticed the marketing and saw this as a great opportunity. They quickly created and circulated their own impostor version of the new app, using phishing campaigns and direct downloads to spread the cloned app as far and wide as possible.
To the bank’s unsuspecting customers, this impostor app looked normal. For one thing, the UI was identical to the original. More importantly, the app just worked: balance checks were available and accurate, paying bills worked, and transfers arrived as expected, on time, to their intended recipients.
In other words, the ‘cloned’ app was communicating with the bank’s real APIs just like the genuine original.
The problem was that the app was also collecting data and reporting it to the fraudsters’ servers in parallel: user credentials; OTP flows; device characteristics; where the customer was when they performed transfers; when they usually logged in; who they paid and how much.
This went on for months, and still the fraudsters did not cash out. They captured as much data as possible from as many customers as possible.
And then the fraudsters’ campaign switched to phase two.
At this point, transactions started appearing that were designed to go unnoticed: the amounts were relatively small, the payees looked plausible, and the timings aligned with each customer’s typical activity window.
There was nothing to trigger a fraud engine’s red flags, nothing strong enough to block transactions or close accounts, until finally the bank’s fraud team became aware of a flurry of disputed transactions.
Case 2: Credential stuffing that looks like ordinary mobile traffic
A bank in Latin America noticed a growing number of disputed transactions, performed via its mobile apps on both Android and iOS.
From the security team’s point of view, the transactions looked legitimate. The API calls were correctly formed and payloads looked normal.
As they pieced together the broader timeline, though, a hypothesis emerged that fit the shape of what they were seeing: a quiet credential stuffing campaign aimed at the same authentication endpoints the mobile app uses. Nothing loud, no brute force in the classic sense: credential dumps tested carefully, a few attempts per account, spread over time and across multiple plausible IP addresses, with requests shaped to resemble normal mobile logins.
There was no immediate, dramatic cash-out. Instead, the access was used patiently, to learn enough about each account to move later without standing out: long-lived access and low-friction fraud.
The fraudsters could have run this campaign through real instances of the mobile app on genuine mobile devices, but the bank’s behavioral analytics and emulator detection would have made that very difficult at scale. Targeting the mobile APIs directly, bypassing the app altogether, provided the easier route to account takeovers.
Case 3: Version rollback enables fraudsters to spoof ID verification
A bank in Western Europe had been steadily reinforcing its mobile onboarding controls and eKYC procedures to reduce the creation of money mule accounts with synthetic and stolen identities.
ID verification and liveness checks were already in place, but were still being bypassed through code injection and deepfake technologies. The bank had therefore decided to ramp up its integration of RASP and anti-tampering protections.
Fraudsters noticed the change quickly, and instead of trying to defeat the new protections head-on, they took the easier route: they simply rolled back to an earlier version.
When the bank’s fraud team realized that incidents were surging again after an initial decline, they analyzed the data from onboardings. One detail kept recurring: a surprising number of the mule accounts were created with an older and less used, but still supported, version of the app.
The bank’s next move was to get more restrictive about supported versions: gradually limiting service for older builds, prompting users more insistently to update, and adding backend checks to enforce minimum versions for onboarding.
For a while, it worked. Then incidents began climbing again.
When the fraud team dug deeper, threat intelligence indicated that mule accounts were still being created from older, unprotected app builds; the fraudsters had simply adapted by spoofing the app’s version identifiers, so simple version gating at the API layer was ineffective.
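The failure mode is easy to see in a deliberately naive, hypothetical sketch of version gating at the API layer, where the backend trusts a version header that the client itself supplies:

```kotlin
// Hypothetical minimum build number required for onboarding.
const val MIN_ONBOARDING_BUILD = 3120

// Naive gate: the X-App-Version header is set by the client, so any
// HTTP client, including an old or repackaged app build, can simply
// claim to be a newer one.
fun isAllowedToOnboard(headers: Map<String, String>): Boolean {
    val claimedBuild = headers["X-App-Version"]?.toIntOrNull() ?: return false
    return claimedBuild >= MIN_ONBOARDING_BUILD
}
```

A one-line patch to the older client, or a proxy rule that rewrites the header, defeats the check entirely.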
Of course, some additional mitigations, spread across the client-side and the server-side, would have made things more difficult for the fraudsters.
In all three examples, the fraudsters’ reconnaissance would have been harder if endpoints had been well hidden and strong defenses had been in place against analysis of the mobile app’s network communications.
Certainly, if the Middle Eastern bank had implemented robust obfuscation and encryption of code and assets, along with proper anti-repackaging mechanisms, it would have been far more difficult for the fraudsters to ship a ‘cloned’ version of the app convincing enough to fool the bank’s customers. Customer education campaigns might have been valuable as well.
And perhaps more effective fraud telemetry and analysis, both in-app and server-side, identifying inconsistencies and anomalies in behavior and transactions, could have flagged the automated and unauthorized activity.
Finally, traditional ‘app authentication’ measures, leveraging API keys, request signing, or mutual TLS, could certainly have made life more difficult for the fraudsters.
But these mitigations are not quite enough, either individually or collectively.
Public endpoints will always remain public, and determined attackers are likely to find their targets.
Customers can be tricked by impostor apps that are only superficially similar to the real thing.
In-app fraud telemetry components can be tampered with, and fed fraudulent data, especially on compromised devices.
And, as we have seen, traditional app authentication measures authenticate a credential, not an application. If the secret (the API key, signing key, or certificate) can be extracted, it can be abused by an untrustworthy client.
Which brings us back to the core point: trust in the mobile channel can’t depend on insecure apps or static secrets, because the mobile client is a target that attackers will have ample opportunity to observe, instrument, and manipulate.
Traditional app authentication mechanisms mostly answer one question: ‘Does this caller have the right credential?’
But to establish trust in the mobile channel, the backend actually needs answers to at least three questions:
- The question of authenticity: Is this request coming from a genuine version of our app on a genuine mobile device, and not from a script, bot, emulator, or impostor?
- The question of currency: Is this request coming from a recent version of the app, implementing up-to-date security controls?
- The question of integrity: Is this request untampered, and is it coming from an untampered app, running in a trustworthy environment?
Mobile platforms provide some building blocks towards these answers. Google’s Play Integrity API can supply signals intended to help distinguish genuine installs and risky environments, and Apple’s App Attest can help an app instance prove it’s legitimate.
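On Android, for instance, requesting such a signal looks roughly like the sketch below (the nonce handling and the two helper functions are our assumptions, not part of any specific integration):

```kotlin
import android.content.Context
import com.google.android.play.core.integrity.IntegrityManagerFactory
import com.google.android.play.core.integrity.IntegrityTokenRequest

// Minimal sketch of a classic Play Integrity API request. The nonce
// should come from the backend and be bound to the API call it
// protects, so the resulting token cannot be replayed.
fun requestIntegrityVerdict(context: Context, nonce: String) {
    val integrityManager = IntegrityManagerFactory.create(context)
    val request = IntegrityTokenRequest.builder()
        .setNonce(nonce)
        .build()
    integrityManager.requestIntegrityToken(request)
        .addOnSuccessListener { response ->
            // The opaque token is forwarded to the backend, which has it
            // decrypted and verified before trusting its verdicts.
            sendTokenToBackend(response.token())
        }
        .addOnFailureListener { e ->
            // Hypothetical policy: treat failure as an untrusted client.
            handleUntrustedClient(e)
        }
}

fun sendTokenToBackend(token: String) { /* transport omitted */ }
fun handleUntrustedClient(e: Exception) { /* policy omitted */ }
```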
These mechanisms have value, but they don’t automatically solve the whole problem of binding API access to proofs of authenticity, currency, and integrity, across all sensitive endpoints, in a way that fraudsters can’t simply bypass.
That’s what DexProtector’s Mobile API Protection capability is designed to do: it gives the backend a way to reject requests from tampered, repackaged, automated, downgraded, or otherwise untrusted clients, even if those clients can perfectly imitate the request format.
The solution is implemented through an in-app Mobile API Protection component integrated by DexProtector, cryptographically bound to the app, and secured by the DexProtector Runtime Engine (DRE).
During integration with DexProtector at build time, a secret is encrypted and embedded in the app. The DRE, as the first component initialized, runs integrity and RASP checks immediately. Only if those checks pass does the Mobile API Protection component mint and sign a short-lived DRE Session Token that the app can attach to API requests and the backend can verify.
That detail is what makes it different from request signing or mutual TLS authentication. The backend isn’t authenticating a credential in isolation. It’s verifying proof that was generated inside the protected runtime environment, after initial and ongoing integrity and attestation checks, and specifically for the current session.
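To make the idea concrete, here is a deliberately simplified, hypothetical illustration of backend-side verification of such a session token. This is not DexProtector’s actual token format or API (we use an HMAC and a toy wire format purely for brevity); the point is which properties the backend gets to enforce:

```kotlin
import java.time.Instant
import java.util.Base64
import javax.crypto.Mac
import javax.crypto.spec.SecretKeySpec

// Hypothetical claims carried by a short-lived session token.
data class SessionClaims(val expiresAt: Long, val sessionId: String, val appBuild: Int)

const val MIN_TRUSTED_BUILD = 3120 // hypothetical minimum app build

// Illustrative token shape: base64url(payload) + "." + base64url(signature).
fun verifySessionToken(token: String, serverKey: ByteArray): SessionClaims? {
    val parts = token.split(".")
    if (parts.size != 2) return null
    val payload = Base64.getUrlDecoder().decode(parts[0])
    val signature = Base64.getUrlDecoder().decode(parts[1])

    // 1. Authenticity and integrity: only a client that passed the
    //    runtime checks could have minted a validly signed token.
    val mac = Mac.getInstance("HmacSHA256")
    mac.init(SecretKeySpec(serverKey, "HmacSHA256"))
    if (!mac.doFinal(payload).contentEquals(signature)) return null

    val claims = parseClaims(payload)

    // 2. Currency of the session: the token is short-lived.
    if (Instant.now().epochSecond > claims.expiresAt) return null

    // 3. Currency of the app: outdated builds cannot mint acceptable
    //    tokens, so version enforcement no longer relies on spoofable headers.
    if (claims.appBuild < MIN_TRUSTED_BUILD) return null

    return claims
}

// Toy "exp:session:build" wire format, chosen purely for brevity.
fun parseClaims(payload: ByteArray): SessionClaims {
    val (exp, session, build) = String(payload).split(":")
    return SessionClaims(exp.toLong(), session, build.toInt())
}
```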
This binding is also how Mobile API Protection increases the reliability of other signals from the app that feed authorization decisions.
Device attestation data, telemetry from fraud SDKs: these only have true value when the backend can be sure that they’re coming from a genuine, untampered app instance, not from something that’s simply learned how to mimic its traffic.
And it directly addresses the rollback problem: since outdated builds can’t mint valid tokens, ‘supported version’ stops being a string the client can spoof and becomes an enforceable property of the request.
This makes Mobile API Protection more than just another capability alongside RASP and attestation. It is the binding between the client-side and the server-side that makes security and anti-fraud controls actually enforceable at the moment when they matter: when the backend is deciding to authorize an action.