
Protecting Mobile APIs From Bot Attacks: Why Request Signing Alone Is No Longer Enough

Protecting mobile APIs from bot attacks means doing more than signing requests. It means verifying that every request is coming from a genuine, untampered application and not from a script, bot, or compromised client.

Mobile APIs are one of the primary targets for modern fraud and bot attacks, and the mobile application itself is often where those attacks begin. It’s the easiest place for attackers to study business logic, reconstruct request formats, extract embedded secrets, and automate abuse at scale.

For years, the standard defensive pattern was relatively simple: embed secret material inside the mobile application and use it to calculate a request signature, often through an X-Signature header or similar custom authentication value. The backend then verifies the signature and assumes the request came from a genuine application. At first glance, this looks reasonable. The app knows the secret, the backend knows the secret, and any request without a valid signature is rejected.

The problem is that the mobile application runs in an environment you don’t control: a single fact that changes everything.


The traditional model: signing requests with embedded secrets

To understand why, it helps to look at how the traditional model actually works:

The mobile app contains a secret key, certificate, or private key material. For each outbound request, the app calculates a signature over selected fields. The backend verifies the signature and treats the request as authentic.
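In code, this traditional pattern is usually only a few lines. The sketch below is illustrative rather than any specific vendor’s protocol; the secret value, the canonicalization scheme, and the function names are assumptions made for the example:

```python
import hashlib
import hmac
import json

# Hypothetical embedded secret: in the traditional model this value
# ships inside the app binary, which is exactly the weakness at issue.
EMBEDDED_SECRET = b"secret-baked-into-the-app"

def sign_request(method: str, path: str, body: dict) -> str:
    """Produce an X-Signature-style value over selected request fields."""
    canonical = "\n".join([method, path, json.dumps(body, sort_keys=True)])
    return hmac.new(EMBEDDED_SECRET, canonical.encode(), hashlib.sha256).hexdigest()

def verify_request(method: str, path: str, body: dict, signature: str) -> bool:
    """Backend side: recompute the signature and compare in constant time."""
    return hmac.compare_digest(sign_request(method, path, body), signature)

# The app attaches the signature; the backend recomputes and checks it.
sig = sign_request("POST", "/api/v1/transaction/init", {"amount": 100})
```

Everything the backend needs to verify the request, the app also needs to produce it, which is why the scheme stands or falls with the secrecy of the embedded key.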

This pattern is used to defend against bot traffic, tampered applications, unauthorized client versions, and direct API abuse from scripts and backend emulators.

In practice, the secret may be embedded in several forms: a hardcoded symmetric key, a file bundled in the application, a PKCS#12 or similar key container, a private key loaded into the app at runtime, or a static secret used to derive another signing key.

This architecture is attractive because it’s easy to deploy and easy to validate on the server side. It creates a clean security story: “only our app can produce valid signatures.” Unfortunately, as we’ll show, it’s an assumption that doesn’t hold under real-world attack conditions; and the reasons why are more systematic than most teams expect.


Why attackers start with the mobile app and not the backend

Attackers do not usually begin by attacking the backend logic itself. They begin by attacking the client, because the client gives them three things: the request structure, the signing logic, and the signing material.

Once those are understood, the attacker can often reproduce legitimate requests outside the original app and turn targeted abuse into a scalable bot operation.

The attack chain usually develops in stages.

Stage one: understanding the request structure through MITM

The first step is often a man-in-the-middle attack against the application’s network traffic. If the app doesn’t enforce robust network trust controls, such as Public Key Pinning and Certificate Transparency, then an attacker may be able to intercept traffic, observe endpoints, inspect parameters, and understand how signatures are attached to requests.

This gives the attacker a working map of the API: the request method, URL structure, header names, mandatory fields, sequencing logic, response patterns, error handling, and replay behavior.

A simplified example might look like this:

```
METHOD: POST
URL: /api/v1/transaction/init
HEADERS:
  Accept-Encoding: gzip
  apps: {"os_name":"Android","os_version":"XX","device_class":"Phone",
         "device_family":"XXX","app_version":"2604100","device_id":"XXX"}
  X-Signature: XXX
  Signature: XXXX
```

Even if the attacker cannot yet reproduce the signature, they now know what the backend expects. This stage is often underestimated. Many organizations focus on protecting the key but leave traffic exposed enough for an attacker to learn the entire request grammar. Tools like Burp Suite, mitmproxy, and Charles Proxy make this accessible to anyone with basic technical skill; it requires patience and a proxy rather than advanced reverse engineering.

This is also where the link to the broader mobile threat picture becomes clear. For a deeper look at how API attacks fit into the evolving threat landscape, see our article on Mobile API Protection: From Afterthought to Necessity.

DexProtector addresses this directly by applying reinforced Public Key Pinning and Certificate Transparency controls, extending protection beyond what the underlying OS provides and making MITM-based traffic inspection significantly harder to execute. 

Stage two: static extraction of embedded secret material

Once the request structure is understood, the next step is usually static analysis of the mobile app package.

On Android, the attacker decompiles the APK or AAB using tools like jadx, apktool, or Ghidra. They inspect resources, native libraries, constant pools, configuration files, and class structures. On iOS, they analyze the IPA, bundles, metadata, Objective-C and Swift symbols, linked frameworks, and embedded resources using Hopper, IDA, or class-dump.

They are typically looking for hardcoded strings that look like keys or credentials, bundled files containing certificates or private keys, PKCS#12 containers with familiar structures like “-----BEGIN PRIVATE KEY-----”, symmetric keys with typical lengths and entropy characteristics, code paths that initialize signing operations, and resource files holding secrets or derivation parameters.

From an attacker’s perspective, private key material is often surprisingly recognizable. Once the attacker knows what request header or field is being produced, they can search the codebase specifically for the logic behind it.

It’s important to understand that the tooling required for this is freely available and well documented. A moderately skilled attacker with jadx on Android or Hopper on iOS can often locate embedded key material within hours rather than days if the application lacks meaningful static hardening protections.
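A toy version of what such a search looks for can be sketched in a few lines. This is an illustrative heuristic, not the logic of DexProtector or any real analysis tool; the entropy threshold and minimum string length are arbitrary assumptions:

```python
import math
import re

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte; random key material scores close to 8."""
    if not data:
        return 0.0
    counts = {}
    for b in data:
        counts[b] = counts.get(b, 0) + 1
    return -sum((c / len(data)) * math.log2(c / len(data)) for c in counts.values())

PEM_MARKER = re.compile(rb"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----")

def flag_suspicious(blob: bytes, min_len: int = 16, threshold: float = 4.5) -> list:
    """Flag PEM markers and high-entropy printable runs that may be keys."""
    findings = []
    if PEM_MARKER.search(blob):
        findings.append("embedded PEM private key")
    for run in re.findall(rb"[\x21-\x7e]{%d,}" % min_len, blob):
        if shannon_entropy(run) > threshold:
            findings.append("high-entropy string: %r..." % run[:24])
    return findings
```

Real attackers use more refined tooling, but the principle is the same: unprotected key material has a recognizable shape, and finding it is a search problem, not a cryptographic one.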

This is the gap that static hardening is designed to close. At Licel, we built DexProtector’s static protection layer specifically to make this kind of extraction prohibitively complex. Mechanisms like String Encryption, Hide Access, Class Encryption, and Resource Encryption protect Android and iOS applications against static analysis, reverse engineering, tampering, and MITM attacks. 

In practical terms, String Encryption helps conceal hardcoded constants, Hide Access obscures sensitive API usage and call paths, Class Encryption makes targeted reverse engineering harder on Android, and Resource Encryption protects files and bundled assets that may contain key material or supporting metadata. DexProtector Studio can identify and visualize secrets embedded in application code and resources and suggest the optimal configuration for String Encryption, Class Encryption, and Resource Encryption.

These are not cosmetic controls. They directly interfere with the attacker’s ability to discover where secrets live and how they are used.

Stage three: dynamic tracing and API hooking

If static extraction does not immediately reveal the secret, the attacker typically moves to dynamic analysis. This is where solutions based only on platform cryptography APIs often begin to fail.

Even when a secret is obscured statically, the application must eventually use it. That’s the moment the attacker typically targets.

Using dynamic instrumentation and hooking frameworks such as Frida, Xposed, or Cydia Substrate, the attacker traces cryptographic API calls at runtime to observe when the key is loaded, where plaintext inputs come from, which fields are signed, what exact data is hashed, what signature is produced, and which platform API or library performs the operation.

On Android, attackers commonly hook javax.crypto.Mac, java.security.Signature, or native JNI crypto paths. On iOS, they hook SecKeyCreateSignature, CCHmac, or application-level wrapper functions. The goal is simple: if the app can sign the request, the attacker wants to watch it happen. 

In more advanced scenarios on Android, the attacker may go further still, operating on a custom firmware build based on AOSP. In this model, they can patch framework-level cryptographic methods or surrounding system components to observe inputs and outputs more reliably, bypass certain application-level assumptions, or weaken parts of the execution environment in their favor. Tracing tools and low-level runtime instrumentation techniques can then be used to follow the signing flow step by step, even if the application code itself is heavily protected. It’s an important reminder that platform cryptography APIs remain valuable, but they do not by themselves prove that the calling environment is trustworthy.

This is why static hardening alone is not enough. DexProtector’s runtime protection capabilities are designed to address exactly this: blocking reverse engineering, tampering, and code injection, and making compromised environments and bot-related attack conditions less likely through layered RASP-style controls.

RASP matters here because it changes the economics of runtime analysis. It raises the cost of instrumentation, increases the risk of detection, disrupts common hooking workflows, makes repeatable automation less stable, and creates telemetry for attack visibility.

DexProtector Studio adds a further practical advantage at this stage. It helps identify sensitive API calls, including cryptographic operations, and suggests appropriate protection filters during configuration. This means the development team does not need to manually trace every code path that touches signing logic. Protection coverage gets directed toward the areas that matter most, without relying on the development team to find them all manually.


TikTok’s multi-layered signing: why it still gets broken

A good illustration of this problem is TikTok’s API signing mechanism, which is often cited in reverse engineering communities as one of the more complex mobile signing implementations in production.

Rather than using a straightforward HMAC-SHA256 over request parameters, TikTok’s protocol chains multiple cryptographic operations together. Their signing pipeline involves custom hash functions, XOR-based transformations, and multiple rounds of encryption layered on top of each other, producing signatures through headers like X-Gorgon, X-Khronos, and X-Ladon. The complexity is deliberate. By combining non-standard algorithms with proprietary transformations executed in heavily obfuscated native code, TikTok forces attackers to reverse engineer each stage individually rather than simply identifying a single HMAC key and calling a standard library.

This approach genuinely raises the cost of reverse engineering. An attacker cannot just find a key and call javax.crypto.Mac.doFinal(). They must reconstruct an entire custom cryptographic pipeline, often spanning multiple native libraries with layered obfuscation.

And yet, it has been repeatedly broken. Open-source repositories and commercial bot services routinely replicate TikTok’s signing logic, either by reimplementing the algorithm chain after painstaking reverse engineering or, more commonly, by instrumenting the native libraries at runtime and invoking them directly. The attacker loads the original .so library, calls the signing function with the right inputs, and captures the output. They don’t need to understand every XOR round or every custom hash step. They just need the function entry point and the right arguments.
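A toy pipeline (deliberately not TikTok’s actual algorithm, and with made-up round keys) makes the black-box point concrete: the attacker never needs to understand the rounds, only to call the function:

```python
import hashlib

def _round(data: bytes, round_key: bytes) -> bytes:
    # One obfuscation round: XOR with a repeating key, then hash.
    mixed = bytes(b ^ round_key[i % len(round_key)] for i, b in enumerate(data))
    return hashlib.sha256(mixed).digest()

def layered_sign(payload: bytes) -> str:
    """Chain several rounds with different hardcoded keys, in the
    spirit of custom signing pipelines (toy keys, toy transforms)."""
    state = payload
    for key in (b"\x13\x37", b"\xca\xfe", b"\xde\xad\xbe\xef"):
        state = _round(state, key)
    return state.hex()

# The attacker's view: the pipeline is a black box. Load the signer
# (in real attacks, the original .so library), feed inputs, harvest outputs.
oracle = layered_sign
valid_signature = oracle(b"amount=100&recipient=attacker")
```

However many rounds you add to `layered_sign`, the attacker’s last three lines never change.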

If the signing logic runs in the attacker’s environment, then complexity alone is not a security boundary. It is a time delay. TikTok’s experience demonstrates that even when you move well beyond simple HMAC-SHA256 into multi-layered, custom cryptographic pipelines executed in native code, the fundamental problem remains: the signer lives on the attacker’s device. 

Complexity buys time, not immunity.

The lesson here is not that complex signing is useless. After all, TikTok’s approach does filter out low-effort attackers and significantly raises the bar compared to a bare HMAC key. But against motivated adversaries running bot operations at scale, the signing scheme alone is not the final answer. The backend must have a way to verify that the request came from a trusted execution context, not just that it carries a correctly computed signature.

Why DexProtector is essential – and where it fits in the wider architecture

Across the three stages described above, DexProtector addresses each part of the attack chain in turn: reinforcing network trust through Public Key Pinning and Certificate Transparency, hardening the application against static extraction, and disrupting runtime instrumentation through its RASP layer. It also feeds incident and telemetry data into fraud analytics through Alice Threat Intelligence, and guides protection coverage through DexProtector Studio.

One detail worth noting here is that DexProtector's Certificate Transparency protection extends to Android 4.4 and above, well beyond what the underlying OS provides natively. For organizations operating across a wide range of device types and Android versions, that coverage gap matters.

Taken together, these capabilities significantly raise the attacker's cost and remove many low-effort attack paths. But for high-value APIs and determined adversaries, they should be treated as necessary rather than sufficient.

The reason comes down to a fundamental architectural question. One that request signing, however well protected, cannot answer on its own.


When the app itself becomes a signing oracle

Even with strong hardening in place, request signing alone has a fundamental limitation: if the signing operation is still controlled by the client environment, does the attacker even need to steal the secret at all? 

It may be enough to run the genuine public application on their own device, interact with it under controlled conditions, and manipulate it into producing valid signed requests for them. On Android, an attacker with legitimate access to the public app may run it inside a controlled environment and use instrumentation, accessibility services, automation frameworks, or modified system components to drive the application flow and capture signed outputs. On iOS, similar goals may be pursued through runtime injection, UI automation, or plug-in style instrumentation techniques that observe or influence how the application prepares protected requests.

In this model, the attacker is not necessarily stealing the key and moving it elsewhere. They are coercing the legitimate application into acting as a signing oracle. The app still produces a technically valid signature, but it does so inside an environment controlled by the attacker.
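The oracle pattern can be reduced to a toy model. Everything here is illustrative; the point is that the key never "leaves" the app, yet whoever drives the app obtains valid signatures:

```python
import hashlib
import hmac

class GenuineApp:
    """Toy model of a legitimate app. The key never leaves the object,
    yet any code that can drive the app obtains valid signatures."""

    def __init__(self) -> None:
        self._key = b"never-extracted"  # hypothetical; lives 'inside' the app

    def submit(self, payload: bytes) -> tuple:
        # Signing happens as part of the app's normal request flow.
        sig = hmac.new(self._key, payload, hashlib.sha256).hexdigest()
        return payload, sig

# The attacker never reads _key. They drive the app the way a user would
# (via instrumentation or UI automation) and harvest the signed output.
app = GenuineApp()
payload, signature = app.submit(b"transfer&amount=9999")
```

From the backend’s perspective, the signature is indistinguishable from one produced during legitimate use, which is precisely why signature validity alone cannot be the trust boundary.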

This is where the architectural limitation becomes visible. The question is not only “can the attacker steal the key?” It’s also “can the attacker make the app sign on their behalf?”

If the answer is yes, then protecting the signing logic is not enough on its own. A modern anti-bot architecture must also give the backend a way to verify that the request came from a genuine, untampered, policy-compliant application instance operating under acceptable runtime conditions.


The next step: binding requests to app integrity, not just to a secret

The architectural response to the signing oracle problem is to stop treating request authentication as purely a matter of cryptographic correctness and to start treating it as a combination of request integrity and application integrity.

That means the backend should verify not only that the request was signed correctly, but also that it was generated by a genuine application instance that passed relevant security checks at the time the request was made.

This is where Mobile API Protection changes the model. Instead of relying only on a secret embedded in the application, the backend can validate whether the request came from an authentic and untampered application instance and whether the relevant integrity conditions were satisfied at the time of the request. The server is no longer limited to asking "is this signature cryptographically correct?" It can also ask "was this request generated under acceptable security conditions?"

In practical terms, Mobile API security extends the trust decision beyond request signing alone and enables the backend to evaluate integrity-aware assertions tied to the application instance, runtime state, version compliance, and other policy-relevant signals.

This changes the server-side trust model. Instead of trusting only a shared secret, the backend can validate a richer token representing properties such as application authenticity, package identity, signing identity, app version compliance, runtime integrity results, device and environment trust indicators, anti-tamper outcomes, and freshness or anti-replay properties. 

In more mature deployments, the verification flow can also be correlated with Alice Threat Intelligence’s telemetry. For example, the backend may validate the JWT-style assertion cryptographically and, in parallel, use Alice risk signals to check whether the same application instance or device context has recently triggered indicators such as tampering, runtime manipulation, suspicious environment checks, or repeated policy violations. The integrity token becomes part of a broader set of fraud decisions rather than a standalone pass/fail control.

You can read more about how DexProtector and Alice work to protect Mobile APIs on this page. 

Now the attacker must do more than reproduce a signature. They must also satisfy the integrity conditions enforced by the server. That is a fundamentally different and much stronger position.


Why JWT-style protected request assertions make sense

One effective implementation pattern is to introduce an additional request parameter or token, for example a JWT-style assertion generated by Mobile API Protection and verified by the backend. This gives the server something far more useful than a correctly computed signature: verifiable evidence about the state of the application that produced the request. 

The advantages are worth spelling out.

Freshness. A short-lived token reduces replay value and forces the attacker to operate in real time. Unlike a static signature that remains valid as long as the key is unchanged, a JWT with a narrow expiry window means that even a captured token becomes useless within seconds or minutes.

Server-verifiable integrity evidence. The backend can validate claims related to app integrity and runtime checks rather than trusting a blind signature. The verification logic moves from a simple “is the HMAC correct?” check to a richer evaluation: “is this token fresh, does it assert a genuine app build, did the runtime integrity checks pass?”

Better policy control. The server can reject requests from outdated app versions, tampered builds, high-risk runtime states, or policy-violating environments. This is particularly important for organizations that need to enforce minimum app versions or block requests from rooted and jailbroken devices at the API level, not just at the app level.

Clearer anti-fraud linkage. Fraud controls can correlate API usage with integrity signals, not just with credentials. If your fraud engine sees a burst of transactions from a device where the integrity assertion reports anomalies, that is a much stronger signal than a failed signature check, which an attacker would simply avoid triggering.

Easier evolution. You can change server-side validation logic without redesigning the whole application protocol. When attackers adapt, you can tighten the integrity requirements on the backend without pushing a new app build. This operational flexibility matters enormously in an active attack situation.
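The freshness and policy checks described above can be sketched with a hand-rolled HS256 assertion. This is an illustrative sketch, not Licel's actual token format: the claim names (`integrity_ok`, `app_version`), the minimum-version policy value, and the shared verification key are all assumptions, and a production deployment would more likely use an established JWT library and asymmetric keys.

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical shared verification key and claim names; a real deployment
# would typically use asymmetric keys and vendor-defined claims.
VERIFY_KEY = b"backend-verification-key"
MIN_APP_VERSION = 2604100  # assumed policy value

def _b64url(data: bytes) -> bytes:
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def _b64url_decode(data: bytes) -> bytes:
    return base64.urlsafe_b64decode(data + b"=" * (-len(data) % 4))

def make_assertion(claims: dict) -> str:
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url(json.dumps(claims).encode())
    mac = hmac.new(VERIFY_KEY, header + b"." + body, hashlib.sha256).digest()
    return (header + b"." + body + b"." + _b64url(mac)).decode()

def verify_assertion(token: str, now=None):
    """Return the claims if signature, freshness, and policy checks pass."""
    now = time.time() if now is None else now
    header, body, sig = (part.encode() for part in token.split("."))
    mac = hmac.new(VERIFY_KEY, header + b"." + body, hashlib.sha256).digest()
    if not hmac.compare_digest(_b64url(mac), sig):
        return None  # forged or corrupted token
    claims = json.loads(_b64url_decode(body))
    if claims.get("exp", 0) < now:
        return None  # stale: a replayed token has no value
    if not claims.get("integrity_ok") or claims.get("app_version", 0) < MIN_APP_VERSION:
        return None  # tampered build or outdated version
    return claims
```

Note how the rejection paths map directly onto the advantages listed above: a stale `exp` defeats replay, and the claim checks enforce version and integrity policy without any change to the app itself.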

We also recommend introducing an additional request parameter and adjusting the request structure when deploying Mobile API Protection. This prevents attackers from reusing previously obtained knowledge about your API request format. Once the request grammar changes and the backend begins requiring a JWT-backed integrity assertion, all previously captured request templates, replay scripts, and signature-generation tooling become obsolete in one step. A clean break that forces even well-prepared adversaries to start over.

Where the Licel vTEE fits – and why isolation matters for high-risk operations

The Licel vTEE (virtual Trusted Execution Environment) strengthens this architecture further by moving especially sensitive logic and material into a more isolated execution model inside the app.

From a design perspective, the Licel vTEE helps answer a very practical question: if the request-signing path is business-critical, should all of it really execute in the ordinary application layer?

For high-risk use cases, the answer is often no.

A virtual Trusted Execution Environment can be used to isolate selected security-sensitive operations: key handling, request authorization logic, token generation, challenge-response flows, transaction binding, and anti-replay metadata generation.

That does not magically make the mobile device equivalent to a hardware secure element. But it materially improves the architecture by reducing direct exposure of sensitive logic to the ordinary app layer and making dynamic abuse significantly more complex.

Consider the difference in practice. Without the Licel vTEE, the attacker hooks the app process and can observe the signing operation from the same execution context. With the vTEE, the most sensitive operations happen in a separated context that is significantly harder to instrument. The attacker’s Frida script may see the request go in and the signed output come out, but the actual key handling and integrity logic inside the vTEE boundary is not directly accessible through the same hooking path.

For advanced anti-bot and anti-fraud designs, this matters because it helps separate the visible app, the business API, and the trust-establishing security logic. That separation is often the difference between “the attacker can imitate the client” and “the attacker can observe the client but still cannot fully reproduce the trust proof.”

Taken together, Licel solutions form a layered trust architecture rather than a collection of separate features. The next section sets out how they work together in practice.


A practical layered defense model

For organizations protecting valuable mobile APIs, the right model is not one control. It’s a layered chain where each layer raises the cost of attack and ensures that bypassing one doesn’t compromise the whole.

So, how do you protect a mobile API?

Layer 1: protect the network channel. Apply Public Key Pinning and Certificate Transparency to reduce the attacker’s ability to inspect and manipulate traffic through MITM positioning.

Layer 2: harden the app against static analysis. Apply String Encryption, Hide Access, Class Encryption, and Resource Encryption to make it significantly harder to identify secrets, signing logic, and protocol-sensitive constants.

Layer 3: resist runtime manipulation. Deploy RASP and anti-hooking controls to make dynamic instrumentation, crypto tracing, and hostile runtime observation harder to execute and easier to detect.

Layer 4: verify app integrity on the backend. Use Mobile API Protection so the backend validates whether the request came from an authentic, untampered application instance rather than a request that happens to carry the correct signature.

Layer 5: isolate the most sensitive logic. Deploy the Licel vTEE where the risk profile justifies stronger isolation for request authorization and key-dependent operations.

Layer 6: connect trusted mobile signals to fraud decisions. Feed telemetry and threat reporting into anti-fraud models, rate limits, session controls, and step-up authentication policies.

This is how you move from simple request signing to a resilient mobile API trust architecture. One where the attacker must defeat every layer simultaneously rather than finding a single point of failure. 

Lessons from a typical incident pattern

A common real-world scenario looks like this:

The organization deploys request signing to protect its backend API from bots. Attackers identify the mobile app as the easiest place to study the mechanism. Weak or absent MITM protections allow them to inspect traffic and understand the request format, and static analysis reveals the embedded secret or enough code context to find where it is used. Dynamic instrumentation traces the crypto path and confirms how signatures are produced. Automated abuse begins using reconstructed requests and valid signatures.

At that point, simply rotating the key rarely solves the problem for long. If the architecture stays the same, the attacker can often repeat the process within days or weeks.

The real fix is architectural: close the MITM visibility gap, harden the application, resist dynamic inspection, stop trusting signatures alone, add backend-verifiable runtime integrity, and isolate the highest-value operations.

A mobile API should not trust a request only because it is correctly signed. It should trust a request only when the request is correctly formed, cryptographically valid, and backed by evidence that it originated from a genuine, untampered application instance operating under acceptable runtime conditions.

That is the shift from mobile request signing to mobile trust enforcement. And it’s the shift that separates organizations whose APIs remain easy targets from those whose APIs are not.


Future-proof mobile API security

Embedding secret material in a mobile application to sign API requests is no longer a sufficient defense against serious bot operators. Once attackers can observe request structure, reverse engineer the client, and instrument runtime crypto operations, request signing becomes a speed bump rather than a security boundary. This is true even for highly complex signing schemes. TikTok’s multi-layered cryptographic pipeline, far more sophisticated than a standard HMAC-SHA256, has been repeatedly reverse engineered and replicated by bot operators. Complexity raises the cost, but it does not change the fundamental problem.

The right response is not to abandon request signing. It is to stop relying on it alone.

A stronger architecture combines protected network trust, hardened client code and resources, runtime anti-tamper controls, backend verification of app integrity, isolated execution for the most sensitive logic, and telemetry-driven fraud response.

No client-side defence is permanently unbreakable. The goal is economic: make attacks cost more than they are worth, and ensure that when one layer is bypassed, the next layer catches it. That is the honest reality of mobile channel security, and it is also the strongest position you can build from.

That is where DexProtector, Mobile API Protection, and vTEE fit together. Not as three separate features, but as a layered trust model for defending mobile APIs against modern bot attacks.


Frequently asked questions

What is the difference between request signing and app integrity verification?

Request signing proves that a request was produced using a known secret. App integrity verification proves that the request was generated by a genuine, untampered application instance operating under acceptable runtime conditions. Request signing answers "did this request use the right key?" App integrity verification answers "did this request come from a trustworthy source?" A robust mobile API defense requires both.

Why is request signing alone not enough to stop mobile bot attacks?

Because the signing secret lives inside the mobile application, which is an environment the attacker can analyze, instrument, and control. Once an attacker can observe the request structure, extract or trace the signing logic, and reproduce valid signatures outside the original app, request signing becomes a speed bump rather than a security boundary. The TikTok example above illustrates this clearly: even a highly complex, multi-layered signing scheme has been repeatedly broken by bot operators.

What is a signing oracle attack?

A signing oracle attack occurs when an attacker coerces a legitimate application into generating valid signed requests on their behalf without needing to extract the signing key directly. Rather than stealing the secret, the attacker runs the genuine app in a controlled environment and manipulates it into producing signatures they can use. This is why protecting the signing logic alone is not enough; the backend must also verify that the request originated from a genuine, policy-compliant application instance.

Find out more about how we protect mobile APIs at Licel.