The four layers of mobile application protection

An attacker only needs one point of access to target your application or its users. That’s why security and protection are either comprehensive or ineffective, and why layers of app protection are required. In this section we explain what these layers are and how they connect with one another.

Layers upon layers of protection

Throughout this guide we emphasize the range of threats to mobile apps and their users. Some attacks target the application’s binaries for decompilation and modification; others target the data communicated between the app and remote endpoints; others make use of dynamic instrumentation to interfere with the app’s process during runtime. And so on.

Any one of these threats could be damaging. And each one demands a different approach when you try to prevent or mitigate it.

The main problem is that no single defence holds on its own. If you use obfuscation, and perhaps encryption, to harden your code and prevent an attacker from reverse engineering your app statically, they may still be able to reverse engineer it dynamically using a tool like Frida. And if your app contains functionality for detecting and preventing the use of Frida, the attacker may be able to remove that functionality by patching it, i.e. modifying the app’s binaries. They might even run the app within an app wrapper and take control of the execution environment altogether.

This is why a holistic view of security (and layers of app protection) is so crucial.

We identify four main layers needed to ensure comprehensive protection. Each layer is crucial, and they are interdependent. Prioritizing three out of the four, for example, is likely to leave important parts of your app exposed.

The four layers of app protection are:

  • code and resource hardening
  • secure runtime environment
  • secure network communications
  • application integrity

Let’s explore each of them and examine how they are related to each other.

Code and resource hardening

Mitigating the danger of decompilation is fundamentally about making the decompiled code difficult (and ideally impossible) to understand. This can make static reverse engineering infeasible. It can be achieved through code hardening, mainly by means of obfuscation and encryption. The same principle applies to sensitive resources, be they text-based (XML files, JSON files) or assets such as image files.

There’s an important distinction to make between obfuscation and encryption. Obfuscation involves renaming identifiers, file names, class names, method names, symbols, and strings, as well as adding ‘junk’ code, without fundamentally changing the content or logic of the app. This makes the code more difficult to read and understand, but it is still possible to reverse engineer the logic from obfuscated code, and there are a number of tools to assist with deobfuscation, whatever the programming language.
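To make the distinction concrete, here is a small, hand-written illustration of what identifier renaming and junk-code insertion can look like. The function and its names are hypothetical; in practice the transformation is applied automatically at build time by an obfuscation tool rather than written by hand.

```kotlin
// Hypothetical, hand-written illustration of identifier renaming and junk code.
// Real obfuscators (e.g. R8/ProGuard on Android) apply these transformations
// automatically at build time.

// Before obfuscation: the intent is obvious from the names.
fun validateTransferLimit(amountInCents: Long): Boolean {
    val dailyLimitInCents = 500_000L
    return amountInCents <= dailyLimitInCents
}

// After obfuscation: the same logic, but the names carry no meaning and a
// never-taken 'junk' branch has been added to slow down a human reader.
fun a(b: Long): Boolean {
    val c = 500_000L
    if (c - c != 0L) return false   // opaque predicate: never true
    return b <= c
}
```

The logic is unchanged, which is exactly why a patient attacker can still recover it.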

Encryption is more powerful than obfuscation, because it fully transforms code and resources into meaningless ciphertext which cannot be read or understood by either a human or a machine. This ciphertext can only be restored to its original form through use of a decryption key. Encrypted code must therefore be decrypted before it can be executed by the operating system.

The most secure effective approach is for the app’s sensitive code and resources to be encrypted with dynamically generated keys which are not stored anywhere within the app package. Matching decryption keys are then also dynamically generated during runtime, and protected using white-box cryptography. In this scenario, the code and resources will remain encrypted at all times except when they need to be accessed by the OS during execution on a user’s device.
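The following is a minimal sketch of the runtime side of that idea, assuming an asset encrypted with AES-GCM and a key produced elsewhere at runtime. It deliberately leaves the key derivation out: in a real product the derivation is dynamic and protected with white-box cryptography, never handled as a plain byte array like this.

```kotlin
import javax.crypto.Cipher
import javax.crypto.spec.GCMParameterSpec
import javax.crypto.spec.SecretKeySpec

// Simplified sketch: decrypt a protected asset only at the moment it is needed.
// keyBytes stands in for a dynamically derived, white-box-protected key;
// a real protection product never exposes the key as a plain byte array.
fun decryptAsset(ciphertext: ByteArray, iv: ByteArray, keyBytes: ByteArray): ByteArray {
    val cipher = Cipher.getInstance("AES/GCM/NoPadding")
    cipher.init(Cipher.DECRYPT_MODE, SecretKeySpec(keyBytes, "AES"), GCMParameterSpec(128, iv))
    return cipher.doFinal(ciphertext)   // plaintext exists only in memory, only while in use
}
```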

Code and resource hardening is therefore mainly a defence against static forms of attack, i.e. disassembly and decompilation. However, some hardening mechanisms (obfuscation, string encryption, code virtualization) can also be effective in preventing and mitigating dynamic analysis, tampering, and reverse engineering.

Secure runtime environment

Runtime is, of course, what apps are designed for, and it is ultimately the target of the majority of attacks.

If an attacker can exert control over the app’s runtime environment, they can exert control over every (client-side) functionality offered by the app.

Rooted devices and customized firmware can give attackers - and malware - administrative privileges and access to system-level functionality. This means any security features offered by the platform (sandboxing, signing certificate validation, secure boot, network security checks, installation method checks) may no longer offer your app any protection. Virtualized runtime environments such as emulators and app wrappers carry similar risks.
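As an illustration, here are a couple of the heuristics an app might use to spot a rooted device or customized firmware. This is a sketch only: the paths and build tags checked below are common indicators, not an exhaustive or tamper-proof detection.

```kotlin
import android.os.Build
import java.io.File

// Illustrative root-detection heuristics - a sketch, not an exhaustive or tamper-proof check.
fun looksRooted(): Boolean {
    val suPaths = listOf(
        "/system/bin/su", "/system/xbin/su", "/sbin/su", "/system/app/Superuser.apk"
    )
    val hasSuBinary = suPaths.any { File(it).exists() }
    val testKeysBuild = Build.TAGS?.contains("test-keys") == true   // common on custom firmware
    return hasSuBinary || testKeysBuild
}
```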

Dynamic instrumentation tools and debuggers, meanwhile, offer attackers direct access to the app’s processes. They can control its execution, examine its logic, change the values of variables, and inject scripts and code.

Malware can leverage all of these affordances, in addition to recording the device screen, logging keystrokes, adding fraudulent root certificates, intercepting network traffic, interfering with system APIs (including those managing potentially sensitive hardware components), and allowing for full remote control over the device.

The runtime environment, in other words, is untrustworthy. It may be dangerous to the application and its users. 

This is why it’s so important for the app to be able to protect itself during runtime - to detect these threats and to take action against them. This is the purpose of Runtime Application Self-Protection (RASP) solutions.

RASP mechanisms embedded within the app can run checks every time the app is launched for indicators of all of the above threats: rooted devices, customized firmware, emulators, dynamic instrumentation tools, and so on. Since many of these threats allow attackers to take full control over the app’s execution, the safest policy is often to simply prevent the app from running at all if any of them are detected. Allowing the app to run opens up the risk of checks being bypassed.
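A simplified sketch of that policy might look like the following, combining the hypothetical root check sketched above with the platform’s debugger indicators. Real RASP SDKs perform many more detections (emulators, hooking frameworks, instrumentation tools) and harden the checks themselves.

```kotlin
import android.os.Debug
import kotlin.system.exitProcess

// Sketch of a fail-closed launch policy: if any runtime-threat indicator fires,
// refuse to run rather than continue inside a possibly compromised environment.
// looksRooted() is the hypothetical check sketched earlier.
fun enforceRuntimePolicy() {
    val threatDetected =
        looksRooted() ||
        Debug.isDebuggerConnected() ||   // a debugger is attached to this process
        Debug.waitingForDebugger()       // a debugger is about to attach
    if (threatDetected) {
        exitProcess(0)   // stop before any sensitive logic executes
    }
}
```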

For the most sensitive processes - those involving on-device cryptographic operations such as biometric authentication and point-of-sale transactions - some devices also offer Secure Hardware Components or Trusted Execution Environments (TEEs). These are spaces that are physically isolated from all other processes and device storage, meaning that they can be used to perform sensitive operations and store sensitive data (such as generating and storing cryptographic keys) without risk of interference from anything else occurring on the device.
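On Android, for example, an app can ask the platform to generate and keep keys inside secure hardware via the Android Keystore. The sketch below requests a StrongBox-backed signing key; the alias is a placeholder, and on devices without a secure element the StrongBox request fails and a software-backed fallback would be needed.

```kotlin
import android.security.keystore.KeyGenParameterSpec
import android.security.keystore.KeyProperties
import java.security.KeyPairGenerator

// Sketch: generate a signing key inside the device's secure hardware (StrongBox,
// API 28+). The private key never leaves the secure element; the app only ever
// holds a reference to it. On devices without StrongBox this request throws
// StrongBoxUnavailableException and a software-backed fallback is needed.
fun generateHardwareBackedKey(alias: String) {
    val spec = KeyGenParameterSpec.Builder(alias, KeyProperties.PURPOSE_SIGN)
        .setDigests(KeyProperties.DIGEST_SHA256)
        .setUserAuthenticationRequired(true)   // e.g. gate every use behind biometrics
        .setIsStrongBoxBacked(true)
        .build()
    val generator = KeyPairGenerator.getInstance(KeyProperties.KEY_ALGORITHM_EC, "AndroidKeyStore")
    generator.initialize(spec)
    generator.generateKeyPair()
}
```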

Such Secure Hardware Components are not available on all devices, however. There are therefore software-based alternatives - such as Virtual Trusted Execution Environments (vTEEs) and proprietary modules - which can be incorporated into applications and are specifically designed for sensitive operations. These usually combine white-box cryptography with enhanced RASP mechanisms to ensure that the runtime environment is secure.

Secure network communications

Protecting your mobile application also means securing the communication channel between it and its remote endpoints. Almost all apps communicate with the outside world via the network. And often apps managing the most sensitive data (relating to financial transactions, personally identifiable information, health records) are the most heavily reliant upon interactions with backend services.

Bad actors are aware of this and will attempt to hijack the communication channel, taking control of data as it is transmitted over the network. Network sniffing tools allow attackers to capture traffic by intercepting data packets sent from the device to a local router. And proxy servers can be set up as a Man-in-the-Middle (MitM), receiving all data exchanged between the mobile device and the servers it connects to.

The two priorities in mitigating and preventing these attacks are (1) encrypting the data for transit; and (2) ensuring that the data can only be decrypted at the legitimate, intended endpoints.

It is standard practice to use HTTPS to ensure encryption in transit. Its counterpart is public key certificate validation: with every request to the backend, the server must provide a legitimate and expected public key certificate.

Both Android and iOS can perform such checks, validating server certificates by confirming that they chain back to root certificates pre-installed on the device.

But because of the threats laid out in the ‘Secure runtime environment’ layer on this page, the system cannot truly be relied on to perform these checks. It remains possible to override system checks and/or to install fraudulent root certificates.

It’s therefore crucial to perform such checks from within the mobile app itself. The authentic certificates against which every request must be compared can either be pinned - fixed - at the point of building the app, as with Public Key Pinning, or validated against distributed public logs of authorized certificates, according to the Certificate Transparency framework. In either case, it’s important to use a protection mechanism that can block the connection and prevent the transmission of data if a valid certificate isn’t provided.
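One common way to pin from within an Android app is OkHttp’s CertificatePinner. In the sketch below the hostname and the SHA-256 pin are placeholders; the pins must match your real server certificates (ideally with a backup pin), otherwise the client will refuse every connection to that host.

```kotlin
import okhttp3.CertificatePinner
import okhttp3.OkHttpClient

// Sketch of in-app public key pinning with OkHttp. The hostname and SHA-256 pin
// are placeholders: pins must match your real server certificates (plus a backup
// pin), otherwise the client will refuse every connection to that host.
val pinner = CertificatePinner.Builder()
    .add("api.example.com", "sha256/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=")
    .build()

val client = OkHttpClient.Builder()
    .certificatePinner(pinner)   // connections failing the pin check are blocked before any data is sent
    .build()
```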

Application integrity

Application integrity is all about preventing a bad actor from modifying - tampering with or patching - your application’s binaries.

Adversaries might use this approach for something relatively benign, such as accessing ‘locked’ features and content on their own devices. Or they might inject malicious code into the app, repackage it, and use social engineering techniques like phishing to distribute the malicious clone to your legitimate customers.

Integrity is also fundamental to all other in-app protection mechanisms.

The reason for this is that if an attacker was able to patch the app, to modify the compiled binaries, they might be able to remove or override any RASP (Runtime Application Self-Protection) or network security features. This would allow them to run (and/or to redistribute) an unprotected version of what you might think is a secure application.

Both Android and iOS platforms offer some partial solutions, such as code signing, device-based checks, and integrity audits. But apps can be re-signed, devices can be compromised, and APIs used for audits can be removed or bypassed in a modified version of the app.
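As an example of such a partial check, an Android app can compare its own signing certificate digest against an expected value, as sketched below. The expected digest is a placeholder, and, as just noted, an attacker who can patch the binary can also remove this very check, which is why it cannot stand alone.

```kotlin
import android.content.Context
import android.content.pm.PackageManager
import java.security.MessageDigest

// Sketch (API 28+): compare the APK's signing certificate digest with an expected
// value. EXPECTED_CERT_SHA256 is a placeholder, and an attacker who can patch the
// binary can also patch out this very check - which is the point made below.
const val EXPECTED_CERT_SHA256 = "replace-with-your-certificate-sha-256"

fun signatureLooksGenuine(context: Context): Boolean {
    val info = context.packageManager.getPackageInfo(
        context.packageName, PackageManager.GET_SIGNING_CERTIFICATES
    )
    val signers = info.signingInfo?.apkContentsSigners ?: return false
    val sha256 = MessageDigest.getInstance("SHA-256")
    return signers.any { signer ->
        sha256.digest(signer.toByteArray()).joinToString("") { "%02x".format(it) } == EXPECTED_CERT_SHA256
    }
}
```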

This exposes a general problem that anti-tampering and integrity controls must solve: if some component within the app is fundamental to checking the integrity of the app, what’s to stop an attacker from simply removing or overriding that component?

This brings us back to our first app protection layer: encryption can help. As mentioned above:

The most effective approach is for the app’s sensitive code and resources to be encrypted with dynamically generated keys which are not stored anywhere within the app package. Matching decryption keys are then also dynamically generated during runtime and protected using white-box cryptography. In this scenario, the code and resources will remain encrypted at all times except when they need to be accessed by the OS during execution on a user’s device.

If those keys are dynamically generated using the app’s file contents as inputs, any encrypted file becomes (1) impossible for an attacker to understand, and (2) impossible for an attacker to usefully modify. If they modify any protected file, the inputs to the key-derivation algorithm will be different, the decryption keys will no longer match the encryption keys, and decryption becomes impossible. The modified file will be unusable.
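A much-simplified illustration of why this works is sketched below, assuming the decryption key is simply a SHA-256 digest of the protected file’s bytes (real schemes are white-box protected and considerably more involved): change a single byte of the input and the derived key changes, so authenticated decryption fails.

```kotlin
import java.security.MessageDigest
import javax.crypto.AEADBadTagException
import javax.crypto.Cipher
import javax.crypto.spec.GCMParameterSpec
import javax.crypto.spec.SecretKeySpec

// Much-simplified illustration of content-derived keys (real schemes are white-box
// protected and far more involved). The decryption key is a digest of the protected
// file's bytes: change one byte of the input and the derived key changes, so the
// authenticated decryption below fails and the modified package is unusable.
fun deriveKey(protectedFileBytes: ByteArray): SecretKeySpec {
    val digest = MessageDigest.getInstance("SHA-256").digest(protectedFileBytes)
    return SecretKeySpec(digest, "AES")   // 32-byte digest used directly as an AES-256 key
}

fun tryDecrypt(ciphertext: ByteArray, iv: ByteArray, protectedFileBytes: ByteArray): ByteArray? =
    try {
        val cipher = Cipher.getInstance("AES/GCM/NoPadding")
        cipher.init(Cipher.DECRYPT_MODE, deriveKey(protectedFileBytes), GCMParameterSpec(128, iv))
        cipher.doFinal(ciphertext)
    } catch (e: AEADBadTagException) {
        null   // tampered input: the derived key no longer matches, decryption is impossible
    }
```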

In a secure app, the integrity of all sensitive code and resources will be guaranteed in this way. If the app has been modified, if any code has been removed, modified, or injected, or if the app has been re-signed, it will simply not run. And so it will be rendered useless to an attacker.

Don’t skip a layer

In a zero-trust world, an app must be able to protect itself. A hardened app is resistant to static reverse engineering; an app with embedded RASP mechanisms is resistant to dynamic analysis and tampering; and an app that performs its own certificate validation checks is resistant to the interception of communications over the network.

These protection mechanisms have different functions, and it may seem that they are independent. But think about it for a moment: RASP mechanisms and certificate validation checks are app components that might also be targets for static reverse engineering, and must also be hardened through obfuscation and encryption. And what’s to stop an attacker from simply removing or bypassing those components anyway? Only a mechanism that guarantees the app’s integrity. And how can you guarantee an app’s integrity? Encryption can play a crucial role in that as well.

As we said on the very first page of this guide, mobile application protection only really works when it’s comprehensive. The layers of app protection we’ve described in this section are, by their nature, interlinked. And so we can’t stress enough that only a combination of all four is enough to block sophisticated, modern attacks.