
9 App Security Best Practices Developers Should Follow

It’s not an exaggeration to say that applications have revolutionized our world.

Their influence has been so vast that it’s hard to believe that mobile apps didn’t exist a little over a decade ago. Today we turn to apps as soon as we wake up and tend to use them right up until we go to sleep at night.

Apps aren’t only integral to our everyday lives, though. They’re also vital for business success. After all, for some companies their app is everything. Think about mobile banks and payment solution vendors. There’s no physical store for people to visit - it’s just the app.

That’s why application security is so important.

A more remote post-pandemic world is likely to make us even more reliant on apps. And businesses will grow to realize that their livelihoods and reputations simply depend on robust in-app security.

The 9 app security best practices below are a guide for you to safely navigate this new reality.


Think about security right from the start

Most forward-thinking companies now accept that app security cannot be an afterthought.

Not least because it simply isn’t efficient to think this way. Attempting to plug gaps after they’ve been spotted is a risky strategy, because your SecOps team may not be the only ones to have found them - hackers might already be aware of them too.

It’s a much better idea to use security by design principles from the outset - before a single line of code has even been written.

Your end users will happily allow your app access to their data - but only if you can prove to them that you’ll look after it safely.

The best way to do so is to put yourself in their shoes and show a little empathy.

A proper threat model should involve thinking about all of the ways these end users will use your app in the real world. And it’s worth bearing in mind that the real world increasingly means a zero trust world full of outdated operating systems, malware, and rooted devices.

Another useful exercise is to look at your app from a bad actor’s perspective. What are the potential attack vectors of your application? What kind of sensitive data or logic would they be most interested in stealing?

Answering these questions can help prevent vulnerabilities from creeping into your code at different points along the development journey.


Obfuscate and harden your apps and SDKs

Embracing security by design principles from the outset is a smart idea because it makes it less likely that hackers will be able to spot weaknesses in your code.

It means that doors aren’t left ajar for attackers to squeeze through.

These gaps are exactly what bad actors look for when mounting an attack that could later let them reverse engineer your app or tamper with it.

One way to stop them achieving this goal is to harden and obfuscate your code.

Obfuscation makes it far tougher for a would-be attacker to read your code: it hides implicit values and conceals application logic. Hardening, on the other hand, protects the app at runtime and can help prevent real-time attacks such as debugging and tampering. Ideally this obfuscated code can then be moved to an isolated, secure container (more on this later).
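As a minimal illustration of the obfuscation side, here is a sketch in Java of how a sensitive string literal (say, an API endpoint) can ship as an XOR-encoded byte array and only be decoded at runtime. The key and class name are purely illustrative - real obfuscation tools go much further - but the principle is the same: don’t leave readable values sitting in your binary.

```java
import java.nio.charset.StandardCharsets;

// Illustrative sketch: ship sensitive strings XOR-encoded instead of as
// plain literals, and decode them only at the moment they're needed.
public class ObfuscatedString {
    private static final byte KEY = 0x5A; // illustrative single-byte key

    // Encode at build time (this helper would normally run in a build script)
    static byte[] encode(String plain) {
        byte[] bytes = plain.getBytes(StandardCharsets.UTF_8);
        for (int i = 0; i < bytes.length; i++) bytes[i] ^= KEY;
        return bytes;
    }

    // Decode at runtime, just before the value is used
    static String decode(byte[] encoded) {
        byte[] bytes = encoded.clone();
        for (int i = 0; i < bytes.length; i++) bytes[i] ^= KEY;
        return new String(bytes, StandardCharsets.UTF_8);
    }
}
```

A single-byte XOR is trivially reversible by a determined attacker, of course - the point is only that the readable literal no longer appears in the compiled artifact, which is what string-scanning tools look for first.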

It’s also worth testing and reviewing your code from time to time to make sure that it’s as robust as it needs to be. That way you’ll be sure you’re ready to deflect the most current threats.


Encrypt your data

Done well, encryption is like some kind of steampunk puzzle. It hides the assets you value most in your app behind a scrambled, impossible-to-decipher code.

Even if a cybercriminal were to find it, they’d have no way of making sense of it as they wouldn’t have the key required to crack it.

It’s for that reason that strong encryption should form the bedrock of your app security strategy.

Because with context-sensitive encryption keys, it isn’t possible for a bad actor to carry out a static analysis on your application. And that’s typically their first step before carrying out follow-up attacks.

Encryption can be performed on strings, classes, native libraries, and other assets such as media files, text files, and HTML files. This is important, because even seemingly insignificant assets inside your app’s code can be attractive to bad actors. These assets can offer clues about how your app has been put together and where the most valuable data might be kept. That’s why it’s vital that as many of them as possible get encrypted.
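As a sketch of what asset encryption can look like in practice, the snippet below encrypts an asset’s bytes with AES-GCM using the standard javax.crypto API (class and constant names here are illustrative). In a production app the key itself would live in a secure container or hardware keystore rather than in application memory.

```java
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.security.SecureRandom;

// Illustrative sketch: authenticated encryption of an in-app asset
// (e.g. an HTML or text file) using AES-GCM.
public class AssetCrypto {
    private static final int IV_LEN = 12;    // 96-bit IV, recommended for GCM
    private static final int TAG_BITS = 128; // authentication tag length

    static byte[] encrypt(SecretKey key, byte[] asset) throws Exception {
        byte[] iv = new byte[IV_LEN];
        new SecureRandom().nextBytes(iv);    // fresh random IV per encryption
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(TAG_BITS, iv));
        byte[] ct = c.doFinal(asset);
        byte[] out = new byte[IV_LEN + ct.length]; // prepend IV to ciphertext
        System.arraycopy(iv, 0, out, 0, IV_LEN);
        System.arraycopy(ct, 0, out, IV_LEN, ct.length);
        return out;
    }

    static byte[] decrypt(SecretKey key, byte[] blob) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.DECRYPT_MODE, key,
               new GCMParameterSpec(TAG_BITS, blob, 0, IV_LEN));
        // GCM verifies the authentication tag here, so tampered
        // ciphertext fails loudly instead of decrypting to garbage
        return c.doFinal(blob, IV_LEN, blob.length - IV_LEN);
    }
}
```

GCM is a good default because it authenticates as well as encrypts: an attacker who flips bits in the stored asset causes decryption to fail rather than silently producing modified content.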

The end goal you’re aiming for is for a hacker to be put off carrying out an attack at all - for them to realise that deciphering your app’s encryption simply isn’t worth the massive investment of time it would require.

More often than not, that’s exactly what they’ll decide.


Sophisticated cryptography and secure containers

The best kind of security uses interconnected layers of protection to safeguard apps and SDKs. This is particularly vital for sensitive apps in FinTech or MedTech that carry critical logic that relies on cryptography.

If you have one of these apps, code hardening and encryption alone might not be enough. You’ll also need somewhere safe to store your cryptographic keys.

Think about payment apps, for example. They tend to use a secure, isolated container to process transactions and store key material.

This container acts as a safe environment - much like a smart card or HSM. It’s a place hackers aren’t able to gain access to.

Google itself recognizes the need for secure environments to perform cryptographic operations and to store sensitive key material and different types of credentials.

That’s because the beauty of clever cryptography like this is that it provides protection not just within your app but around it, too. It’s security that can operate between your app and the operating system.

In much the same way as robust encryption, these layers of protection act to put off cybercriminals. They find themselves up against a solid steel chain rather than individual weaker elements they can knock over one after the other.


Use tamper detection

A common attack employed by hackers is to attempt to tamper with the code inside your app. They might try to modify it, or inject their own malicious code in the form of malware.

An attacker might even tamper with an app and then try to pass off a fake version they’ve repackaged as the real thing.

There are a lot more of these bogus modified apps available to download than you’d think.

Code signing with cryptographic keys goes some way toward curbing tampering attempts. By signing their code, companies are validating the authenticity of that code and are tying their reputation to it. But sadly there’s nothing to stop a bad actor from using their own random keys to sign a modified app.

That’s why tamper detection is such a vital part of app security. It can tell you when your code has been modified and prevent any of that code from functioning if it has.

In practice this is done via environment checks and integrity checks at runtime. These checks spot the most common tools used by cybercriminals to reverse engineer your application. These include emulators and debuggers as well as hooking and rooting attempts.
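A minimal sketch of one such integrity check: hash the bytes you care about and compare the result against a value baked in at build time. The helper names here are illustrative, and real tamper-detection products layer many more checks than this, but the core mechanism is this simple.

```java
import java.security.MessageDigest;

// Illustrative sketch of a runtime integrity check: hash a code or asset
// region and compare it to a hash generated by the build pipeline. If the
// bytes have been modified, the hashes won't match and the app can refuse
// to run the affected code.
public class IntegrityCheck {
    static byte[] sha256(byte[] data) throws Exception {
        return MessageDigest.getInstance("SHA-256").digest(data);
    }

    // MessageDigest.isEqual is a constant-time comparison, which avoids
    // leaking where the mismatch occurs through timing differences
    static boolean matches(byte[] expected, byte[] actual) {
        return MessageDigest.isEqual(expected, actual);
    }
}
```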


Educate your end users

The COVID-19 crisis proved that hackers are skilled at exploiting vulnerabilities. They witnessed the collective anxiety around the virus and our eagerness to trust authorities. So, they decided that they’d pretend to be those authorities.

The pandemic has been defined by staying indoors and a shrunken world. But with it has come a soundtrack of bogus text messages arriving on our phones. Bad actors pretending to be a healthcare provider offering a vaccine or a bank sending a monthly statement. Each of these messages ends with a grim call to action that many have sadly fallen for:

“Just click on this link.”

The lesson from all of this is how vital it is to educate your end users.

That might be making them aware of how and when you’ll contact them, so they’re more likely to ignore phishing campaigns like these. It could be asking them for a strong password or biometric credentials. Or it might be making them aware of how they can use the app safely - such as being aware of open wifi connections.


Be wary of third party code and libraries

You shouldn’t feel like you have to reinvent the wheel with your app development. That said, it’s important to strike a good balance between using your own code and libraries and those created by third parties.

As we’ve said elsewhere on this site, a key cybersecurity principle is that the fewer links and access areas you create, the better.

It’s not the case that all outside libraries and frameworks are dangerous. In fact some respected frameworks are probably safer than using your own code. It’s just that the more of them that you use, the greater the chance that you’ll come across one that has been infected with malware.

And if you use a lot of them, you might also be unsure as to where that malware has come from.

When you do adopt third party code and libraries, make sure that you test them thoroughly before they become a permanent feature of your app. And always check that you’re using the most up-to-date version.
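One simple, concrete habit here: verify that every third-party artifact you bundle matches the checksum its maintainers publish. A minimal sketch of that check (helper names are illustrative):

```java
import java.security.MessageDigest;

// Illustrative sketch: before bundling a third-party library, verify that
// the artifact you downloaded matches the SHA-256 checksum published by
// the project, so a tampered or malware-infected copy is caught early.
public class DependencyCheck {
    static String sha256Hex(byte[] artifact) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(artifact);
        StringBuilder sb = new StringBuilder();
        for (byte b : digest) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    static boolean verify(byte[] artifact, String publishedHex) throws Exception {
        return sha256Hex(artifact).equalsIgnoreCase(publishedHex);
    }
}
```

Most build tools can automate this (for example, Gradle’s dependency verification metadata), so the check runs on every build rather than relying on a one-off manual step.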


Defend against man-in-the-middle attacks

A common mistake is to assume that app security is limited to protecting what is happening inside your application alone. Actually, it extends to the channels of communication that flow between it and the server.

Cybercriminals can try to trick an application into communicating with their private server rather than the genuine one used by a banking app, for example. This is called a man-in-the-middle attack.

And if the app doesn’t have any defence against fake server certificates, hackers can intercept messages containing sensitive data like bank account details. They can even modify that data in transit.

Man-in-the-middle attacks often start out as the phishing scams we mentioned earlier.

Someone is targeted with a text or email - supposedly a genuine one from their bank - and then they’re redirected to a page where they’re asked to share personal information.

Google’s Certificate Transparency project helps to stop man-in-the-middle attacks by fixing some of the structural flaws in the SSL certificate system. It also identifies those SSL certificates that have been issued maliciously.

Another key defensive tool is public key pinning. By embedding (pinning) the hash of your server’s public key inside the app, the app will refuse any connection that presents a different certificate - even one signed by a trusted authority - cutting off the main route hackers use to intercept your communication channels.
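At its core, the pinning check is a hash comparison. The sketch below (with illustrative names) computes the base64 SHA-256 of a server public key’s DER encoding and checks it against the pins embedded in the app; in practice this logic would sit inside a custom trust manager or a library such as OkHttp’s CertificatePinner.

```java
import java.security.MessageDigest;
import java.security.PublicKey;
import java.util.Base64;
import java.util.Set;

// Illustrative sketch of the core check behind public key pinning: hash the
// server key's SubjectPublicKeyInfo and compare it to the pins shipped with
// the app. A connection whose key isn't in the pin set gets rejected.
public class PublicKeyPinner {
    static String pinOf(PublicKey key) throws Exception {
        byte[] spki = key.getEncoded(); // DER-encoded SubjectPublicKeyInfo
        byte[] hash = MessageDigest.getInstance("SHA-256").digest(spki);
        return Base64.getEncoder().encodeToString(hash);
    }

    static boolean isPinned(PublicKey serverKey, Set<String> embeddedPins)
            throws Exception {
        return embeddedPins.contains(pinOf(serverKey));
    }
}
```

Pinning the public key (rather than the whole certificate) is the usual choice, since it survives routine certificate renewals as long as the key pair stays the same. It’s also wise to ship a backup pin so a key rotation doesn’t lock out your own users.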


Security never stops

Technology is always changing. The way we interact with our phones and other smart devices evolves from one year to the next.

Risks and threats change almost as quickly as technology trends, which means that the security mechanisms you had in place last year might not offer complete protection this year.

That’s why it’s so important to understand the threat landscape of your application and to revise your threat model accordingly.

By employing what we call threat intelligence you’re effectively turning the tables on the cybercriminals. You open a curtain and are able to look out and survey your app’s neighbourhood.

This view can give you all the insights you need to carry out a proper risk analysis.

In practice this might mean spotting unusually high API usage, which can be a sign of malicious activity. With a sophisticated risk analysis system, a bank could even link security incidents to individual customers and take action there and then.
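To make that concrete, a toy version of such a check might flag any client whose observed API call rate far exceeds its historical baseline. This is a deliberately simplified sketch - real risk-analysis systems draw on far richer signals - and the class name, method names, and threshold are all assumptions for illustration:

```java
// Toy sketch of the kind of check a risk-analysis system might run:
// flag a client whose observed API call rate far exceeds its baseline.
public class ApiUsageMonitor {
    private final double spikeFactor; // e.g. 5.0 means 5x baseline is suspicious

    ApiUsageMonitor(double spikeFactor) { this.spikeFactor = spikeFactor; }

    // Compare observed calls/hour against the client's historical baseline;
    // the floor of 1.0 stops brand-new clients from tripping the alarm on
    // ordinary low-volume noise.
    boolean isSuspicious(double baselinePerHour, double observedPerHour) {
        return observedPerHour > Math.max(baselinePerHour, 1.0) * spikeFactor;
    }
}
```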

Hackers don’t stand still. They’re constantly monitoring trends to get a better understanding of where they can get an edge. As developers you need to do exactly the same thing.

Because in the coming years the success of applications - which increasingly translates to overall business success - will not only be defined by cool features and smart UX.

More than anything else, they will be judged by how well they protect end users’ sensitive data.