Mobile Application Security: Best Practices for App Developers
The success of an app highly depends on its security. Users want safe app environments where they can interact with each other. Therefore, developers need to deliver digital solutions with app security in mind.
This article covers how to protect data stored within apps, namely by means of HTTPS, clearing the cache, obfuscating code, protecting local storage, and securing sensitive data (such as secret keys) kept inside the app.
HTTPS stands for Hypertext Transfer Protocol Secure; the “S” marks it as the secure version of HTTP. This protocol is designed for secure communication over computer networks, including the internet. In HTTPS, the communication is encrypted by Transport Layer Security (TLS). TLS and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols that ensure privacy and data integrity between a server and an application.
HTTP is unencrypted, unvalidated, and unverifiable. This means that attackers can easily spy on the contents of users’ communications and modify them or even stand between a user and an application on one or both sides of the communication.
TLS utilizes X.509 certificates, public/private key encryption, and an exchanged symmetric key to:
- validate a server’s identity;
- encrypt the content of communications;
- verify the integrity of communications;
- ensure that messages aren’t modified by an attacker;
- verify the authenticity of communications.
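As an illustration, here's a minimal Python sketch using the standard `ssl` module to create a client-side TLS context. The defaults it enables are exactly what enforce the guarantees listed above: the server must present a valid X.509 certificate, and that certificate must match the hostname being contacted.

```python
import ssl

# create_default_context() returns a context configured for certificate
# verification against the system's trusted CA store.
context = ssl.create_default_context()

# The server's X.509 certificate must be presented and validated (identity):
print(context.verify_mode == ssl.CERT_REQUIRED)  # → True

# The certificate must match the hostname we connect to:
print(context.check_hostname)  # → True
```

Wrapping a socket with this context (via `context.wrap_socket(...)`) is what turns a plain HTTP connection into an authenticated, encrypted HTTPS one.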
Encryption protects sensitive data at rest, in transit, and when it’s traversing multiple network connections. Encryption can be used to protect:
- files on servers;
- entire communication channels;
- hard drives;
- email messages;
- other potentially sensitive transmissions or storage of data.
Encryption uses algorithms that turn plain text into unreadable, jumbled code, ensuring an app’s security. To decrypt this ciphertext, an encryption key is required. This key is something that only authorized parties have in their possession.
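The idea can be sketched in a few lines of Python. This is a purely illustrative stream cipher built from SHA-256 (not a production-grade cipher; real apps should use vetted algorithms such as AES); it shows how the same key turns plain text into unreadable bytes and back again:

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudo-random byte stream from the key (illustrative only)."""
    stream = b""
    counter = 0
    while len(stream) < length:
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return stream[:length]

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR data with a key-derived stream; applying it twice decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

plaintext = b"user's private note"
key = b"only authorized parties hold this"

ciphertext = xor_cipher(plaintext, key)  # jumbled bytes without the key
restored = xor_cipher(ciphertext, key)   # the same operation reverses it
```

Without the correct key, the keystream differs and the decryption step yields more meaningless bytes rather than the original text.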
Different types of attackers, namely hackers, may leverage their technical expertise to infiltrate protected systems. Another kind of attacker is a social engineer. Social engineers exploit weaknesses of human psychology to trick people into offering them access to personal information.
Phishing is a form of social engineering in which an attacker steals a user’s personal information, such as login credentials. In a phishing attack, the attacker pretends to be a reputable entity via email or another communication channel and tricks the victim into installing malware through a link or attachment.
Another type of threat is a man-in-the-middle (MITM) attack. In an MITM attack, the attacker intercepts communications between two parties, such as between a mobile app and a database. The attacker can then eavesdrop on or manipulate communications to cause harm or bypass other security measures on either side of the connection.
App owners should always protect their apps with HTTPS, even if they don’t handle sensitive communications. HTTPS is a requirement for new browser features. Unprotected HTTP requests can reveal information about the behaviors and identities of users.
A cache is a hardware or software component that stores any kind of data. Cached data is retrieved faster when requested since it’s saved on local storage or in memory. Data stored in a cache might be the result of an earlier computation or a duplicate of data stored elsewhere.
An app’s cache stores elements of apps or websites so they can be loaded quickly when accessed again. App data refers to both cached data and other pieces of saved information such as a user’s login and preference settings within the app itself.
A device’s cache contains data for all websites and apps that have been used on the device. It’s necessary to clear the cache every now and then to free up some space on your phone or tablet.
Android devices store lots of information in the system cache, so it gradually takes up more and more storage space. Clearing the app cache is necessary as part of the troubleshooting process to resolve a number of problems that may arise because of corrupted cache data. Android is getting better with every update, and most of the time you no longer need to empty the cache on your own. Android systems usually manage the cache very effectively.
A device needs free memory to download data. The cache exists for one purpose: to hold temporary data. For example, when a user downloads pictures, they’re saved in the cache and can be reused from there instead of being redownloaded. Clearing an app’s cache, or resetting the app to its defaults, frees up memory.
Sometimes, old information that’s no longer valid remains in the cache, so downloaded apps may not work properly or an error may occur during a regular update. If these issues arise, clear the cache to remove the invalid data. It’s also important to clear an app’s cache during testing to avoid chasing bugs caused by stale cached data.
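To illustrate why stale entries cause these problems, here is a hypothetical in-memory cache sketched in Python, with a time-to-live (TTL) and an explicit clear operation that plays the role of clearing an app's cache:

```python
import time

class TTLCache:
    """A tiny illustrative cache: entries expire after ttl seconds."""

    def __init__(self, ttl: float):
        self.ttl = ttl
        self._store = {}  # key -> (value, timestamp)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, stored_at = entry
        if time.monotonic() - stored_at > self.ttl:
            del self._store[key]  # stale entry: behave as if absent
            return None
        return value

    def put(self, key, value):
        self._store[key] = (value, time.monotonic())

    def clear(self):
        """Drop everything, like clearing an app's cache."""
        self._store.clear()

cache = TTLCache(ttl=60.0)
cache.put("avatar.png", b"\x89PNG...")  # saved after the first download
hit = cache.get("avatar.png")           # reused instead of redownloading
cache.clear()
miss = cache.get("avatar.png")          # gone: forces a fresh download
```

A cache without expiry or a clear operation is exactly where invalid, outdated data accumulates.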
Local storage is the part of the file system where media files, settings files, and other files are stored. For example, Viber and Telegram store photos and files that users send or receive in local storage. The files remain there until a user deletes them manually in the app settings.
Securing stored data means preventing unauthorized access as well as preventing accidental or intentional destruction, infection, or corruption. Steps to secure data include understanding threats, aligning appropriate layers of defense, and continually monitoring activity logs, taking action as needed.
If storage isn’t protected, a hacker or some script can infiltrate the memory through addresses or the file manager. Thus, saved files can be endangered. For example, if a user sends some personal pictures, they can be easily retrieved from storage.
In protected local storage, data is encrypted by means of a key. The data itself therefore consists of bytes without any meaning. When encrypted, all files (video, text, audio, etc.) can only be read when deciphered by a key (e.g. password). Using this key, operations can be made on these bytes to turn them back into plain text. Without the correct decryption key, all that a malicious user will get is jumbled code that has no meaning.
Code obfuscation is the deliberate act of creating source or machine code that’s difficult for humans (hackers) to understand. Developers can obfuscate code to conceal its purpose, logic, or implicit values embedded in it. A tool called an obfuscator can be used to automatically convert straightforward source code into a program that works the same way but is much harder to read and understand. Developers can also obfuscate code manually. Code obfuscation may include:
- encrypting some or all of the code;
- stripping out potentially revealing metadata;
- renaming useful class and variable names to meaningless labels;
- adding unused or meaningless code to an application’s binary.
Code is often obfuscated to protect intellectual property and prevent an attacker from reverse engineering a software program. On iOS, code obfuscation is less widespread because libraries are closed rather than public as they are on Android, so an attacker can rarely get at the source code of iOS libraries. When a library’s source code is public, code obfuscation can be used.
By making an application much more difficult to reverse engineer, a developer can protect it against:
- theft of trade secrets (intellectual property);
- unauthorized access;
- bypassing licensing or other controls;
- discovery of vulnerabilities.
Writers of malicious code who want to hide or disguise their code’s true purpose also use obfuscation to prevent their malware from being detected by signature-based antimalware tools. Deobfuscation techniques, such as program slicing, can sometimes be used to reverse engineer obfuscated code.
Code obfuscation comprises many different techniques that can complement each other to create a layered and reliable defense against attackers. Some examples of obfuscation and application security techniques include the following:
Renaming obfuscation. Renaming alters the names of methods and variables. It makes the decompiled source harder for a human to understand but doesn’t alter program execution. The new names can utilize different schemes: letters (A, B, C), numbers, unprintable characters, or even invisible characters. Names can also be overloaded as long as they have different scopes. Name obfuscation is a basic transformation that’s used by most .NET, iOS, Java, and Android obfuscators. For example, the decompiled source may contain any number of variables named A, interconnected with other variables like B and C. To follow the logic, an attacker must be extremely attentive so as not to miss any element or variable while deciphering the code.
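A hypothetical before-and-after in Python shows the effect. Both functions behave identically, but the renamed version tells the reader nothing about its purpose:

```python
# Original, readable source:
def calculate_discount(price: float, loyalty_years: int) -> float:
    discount_rate = min(0.05 * loyalty_years, 0.25)
    return price * (1 - discount_rate)

# After renaming obfuscation (same logic, meaningless labels):
def a(b: float, c: int) -> float:
    d = min(0.05 * c, 0.25)
    return b * (1 - d)
```

An obfuscator applies this transformation mechanically across thousands of identifiers, which is what makes the decompiled output so tedious to reconstruct.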
Control flow obfuscation. Control flow obfuscation synthesizes conditional, branching, and iterative constructs that produce valid executable logic but yield non-deterministic semantic results when decompiled. This makes decompiled code look like spaghetti logic, which is very difficult for a hacker to comprehend. These techniques may affect the runtime performance of a method, however.
Instruction pattern transformation. This technique converts common instructions created by the compiler to other, less obvious constructs. These are perfectly legal machine language instructions that may not map cleanly to high-level languages such as Java or C#. One example is transient variable caching, which leverages the stack-based nature of the Java and .NET runtimes.
Dummy code insertion. Code can be inserted into the executable that doesn’t affect the logic of the program but breaks decompilers or makes reverse engineered code much more difficult to analyze.
Unused code and metadata removal. Removing debugging information, non-essential metadata, and unused code from applications makes them smaller and reduces the information available to an attacker. This procedure may slightly improve runtime performance.
Binary linking/merging. This technique combines multiple input executables/libraries into one or more output binaries. Linking can be used to make an application smaller, especially when used with renaming and pruning. It can simplify deployment scenarios, and it may reduce information available to hackers.
Opaque predicate insertion. This works by adding conditional branches that always evaluate to known results—results that cannot easily be determined via static analysis. This is a way of introducing potentially incorrect code that will never actually be executed but is confusing to attackers who are trying to understand the decompiled output.
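A minimal sketch of an opaque predicate in Python: `(n * n) % 4 < 2` is always true, because a square is congruent to 0 or 1 modulo 4, but that fact isn't obvious from the branch itself, so the `else` arm is confusing dead code:

```python
def checked_double(n: int) -> int:
    # Opaque predicate: n*n mod 4 is always 0 or 1, so this branch
    # always runs, but simple static analysis can't tell.
    if (n * n) % 4 < 2:
        return n * 2          # the only branch that ever executes
    else:
        return n ^ 0xDEAD     # dead code inserted to mislead reversers
```

An attacker reading the decompiled output must either prove the predicate constant or waste time analyzing a path that can never be taken.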
Anti-tamper. An obfuscator can inject application self-protection into the source code to verify that an application hasn’t been tampered with in any way. If tampering is detected, the application can shut itself down, limit functionality, invoke random crashes (to disguise the reason for the crashes), or perform any other custom action. It might also send a message to a service to provide details about the detected tampering.
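In practice this protection is injected into the binary by tooling, but the underlying principle, comparing a checksum of the shipped code against a known-good value recorded at build time, can be sketched in Python (the code strings here are hypothetical placeholders):

```python
import hashlib

def fingerprint(code: bytes) -> str:
    """SHA-256 checksum of the application's code."""
    return hashlib.sha256(code).hexdigest()

shipped_code = b"def pay(amount): charge(amount)"
expected = fingerprint(shipped_code)  # recorded at build time

def verify(current_code: bytes) -> bool:
    """At startup, refuse to run if the code has been altered."""
    return fingerprint(current_code) == expected

tampered = b"def pay(amount): charge(0)  # patched by an attacker"
# verify(shipped_code) passes; verify(tampered) fails
```

On a failed check, the app can shut down, degrade, or report the tampering, as described above.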
Anti-debug. When a hacker is trying to infiltrate or counterfeit an app, steal its data, or alter the behavior of a critical piece of infrastructure software, they’ll almost certainly begin with reverse engineering and stepping through an application with a debugger. An obfuscator can layer in application self-protection by injecting code to detect if the production application is executing within a debugger. If a debugger is used, it can corrupt sensitive data (protecting it from theft), invoke random crashes (to disguise that the crashes are the result of a debug check), or perform any other custom action. It might also send a message to a service to provide a warning signal.
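As a loose analogy (real mobile anti-debugging relies on platform APIs such as ptrace checks; this Python sketch is only illustrative), a process can inspect whether a trace hook, the mechanism Python debuggers depend on, is currently attached:

```python
import sys

def debugger_attached() -> bool:
    """True if a trace function (used by Python debuggers) is installed."""
    return sys.gettrace() is not None

before = debugger_attached()  # normally False: no debugger present

# Simulate a debugger installing a trace hook:
sys.settrace(lambda frame, event, arg: None)
detected = debugger_attached()  # now True: the hook is visible
sys.settrace(None)              # remove the hook again
```

A protected app would react to `detected` by corrupting sensitive data, crashing, or alerting a backend service, as described above.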
When an app requests information from external services, some requests require secret keys. For example, navigation with Google Maps or use of the Google search service requires a secret key generated on the service’s website. A secret key gives a user access to a service: by presenting this key, the app identifies itself to the system as an authorized user.
Secret keys should be stored on the server side, since it’s the server that uses them, and they’re better protected there than on the client side. The server side therefore warrants the stronger security measures.
If an app doesn’t have a server side, then the secret keys need to be saved within the app. In that case it’s not enough to keep them in code or adjust file permissions: they must also be encrypted, and access to them must be limited. The mere presence of such a key in the app is a threat of its own.
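One common client-side trick (a mitigation, not a complete defense, and the key below is a made-up placeholder) is to never store the key as a single plain constant. Instead, it's split at build time into random-looking fragments that are only recombined in memory at the moment of use:

```python
import os

def split_key(key: bytes) -> tuple[bytes, bytes]:
    """Split a key into two fragments: part_a is random,
    part_b = key XOR part_a. Neither fragment alone is the key."""
    part_a = os.urandom(len(key))
    part_b = bytes(k ^ a for k, a in zip(key, part_a))
    return part_a, part_b

def recombine(part_a: bytes, part_b: bytes) -> bytes:
    """Reassemble the key in memory only when it's actually needed."""
    return bytes(a ^ b for a, b in zip(part_a, part_b))

secret = b"hypothetical_api_key"  # placeholder for illustration
a_part, b_part = split_key(secret)
restored_key = recombine(a_part, b_part)
```

This stops a trivial `strings`-style scan of the binary from revealing the key, though a determined attacker with a debugger can still recover it, which is why server-side storage remains preferable.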
Modern users are concerned about the security of the apps they use, and app owners should strive to create applications that satisfy user expectations regarding safety. Apps built with Java, Android, .NET, and iOS run on devices outside an app owner’s immediate control, so their source code needs reliable protection. All the aforementioned approaches and techniques enable successful application development by making it difficult for attackers to get access to sensitive data.
At SteelKiwi, we build projects from the ground up and do research to help you choose the right platform for your app. Our SteelKiwi team has built a secure online ecosystem for the Nova Vita healthcare center and can help you build an app that offers a secure and safe environment for your target audience. Don’t hesitate to get in touch with one of our sales managers to discuss the details of your project.