38

I've been trying my hand at building apps with Flutter and Dart. I noticed that if someone decompiled my app, they could access a whole lot of things I didn't want them to access.

For example, if I am calling my database to set the user's 'active' status to False when they cancel their plan, they could just comment out that bit of code and get access to the entire app again, despite having cancelled their plan.

Since this is my first app, my backend is Firebase. The app handles everything and calls Firestore when it needs to read or write data.

  1. Is this something to really worry about?

  2. If so should I be using something like Firebase Cloud Functions?

  3. Should I be creating a proper backend? If so what would its structure be? Would my app just be a client for the backend?

Peter Mortensen
123432198765
  • 20
    "For example if I am calling my database to set the users 'active' status to False when they cancel their plan they could just comment out that bit of code and they get access to the entire app again despite having cancelled their plan." It's concerning that your app is so easy to fool. What if someone's credit card expires and they don't update it? Would they get to keep free access forever? If not, then how would the 'active' status ever change in that case? – Joseph Sible-Reinstate Monica Jun 21 '20 at 01:53
  • 1
    The credit card expiring and other cases are handled by the in-app purchases part of the app. I am working on improving it though (I haven't published the app; it's still in its early stages). – 123432198765 Jun 21 '20 at 02:09
  • What kind of software are you coding? Is it the embedded code of some [ICBM](https://en.wikipedia.org/wiki/Intercontinental_ballistic_missile), or something [DO-178C](https://en.wikipedia.org/wiki/DO-178C)-related? What is the financial risk if someone decompiles it? – Basile Starynkevitch Jun 21 '20 at 06:38
  • 1
    See also [SoftwareHeritage](http://softwareheritage.org/) – Basile Starynkevitch Jun 21 '20 at 06:44
  • 16
    Setting that kind of flag in the frontend is a terrible idea. Call a web service that does it for you. – J_rite Jun 22 '20 at 05:47
  • 16
    The logic you describe should simply not be handled by the client's code; it should be handled on the server side. You never know if it's even your software that connects. It could be your software, a modified version, or even a completely separate (maybe malicious) client. For example, cancelling their plan and setting the user inactive should be an atomic operation handled completely on the server. There shouldn't be separate steps involved at all on the client side. – kutschkem Jun 22 '20 at 08:46
  • 4
    You should read [this answer](https://softwareengineering.stackexchange.com/a/364630/121035) and note that although that answer focuses on how ludicrously trivial it is to get a properly formatted POST from a web-based client app, it's still quite possible to reverse engineer the responses from your client side app. **You cannot know if the user is using your code on their device.** – 8bittree Jun 22 '20 at 17:48
  • I'll post it as a comment, but that is the answer. **No piece of business logic should live on the client. Use the server for database storage.** – usr-local-ΕΨΗΕΛΩΝ Jun 23 '20 at 10:15
  • This is why apps are signed: to ensure that the code being run is the code present on the server. This is also why jailbreaking is so dangerous. – Thorbjørn Ravn Andersen Jun 23 '20 at 14:44
  • @ThorbjørnRavnAndersen: Signing doesn't guarantee that. It only assures the _user_ of who produced the binary. Signing doesn't guarantee the server anything at all. – Mooing Duck Jun 23 '20 at 17:14

10 Answers

85

I used to be a full-time binary reverse engineer, and I still spend about 80% of my time reverse-engineering software (legally).

There are some good answers here already, but I wanted to add a few touches.

Legal Aspect

I'm not a lawyer. But as far as I'm concerned (and many others agree), reverse engineering isn't really legally actionable until you've done something with the knowledge. Think about this situation:

Say I'm a reverse engineer and I download your app. I disconnect my "lab" machine from the network. Now, I decompile, disassemble, and debug your app, taking detailed notes of how it works. After doing all of this, I wipe out my lab machine and it never sees the network.

Then, I do nothing with that knowledge, because it's a weekend hobby and I just enjoy decompiling things.

It's debatable whether or not this is illegal, and more importantly, it's unenforceable. There is no way you, your lawyer, or anyone else will ever know that I did this unless I'm already suspected of copyright violations, patent violations, or some other crime. Even if you sued me, what would you sue me for? I never published, distributed, advertised, told anyone, or did any kind of monetary damage to your business whatsoever. What would your "damages" be? For this reason, the vast majority of the time (see the EFF page linked in a comment earlier), real prosecution stems from some (usually major) perceived loss suffered by the software development firm or copyright/patent holder.

The trick is that a reverse engineer may actually use some of the knowledge that they learned from your app code, and do things that will be hard for you to detect or prove. If a reverse engineer copied your code word for word and then sold it in another app, this would be easier to detect. However, if they write code that does the same thing but is structured entirely differently, it would be difficult to detect or prove.

Learn who would target your app, and why

What type of people would want to reverse engineer your app? Why? What would they get out of it?

Are they hobbyists who enjoy your app and could potentially even be helping your business by fostering a community of hacker enthusiasts? Are they business competitors? If so, who? What is their motive? How much would they gain?

These questions are all very important to ask because, at the end of the day, the more time you invest in locking down your code, the more it costs you, and the more it costs the adversary to reverse engineer it. You need to find the sweet spot: enough application hardening that most technically skilled people won't want to bother spending the time to thwart your app's defenses.

Five Suggestions

  1. Create a so-called "Threat Model." This is where you sit down and think about your application's modules and components, and do research on which areas would most likely be compromised and how. You map these out, many times in a diagram, and then use that threat model to address them as best as you can in the implementation. Perhaps you model out 10 threats, but decide that only 3 are most likely, and address those 3 in the code or architecture.

  2. Adopt an architecture which trusts the client application as little as possible. While the device owner can always view the app's code and the network traffic, they cannot always access the server. There are certain things you can store on the server, such as sensitive API keys, that cannot be accessed by the attacker. Look into "AWS Secrets Manager" or "HashiCorp Vault", for example. For every client module, ask yourself "Would it be OK if an attacker could see the inner workings of this? Why not?" and make the necessary adjustments. (A minimal sketch of this idea follows the list.)

  3. Apply obfuscation if your threat model requires it. With obfuscation, the sky is the limit. The reality is, it is an effective protection mechanism in many cases. I hear people bashing on obfuscation a lot. They say things like

    Obfuscation will never stop a determined attacker because it can always be reversed, the CPU needs to see the code, and so on.

    The reality is, as a reverse engineer, if whatever you've done has made cracking into your app take 2-3 weeks instead of an hour (or even 3 hours instead of 5 minutes), I'm only cracking into your app if I really, really want something. Most people's apps are frankly not that popular or interesting. Sectors which need to take extra measures include financial, government, video game anti-hacking/anti-cheat, and so on...

    Furthermore, the above argument is nonsensical. Cryptography doesn't stop people from getting your data, it just slows them... Yet you're viewing this page right now over TLS. Most door locks are easily picked by a skilled lockpicker in seconds, people can be shot through bullet-proof vests, and people sometimes die in a car accident when wearing a seatbelt... So should we not lock doors, wear vests, and wear our seatbelts? No, that would be silly, as these devices reduce the likelihood of a problem, just like obfuscation, symbol stripping, developing a more secure architecture, using a Secrets Manager service to store your API secrets, and other hardening techniques that help prevent reverse engineering.

    Say I'm a competitor and I want to learn how to make an app like yours. I go to the app store and search for similar apps. I find 10 and download them all. I do a string search through each. Seven of them turn up nothing useful, and in three I find unstripped symbols, credentials, or other hints... Which apps do you think I'm going to be copying? Those three. You don't want to be those three.

  4. Scan your source code for sensitive strings such as API secrets, sensitive keys, admin passwords, database passwords, email addresses, AWS keys, and so on. I usually search for words like "secret", "password", "passphrase", ".com", and "http" using a tool called ripgrep. There will be false positives, but you may be surprised at what you find. There are automated tools which help accomplish this, such as truffleHog.

  5. After you build your application, run the strings utility or a similar utility on it. View the output both manually and using a text search like ripgrep or grep. You'll be surprised at what you find.
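
As a minimal sketch of suggestion 2, assuming a small Node 18+/Express backend (the endpoint, upstream URL, and environment variable names are illustrative, not a prescription): the third-party API key stays on the server, and the app only ever calls your own endpoint.

// server.ts -- the client never ships or sees THIRD_PARTY_API_KEY.
import express from "express";

const app = express();

// Injected at deploy time from a secrets manager (AWS Secrets Manager,
// HashiCorp Vault, or your platform's secret store) -- never hard-coded.
const API_KEY = process.env.THIRD_PARTY_API_KEY;

app.get("/quote", async (req, res) => {
  // The privileged upstream call happens server-side; the app only receives
  // the result it is allowed to see.
  const upstream = await fetch(
    `https://api.example.com/v1/quote?symbol=${req.query.symbol}&key=${API_KEY}`
  );
  res.status(upstream.status).json(await upstream.json());
});

app.listen(3000);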

Know about deobfuscators and look for them

Lastly, know that various obfuscators out there have deobfuscators and "unpackers." One such example is de4dot, which deobfuscates about 20 different C#/.NET obfuscator outputs. So, if your idea of protecting something sensitive is just using a commodity obfuscator, there's a high chance that there is also a deobfuscator or other folks online who are discussing deobfuscating it, and it would be useful for you to research them before you decide to use an obfuscator.

Why bother obfuscating when I can open de4dot and deobfuscate your entire program in 2 seconds by searching "[insert language here] deobfuscator?" On the other hand, if your team uses some custom obfuscation techniques, it may actually be harder for your adversaries because they would need a deeper understanding of deobfuscation and obfuscation techniques aside from searching the web for deobfuscators and just running one real quick.

Glorfindel
the_endian
  • 1
    ```if whatever you've done has made cracking into your app take 2-3 weeks instead of an hour``` goddamn true. while it's by no means bulletproof, obfuscation may dramatically increase the amount of time, skill, and effort required, and may cause people to just give up. – hanshenrik Jun 21 '20 at 13:16
  • 29
    I agree with most of this, but "cryptography doesn't stop people from getting your data, it just slows them" is very misleading. Cryptography using a properly implemented and modern algorithm *does* stop someone accessing your data, provided that they haven't got some shortcut to the key (e.g. a narrow password search space). This is because brute forcing such a key is infeasible. – Jon Bentley Jun 21 '20 at 15:13
  • @JonBentley actually no. cryptography can only slow down an attacker. The only infeasible thing about it is if you can leave the attacker with no method but a brute force that takes longer than the heat death of the universe. They still have access to your data. If nothing else they know you transmitted data. – candied_orange Jun 21 '20 at 15:18
  • If 3 hours vs 5 minutes deters your attacker, the damage/gain probably isn’t worth much. – jmoreno Jun 21 '20 at 15:23
  • 12
    @candied_orange I'm not sure what your point is. "brute force that takes longer than the heat death of the universe" is considered infeasible enough by most people, and was exactly my point. We're not talking about knowing you transmitted data (that's a separate topic), we're talking about "stop[ping] people from getting your data". Sure we can nitpick and say that waiting until after heat death is "slowing down", but for practical purposes it's the same as stopping. – Jon Bentley Jun 21 '20 at 15:25
  • 2
    @JonBentley my point is that your point is the_endians point. You never stop them. You just slow them down. – candied_orange Jun 21 '20 at 15:26
  • 12
    @candied_orange Under our current understanding of the laws of physics, there is no possible way to wait until the heat death of the universe and then continue to do useful work. So yes, you stop them. Either way, this is being pointlessly pedantic and isn't useful information for any kind of practical purpose. – Jon Bentley Jun 21 '20 at 15:33
  • 2
    @JonBentley I understand your point wholeheartedly. However, as you may know, the whole heat-death-of-the-universe argument hasn't been shown to be practically accurate if, e.g., within the next 5-15 years we build a stable quantum computer or some other powerful computing device that we don't currently have, which in practice renders such lengths of time far shorter. I pondered my statement as I wrote it, as I knew I would receive comments like this. But the point still holds. The brute-force time estimates assume the computers known at the time. This is why I left that statement. – the_endian Jun 21 '20 at 19:00
  • 1
    @JonBentley IMO, the statements about how long a bruteforce would take are also misleading - as they do not account for future computational advancements even within the next decade. Saying it will take 47371983 years, when in just 10, we have a computer that is 42387893x more powerful at prime factorization and so on... To me, that is misleading and similar to speaking about 1920s dollar values in 2020 without adjusting for inflation. But for all readers, JonBentley is right in that cryptography is generally *far stronger* than obfuscation schemes. :) – the_endian Jun 21 '20 at 19:08
  • 5
    As a security professional, I would suggest you take the bit about cryptography out, since it seems like you agree it's not really accurate, even if not about exactly how inaccurate. As far as bruteforcing goes, I can assure you that no possible performance advance in CPUs will make bruteforcing of modern cryptographic system practical without SOME other qualitative change, meaning either (1) a mathematical break of the cryptosystem involved, beyond just "faster machines", or (2) a practical application of quantum computers (which breaks some systems but not others, on current knowledge.) – Glenn Willen Jun 22 '20 at 00:02
  • 7
    For some context: My laptop can do about 10^6 AES decryptions per second. The Internet tells me a very fast CPU could plausibly do 10^8. (Exact numbers don't matter.) Cracking 128-bit AES with that very fast CPU therefore requires not 47371983 years, but around 1,000,000,000,000,000,000,000,000,000 years (two thousand trillion times greater). If you add 20 years of Moore's Law at 1.5x/year (though reality has not kept up), and assume a billion of those machines, you get "only" 30 billion years. And at some point during those 20 years, we can switch to AES-256 if we really feel threatened. – Glenn Willen Jun 22 '20 at 00:17
  • 1
    @glenn the mechanism is still the same though. It prevents access to information due to a delay in the ability to access a resource. That was my point and it stands. I would not describe that as being inaccurate, but rather as intentionally not descriptive. I urge readers not to get too caught up on it and would expect them to research cryptography much as we expect our askers to research their subject matter before asking a question. – the_endian Jun 22 '20 at 05:33
  • @GlennWillen by the way, Moore's law applies to traditional computers and does not apply to quantum or other non-traditional computers that humans come up with in the future. The first electronic computer was called the ENIAC. It was built less than 100 years ago. If less than 100 years after it was built, humans also have designed and built a quantum computer, I would challenge you on the notion that 1,000,000,000,000,000,000,000,000,000 years are *actually* required to break AES. You're hung up on the details of a traditional computer and are not thinking practically here. – the_endian Jun 22 '20 at 07:56
  • 2
    That's why I said "using a properly implemented and modern algorithm". Such algorithms, according to our current understanding, are *not* susceptible to quantum computing, so that rules out that entire part of your argument. As for Moore's law, the calculations which predict waiting to the heat death make the assumption that you have converted the entire output of the sun into powering a computer which is 100% efficient (i.e. you cannot possibly exceed that without additional power sources). See for example [here](https://www.schneier.com/blog/archives/2009/09/the_doghouse_cr.html) – Jon Bentley Jun 22 '20 at 08:36
  • 3
    It's a popular misconception that quantum computers are a kind of magic bullet which speed up computing generally. In reality, they speed up only specific types of problems. – Jon Bentley Jun 22 '20 at 08:41
  • Jagex, creators of RuneScape (which used to be a popular online game), made their own obfuscator and suddenly 90% of their user accounts disappeared overnight. – user253751 Jun 22 '20 at 12:30
  • @JonBentley prime number decomposition being one of them. Nitpicking about crypto aside, this answer is on point. A little obfuscation, keep everything you can on the server and check as often as you can with it and a few other simple measures go a long way. Also, depending on the application, it might be possible that the server is not trustable either from the app, because it might be possible to replace it with a custom server ;) – bracco23 Jun 22 '20 at 15:51
  • @the_endian it still seems odd to leave a contentious point about cryptography in there, when the examples of seat belts, bullet proof vests, and lock picking, are already there, and are far better illustrations of the point you're making. – James_pic Jun 23 '20 at 09:33
  • 1
    @the_endian "[you] are not thinking practically here" Weren't you the one who was saying delaying until the heat death of the universe is distinct from 'stopping'? – Michael Jun 23 '20 at 10:55
  • 1
    @JonBentley: It's safer to err on the side of considering things as slowing down attackers, not stopping them completely. You cannot account for future computing power, your "heat death" argument is based on _current_ computing. Think of it this way: if I tell people that an airbag keeps them completely safe, they're going to drive more recklessly and are probably going to end up in monumental accidents that the airbag was never designed for. But if I tell them that the airbag only minimizes _some_ harm that comes to them, they'll be much less likely to behave as excessively reckless. – Flater Jun 23 '20 at 11:06
  • @JonBentley: In short, my objection to your statement isn't one of accurate fact (I don't know for a fact that we will have sufficient computing power in our lifetime), but one of both uncertainty (we _might_ have sufficient computing power because of a breakthrough) and human nature (a feeling of absolute safety eventually leads to absolute recklessness with the expectation of no consequences) – Flater Jun 23 '20 at 11:08
  • @JonBentley If the application itself eventually presents the data, when used properly, perhaps through the UI, then I'd say "Cryptography doesn't stop people from getting to your data." Knowing the plain text and observing the blocks read are a good first step in attacking an algorithm. Having the binary that encrypts / decrypts it, even in obfuscated form, permits reuse of the algorithm without fully understanding it. Even keys loaded across the internet can be intercepted, if a reverse engineer can identify the correct RAM locations or inject print statements in the right locations. – Edwin Buck Jun 23 '20 at 12:30
  • @Flater No, we *do* know for a fact we won't have the computing power in our lifetime. The heat death calculations are not based on current computing. They are based on a theoretically 100% efficient computer (i.e. the absolute limit of what you can do without breaking the laws of thermodynamics) operating on the entire output of the sun. In Bruce Schneier's example, even upgrading to the entire output of a supernova does not make even a remotely significant difference. – Jon Bentley Jun 24 '20 at 08:13
  • @EdwinBuck The scenario wasn't about an app displaying data that has been encrypted. It was used as an analogy; the example was TLS, which protects data in transit. – Jon Bentley Jun 24 '20 at 08:21
  • **Comments are not for extended off-topic discussions. They should be used for improving the content or clarifications.** – maple_shaft Jun 24 '20 at 12:21
  • One of my product's competitors decompiles my Android app every time I add new features, and then uses them in their own app. It's an Android app, so it's easier to decompile despite using obfuscators, so I intentionally add abusive notes in plain String variables in our local language for those code thieves xD and I know they read it :D Well, I can't protect my app, but at least shaming them feels good :) – Saqib Jun 24 '20 at 19:39
32

Once someone has a copy of your app, they can do anything with it. Your security model has to assume that nothing in your app is secret, and that actions that look like they have been made by your app might actually be malicious. As an approximation, a native app is about as secure as a web app.

That means that you must not store any API tokens or similar in your app. If you need to keep something secret, you have to write a server backend to manage the secret stuff and have your app talk to this backend. FaaS approaches might also work if you're not expecting many requests.

Firebase does have server-side authentication capabilities that e.g. prevent a user from modifying other users' data – if you configure everything appropriately. You can also apply some amount of validation to check that the data sent by the user makes sense. But in general, once a user has access to a document per some rules, they can change whatever they want within it. Please read the Firebase security documentation carefully to avoid security breaches.

On mobile devices that haven't been rooted, apps can enjoy some basic security guarantees; for example, it is possible to check that they are actually running on a specific device and that the app has not been modified. This means that, e.g., 2FA apps or banking apps can be pretty secure, but this doesn't ensure that you can defend against decompilation. You must still ensure that your backend never trusts anything from the client.

amon
  • What would you suggest as validation? What's the best way to check whether the request is genuine, given that it's just requesting a change to a boolean value? – 123432198765 Jun 20 '20 at 18:15
  • 32
    @yesashishs the best question to ask isn't “Is this **request** genuine?” because that's impossible to determine reliably. Instead, ask: “Is this **user** permitted to perform this action?”. The app merely acts on behalf of the user. Related theoretical background: the [confused deputy problem](https://en.wikipedia.org/wiki/Confused_deputy_problem). – amon Jun 20 '20 at 18:26
  • That's interesting... I was under the impression that the user should be able to tell the database whether they were premium or not. When they sign up, I write to the database saying that they are premium. I realize now that this is a terrible idea! How would I notify the database that the user is a premium member without using the user's phone? – 123432198765 Jun 20 '20 at 18:50
  • 7
    @yesashishs You would still use the user's phone - you would just have the user provide proof, such as a user account or authentication key. In other words, instead of saying "oh, hey, I'm premium", they say "I'm premium, and here's my product key to prove it." I believe that phone SDKs have tools for getting those kinds of proofs of purchase for apps bought through their stores, that the server can then verify independently. – TheHans255 Jun 21 '20 at 02:35
  • 2
    @yesashishs When processing payments, the standard approach is that the payment processor (e.g. Stripe) contacts your servers directly and tells you “we received a payment for item PREMIUM by user 1234 for $10”. You might need a couple of Firebase functions to provide the necessary interface. – amon Jun 21 '20 at 08:40
  • 1
    @yesashishs In general, this is the use case for a session token. Instead of the user making requests that say "I'm a premium user, let me see the premium page" or "Renew my subscription with card number 12345 for another 24 months," they should be saying "Renew my subscription with the card associated with my account. Here's a token I got when I logged in to determine and verify my identity." Basically, all the user should be able to send is the requested action (renewal, requesting a resource, etc.) and some sort of secure string they get when they log in with their username and password. – Feathercrown Jun 22 '20 at 20:40
  • You cannot really check that an app has not been modified. It's the app that checks that, that's the problem. You just remove the checks. – Sulthan Jun 22 '20 at 20:42
  • 1
    @yesashishs The backend should be able to save a copy of the token when they log in, to be able to recognize this token as referring to the current session of user #67890 or whoever logged in. So login --> generate new token and save it with a user ID or username or other identifying information in the backend --> send token to frontend --> use the saved record to determine which accounts to apply actions to when requests come back with the token. The token should be different every time, and ideally the connection should be secure. This avoids people stealing the token, bypassing security. – Feathercrown Jun 22 '20 at 20:47
  • @Sulthan The key is that some checks can be moved to the operating system, or to hardware security modules. E.g. the OS can refuse to run modified apps, and it's possible to assert that the user is in control of a specific device. But yes, properly implementing such checks is very tricky. – amon Jun 23 '20 at 11:29
  • @amon Usually, what a hacker would do is sign the app again and modify the checks inside the app. That's what developers do all the time with their apps. Usually you wouldn't decompile, though. It makes more sense to change the assembly directly, because a decompiled app is hard to compile back. – Sulthan Jun 23 '20 at 11:37
14

Never trust the client. Make sure that anything you need to keep private is stored on the server and requires user-specific credentials to access.

Solomon Ucko
  • Using your target platform's DRM measures to protect the secrets your application uses to authenticate to the server is a good adjunct to this. If that secret is strictly per-customer, a lot of problems go away. – Tim Williscroft Jun 22 '20 at 03:04
  • 1
    @TimWilliscroft From the user, or from other users on the same device, or from other apps, or from network MITM, or from something/someone else? Also, what sort of problems are you referring to? Although DRM makes it more difficult, the user can always access anything stored on their device and modify/fake communications from their device. – Solomon Ucko Jun 22 '20 at 03:15
  • If your platform allows applications to have DRM-protected secrets that the machine owner can't read, those are what I'm talking about. Secure Enclaves and related technologies. – Tim Williscroft Jun 29 '20 at 02:20
  • @TimWilliscroft iOS's Secure Enclave only stores encryption/decryption/signing keys and performs the relevant algorithms. Any communication to or from it, or not involving it, can still be intercepted, faked, etc. For example, you could use it to sign messages, and prevent the user from accessing the key, but they could still use the Secure Enclave to sign their own messages. – Solomon Ucko Jun 29 '20 at 03:12
6

Is this something to really worry about?

This is very dependent on the product. A lot of the time, someone doing it will "cost" you $30 a month—who cares if four or five (or most likely zero!) people do it? You can monitor the situation over time and make changes if necessary. It's a bit like profiling code; engineers make notoriously bad estimates of good and bad bits.

Also, think rationally. If you are "angry" at people who do it, put that aside. Etc.

HOWEVER!!

they could just comment out that bit of code and they get access to the entire app again

If this is a problem, there is a good chance your users could do more serious things you haven't thought of, like impersonating other users, messing with their profiles, buying things with their money.

If so should I be using something like Firebase Cloud Functions?

Yes, "something like" that. For 95% of people asking this question, the problem is pretty much eliminated if you perform authentication and authorisation and sensitive functionality on the server/cloud rather than the client (and follow best practices correctly.) You don't necessarily need Firebase Functions if you can set up Firebase security rules to do the job. It depends on your application.

However, in some cases code really needs to run on the client (e.g. in games or proprietary number-crunching algorithms), or a round trip to the server would be too slow. In those cases, obfuscation is where to put your attention. But nobody has mentioned anti-debugging techniques. Malware authors use these to shut down the program if it suspects it is being run in a debugger or VM, which makes reverse engineering even more time-consuming.

Should I be creating a proper backend? If so what would its structure be, would my app just be a client for the backend?

Backends tend to implement behaviour, and your client can access some functionality through the backend and some directly. If you have complex rules like users managing other users or teams, loyalty points, and so on, that belongs on the backend. It is madness to try to securely authorise that sort of thing on the client.

Otherwise, it's a matter of taste as to how much functionality to put on the server. On the one hand, it creates an extra layer to implement and maintain. On the other hand, you can update backend code "in one go", so if you want to add new features or fixes, you don't need to worry about rollouts and conflicting versions of your client app everywhere. Doing intensive things on the backend is also good for client battery life (at the expense of server $). And so on.

Artelius
4

As Jörg W Mittag mentioned, there is the legal aspect of what you are talking about, and then the technical one. As long as the app embeds critical logic and database access inside of it, someone with enough patience can reverse engineer it and do the bad things you are talking about. There are different approaches you can take to protect your efforts:

  • Use Digital Rights Management (DRM) to protect the app; it can still be defeated, but it's harder to do
  • Use a code obfuscator to make the code harder to reverse engineer
  • Encrypt the module that does the critical access (you have to decrypt it when you load it in memory)
  • Move all critical behavior to services hosted remotely (like in the cloud)

None of these solutions are mutually exclusive, but the one that provides the best protection is to move your database access and critical business logic to a service oriented architecture (i.e. web services you control). That way it is never part of your app to begin with, and then none of the code you are worried about is even available for someone to reverse engineer.

It also means you are free to change how that information is stored and managed without having to release a new version of the app. Of course, you'll have to provide the appropriate protection to make sure that a user can only see or interact with their own data, but now you don't have to worry about the app being hacked.

Many apps are built this way now. The app communicates with servers via HTTP with JSON, YAML, Protobuf, BSON, or some other structured exchange format. The app authenticates to get a session token that is good for a few minutes at a time, and that token is presented to your service so you don't have to worry about server-side sessions.
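
A rough sketch of that token flow in TypeScript, using the jsonwebtoken package (the payload shape, lifetime, and function names are illustrative assumptions):

import jwt from "jsonwebtoken";

// Kept on the server only; clients never see the signing secret.
const SECRET = process.env.TOKEN_SIGNING_SECRET!;

// After the user authenticates, issue a short-lived, signed token. The token
// itself carries the user id, so no server-side session state is required.
function issueToken(userId: string): string {
  return jwt.sign({ sub: userId }, SECRET, { expiresIn: "15m" });
}

// Every later request presents the token; the server checks the signature and
// expiry instead of trusting whatever identity the app claims.
function verifyToken(token: string): string {
  const payload = jwt.verify(token, SECRET) as { sub: string };
  return payload.sub;
}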

Berin Loritsch
  • 1
    Was searching the answers for this, and was surprised to find it at the bottom! When I first read the question, my thought was, "Wait, you're worried about a user *commenting out a line of code and recompiling?* Heck, it'd be sooooo much easier to grab the database connection info and just manually connect." While it's possible the OP has hardened the backend... it wouldn't surprise me if those credentials gave a lot more access than the OP realizes. And it's even money whether it's just a straight-up connection with CRUD access to the data table. Excellent answer! – Kevin Jun 23 '20 at 04:50
4

How do app developers protect their app when a user decompiles it?

In practice, they don't.

AFAIK, in Europe, decompilation of software is legally possible for interoperability purposes. Please check with your lawyer, since I am not a lawyer. Be aware of the GDPR. A related legal question is the patentability of software. This is discussed by the FSF, the EFF, APRIL, and AFUL (notice that I am a member of both APRIL & AFUL).

But your question makes little sense. You are trying to find a technical answer to a legal, social, and contractual issue.

A good way to protect software is through a legal contract, such as an EULA.

Writing a contract requires as much expertise as coding software. You need to contact your lawyer.

In most countries, an unhappy former IT professional could write to some court about software license violations, and that threat is dissuasive enough for most businesses.

A dual or symmetrical question is discussed in the paper "Simple Economics of Open Source", but the paper "Big Other: Surveillance Capitalism and the Prospects of an Information Civilization" is also relevant.

See also of course SoftwareHeritage.

You technically could write your own GCC plugin doing code obfuscation, or customize Clang for such purposes. I don't know if that is legal. Please check with your lawyer. See also this draft report giving technical insights.

PS. Common Criteria embedded code in ICBMs or aircraft (see DO-178C) is probably not obfuscated. Such software-intensive systems are protected by other means (including personnel armed with machine guns).

Basile Starynkevitch
3

There are two aspects to this.

First off, what you are describing is illegal in many, if not most, jurisdictions.

  • Decompilation: For example, in the EU, de-compiling is only legal for purposes of interoperability, and only if the copyright holder refuses to make interoperability documentation available under reasonable terms. So, unless the user is developing an app that requires interoperating with your service, and they have contacted you and asked for information required to interoperate with your service, and you refused to provide them such information, they are not legally allowed to decompile or otherwise reverse engineer your app, your service, or your network protocol.
  • Circumventing a digital protection device is illegal in the EU, the US, and many other jurisdictions.
  • Fraud: Using your app without paying is fraud, which is a crime pretty much everywhere.

So, since what you are describing is highly illegal, one potential way of dealing with the problem is to simply not do anything, under the assumption that no-one is willing to go to jail to save the money for your app. Simply put: don't do business with criminals.

Since that is not always possible, we have to talk about the second aspect: the user owns the device. That is information security 101. You cannot trust anything that is on that device or is sent by that device. Period. The user can manipulate everything you send, everything you store.

Computers are stupid. Much stupider than humans. In order to execute code, the computer has to understand it. You can compile it, obfuscate it all you want, the computer still has to be able to understand it in order to execute it. Since computers are stupider than humans, this means that the user can understand it, too, i.e. decompile / disassemble / reverse engineer it.

You can encrypt it, but the computer has to decrypt it to understand it. Therefore, you have to store the decryption key somewhere on the user's device. Since the user owns the device, the user can extract the key. Or you send the key over the network. Since the user owns the device, the user can intercept the key. (Or the user can log the device into a WiFi under the user's control, or …)

There is no way in which you can protect the code.

You have to design your security under the assumption that the user can read and change your entire code on the device, read and change your entire data on the device, read and change everything your app sends over the network, read and change everything your app receives over the network. You cannot trust the user, the user's device, or your own app. Period.

The security models of mobile devices are designed to protect the user from apps, not the other way around.

Jörg W Mittag
  • 4
    FWIW the EFF has a great article from the US legal point of view: [Coders’ Rights Project Reverse Engineering FAQ](https://www.eff.org/issues/coders/reverse-engineering-faq) – Peter M Jun 20 '20 at 14:56
  • What kind of security architecture would you suggest? Running most of the logic in the backend is what comes to my mind. – 123432198765 Jun 20 '20 at 18:24
  • 2
    Given that the *purpose* of the decompilation might be to do something obviously fraudulent (e.g., the OP was worried that someone might continue using the functions of the app after cancelling their plan), the fact that the decompilation itself is illegal may not deter attackers much (it may however simplify prosecution). – Hagen von Eitzen Jun 20 '20 at 22:17
  • 1
    There are some countries where reverse engineering is legal. Australia for example. – Tim Williscroft Jun 22 '20 at 03:01
  • "Interoperability" is a pretty broad word. I'm not particularly familiar with the legal precedents surrounding its interpretation, but a plain language reading would surely include e.g. "I want to write my own open source client app that can connect to your backend server." Oh, you say your backend has no authentication or security features to make sure I don't access data that doesn't belong to me? Too bad… – Ilmari Karonen Jun 23 '20 at 15:30
  • (It could still be illegal for me to actually access other people's data or make fraudulent transactions, etc. But just reverse engineering your app to make my own? As long as it has legitimate uses, I could probably make a decent claim that it is and should be allowed.) – Ilmari Karonen Jun 23 '20 at 15:30
2

You should learn more about how to secure your database with security rules because, as others have said, you cannot be sure the user won't access your code.

You should implement Cloud Functions for any sensitive code you want to run on the server. For example, you should have one function that sets the user to premium when they have valid credentials.

You should also have restrictions on premium access in your database (set security rules) that only allow premium users to access it (you can store the premium flag in the user's auth token).
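
One common way to do the "premium in the auth token" part with Firebase is a custom claim set by the Admin SDK on the server. A rough sketch, assuming it runs only after a verified payment event (the claim name is illustrative):

import * as admin from "firebase-admin";

admin.initializeApp();

// Runs on the server once the payment processor has confirmed the purchase.
// The client can never grant this to itself.
async function markUserPremium(uid: string): Promise<void> {
  await admin.auth().setCustomUserClaims(uid, { premium: true });
  // After the client refreshes its ID token, security rules can check
  // request.auth.token.premium to gate access to premium data.
}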

You should always keep in mind that anyone has access to your database.

deltastar
2

I think another part to your question is about the granularity of operations.

Your question seems to be framed such that your app has two actions:

  1. Cancel the plan
  2. Set the user status to inactive

And that these are separate, so a canny user could comment out (2) and let (1) still run.

In this case, these actions would be much better in a back-end function, and importantly, there should be only a single function that does both of these things in a transactional fashion, e.g.

CancelUserPlan() {
   CancelPlan();
   SetStatusInactive();
   CommitChanges();
}

At present you have another issue in your architecture beyond a malicious user: what happens if your second call fails (a network blip, for example)? Is that user now in a 'non-paying' but full-access state?

Having this as a single action that the user sees (and can manipulate) means that they can either cancel and be set inactive, or they can do neither of these things.
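
A more concrete version of that pseudocode, sketched as a Firestore transaction inside a callable Cloud Function (TypeScript; the collection and field names are assumptions for illustration):

import * as functions from "firebase-functions";
import * as admin from "firebase-admin";

admin.initializeApp();
const db = admin.firestore();

export const cancelUserPlan = functions.https.onCall(async (data, context) => {
  if (!context.auth) {
    throw new functions.https.HttpsError("unauthenticated", "Sign in first.");
  }
  const uid = context.auth.uid;
  // Both writes commit together or not at all: there is no window where the
  // plan is cancelled but the user is still marked active, or vice versa.
  await db.runTransaction(async (tx) => {
    tx.update(db.doc(`plans/${uid}`), { status: "cancelled" });
    tx.update(db.doc(`users/${uid}`), { active: false });
  });
  return { ok: true };
});

For write-only changes like this a batched write would do equally well; the important property is that the client sees one operation it cannot split apart.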


In short, this is a bit of a deeper problem than the securing of your code on the mobile device. As stated in other answers to this question, there are valid reasons to obfuscate deployed code, but, if you haven't architected your application in a secure/robust fashion from the start, then you have another problem to rectify before you even get to obfuscation.

Paddy
1

I believe you are looking for the concept of obfuscation. It basically makes code more difficult for humans to read. There is in fact some documentation over at the Flutter website on how to achieve this.

Code obfuscation is the process of modifying an app’s binary to make it harder for humans to understand. Obfuscation hides function and class names in your compiled Dart code, making it difficult for an attacker to reverse engineer your proprietary app.

Documentation can be found at Obfuscating Dart code

Whether it is something to really worry about depends on the sensitivity of the application you are building. If this is a platform for businesses, customers will often ask for penetration test results to verify the security of your app. One of the things they do is decompile the application.

I would also suggest hiding any sensitive keys (e.g. API keys) in the secure storage of whichever OS you target. On iOS, for example, this would be the keychain. Otherwise someone could get hold of these keys and either impersonate you or leave you with a hefty bill if you have a usage-based subscription.

  • I've heard about Obfuscation but is that all that's needed? – 123432198765 Jun 20 '20 at 13:42
  • 1
    I've edited my reply and added another part regarding storing sensitive data in your application, such as API keys. There is another part to this in regard to storing user data. Be it keys or user data, some parts are better off encrypted or put in secure storage. – Niels Willems Jun 20 '20 at 13:46
  • 4
    Obfuscation makes reverse-engineering/decompilation *more difficult*, but doesn't prevent it. It is not appropriate as a security measure. However, obfuscation makes it less likely that the app is cloned by a competitor. – amon Jun 20 '20 at 13:50