
My question has to do with JavaScript security.

Imagine an authentication system where you're using a JavaScript framework like Backbone or AngularJS, and you need secure endpoints. That's not a problem, as the server always has the last word and will check if you're authorized to do what you want.

But what if you need a little security without involving the server? Is that possible?

For example, say you've got a client-side routing system and you want a particular route to be reachable only by logged-in users. You ping the server once to ask whether you're allowed to visit protected routes, and then carry on. The problem is that you store the server's response in a variable, so the next time you navigate to a private route, the router checks that variable instead of pinging the server again, and lets you through (or not) based on it.

How easy is it for a user to modify that variable and gain access?

My security (and JavaScript) knowledge isn't great. But if a variable is not in global scope and lives in the private part of a module pattern that only exposes getters, no setters: even in that case, can you hack the thing out?
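For concreteness, here's a minimal sketch (all names made up) of the module-pattern guard I have in mind, and what an attacker might try against it in the browser console:

```javascript
// Hypothetical module-pattern guard: the flag is closed over, no setter exposed.
var auth = (function () {
  var isLoggedIn = false;               // "private" state inside the closure
  return {
    loggedIn: function () { return isLoggedIn; }  // getter only
  };
})();

console.log(auth.loggedIn());           // false

// The closure variable can't be reassigned directly, but the getter itself
// is just a property on a plain object; anyone can replace it in the console:
auth.loggedIn = function () { return true; };
console.log(auth.loggedIn());           // true -- the "protected" route opens
```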

Peter Mortensen
Jesus Rodriguez
    All these long answers. In short, to answer your header, "very hackable". To answer your first 2 questions inline with each other, "without server, u lost security" && "No". To answer the 3rd, "Very easy", and finally, "Yes, with ease". There you go. All questions answered. :P – SpYk3HH Jun 07 '13 at 14:10
  • Prime example, see my blog post about adding jQuery to any web page. http://spyk3lc.blogspot.com/2013/05/add-jquery-to-almost-any-site-from.html Now, young padawan, go forth and try it. See just how easy it is to add jQuery to any site that doesn't already have it, then use the jquery to very easily manipulate any part of the sight without long lines of javascript. Boom! – SpYk3HH Jun 07 '13 at 14:13
  • 7
    Once, when registering for a domain name, phone number was a required field, but that was only enforced in the javascript - not server side. So, I disabled it by redefining a function (using Firebug), and voilà! They don't have my phone number. – Izkata Jun 07 '13 at 14:38
  • @mplungjan your comment confuses me good sir. What are trying to say? – SpYk3HH Jun 07 '13 at 15:55
  • 8
    `manipulate any part of the sight without long lines` [site vs sight](http://grammarist.com/spelling/sight-site/) – mplungjan Jun 07 '13 at 15:56
  • 8
    Professional web programmers need to get such thing right. Please. It is embarrassment more than a grammar nazi thing :) https://plus.google.com/u/0/+MonaNomura/posts/h9ywDhfEYxT – mplungjan Jun 07 '13 at 16:00
  • @Izkata why would one actually use a registrar that was that lax about data integrity? – alroc Jun 07 '13 at 17:49
  • @alroc AFAIK, phone number isn't normally required - just, some sort of contact information, and I already provided my email address. Now that I think about it, the requirement was on the edit info page, not on the registration page - so I already had the account, already without a phone number, and "couldn't" make edits unless I added it. – Izkata Jun 07 '13 at 18:13
  • Not sure you can touch a var inside an anon func but you can't do much of use in an anon func without touching outside of it and that's where anything in JS can overwrite/override your stuff. But yeah, don't trust the client. Not ever. – Erik Reppen Jun 13 '13 at 06:53
  • I know the OP specified "in a browser" but that cannot be relied upon as it is in some of the assumptions in the answers. Dickering over the exact semantics of a "prompt" in an environment where those semantics may be completely redefined by a compromised (or merely custom) client doesn't make sense. In fact, you cannot assume anything about what's at the other end of your TCP socket on the Internet especially when it comes to security. IMO the wrong answer got the checkmark, Joachim's is the correct response to *any* question about network security: it can only be done in the server. – Perry Aug 17 '13 at 21:33
  • QOTW but it's on the wrong site. This has nothing to do with programmers.SE. – Mark E. Haase Aug 18 '13 at 14:08
  • This question [is featured at Ars Technica](http://arstechnica.com/information-technology/2013/08/how-easy-is-it-to-hack-javascript-in-a-browser/?awesm=s.tk_c2). – Peter Mortensen Aug 26 '13 at 18:25

5 Answers


It's simple: any security mechanism that relies on the client to do only what you tell it to do can be compromised when an attacker has control over the client.

You can have security checks on the client, but only to effectively act as a "cache" (to avoid making an expensive round-trip to the server if the client already knows that the answer will be "no").

If you want to keep information from a set of users, make sure those users' clients never receive that information. If you send the "secret data" along with instructions saying "but please don't display it", it is trivial to disable the code that honors that request.

As you can see, this answer doesn't mention any JavaScript/browser specifics, because the concept is the same no matter what your client is. It doesn't really matter whether it's a fat client (a traditional client/server app), an old-school web application, or a single-page app with extensive client-side JavaScript.

Once your data leaves the server, you must assume that an attacker has full access to it.
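A toy illustration of the "cache" idea (all names here are hypothetical): even if the user flips the client-side flag, the authoritative server-side check still decides what they get.

```javascript
// Client-side check: purely a shortcut to skip a round-trip, trivially tampered with.
function clientAllows(cachedAuth) {
  return cachedAuth;
}

// Server-side check: the authoritative one, out of the attacker's reach.
function serverAllows(sessionStore, sessionId) {
  return sessionStore.has(sessionId);
}

var sessions = new Set();        // no valid session exists on the server
var tamperedCache = true;        // attacker flipped the client-side flag

// The client check passing saves a round-trip at best; it grants nothing:
var granted = clientAllows(tamperedCache) && serverAllows(sessions, 'forged-id');
console.log(granted);            // false -- the server has the last word
```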

samthebrand
Joachim Sauer
  • Thank you. Would be lovely if you can explain it a little more, my security knowledge is not that good and the only part I understand is that I am wrong :P. When you talk about caching, it is only to cache when the server says no? I mean, if the server said yes once, you can't cache that, right? – Jesus Rodriguez Jun 07 '13 at 10:49
  • @JesusRodriguez: that depends on how strict your security requirements are. If the server answered "42" to "give me value x" before and now would answer "no, I won't", would showing "42" to the user be a problem for you? Only you (or your requirements engineers) can answer that. The basic premise is simple: The **server** must decide which information to pass to the client. Only pass to the *client* what you would be OK with the *user* seeing. Don't depend on the client hiding stuff it knows. – Joachim Sauer Jun 07 '13 at 10:51
  • 3
    I'd just like to add that it _is_ possible to access closure variables in the browser (like in the module pattern your mentioned), especially with a debugger :) – Benjamin Gruenbaum Jun 07 '13 at 10:59
  • 2
    @BenjaminGruenbaum: feel free to expand that into an answer that focusses on the JS-side of things. I only gave the high-level, technology-agnostic overview here. An answer focusing on JS itself would be nice as well! – Joachim Sauer Jun 07 '13 at 11:00
  • 1
    So in short, if some information / route / whatever is accessible depending on a javascript variable, that won't be safe in any case because you can easily change that variable to say what you need to access that private stuff. – Jesus Rodriguez Jun 07 '13 at 11:01
  • @JesusRodriguez: exactly. – Joachim Sauer Jun 07 '13 at 11:02
  • 5
    @JoachimSauer I think that would miss the point of this question which you nailed nicely. Regardless of what security measures the client takes, it's possible to compromise _communication_ . You can create very solid authentication systems in JavaScript but it's all worth nothing the minute you insert communication to a server other clients treat as a source of truth too. The source code is all sent to the client side, a smart attacker can just read it, and imitate an HTTP request to the server. One must treat anything originating from the client as un-trusted unless validated on the server. – Benjamin Gruenbaum Jun 07 '13 at 11:05
  • I added an answer on what _can_ be done, it's not much, but it might be useful to OP. – Benjamin Gruenbaum Jun 07 '13 at 11:39
  • To answer that old question about caching "yes responses": you could technically do that, but you'd have to build a system that can handle the case when the client thinks it is allowed to do something, but the server ends up saying "no", which is tricky both technically and from an UX perspective. – Joachim Sauer Dec 20 '16 at 13:21

Please read Joachim's answer before reading this one. He covers the general reasons behind client-side vulnerability. Now, for a suggestion how you might get around this problem...

A secure scheme for client-server communication without having to authenticate with the server manually on every request:

You're still letting the server have the last say, and the server still has to validate everything the client says, but it happens transparently.

Assume the HTTPS protocol to prevent man-in-the-middle attacks (MITMA).

  • The client handshakes with the server for the first time, and the server generates a public key for the client and keeps a private one using an asymmetric encryption scheme. The client stores the server's "public" key in the local storage, encrypted with a secure password you don't save anywhere.

  • The client is now offline. The client wants to perform trusted actions. The client enters his password and grabs the server's public key.

  • The client now performs actions based on its knowledge of that data, and the client encrypts every action it performs with the server's public key for that client.

  • When the client comes back online, it sends its client ID along with all the actions it performed, encrypted with the server's public key.

  • The server decrypts the actions, and if they are in correct format it trusts that they originated in the client.

Note:

  • You can't store the client's password anywhere; otherwise an attacker could fetch the key and sign the actions as their own. The security of this scheme relies solely on the integrity of the key the server generates for the client. The client still needs to authenticate with the server when asking for that key.

  • You're still in fact relying on the server for security and not the client. Every action the client performs you must validate on the server.

  • It's possible to run external scripts in web workers. Keep in mind every JSONP request you have is now a much bigger security issue. You need to protect the key at all costs. Once you lose it, an attacker can impersonate the user.

  • This meets your demand that 'no ping to the server' is performed. An attacker can't simply imitate an HTTP request with forged data if they don't know the key.

  • Joachim's answer is still correct. You're still, in fact, performing all authentication on the server. The only thing you saved here is the need for password validation with the server every time. You now need only to involve the server when you want to commit, or pull updated data. All we did here is save a trusted key on the client side and have the client re-validate it.

  • This is a pretty common scheme for single page applications (for example, with AngularJS).

  • I call the server's public key "public" because of what that means in schemes like RSA, but it is in fact sensitive information in the scheme and should be safeguarded.

  • I wouldn't keep the password anywhere in memory. I'd make the user submit his/her 'offline' password every time he/she starts running offline code.

  • Don't roll your own cryptography - use a known library like Stanford's for authentication.

  • Take this advice as is. Before you roll this sort of authentication in a real world business-critical application consult a security expert. This is a serious issue that is both painful and easy to get wrong.

It's critical that no other scripts have access to the page. This means you only allow external scripts with web workers. You can't trust any other external scripts that might intercept your password when the user enters it.

Use a prompt rather than an inline password field if you're not completely sure, and don't defer its handling (that is, the password should exist only in synchronous code, never living long enough for event handlers to reach it). And don't actually store the password in a variable. Again, this only works if you trust that the user's computer isn't compromised (though the same is true when validating against a server).

I'd like to add again that we still don't trust the client. You can't trust the client alone, and I think Joachim's answer nails it. We only gained the convenience of not having to ping the server before starting to work.

Related material:

Benjamin Gruenbaum
  • 3
    "Don't roll your own crypto" applies just as much to protocols as it does primitives, if not even more so, because while people usually aren't foolish enough to make their own primitives, they think they can create a protocol that's fine. I also don't see why RSA is necessary here. Since you're using HTTPS, why not just generate a 16-32 byte shared secret and require that be sent with future requests? Of course, if you do this, be sure to use time-invariant equality checking on the string... – Reid Jun 07 '13 at 16:47
  • 2
    +1 @Reid Yeah, I've just had a discussion about this with somewhat of an expert about this recently. I roughly based the primitive scheme here on a protocol I learned about (by Rabin IIRC) but at least until I can find the specifics (and justify for example why asymmetric encryption is needed in this case). **Do not use the scheme suggested in this answer without consulting a security expert first** . I've seen this approach used in several places, but in practice that means very little, if anything. I also edited the answer to enhance that. – Benjamin Gruenbaum Jun 07 '13 at 17:21
  • Using a prompt adds little security. An attacker can override the `window.prompt` method in order to intercept the pass phrase, or launch an own prompt (possibly within an iframe!). A password field has one additional benefit: the characters are not shown. – Rob W Jun 07 '13 at 18:44
  • @RobW Of course, this whole scheme is worthless if the page was compromised. If the attacker has access to run JavaScript in the page we're screwed. The idea behind prompt was that it's blocking but you're right, any security benefit it provides is doubtful. See the last link in the list. – Benjamin Gruenbaum Jun 07 '13 at 18:47
  • 1
    First of all. Awesome answer, I think that I cannot ask for anything else. On the other hand, I wanted to do pet projects for me (and of course for who wants to use it) and maybe this is overkill (or maybe not). I mean, I want do nothing serious and I am afraid that I cannot roll that kind of auth you explained, lack of knowledge. I think I would roll a simple 401 check or if I don't have any request, I just ping the server every time I access that kind of routes. Auth token is an option too (and maybe the closest thing related with what you explained here AFAIK). Thanks you :) – Jesus Rodriguez Jun 08 '13 at 00:23
  • Thanks, I'm glad I helped :) Again, don't roll your own crypto :) Validating against the server before a commit of sensitive information seems like pretty reasonable behavior to me. The client can edit whatever they want. However before the server performs any action involving that data, the client has to re-authenticate. That's pretty simple to implement, and very reliable. Good luck! – Benjamin Gruenbaum Jun 08 '13 at 00:51
  • `Server decrypts the actions, and if they are in correct format it trusts that they originated in the client.`, can't follow. Clearly the user itself can always get the public key and sign their own data and send it to the server. Which leaves us in exactly the same situation we were in the beginning - it may be harder (or impossible?) for malicious JS to do the same thing but that's only closing one attack vector. – Voo Aug 17 '13 at 18:10
  • The point about storing the client password is missing detail. If your backend generates a token and puts it in a cookie that never expires, you've done little to protect yourself. If a browser gets compromised or you have a XSS exploit in your site, you'll have protected their password (which is likely to be reused on other sites), but that's it. There is no difference between storing a token in a cookie or storing a uname/passwd in a cookie in terms of security for your site. If you're worried about an attacker forging requests you need to use a CSRF token in addition to any login data. – Mark Aug 18 '13 at 03:11
  • @JesusRodriguez the very basics of security : HTTPS, form check on Servers, No Cross-Origin-Request (CORS) if you need it, configure the appropriate headers thoroughly. use prepared statement on RDBMS or everything alike for anything that use a scripted language to interact with (LDAP, Redis, Lucene, ...). XSS : don't store raw/display HTML from data entered by the user, escape it. If you need more, you'll need someone competent. – Walfrat Dec 19 '16 at 10:20

There is a saying in the gaming community: "The client is in the hands of the enemy". Any code that runs outside a secured area like the server is vulnerable. In the most basic scenario, vulnerable to not being run in the first place: it's the client's decision to actually run your "security code", and the user may simply opt out. While native code at least gets an automatic obfuscation into assembly, plus an additional layer of protection in that an attacker needs to be a good programmer to manipulate it, JS normally arrives unobfuscated, as plain text. All you need to stage an attack are primitive tools like a proxy server and a text editor. The attacker still needs a certain level of programming knowledge, but it's far easier to modify a script with any text editor than to inject code into an executable.

nvoigt
  • Assembly is not an obfuscation, and who believes otherwise has never played a wargame – miniBill Aug 19 '13 at 07:46
  • @miniBill: While a moderately experienced attacker will have no trouble with assembled code, it would provide a very basic high-pass filter. (Alas, this is academic, as weeding out the script kiddies provides minimal increase in security for the OP's scenario) – Piskvor left the building Aug 26 '13 at 19:07

This isn't a question of hacking JavaScript. If I wanted to attack your app, I would use a private proxy that would allow me to capture, modify, and replay traffic. Your proposed security scheme doesn't appear to have any protections in place against that.

Mark E. Haase

Speaking specifically about Angular:

Protecting a route client side just doesn't exist. Even if you 'hide' the button leading to that route, the user can always type the URL in; Angular will complain, but the client can code around that.

At some point your controller is going to have to ask your server for the data needed to render the view. If the user isn't authorized to access that data, they won't receive it, because you've protected it on the server side, and your Angular app should handle the 401 appropriately.
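For example, that 401 handling might be centralized in an interceptor (AngularJS 1.x style; the names here are made up). The decision logic is written as a plain function so it's easy to follow:

```javascript
// Sketch: centralize "server said 401" handling. The client route guard is
// only UX; this is where the server's verdict actually takes effect.
function makeAuthInterceptor(redirectToLogin, reject) {
  return {
    responseError: function (rejection) {
      if (rejection.status === 401) {
        redirectToLogin();        // not authorized: back to the login view
      }
      return reject(rejection);   // keep propagating the failure
    }
  };
}

// In an AngularJS app this would be registered roughly like so:
//   $httpProvider.interceptors.push(function ($q, $location) {
//     return makeAuthInterceptor(
//       function () { $location.path('/login'); },
//       $q.reject
//     );
//   });
```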

If you're trying to protect the raw data, you can't do it client side. If you only want a particular user to be able to view common data a certain way, create the data 'views' on the server rather than sending raw data to the client and having it reorganize it (you should already be doing this for performance reasons anyway).

Side note: there should never be anything sensitive built into the view templates that Angular requests; just don't do it. If for some crazy reason there is, you need to protect those view templates on the server side, just as you would if you were doing server-side rendering.

Mark