
Currently, I'm involved in a research project in which we are evaluating an existing web environment providing a safe online playground for children/adolescents with intellectual disabilities. Certain areas of this web application require identification of the user, both to align the content with the user's capabilities and to prevent user impersonation. The current version is very outdated and uses a very weak username/password-like authentication mechanism, which more or less boils down to unsecured access (abuse has been detected in the past).

I think some kind of MFA-based approach is recommended as a replacement for the username/password solution in place. It could be interesting to have an adult, for example, a parent (when used in a home setting) or a teacher (when used at school), confirm access to the online resources on behalf of the user when the user is not able to authenticate for whatever reason.

Because some of the users have (very) limited intellectual capabilities, (more) complex mechanisms (like OTP, mobile authenticators, digipasses, etc.) are not always suitable. Also, because of the age range of the target audience (between 8 and 21), we cannot assume every user has access to, or is allowed to use, a smartphone that could be used in the authentication flow. Under such circumstances, parental approval might be a valid alternative.

Many possible approaches probably exist to implement such an authentication and authorisation solution. Please feel free to share any thoughts/ideas/suggestions on how to set up such a solution.

Currently, a parent or teacher (guardian) registers a child/adolescent (user) via an online registration procedure. This procedure is email based and assigns the user a username (based on the first name, last name and date of birth), a 3-letter password (limited to the first 10 characters of the alphabet), and a free-text guardian password. Using the website, the guardian uses the username/guardian password to add the user account to the local browser (cookie based). When a user wants to authenticate, they select their picture/avatar and enter the 3-letter password.

My concerns about this approach are:

  1. Adding avatars using a browser cookie to ease future authentication with such a weak password makes authentication pointless.
  2. The very limited number of password combinations possible with 3 characters out of a set of 10 makes it very prone to attacks.
  3. I find the overall procedure somewhat confusing and not very guardian friendly.
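
The weakness in the second point is easy to quantify. A minimal sketch of the arithmetic (the one-guess-per-second rate is an illustrative assumption, not a measured attack speed):

```python
# Size of the current password space: 3 characters, each drawn from
# the first 10 letters of the alphabet.
alphabet_size = 10
password_length = 3

combinations = alphabet_size ** password_length
print(combinations)  # 1000 possible 3-letter passwords

# Even a heavily throttled online attack of one guess per second
# exhausts the whole space in under 17 minutes.
minutes_to_exhaust = combinations / 60
print(round(minutes_to_exhaust, 1))  # 16.7

# Allowing all 26 letters only grows the space to 17,576 --
# larger, but still trivially brute-forceable.
print(26 ** password_length)  # 17576
```

Rate limiting only buys linear time here; the size of the space itself is the problem.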

I deliberately did not mention before that the current solution is a browser-based platform, to keep as many possibilities open as possible. Interviewing guardians and users, we noticed a more or less even split between the desire for a web-based solution and a native mobile solution (the latter does not exist today).

Peter Mortensen
KDW
  • I'm assuming no hardware based solutions are acceptable (e.g. NFC tag) as this would require hardware everywhere the child would access it. This restricts us to the field of information-driven credentials, and the lowered threshold for complexity is going to make it easier for attackers. So, first of all, what are you trying to protect against? Public access? Impersonation of existing users? DOS attacks? Furthermore, not all mental disorders present the same way, is there a specific family of disorders that you're dealing with or do you expect a solution to cater to anyone with _any_ disorder? – Flater Apr 11 '23 at 06:54
  • I'm not quite sure about the third paragraph vs the fourth. First you state the benefit of parental confirmation of access, then you state that MFA via smartphone is not feasible due to _the age range of the target audience_. So what if the parents use the MFA on their phone, and giving out the OTP to the child (or entering it themselves) entails their consent? What part of that would not work? – Flater Apr 11 '23 at 06:58
  • Also, for close voters: this is a question that triggers some closure reasons. However, I don't think this warrants closure. It is asking for an established software-related approach to handling an unusual authentication use case. The question is niche enough to not have a clear home and this seems like the best home - provided we are given reasonable requirements as to the expected capacities of the end users. – Flater Apr 11 '23 at 07:00
  • Are fingerprints an option? – Dominique Apr 11 '23 at 12:27
  • @Dominique I remain amazed that so many people believe in an authentication technique that has everyone leaving their password on everything they touch. – candied_orange Apr 11 '23 at 12:29
  • Why do you require authentication in the first place? What types of services do you provide? What type of data do you associate with the users? What types of abuses are you talking about? There might be other ways to minimize abuse without using a secure authentication mechanism. – JonasH Apr 11 '23 at 12:33
  • @Flater I agree I did not describe my question clearly enough. Mental disorder is poorly chosen; intellectual disabilities are a better description of the target audience. Hardware solutions are not forbidden, but are less preferable due to the requirement of having such solutions available for all users. Preventing public access is mainly to prevent user impersonation. Besides security, we also require identification of the user to align the provided content to the user's intellectual capabilities. – KDW Apr 11 '23 at 18:04
  • @Flater About your second comment. I think a further clarification is required. MFA via the smartphone by the user themselves is difficult for two reasons. First, the user's age range varies between 8 and 21. Most of the younger users don't own a smartphone. Second, some of the users aren't able to use a smartphone or are not allowed by their parents. MFA via the parents is a viable approach but would introduce difficulties for passing the OTP to the child when being at school. In that case, the OTP needs to be passed to the teacher (preferable for a number of children in the same class). – KDW Apr 11 '23 at 18:12
  • @Flater Thanks for your "anti-closure" remarks :-) I edited the original question to try to reduce confusion and unclear elements in my question. – KDW Apr 11 '23 at 18:28
  • @Dominique Fingerprints are an option for certain users. For those who don't own a smartphone or are not allowed/able to use one, something different needs to be in place. – KDW Apr 11 '23 at 18:29
  • @JonasH The necessity for authentication is a very valid question which is carefully being evaluated at this point. The offered services are very common services (news, video, music, text messages, etc.) but in a very moderated environment with adjusted content based on individual capabilities and suited for e.g. children. Since a requirement is to adjust content based on such capabilities, sensitive information is linked to users (not accessible for the user themselves, but available for the teachers/parents because they are involved in deciding the user's capabilities). – KDW Apr 11 '23 at 18:39
  • @JonasH For now, abuse is mostly user impersonation to send offensive messages to other users. For those events, we need some way to register and report such behaviour to the parents, teacher, etc. to improve future user behaviour and improve their attitude. I have to admit we are considering sentiment analysis in text messages to see if it can be of added value for this particular purpose. – KDW Apr 11 '23 at 18:48
  • Have you considered facial recognition? With a well-placed camera it could require absolutely no effort from the user. Jimmy sits down in front of the computer and the computer says "hello Jimmy!" – user253751 Apr 11 '23 at 19:09
  • @Flater "*It is asking for an established software-related approach*" - then please [edit] the question to make it actually ask that, and remove "*Please feel free to share any thoughts*". – Bergi Apr 11 '23 at 20:45
  • _"MFA via the parents is a viable approach but would introduce difficulties for passing the OTP to the child when being at school."_ Teacher can be given an MFA as well. An account is not necessarily limited to _one_ source of OTP. – Flater Apr 11 '23 at 23:10
  • @Bergi: The core question seemed clear enough to me, the comment was written to close voters who jump the gun and not read the full question. If you're at the stage where you explicitly dictate to others what edits to make, have at it. I'm not your personal assistant. – Flater Apr 11 '23 at 23:14
  • This question might be better suited over at the [UX site](https://ux.stackexchange.com/), as it is focused on user experience rather than software engineering per se. – jaskij Apr 12 '23 at 00:55
  • Re: parent authentication only working while at home, there's no necessity that only one person can authenticate on behalf of any given user, so you could have parents linked to one user 'hat' or teachers linked to multiple. The additional challenge there is that once you've assumed a hat or role you can't take it off and step back to the authenticating account. – Cong Chen Apr 12 '23 at 01:04
  • This suggestion doesn't feel substantial enough for a full answer, have you asked the parents and teachers of the target audience what they want? Do they want a scheme where the student logs in on their own, or one where the guardian logs in for them? Obviously you'd still design the system but it feels like this fundamental question needs to be answered first. Then you can come up with possible solutions and go back to your customers again to see which they think are feasible for their charges. – IllusiveBrian Apr 12 '23 at 01:12
  • @IllusiveBrian When asked, it was considered more important to adjust the solution to what users are capable of. Some users log in on their own while others are not able to remember their username/password. We are trying to design a solution broad enough to support as many user scenarios as possible but still secure enough to make a point. A lot of previous comments arguably question the need for authentication. I think it is needed, but in a way that doesn't become a hassle for parents/teachers and without frustrating users by being too difficult. – KDW Apr 12 '23 at 06:30
  • Based on a number of comments, I added the current authentication procedure in the original post for further clarification. – KDW Apr 12 '23 at 06:59
  • I think this is a very interesting question that has clearly stimulated some discussion. Anything that enables more users to interact with interesting software sounds great to me. Remember computers before they had GUIs with a pointing device? Very hard for the average/untrained user. – JBRWilkinson Apr 13 '23 at 13:23
  • Why only the first 10 characters of the alphabet, rather than any letter? That would multiply the number of possibilities by (26/10)^3 > 17! – Solomon Ucko Apr 13 '23 at 17:47

9 Answers


Users don't care.

Mental disorder or not, users simply don't care as much as you do about security. You could set up two-factor authentication, OTPs, even physical keys, and users will still wander off to the bathroom without locking their computer.

It's important to understand that security is about motivation. If you're trying to secure private information the user has entered into the system, then the user is motivated to keep it secure.

If you want to ensure a compromised account can't be used to send the president threatening emails, understand that the user is not only unmotivated but most likely completely unaware that their bathroom trip could play a role in this.

Rather than focusing on trying to ensure authentication is 100% reliable, admit that it isn't, and that it might be a good idea to limit the ability to use the system in unintended ways, authenticated or not.

As for "mental disorder", this is such a catch-all term that I'll argue a better term for them is "user". Any system that deals with enough users deals with users with a mental disorder. With a net this wide, the only sensible accommodation is allowing for more than one authentication scheme, in the hope of providing one that works for them. I, for example, am dyslexic as hell. So for me, asking for correctly spelled passphrases without the benefit of a spell checker is a non-starter. Others will have different issues. The more schemes you allow, the more you accommodate.

For what it's worth, I've worked with students who had profound disabilities (or whatever the politically correct term du jour is) whose ability to use a computer was constrained to pressing one big red button. And I've maintained computer labs used by college students. As far as security goes, I trust both groups equally. And not for any politically correct reason.

If you're reading this and thinking I haven't truly answered the question, that's because this is a frame challenge. I'm calling the basis of the question into question. You don't need to worry about disabled people not maintaining account security, because other users don't either. Worry about that.

candied_orange
  • This is sound advice in general, but it misses getting to grips with the particular problems posed by those without mental capacity. – Steve Apr 11 '23 at 12:28
  • @Steve better now? – candied_orange Apr 11 '23 at 12:49
  • I agree "mental disorder" was a very poorly chosen and wrong description (blame it on translation software ;-) ). Intellectual disabilities is the correct description. In my opinion, if we decide to retain the security requirements, it is mostly there to protect users against themselves. Many of our users are not fully aware of, and able to estimate, the full extent of their actions, hence the sandboxed environment of the solution to prevent malicious users from getting in touch with our users as much as possible. – KDW Apr 11 '23 at 19:04
  • It’s considered bad form to edit a question, after it receives answers, to the point where the edits invalidate the answers. No communication is perfect. So when you see the need to clarify that profoundly consider asking a new question. However, “intellectual disabilities” teaches me nothing that “mental disorder” didn’t. Take some time and express their real situation and needs. I suspect you’re talking about low functioning students incapable of taking responsibility for account security. Express the consequence of their impairment. Don’t just give it fancy names. – candied_orange Apr 12 '23 at 11:31
  • _"then the user is motivated to keep it secure"_ Post-edit, this still glosses over the fact that people with an intellectual disability aren't always capable of appreciating the sensitivity of that data, or that they should protect it, or how to protect it in a reasonably secure way. – Flater Apr 12 '23 at 23:20
  • @Flater my point wasn't that disabled people are incapable of appreciating the sensitivity of the data. My point was that supposedly abled people are typically no better. Any security plan that relies on users caring is doomed. I'm pointing out that if disabled users make you worry about your security plan you don't really have a security plan. I'm not glossing over this point. It's my whole thesis. – candied_orange Apr 13 '23 at 00:42
  • @candied_orange: While the answer does indeed overall conclude that _any_ user is susceptible to not being motivated, the second paragraph asserts that there are certain sources of motivation for users (i.e. when it benefits their own personal data security). That assumption of motivation (in that specific case) inherently depends on the user's ability to comprehend that the data is personal and worth securing, which is not something you can bank on for people with intellectual disabilities. In that regard, you cannot paint users with and without intellectual disabilities with the same brush. – Flater Apr 13 '23 at 02:14
  • @candied_orange: The question is not about the philosophical nature of good vs bad agents and how an everyday user is motivated. The question is about how to technically accommodate users who are not always capable of the level of security that we (at the very least _attempt_ to) commonly hold users to, such as using non-trivially-brute-forceable credentials for the purposes of authentication. If you're arguing that using such credentials for the purposes of authentication shouldn't be done for _any_ user because every user is unreliable, that is well out of scope for the posed question. – Flater Apr 13 '23 at 03:31
  • @candied_orange: In other words, reliability is not the core issue here, ability is. For the purpose of the posed question, these two topics are completely separate. – Flater Apr 13 '23 at 03:31
  • @candied_orange, I agree with your sentiment that even the average user has weaknesses. Those designing authentication systems more often than not are security amateurs who have unreasonable (and even presumptuous) expectations, and don't care to learn or be told otherwise. However there is an important difference in degree here. I don't need my mum to approve logging into my bank account. Whatever modicum of capability allows ordinary people to use computers safely for enough of the time (if not always), not even that modicum is present with the users the OP is concerned with. – Steve Apr 13 '23 at 10:09
  • @Steve better now? – candied_orange Apr 14 '23 at 17:23
  • @candied_orange, well I think you've doubled down on saying there simply isn't a distinction between the average user and the incapable. I won't rebut that because I suspect it is rather closer to the truth than casual assumptions about the topic. But I for one still struggle with the idea that the incapable do not impose particular constraints, or that the OP can be answered in purely general terms without reference to possibility that his users may have a profile of capability very different from average. (1/3) – Steve Apr 14 '23 at 18:44
  • For example, if a college student is careless and allows others access to their college social media account, an adjudicator may accept that the student themselves did not issue malicious communications, yet impose a ban on further use to mark their carelessness and *blameworthiness* and as an example to their peers. That kind of response may however be less acceptable when dealing with the incapable, because people may not accept that their carelessness is blameworthy, and therefore they may not accept that withdrawal of the facility for carelessness is reasonable. (2/3) – Steve Apr 14 '23 at 18:44
  • The net effect is that a provider may be dealing with a set of users who are more careless than the norm and who cannot be held to account in ordinary ways, and therefore measures that usually achieve a statistically acceptable level of security (or civility of use) amongst an average set of users (and where handling mischievous use imposes an acceptable administrative burden), may perform far too poorly in these special circumstances. (3/3) – Steve Apr 14 '23 at 18:49

Two obvious things.

One, there's an inadequate specification of who the security measures need to resist, and/or who stands to gain from unauthorised access. No system is wholly resistant to adversarial attack, and it's impossible here to infer the exact vulnerabilities of the target or the incentives of the adversary. There isn't even the barest of information about why a playground for those lacking mental capacity requires authentication.

Two, there is at least partly an assumption that despite lacking capability in general, the users may still be capable of participating in any scheme to secure access to the computer systems. The existing simple scheme has, you acknowledge, failed. It's important to recognise when do-something-itis is reigning.

Limited mental capacity is often associated with limited individuality, in the same way that children have limited individuality as distinct from adults around them (and especially, their parents).

What I mean by individuality is possibly something corresponding to the "homo economicus" model of behaviour in economics, in which "individuals" have intentions and preferences which arise from within (not from outside), their interests are in behaving in ways which express those intentions or meet those preferences, and their capability is such as to be able to analyse their circumstances and align their behaviour to their interests.

The designers of computer authentication systems often assume individuality, and assume that access to a computer can always be related to an individual. Or more precisely, they usually want to assume these things, because such assumptions if fulfilled would enable the computer system to be used or accessed in ways it couldn't otherwise.

In fact, these assumptions are likely to be most strongly challenged by those with limited mental capacity. They may struggle to recall and reliably execute complicated schemes of behaviour (which, despite their complexity, are commonplace and well-trained amongst capable adults, and therefore not generally seen as difficult). They may be deferential in general to others, or to those they recognise as responsible adults - this deference is often encouraged, to allow responsible adults to manage them. And they may be eager to please, or struggle to conceive or scrutinise the propriety of fulfilling requests or aligning with intentions expressed by others. Their lives may be sheltered, leading to an under-developed common sense.

What this adds up to is that it is almost certainly a fundamental mistake to design a computer system for those with limited mental capacity, including an authentication scheme, around the assumption of their individuality.

Behind those with limited mental capacity, are their supervisors who often have considerable influence. Often, the supervisor might not be a specific adult, but a varying set of them. Beyond this, friends and peers, and even downright strangers, may have considerable influence (in a way which, for a person with normal mental capacity, would be considered blameworthy and a failure of responsibility).

And then the person with limited capacity may themselves behave on their own initiative in absurd or irresponsible ways, but which is characteristic of their condition.

Under these circumstances, any real security is impossible when it attempts to treat the typical person with limited mental capability as an individual, and it should be assumed that all information in a computer system which such a person can access, is community property.

Steve
  • The last sentence -- *"... it should be assumed that all information in a computer system which such a person can access, is community property"* -- is a very good observation. – Greg Burghardt Apr 11 '23 at 18:03
  • I agree with your statement "What this adds up to is that it is almost certainly a fundamental mistake to design a computer system for those with limited mental capacity, including an authentication scheme, around the assumption of their individuality." My question then is : "What alternatives exist if some kind of authentication is required (for whatever possible reason that might exist)?" – KDW Apr 11 '23 at 19:10
  • @KDW, it would depend on what those reasons are (since that would influence the amount of hassle that would be tolerated arising from the authentication scheme, and it might reveal the profile of constraints in other respects). For example, if the computer system absolutely must record the identity of the user - say if it was administering a psychometric test to the user which goes on their medical record permanently - then the whole affair would be supervised by staff throughout,... (1/2) – Steve Apr 11 '23 at 20:34
  • and identifying the limited-capacity user in the first place would be done by the supervising staff referring to the *responsible adult* in charge of the user. But if the computer is being used just to play noughts-and-crosses for entertainment, and keep a scorecard for each user, then the responsible adults in charge are not going to tolerate a long-winded saga of authentication for something of no importance. It's unclear, from anything you've said so far, how much demand can be placed on the supervising adults. (2/2) – Steve Apr 11 '23 at 20:35
  • @KDW, I see reviewing other comments that you've actually clarified that you're dealing with a communication/social media system. Are you actually part of the organisation in charge of the limited-capacity users, or are you simply an external developer marketing a product? – Steve Apr 11 '23 at 20:48
  • @Steve We are an academic research organisation. One of our research domains focusses on a caring and inclusive society. So yes, we are involved in the organisation in charge and no, we are not a development company marketing a product. – KDW Apr 12 '23 at 07:07
  • @KDW, I see. The reason I ask is because when you're part of the same organisation, you'll have a lot more freedom and potential. If you're predominantly trying to control mischief amongst the (broadly authorised) users, and you have staff on payroll who are specifically there to review, investigate, and control communications that these users produce, then a suggestion could be to stick with the existing login scheme, but ensure the computer equipment is covered by CCTV. This way, there is another audit trail to identify the real user in cases where identification is necessary. (1/2) – Steve Apr 12 '23 at 08:02
  • Possible impediments to this are firstly that the existence of CCTV might not be acceptable (due to potential misuse by staff, or unauthorised access), and secondly that the role of moderator may be neglected by the organisation, or it may suffer from poor job design which impairs its effectiveness (and leads to communications being effectively ungoverned). Because supervising the incapable is already costly and complicated, there is often no appetite (either amongst management, or amongst staff) for systems which increase costs and complexities further. (2/2) – Steve Apr 12 '23 at 08:03
  • @KDW, also after reading your further comments on the question, it sounds as though one of your potential requirements is that, when an incapable user attempts to log on, their request is sent for approval to a terminal controlled by their guardian for the time being. The unreasonable difficulty with this will be maintaining real-time records which link users to their guardians for the time being (for example, as they pass from parent, to teacher, to another teacher, back to parent), so as to ensure the right guardian is presented with requests from those in their charge. (1/2) – Steve Apr 12 '23 at 08:36
  • It may be necessary to think of systems that distinguish between permanent guardians who have a fixed relationship with the user (like a parent) and transient guardians (like teachers or carers) whose relationship to a single user is not fixed, whose charge of the user may change dynamically multiple times a day, who may at times be in charge of many users, and who (moreso even than parents) need to work efficiently with minimal additional time or cognitive burdens imposed. (2/2) – Steve Apr 12 '23 at 08:38

Conundrum

The combination of potentially low-qualified users and diverse and potentially insecure hardware environments makes this a hard problem to solve, and probably one where the solution isn't a simple strategy someone could write down.

Two-factor schemes are complex or expensive or both

Typically, MFA schemes depend on a combination of something the user knows and something the user has, for example, a memorized password plus a TOTP key generator. This requires that the user is able to memorize a short string and is able to type it reliably, and that they are able to read and type something like a 6-digit number.
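
A rough sketch of what such a 6-digit TOTP check computes, using only the Python standard library (the shared secret below is purely illustrative; a production system should use a vetted OTP library):

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password in the style of RFC 6238 (HMAC-SHA1)."""
    counter = int(timestamp) // step                # current 30-second window
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The server and the token device share `secret`; both derive the same
# code within one time step, so nothing needs to be memorized -- but
# someone still has to read and type the 6 digits.
assert totp(b"illustrative-secret", 30) == totp(b"illustrative-secret", 59)
```

This illustrates the usability cost: even with no memorized password, the user (or a helper) must still transcribe a short number under time pressure.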

From your list of requirements, you have already excluded OTPs, smartphones, etc. But you will need to bite the bullet and require something that isn't a memorized password.

If you can depend on a set of always-available hardware features, such as a standard USB port, a hardware USB token might be the right choice. This is costly, though. NFC reader hardware isn't ubiquitous, so NFC tokens, which can be pretty cheap, are likely out.

A completely different line of thought

What can you possibly change or limit in your requirements?

  • One option you already mentioned is requiring the assistance of a helper who is able to execute a more complex authentication scheme. This would enable you to strike out the condition that the scheme must be usable by the target user.
  • You could evaluate what is actually at stake, why someone would want to access the account with stolen authentication, how they were able to gain access to the password, etc., and protect against the known vulnerabilities. If you need to protect private data, you might require different authentication for that than for access to the user-facing web application.
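
The first option above can be sketched as a pending-approval flow. All names, the token format, and the five-minute expiry below are hypothetical, purely to show the shape of a helper-assisted login:

```python
import secrets
import time
from dataclasses import dataclass, field

@dataclass
class LoginRequest:
    user: str
    token: str = field(default_factory=lambda: secrets.token_urlsafe(16))
    created: float = field(default_factory=time.time)
    approved: bool = False

class GuardianApprovalFlow:
    """Hypothetical helper-assisted login: the child picks an avatar, a
    pending request is created, and a linked guardian approves it from
    their own (separately authenticated) session."""

    TTL = 300  # pending requests expire after 5 minutes

    def __init__(self, guardians: dict[str, set[str]]):
        # guardians maps a guardian id to the set of users they may approve
        self.guardians = guardians
        self.pending: dict[str, LoginRequest] = {}

    def request_login(self, user: str) -> str:
        req = LoginRequest(user)
        self.pending[req.token] = req
        return req.token  # delivered to the guardian, e.g. as a push notification

    def approve(self, guardian: str, token: str) -> bool:
        req = self.pending.get(token)
        if req is None or time.time() - req.created > self.TTL:
            return False  # unknown or expired request
        if req.user not in self.guardians.get(guardian, set()):
            return False  # this guardian is not linked to this user
        req.approved = True
        return True

flow = GuardianApprovalFlow({"parent-1": {"child-a"}})
token = flow.request_login("child-a")
assert flow.approve("stranger", token) is False
assert flow.approve("parent-1", token) is True
```

The point of the sketch is that the child's step requires no secret at all; the credential burden moves entirely onto the guardian's session and the user-guardian link table.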
Hans-Martin Mosner
  • "This requires that the user is able to memorize a short string and is able to type it reliably, and that they are able to read and type something like a 6-digit number." For some of the users, this is certainly a usable approach. Unfortunately, not all of them are able to do so. For them I'm looking for possible alternative solutions. – KDW Apr 11 '23 at 19:18
  • The possibility to rely on the assistance of a helper seems to be the most promising approach. It has some disadvantages but the ideal solution probably does not exist... – KDW Apr 11 '23 at 19:20
  • A usb fingerprint scanner might be a good option – Andrew Williamson Apr 13 '23 at 01:25

Currently, I'm involved in a research project in which we are evaluating an existing web environment providing a safe online playground for children/adolescents with mental disorders.

For simplicity, let's omit whether there's a disorder or not. The problem to solve is the "dependency". I think the error is in looking for solutions based on "autonomy". Our target users probably don't have much autonomy in their real life so, the solution should take this into account.

This isn't a problem exclusive to users with mental disorders; it's also a problem among our elders. We are speaking about a group of users who often depend (a lot) on someone else, so the security must involve the capacity to trust that someone else. Someone who is not the immediate benefactor.

In this context, we must assume the solution will be susceptible to abuses. Technology aside, it will be critical to implement a protocol (process) to follow in case of abuse or fraud. The protocol must be tested and monitored constantly. It must be reliable and robust to provide confidence and deterrence.

Given the user's dependency on someone else, I think KYC (Know Your Customer) policies are key to the solution, but knowing our users is not enough. The policies (hence accountabilities) must be extensible to those other users authorized to act on behalf of our users.

Feel free to share any thoughts/ideas/suggestions to set up such a solution.

In the blockchain and smart contracts ecosystem, the figure of the "intermediary" is quite common. Someone whose vote or final say allows (or denies) transactions between two or more entities/accounts. Our solution could adopt this shared authorization protocol. It would be a protocol based on consensus between two or more persons and our service.
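
A minimal sketch of such shared (M-of-N) authorization, with purely illustrative names; the consensus idea, not the blockchain machinery, is what carries over:

```python
class SharedAuthorization:
    """Illustrative M-of-N consensus: access is granted only once a
    quorum of designated approvers (e.g. parent, teacher, the service
    itself) has signed off on a request."""

    def __init__(self, approvers: set[str], quorum: int):
        self.approvers = approvers
        self.quorum = quorum
        self.votes: set[str] = set()

    def vote(self, approver: str) -> None:
        if approver in self.approvers:   # votes from outsiders are ignored
            self.votes.add(approver)

    def granted(self) -> bool:
        return len(self.votes) >= self.quorum

auth = SharedAuthorization({"parent", "teacher", "service"}, quorum=2)
auth.vote("parent")
assert not auth.granted()   # one approval is not enough
auth.vote("teacher")
assert auth.granted()       # quorum of two reached
```

Each recorded vote would also feed the audit trail described below, so every participant can see who authorized what and when.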

As with any "democratic" process, transparency and traceability are important. The protocol must keep participants informed at all times (requester, benefactor, participants, dates and times, etc.).

If we solve the previous problem, what's left is finding a way to provide users with a bit of "autonomy": a sustainable autonomy regarding credential ownership.

For this case, I find biometrics a must-have, because we all have (at least) one and they don't need to be memorized. Security based on biometrics is not more secure per se. It's still vulnerable, but hacking the biometrics of two or more participants who are fully aware and informed about the auth process is not that simple.

Biometrics can be our MFA. For example, they could replace time-limited tokens with a text to be pronounced. The content doesn't even matter, because we are not matching words; we look for matches in voices, cadences, tones, inflections, accents, etc. We can train AI to enhance the validation so that emotional state doesn't constrain the matching, and AI can be trained to detect fraud too.

The MFA could vary among participants, so we can adapt the mechanism to specific needs or impairments.

Laiv
  • 14,283
  • 1
  • 31
  • 69
  • 8
    This answer really addresses the core of the problem. *"We are speaking about a group of users who depends (a lot) on someone else, so the security must involve the capacity to trust in that someone else."* -- in general, "trust" is the foundation of any security paradigm. This is especially true when the primary benefactor of the system is unable to participate in the authentication/authorization process. Technology alone won't solve this. Other people, policies, and procedures must be involved. Great answer! – Greg Burghardt Apr 11 '23 at 17:56
  • 1
    The suggestion about blockchain with smart contracts is certainly something to further explore and honestly was not something I already took into consideration. Thanks for pointing it out! – KDW Apr 11 '23 at 19:29
  • 2
    @KDW You don't even need blockchains to implement such techniques for a centralized authentication system. Look at how cloud IAM systems handle server authorization in a least privilege model. You have server host accounts which have permissions to generate authorization for container accounts that have permissions to perform the relevant task. – user1937198 Apr 12 '23 at 00:35
  • 1
    @KDW I mentioned blockchain and smart contracts in case you need to do some research but, certainly, you can implement the same principles in a traditional (centralized) web system. The takeaway ideas are KYC, auth by consensus and biometrics as MFA. – Laiv Apr 12 '23 at 07:01
2

It sounds like the main attack vector that's been abused is the three-letter password, and it's easy to see how:

  • Potentially multiple users have been authenticated on one shared computer, and you just need the three-letter password to swap between them --- this is probably a more common setup than we'd like to imagine for e.g. schools, and it's not hard to see how it could be easily exploited.
  • It sounds like we're talking people in e.g. a group / classroom situation, and it doesn't sound too far-fetched to imagine that they might often leave their devices unattended while unlocked / type in their passwords rather slowly, such that the keystrokes are easy to observe. Again, it's easy to see how this could be abused by fellow classmates to gain access.

The guardian setup can't solve the problem

No matter what you do with the original guardian setup, it can't solve these problems, unless you make the guardian log in for them every time, which I would imagine to be an untenable hassle for them.

Even if the initial guardian login used 2FA, retaining that login on the computer and abolishing the three-letter passwords entirely (e.g. moving towards the type of solution that most popular websites use) doesn't work here, because you'll still need:

  • The quick user switching (e.g. to support the shared computer case)
  • Some sort of unlocking mechanism (e.g. to support the case where users leave their computers unattended --- it would be nice if we were able to provide a "lock" button for them / lock them after e.g. 5 minutes, and then they could reauth somehow when they come back).

App based authentication for the guardians also doesn't work

Giving an app to the guardians where they can remotely authorise requests doesn't help either --- while it would work in theory, in reality no-one's going to check that the person that sent the authorisation request is the person that's supposed to be able to use the computer, especially if it's around the same time where you might expect the real user to be requesting access.

I suppose you could do some sort of camera based solution (e.g. take a photo of the person making the request and display it), but that opens up a whole other host of problems, and plays back into the problem from the previous section: the last thing you want to be doing as a teacher in this scenario is pressing 20 accept buttons on your phone at the beginning of each class.

Authentication hardware

In my opinion, this is your only viable solution. You provide some authentication hardware, and the users authorise using this. For example:

  • Fingerprint sensors, like TouchID
  • Facial recognition sensors, like FaceID or Windows Hello
  • USB hardware keys, like a YubiKey (these can have fingerprint sensors built into them, and I'd really only recommend this variety in this context, as users are quite prone to just leaving the USB stick plugged into their computers the entire time, defeating the point in the physical access scenario described above)
  • NFC Smart Cards
  • etc

A lot of laptops come stock with at least one of these methods these days, and if not, it's relatively cheap to just get everyone some YubiKeys (or a similar alternative). From your users' perspective, they just have to complete a very simple challenge (e.g. pressing a fingerprint sensor) and they're authenticated, and although you'll still need to do the initial login with the guardian (and configure the hardware authentication key in this step too), it's a lot more secure now, as WebAuthn is actually a robust security mechanism, rather than a simple three-letter passcode.

You can do this entirely within the browser these days, with no third-party plugins, using the WebAuthn API. From a software perspective, the server generates a challenge, the user signs it using their private key (which is stored on / accessed via the hardware key), and then the server verifies the signature using their public key. You can read more about the spec, and see a demo of it in action in your browser, here: https://webauthn.guide/
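For a feel of that challenge/response shape (not the actual WebAuthn API), here is a standard-library Python sketch. Real WebAuthn uses asymmetric signatures and the private key never leaves the authenticator; here an HMAC stands in for the signature so the sketch stays self-contained, and all names are invented:

```python
# Toy challenge/response flow in the shape of WebAuthn.
# NOTE: HMAC is a symmetric stand-in for the real asymmetric signature;
# this only illustrates the protocol structure, not WebAuthn itself.
import hashlib
import hmac
import secrets


class AuthServer:
    def __init__(self):
        self.credentials = {}   # username -> registered key material
        self.pending = {}       # username -> outstanding challenge

    def register(self, username, key):
        self.credentials[username] = key

    def begin_auth(self, username):
        challenge = secrets.token_bytes(32)        # fresh, unguessable challenge
        self.pending[username] = challenge
        return challenge

    def finish_auth(self, username, response):
        challenge = self.pending.pop(username, None)   # single use: replay fails
        if challenge is None:
            return False
        expected = hmac.new(self.credentials[username], challenge,
                            hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)


def authenticator_sign(key, challenge):
    """Stands in for the hardware key signing the server's challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()
```

Because the challenge is random and consumed on use, replaying an old response doesn't work, which is the property that makes this so much stronger than a static three-letter password.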

Toastrackenigma
  • 207
  • 1
  • 4
1

I would tie the authentication to devices if possible. Require the guardian to enable a device for a specific user, and store some type of strong key in the cookie.

Ideally there should be one device per user; I believe most users would understand the concept of ownership, i.e. I should use my tablet/phone and you should use yours. You could also consider pushing authentication down to the device level, where there is more support for various other authentication mechanisms, but this will likely not be possible to enforce if you are just a webpage.
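One plausible shape for that "strong key in the cookie" idea is a server-signed token binding a user to a device, so the guardian enrols the device once and a tampered or copied-to-another-account cookie fails verification. A hypothetical Python sketch (all names invented):

```python
# Illustrative device-enrolment token: the server signs a user/device
# pair with a secret it alone holds; the token lives in the cookie.
import base64
import hashlib
import hmac
import secrets

SERVER_SECRET = secrets.token_bytes(32)   # held server-side only


def issue_device_token(user_id: str, device_id: str) -> str:
    """Create the signed token the guardian stores in the device's cookie."""
    payload = f"{user_id}:{device_id}".encode()
    sig = hmac.new(SERVER_SECRET, payload, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(payload + sig).decode()


def verify_device_token(token: str, user_id: str, device_id: str) -> bool:
    """Check the token binds this user to this device and is untampered."""
    raw = base64.urlsafe_b64decode(token.encode())
    payload, sig = raw[:-32], raw[-32:]   # SHA-256 digest is 32 bytes
    expected = hmac.new(SERVER_SECRET, payload, hashlib.sha256).digest()
    return (payload == f"{user_id}:{device_id}".encode()
            and hmac.compare_digest(expected, sig))
```

In a real deployment you'd also want an expiry and a way for the guardian to revoke a device, but the core point is that the cookie carries something unforgeable rather than a guessable password.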

I would assume that access to different features should be more or less aligned with what the user wants and is capable of doing, so I would expect there to be little harm if someone saw some video or music they were not intended to see.

People being mean to each other on the internet is unfortunately a problem that is as old as the internet is, without any clear solution. Monitoring messages is a good start. You might also consider taking photos of the user (assuming there is a webcam or front facing camera) when sending messages. That way you have a way to verify the actual user if the message was offensive.

But there is no way to automatically and reliably detect offensive messages, since it is in large part based on the intent of the sender. Some online games have completely removed chat, but players still abuse gestures, animations, or whatever is available to be offensive. But the only way to completely remove the risk of users being mean would be to remove all ways for users to communicate.

JonasH
  • 3,426
  • 16
  • 13
1

"Mental disabilities" is not a good distinguisher. Generally it is thought that about 20% of persons have some kind of mental disability. I'm pretty sure you are targeting one or more subsets of that; in that case you may need a more specific approach for each group you're targetting.


Biometric authentication may be of help. I guess that the disabled user should be able to present their face to accomplish just about any task. Local PIN authentication could also be used (with a set number of retries). A quick search shows some positive results, e.g. in this paper
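The PIN-with-a-set-number-of-retries idea could look something like this hypothetical sketch (salted PBKDF2 for the stored PIN, a counter for the retries; names are illustrative only):

```python
# Illustrative local PIN check with a bounded retry count. Once the
# retries are exhausted, the account stays locked until a guardian
# resets it (the reset path is not shown).
import hashlib
import hmac
import secrets


class PinLock:
    def __init__(self, pin: str, max_retries: int = 3):
        self.salt = secrets.token_bytes(16)       # per-user random salt
        self.digest = self._hash(pin)             # never store the raw PIN
        self.max_retries = max_retries
        self.remaining = max_retries

    def _hash(self, pin: str) -> bytes:
        return hashlib.pbkdf2_hmac("sha256", pin.encode(), self.salt, 100_000)

    def try_pin(self, attempt: str) -> bool:
        if self.remaining <= 0:
            return False                          # locked out: guardian must reset
        if hmac.compare_digest(self._hash(attempt), self.digest):
            self.remaining = self.max_retries     # success resets the counter
            return True
        self.remaining -= 1
        return False
```

The retry bound is what makes a short, easy-to-remember PIN tolerable here: guessing is cut off after a few attempts instead of being free, as it is with the current three-letter password.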

If you're using a web framework then there are authentication mechanisms such as passkeys that may help you use the aforementioned biometric authentication options. They can e.g. use Windows Hello or Android biometrics to create a secure login.

Beware that biometrics & facial recognition may not be acceptable to all users, especially if the biometric device is controlled by the institution rather than the user. Ethics as well as local law should be considered when going in this direction.


If you really think that strong authentication is required for an action, then you should also require that the action is performed correctly; if you don't, then there is no reason for authentication after all. In other words: if this action requires oversight then I guess it must be provided, and in that case the entity performing the oversight might as well perform the authentication. If the guardian cannot remember a password then you might want to point them to a password manager (that's what I use for family members who lack IT skills as well).

Alternatively you could authenticate the guardian, and then let them choose the test subject under their guardianship.


I'm not sure if there is any reason for MFA. MFA is used to create a more secure environment compared with entering just a password, and it requires at least two factors.

I guess you could require a hardware tag that needs to be inserted into some kind of reading device in that case, so that something that you have is included in the MFA rather than something that you know. I'm sure that there are pretty big RFID tags to be found that could be read by an antenna inside a receiver box.

Maarten Bodewes
  • 337
  • 2
  • 14
0

Here's just an idea. Perhaps you can do without authentication.

If you need authentication only to map users to abilities, perhaps you can solve that problem without authentication.

For example, the initial screen is a "test" (e.g. a game) to gauge user abilities; enough to be able to tailor website characteristics or functions. Then perhaps you can add the possibility to "upgrade" to more features with more "tests".

Pablo H
  • 598
  • 2
  • 7
-2

Why change what works?

For rare or infrequent misuse, it sounds like you're over-engineering, which reduces access and increases difficulty.

Audio might be a technique, having someone read a small passage of text; or Bluetooth (do they have smartwatches or phones?). However, both are probably costly, unreliable, slow and complicated.

Best of all is probably to give them a menu of things that they already remember, like a secret question/answer, where the memorized detail is actually something they enjoy remembering or that they need to remember. Parent phone number, street number, name spelling, age, fun things, material things, favorite things, tangible and intangible: all are options.

As so many people are very happy and comfortable sharing data, and it's essential to disclose and share if you want to overcome mistrust and suspicion, the only way to eliminate misuse is to shut down the system altogether, or to over-complicate it, creating a need for frequent intervention, which often causes frequent outages anyway due to the human and technical burdens!

Given most people are trying to change and improve, you wouldn't want something that a person grows out of. E.g. impairments that are overcome via diet, activity or medical assistance, different interactive partners, and the like, are probably best seen as going beyond static conditions or labels.

It could be said that you could integrate an access control system into a spaced repetition learning pattern.

Have you used systems which ask for a password, but if you get it wrong, ask for your previous remembered password? What if the password is their first initial, last initial, and age, or a street number, or the day of the week (1-7)? Is the impairment so profound that any sequence has to be repetitive? Can images such as pictures be used to identify the difference between 'person', 'age', 'address' etc., so those things can be exchanged or presented in a varying sequence?

Or have you used systems that remind you of only one random secret question at a time, but might ask you a different one if you get that one wrong repeatedly? Or even, occasionally, ask you to review a PIN? This would be one answer at a time, rather than e.g. three on the same page, where you see the three questions or three pictures.

  • 2
    If you have a new question, please ask it by clicking the [Ask Question](https://softwareengineering.stackexchange.com/questions/ask) button. Include a link to this question if it helps provide context. - [From Review](/review/late-answers/252939) – user16217248 May 16 '23 at 15:12