34

I think anyone with even a little experience of designing UI/UX that handles user data will be familiar with the perils of imposing input-field or database limits on personal data, such as names. However, when it comes to storing biometric data, such as in medical/patient management software, I might've assumed that there was some validation on input, given the intended use case!

That doesn't always seem to be the case: I recently saw this tweet, in which a man was invited for his COVID-19 vaccine prematurely, apparently because his GP surgery had stored his height as 6.2cm, giving a BMI of 28,000.

Questions:

Is this just a flaw in their particular software? Is it possibly just the case that many of these systems were never intended for the mass selection of patient groups?

Or are there valid reasons that you might not want to introduce input ranges and sanity checks to biometric data?


Colour me only mildly concerned, given the AI-based future of medical decision making!

  • It's possible that usability testing identified that asking very-tired-due-to-ongoing-pandemic doctors to double-check the correctness of entered data frustrated them to the point of complaint/avoidance, and so it was easier to allow bad data than to make entry harder. Even iOS gets this wrong - it complains about blood pressure above 130/110, even though that is common among some sections of society. – JBRWilkinson Mar 11 '21 at 09:25

7 Answers

48

Is it possibly just the case that many of these systems were never intended for the mass selection of patient groups?

This has absolutely nothing to do with it. Even if this software were only used to retain patient information for when the patient visits their GP, the calculated BMI would still have been incorrect.

The issue with mass harvesting of data is that people don't invest any time in looking at specific entries anymore, and therefore they don't see obviously wrong data. By comparison, the doctor who looks at the record of the patient sitting in front of them will notice that 28,000 figure.

Is this just a flaw in their particular software?

If the software was never required to put boundaries on data input, then not having boundaries isn't a flaw in the software. At best, it's a flaw in the requirements.

The 28,000 wasn't a bad calculation either. It was a correct calculation based on the data that was input. You cannot blame a calculation for the correctness of its input, or what I like to refer to as "shit goes in, shit comes out".

So you want to limit the height input then (and weight, but let's focus on height for now). What should the minimum limit be?

Well, the shortest person recorded is about 62 cm. But what about when that record is broken? Because most records tend to get broken once in a while.

Also, newborn babies are generally around 50cm, so maybe that's where the limit should be. But what about premature babies? Even accounting only for premature births with a reasonable chance of survival (around 24 weeks), they can be as small as 22cm.

So if you want to account for all humans, we could argue that 22cm is a reasonable minimum boundary.

You should already notice that 22cm is still uncomfortably close to the 6.2cm figure we started with.

I reverse engineered your example. For a 28000 BMI and a height of 6.2cm, you'd need to weigh about 108kg. But even if you disallow this height, yet still allow a height of 22cm, that still leads to a BMI of 2231.4.

The BMI data is still nonsensical, even though both input values are within their individual normal ranges. We established that a height of 22cm is possible, and a weight of 108kg is also realistic.
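
To make the arithmetic concrete, here is a minimal sketch of the BMI formula applied to these numbers (illustrative code of my own, not taken from any of the systems involved):

```python
# Minimal sketch of the BMI formula applied to the numbers above.
# Illustrative code, not taken from any of the systems involved.

def bmi(weight_kg: float, height_cm: float) -> float:
    """BMI = weight in kilograms divided by the square of height in metres."""
    height_m = height_cm / 100
    return weight_kg / (height_m ** 2)

print(round(bmi(108, 6.2)))   # ~28096 -- the figure from the tweet
print(round(bmi(108, 22)))    # ~2231  -- both inputs pass their individual range checks
print(round(bmi(108, 180)))   # ~33    -- a perfectly ordinary adult
```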

Your question is built on the assumption that such data validation would be trivial to implement without fault. The above calculation shows you that this assumption is incorrect.

Or are there valid reasons that you might not want to introduce input ranges and sanity checks to biometric data?

While people's height and weight aren't going to change overnight, it's generally inadvisable to add more restrictive validation to data than what was asked for, based on nothing more than what a developer thinks might be a reasonable restriction.

For example, my country's license plates used to be of the format AAA-000 (and initially, vanity plates weren't legal). Should software have only allowed this format?

Well, it seems like you would have enforced that. But when those license plates ran out, we started using 000-AAA. And when that ran out, we started using 0-AAA-000.

If you had written those validation checks, you would've had to change and redeploy your application every time the format changed. And this is a relevant topic, because that is precisely what happened in my country. They had to manually update thousands of devices (speed cams, parking lot cameras, police vehicle cameras, ...) because they were unable to register these new license plates.

Had they not bothered with this format validation, they wouldn't have had to update their software. Given that in this case it was embedded software on devices, having to redeploy is a cumbersome and expensive task.
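
As an illustration (this is my own sketch, not the actual embedded code), a format check hard-coded like this has to be rebuilt and redeployed every time the authorities change the scheme:

```python
import re

# Hypothetical sketch: a plate-format check baked into the software at build time.
# It only knows about the original AAA-000 format.
PLATE_PATTERN = re.compile(r"^[A-Z]{3}-\d{3}$")

def is_valid_plate(plate: str) -> bool:
    return PLATE_PATTERN.match(plate) is not None

print(is_valid_plate("ABC-123"))    # True  -- the original format
print(is_valid_plate("123-ABC"))    # False -- rejected once the format flipped
print(is_valid_plate("1-ABC-123"))  # False -- rejected again after the next change
# Every change to the national format now forces a redeploy of every device.
```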

Similar issues could be encountered with:

  • Landlines are 9 digits here, whereas cell phones are 10 digits
  • Postal codes here are 4 digits, but they've introduced 5 digit codes recently
  • House numbers are numeric, but there is a fringe case whereby a property that is split into two properties gets an "A/B/C/..." suffix. So what once was number 1 becomes numbers 1 and 1A. This is not the same as a box (i.e. number 1, box A). For example, we live at address Redacted Street 14A, but the building next door (Redacted Street 14) is an apartment building and labels its apartments A/B/C/... 14A is my house number; 14 box A is the next-door apartment on the first floor. You can imagine my frustration whenever I fill out a form and notice that the developers needlessly decided to enforce a numeric format in the number textbox.

Colour me only mildly concerned, given the AI-based future of medical decision making!

You're putting the cart before the horse here. Even if the patient info registration tool allows nonsensical data to be input, that doesn't inherently mean that the interpreter of this data must blindly believe anything it is told.

If you could only implement one validation, you'd put the validation on the AI, not on the data collection tool. If you blame any mistakes your AI makes on the input data rather than the AI, then your AI isn't an AI, it's just an algorithm.

Flater
  • 44,596
  • 8
  • 88
  • 122
  • I admit that I'm likely approaching this from the perspective of a patient, rather than a programmer - because while I agree that the calculation works from a technical standpoint, and it's probably just a failure in the software specification - I don't think I'd have described the calculation as correct! Given that the usage context was presumably always 'aid making medical decisions for the patient'. But it's obviously happened in this case that the data has been used for registering patients for vaccinations, and I wonder if that was the initial design requirement. – Sphaerica Pullus Feb 17 '21 at 14:21
  • I disagree with the assertion that it's not worth performing validation at the data collection stage, granted I think you're right that validation of certain medical data fields wouldn't be trivial. The way data is collected constrains the eventual use, partly through how much you can trust the accuracy. In fact, the only party that can validate easily is the person entering the data! Yes, the end-user of the data is the one with the responsibility for the consequences of errors, so for sure they should do validation, although I think the example I posted clearly shows that they often don't! – Sphaerica Pullus Feb 17 '21 at 14:29
  • 7
    @SphaericaPullus: _"I don't think I'd have described the calculation as correct!"_ **You cannot blame a calculator for you having entered the wrong number.** `750 + 250 = 1000` I hope you agree this calculation is correct. But we were trying to calculate how many apples you and I have together, and it turns out that I only have 75 apples, not 750! This doesn't invalidate the calculation itself, it invalidates the input, and subsequently the result of the calculation. But not the calculation itself. As an analogy, if I shoot the wrong person, the gun did not malfunction. I just used it wrongly. – Flater Feb 17 '21 at 14:50
  • 3
    @SphaericaPullus _"I disagree with the assertion that it's not worth performing validation at the data collection stage"_ I never claimed that you should never validate data collection. I just pointed out that proper validation is not as trivial as you claim it is. Even with reasonable data validation on both the height and weight of a person, that doesn't mean that the subsequent BMI is inherently valid. A 22cm tall human supposedly weighing 108kg proves that point. It's really hard to write validation that would exclude these data values but individually allow them in other cases. – Flater Feb 17 '21 at 14:52
  • 8
    @SphaericaPullus: In other words: tell me what range of heights we should allow for a person weighing 108kg. Then ask yourself whether you would bet against anyone finding an example of a person shorter or taller than whatever arbitrary range you just came up with. – Flater Feb 17 '21 at 14:58
  • 5
    I agree with everything in this answer, up to the last paragraph - which I don't _disagree_ with, but don't think captures current reality well. We don't currently have any AI that is not "just an algorithm" - there is no evidence of any transcendent consciousness or anything like that, just very complex and dynamic algorithms for turning inputs into outputs. A system that automatically invites someone based on their BMI is as much "AI" as most things branded as such, and we _should_ be concerned that it was fed incorrect inputs and allowed to make *unsupervised* decisions based on them. – IMSoP Feb 17 '21 at 15:32
  • @IMSoP Artificial intelligence != artificial consciousness. But I agree that we should indeed be concerned about the correctness of input (as per my "shit goes in, shit comes out" point), but even a valid input can be incorrect if it's simply not the person's actual weight (but a seemingly correct value). Starting from an assumption that validations are by default trivial and non-obstructing isn't particularly productive, which is why I targeted that underlying assumption. – Flater Feb 17 '21 at 15:58
  • 1
    @Flater Yes, like I say, I absolutely agree with most of the answer, and mostly agree with the last paragraph, I'd just emphasise it differently: we _should_ be treating everything claiming to be an "AI" as "just an algorithm", and keeping a close eye on its inputs _and_ its outputs. Consciousness was just one example of something that might justify making a distinction, but right now "artificial intelligence" is a pretty meaningless term applied to a bunch of different techniques mostly because it makes them marketable. (And I say this as someone who studied it at University.) – IMSoP Feb 17 '21 at 17:15
  • @Flater *"I hope you agree this calculation is correct."* Of course! I don't want to come across as complaining about semantics. It's just that the answer is wrong in the sense of what it's meant to represent. I suppose I'm worried, related to what another commenter below mentioned, about the potential of developers devolving all responsibility for input validation. Implementing the correct calculation is great, but I would say that they've slipped up if their remit was to make reliable software on which to base medical decisions. – Sphaerica Pullus Feb 17 '21 at 17:53
  • @Flater *"It's really hard to write validation that would exclude these data values but individually allow them in other cases."* I can understand that, but it seems as though there was not even an attempt here. Granted, in this case, the consequences of the BMI being 28000 are probably no different to it being >30 (which might *conceivably* be within the margin of error of the actual measurements), so perhaps there's little point. I suppose it's all okay as long as the assumptions that can be made about data are clear downstream, but it troubles me that that wasn't the case here. – Sphaerica Pullus Feb 17 '21 at 18:01
  • 10
    @SphaericaPullus I don't think people are disagreeing that _something_ went wrong here, but the thing that went wrong was not _input validation_, and the developers of the input system aren't at fault for omitting it. The right tool for the job might have been something on the _output_ that allowed a human overseeing the process to quickly spot this outlier and realise that it was an error; deciding if it was a mistake _in this particular case_ is much easier for a human than a computer, which has to obey some set of rules (even if they're complex, machine-learning generated rules). – IMSoP Feb 17 '21 at 18:42
  • 3
    @SphaericaPullus You're also assuming that the input system and the processing system were designed together. It's entirely possible that the input system doesn't think about BMI at all, and just treats weight and height as independent fields to keep that system simple. Then, potentially years later, someone decided they needed BMI to make a priority decision, and now the processing system needs BMI. How should the input system have known that BMI was the thing it needed to validate? Should it have also validated the age-to-height ratio? – user1937198 Feb 17 '21 at 21:37
  • @user1937198 Yeah, it's a fair point! Part of what I'm curious about is whether medical software has been stretched beyond its original design intentions, partly due to the impact of COVID-19. We had at least one situation in the UK of infection data being truncated by the MS Excel spreadsheet row limit, although that's a slightly different issue... – Sphaerica Pullus Feb 17 '21 at 21:51
  • @SphaericaPullus That's been an issue in medical systems for years. Medical tech moves slowly due to certification needs. The current cutting edge for data management in medical tech is stuff that was new around 2000. It's a lot easier to handle things in Excel than to get a custom app built by a company whose development process is compliant with the rules on medical devices. – user1937198 Feb 17 '21 at 22:04
  • 5
    @SphaericaPullus: There's not even a guarantee that the height/weight and BMI were part of the same application. Maybe the original source only tracked raw medical data like height and weight (among others), and the COVID mass aggregator tool imported this data and chose to use it specifically for BMI calculations. You can't fault the original tool for not testing for reasonable BMI values if that application doesn't even handle BMIs. What happened here was an issue, but it's not easily attributable (or blanket fixed) by a single party in this entire chain of events. – Flater Feb 17 '21 at 23:36
  • 1
    @TheodorosChatzigiannakis: But adding validation doesn't solve the problem of incorrectness. You could enter wrong data, still within the reasonable boundaries, and it could still be massively different in medical terms. At best, it solves the problem of silly cases. While "my BMI is 20 but the app says 30" doesn't make for a funny headline, it's as much of a problem from a medical perspective, and arguably an even bigger problem since a human observer wouldn't easily spot the mistake. So we can argue about validating silly values, but it's not the most productive way to prevent real issues. – Flater Feb 18 '21 at 11:38
  • 1
    @TheodorosChatzigiannakis: My issue (or should I say consideration) here is that an incomplete solution that sounds like a complete solution often brings with it a dangerous sense of security and subsequent complacency. Secondly, many imperfect solutions tend to create more complexity than a single complete solution does. That obviously doesn't mean you don't validate, but it does put a significant asterisk on the idea that "validation would trivially solve this problem", which is what the posted question is founded on. It's a nuanced frame challenge, not anti-validation rhetoric. – Flater Feb 18 '21 at 11:43
  • By the way, the thing that went wrong here isn't even that serious. So this person got an earlier COVID vaccination invitation, and messes up obesity statistics *slightly*. Big deal - who cares? They can easily fix this problem whenever a real person notices it and the consequences will be nearly zero. – user253751 Jul 21 '21 at 09:52
  • @user253751: "whenever a real person notices it" This is a very fragile part of your argument. Not wrong, just _very_ prone to complacency (cf. my previous comment). People tend to over-rely on automated systems, and some even second-guess themselves in favour of an automated system. – Flater Jul 23 '21 at 23:38
  • @user253751: If you want an example, there are plenty of stories of people following their (mistaken) GPS over what their own eyes tell them. Another example: a Tesla's autopilot feature is in no way graded ready for autonomy, yet there are several cases of people not only relying on it, but actively circumventing the features that ensure that a human must remain in control of the vehicle. People _want_ their cars to be able to drive themselves, and throw caution to the wind in pursuit of their desire. Someone in a boring data validation job might similarly be incentivized to "trust" the data. – Flater Jul 23 '21 at 23:52
28

I have worked in this industry, and there are several popular patient management systems which simply accept whatever number the doctor enters for the patient records.

In practice this means that if you looked at a thousand patients' data, you would typically find one or two where the doctor had entered meters or feet into a field that was meant to store centimeters.

One place where you will often find input validation is in the calculators that some of these systems provide for generating figures like BMI, eGFR, etc. But this SMS wouldn't have been generated from the interactive BMI calculator window; it would have been generated from the height and weight in the patient database, which is full of unvalidated data.

When I developed a chart that would show the average BMI of a patient population, I found that to get a reasonable looking chart I had to program it to filter out extreme outliers that were likely to be data errors, because the data had not all been validated on input. For example, I did not include people less than 10cm tall in the BMI chart.
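
Roughly speaking, the filter looked something like this sketch (the field names are illustrative; the 10cm cut-off is the one mentioned above):

```python
# Rough sketch of the outlier filter described above. Field names are
# illustrative; the 10cm cut-off is the one mentioned in the text.

MIN_PLAUSIBLE_HEIGHT_CM = 10

def chartable_records(records):
    """Yield only records worth including in the aggregate BMI chart."""
    for record in records:
        if record["height_cm"] < MIN_PLAUSIBLE_HEIGHT_CM:
            continue  # almost certainly metres or feet typed into a centimetre field
        yield record

patients = [
    {"id": 1, "height_cm": 172, "weight_kg": 70},
    {"id": 2, "height_cm": 6.2, "weight_kg": 108},  # a data-entry error
]
print([r["id"] for r in chartable_records(patients)])  # [1]
```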

To answer some of your specific questions:

Is this just a flaw in their particular software?

It's like this in several systems, not just one.

Is it possibly just the case that many of these systems were never intended for the mass selection of patient groups?

Yes. The patient database was never intended to be used like this.

The primary use case of the patient database is storing your doctor's notes within the clinic so your GP can read them again next time you visit. Other functions, such as sending SMS, are added in later versions as a bonus feature, or implemented through third-party add-on software.

Mass selection of patient groups is not a common feature of patient management systems, but there is third-party software available that can do it.

Basically, the data was entered into software A, and the SMS was sent by software B, and software company B have no say in whether software company A perform input validation.

Robyn
  • 412
  • 3
  • 6
  • Thanks Robyn! I've marked your answer as the accepted one, not because I disagree with most of the other answers/comments exactly, but because in my mind a first-hand account of experience in the sector is really useful, and it's succinct. One wonders whether this is likely to become an increasing concern for the developers of software A, or whether it's best to keep the distinction clear (even if it does result in the occasional issue). – Sphaerica Pullus Feb 18 '21 at 12:31
  • 1
    This is not the correct answer. I encourage you to listen to all the things others are telling you, rather than argue with them at length. – Reid Feb 19 '21 at 22:32
  • 1
    @Reid Much as I appreciate your contribution, I disagree with you on what the 'correct' answer is, the tone you've chosen to use, and apparently also your opinion on debating/discussing answers. Happy to 'defend' my choice, if you think it's worth anyone's time. – Sphaerica Pullus Feb 20 '21 at 22:06
  • @Reid: I mean if your point is that it's not the right answer, because we're looking at the specific question: "Should I validate biometric input", to which the answer is "Probably not *strictly*", then maybe you have a point? But it's pretty clear below that there's not a clear answer to even that, and I appreciated this answer's treatment of the more systemic issues, which are clearly relevant. Please do elucidate, because you haven't given me much to go on. – Sphaerica Pullus Feb 20 '21 at 22:21
27

Perhaps the assumption is that the doctor knows more than the programmer.

Would you want to be the doctor to tell your patient that you can't treat them because the IT department thinks the patient can't exist?

"I'm sorry Mr SuperFitAthleticRunnerPerson, we can't treat you because our system won't allow a resting heart rate less than 60."

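As a toy illustration of the kind of hard limit being parodied here (the threshold is invented by me, which is rather the point):

```python
# Toy illustration: a hard lower bound chosen by a developer, not a doctor.
MIN_RESTING_HEART_RATE = 60  # invented limit -- that is rather the point

def record_resting_heart_rate(bpm: int) -> None:
    if bpm < MIN_RESTING_HEART_RATE:
        raise ValueError(f"{bpm} bpm is below the allowed minimum")
    print(f"Recorded resting heart rate: {bpm} bpm")

try:
    record_resting_heart_rate(40)  # 40 bpm is normal for a trained athlete...
except ValueError as err:
    print(f"Rejected: {err}")      # ...yet the hard limit refuses to record it
```
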
user253751
  • 4,864
  • 3
  • 20
  • 27
  • 11
    I once was in the hospital and had to wait for treatment. They hooked me up to a heart rate monitor which kept alarming the staff, their comment: "are you an athlete?" Apparently my resting heart rate is around 40, which was the threshold for the alarm going off. – Pieter B Feb 17 '21 at 12:18
  • I can confirm that I would not want to be that doctor! It definitely comes across like a terrible idea, except I'm now not sure whether it's worse than increasingly automated systems registering Mr SFARP for a chicken pox jab because someone mistakenly entered his age as 6, instead of 36? I'm sure there are some possible scenarios out there which are somewhat scarier, based on automated systems and incorrect patient data. – Sphaerica Pullus Feb 17 '21 at 12:23
  • 2
    @SphaericaPullus It's not like the doctor is *required* to follow whatever the system recommends. – user253751 Feb 17 '21 at 16:28
  • @user253751 Oh, of course not, that's absolutely the silver lining. And any system which didn't take full advantage of the doctor as trained human, and their superior ability for spotting nonsense, would be really silly. But it's clear that doctors aren't always involved - I guess the point here is that I'm a little scared about the idea of it potentially being software developers who are making the medical decisions by proxy... – Sphaerica Pullus Feb 17 '21 at 17:42
  • 6
    It would make **a lot** of sense for the system to prompt. "Unusual resting heart rate. Click here to confirm, click there to go back and edit." – o.m. Feb 19 '21 at 12:11
  • Prompting for verification of data is completely reasonable, but in some cases this feedback is not possible. I guess this was one of them. – IS4 Feb 19 '21 at 15:05
  • See also: plane crashes that have happened when the computer has overridden something highly unusual which the pilot tried to do in an emergency scenario. (But see also also: plane crashes where the pilot did something highly unusual that the computer could have overridden) – user253751 Jul 23 '21 at 12:30
12

Certainly, something went wrong to allow this notification to go out, but it's not necessarily a lack of input validation.

Strictly forbidding invalid input often sounds simple, but is actually an extremely difficult problem. A classic example is validating names - surely a one-letter input is someone typing their initials, and should be rejected? Not if they have the common Korean surname O.
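
For instance, here is a deliberately naive sketch (my own) of a "sensible-looking" rule that rejects a real name:

```python
# Deliberately naive sketch: a "sensible" minimum length for surnames.
def is_valid_surname(surname: str) -> bool:
    return len(surname.strip()) >= 2

print(is_valid_surname("Smith"))  # True
print(is_valid_surname("O"))      # False -- but O is a real, common Korean surname
```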

Sometimes, the programmer might be able to codify likely mistakes, and have a "soft validation" that triggers a message like "this is an unusual value, are you sure?" For all we know, this happened in this case, and the user accidentally clicked "Yes".

Data is also copied between systems frequently, so it may be that the data was entered in one system that lacked validation, and then was imported into another. Again, the import system couldn't know for sure that the data was bad, but could have triggered a "soft validation" - "import includes suspicious data on rows X, Y, and Z". Again, a human operator needs to correctly act on this information.
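
A "soft validation" pass on import might look something like this sketch (the field names and the "usual" range are my own inventions, not any particular product's):

```python
# Sketch of soft validation on import: suspicious rows are flagged for a human,
# not silently rejected or silently accepted. Names and range are invented.

USUAL_HEIGHT_RANGE_CM = (40, 250)

def suspicious_rows(rows):
    """Return the indices of rows whose height falls outside the usual range."""
    low, high = USUAL_HEIGHT_RANGE_CM
    return [i for i, row in enumerate(rows) if not (low <= row["height_cm"] <= high)]

imported = [
    {"height_cm": 178, "weight_kg": 82},
    {"height_cm": 6.2, "weight_kg": 108},
]
flagged = suspicious_rows(imported)
if flagged:
    print(f"Import includes suspicious data on rows {flagged}")  # a human decides what to do
```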

Finally, the data was used to produce a report that was going to be acted on. Something as simple as sorting that report by BMI would immediately have made this result stand out, and a check against the input data would have discovered the cause. In my opinion, this is where this case should have been spotted - but we don't know if this was a missing feature in the system, or operator error using it.

Checking for every possible way a complex system can fail is hard, but providing ways for someone to spot that it has failed can be essential.

You are quite right to be concerned about this in the context of "AI" systems, which is a rather vague term currently used mostly for "machine learning" algorithms. Because these systems aren't built from individually tested components, but "evolved" or "trained", it's even more important that they have appropriate supervision.

IMSoP
  • 5,722
  • 1
  • 21
  • 26
  • This makes sense, I think I neglected to consider the whole system in asking the question. It's the links and assumptions that are made which cause problems! Soft validation at each step seems like one of the best options, at least without considering potential programming effort. And the 'final' stage of reporting and acting on the data must assume un-validated data anyway, given that it's necessarily aggregating data from a wide variety of sources (some of which, I suppose, might also be self-reported, rather than be entered by a doctor). – Sphaerica Pullus Feb 17 '21 at 19:21
  • The AI/ML part does definitely come across as more worrying. From what I understand, the training data at least *does* need to be valid, although I don't know how big a dataset you need to accurately train a medical model. I gather it's also not really possible to validate an ML model analytically, in the sense that it's a black box. Having a trained medical professional evaluating *every* output seems ethically/legally necessary *now*, but I can't imagine it being the case in the future! – Sphaerica Pullus Feb 17 '21 at 19:27
9

This might have been caused by too much input validation of the wrong sort. When a form doesn't let you enter information in the way you need, and the information is useful or important to your job, users will invent their own awkward conventions in order to record the information. In the words of Ian Malcolm, life finds a way.

Perhaps in this case the patient only knew their height in feet and inches, but the system only accepted metric, so the nurse or doctor invented 6.2 cm as the closest they could get to 6 feet 2 inches, rather than wasting time doing the conversion.

So input validation, especially for data entry intensive applications, must strike a careful balance. Make it too permissive, and it's too difficult to use for any sort of aggregation. Restrict use cases the user actually needs, and users will invent bizarre ways to do what they want. In these sorts of applications where it's difficult to anticipate every conceivable situation, it's usually better to err slightly on the side of being too permissive.
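
One mitigation (my own sketch, not a feature of any particular system) is to accept the unit the user actually knows and convert it, rather than forcing metric input and inviting improvised encodings like "6.2":

```python
# Hypothetical sketch: accept feet and inches and convert, instead of forcing
# metric input and inviting improvised encodings like "6.2 cm".

def height_cm_from_feet_inches(feet: int, inches: float) -> float:
    """Convert a height given in feet and inches to centimetres."""
    return round((feet * 12 + inches) * 2.54, 1)

print(height_cm_from_feet_inches(6, 2))  # 188.0 -- what the 6.2 entry almost certainly meant
```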

Karl Bielefeldt
  • 146,727
  • 38
  • 279
  • 479
  • 1
    true story: a nurse recorded my granddaughter's weight as 6.2 when it was 6 pounds 2 ounces, and we didn't understand why the nurse's calculations never matched ours until one day she weighed 7.14 and a lightbulb went off. (She did the calculations with pounds and ounces correctly; we were trying to reproduce them using her misrecorded numbers.) – Kate Gregory Feb 17 '21 at 20:35
  • 6
    Agreed. Ultimately, someone may have taken the view that the purpose of these systems is to *collect* data from medics, not to second-guess them, and a decision was taken not to incorporate any logic whatsoever that nags or purports to second-guess them. You can analyse the data later, and either just dismiss absurd data from consideration, spend the resources necessary to clarify and correct questionable values, or as happened here, just give priority jabs to people who otherwise wouldn't have qualified and let such events of no overall consequence take their course. – Steve Feb 17 '21 at 20:51
3

You might argue that the failure of validation is not at the input stage, but at the statistical reuse stage.

It's not illegitimate for the system to allow the entry in principle, as it may not be clear what a sensible maximum BMI ought to be, and sticking your finger in the air to gauge a maximum doesn't raise the reliability to 100%, so further checks would still have to be done.

Certainly there are people of the given weight, and there are quite possibly patients of the given height (perhaps premature babies, for example), so what would have to be validated is particular combinations of values.
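
If such a combination check were attempted, it might look something like this rough sketch (the bounds are invented purely for illustration, not medically endorsed):

```python
# Rough sketch of a combination check. Each value can be individually plausible
# while the pair is not. The bounds are invented for illustration only.

def pair_is_plausible(height_cm: float, weight_kg: float) -> bool:
    bmi = weight_kg / ((height_cm / 100) ** 2)
    return 5 <= bmi <= 200  # deliberately generous; outside this, ask a human

print(pair_is_plausible(22, 0.5))   # True  -- a very premature baby
print(pair_is_plausible(180, 108))  # True  -- a heavy adult
print(pair_is_plausible(22, 108))   # False -- both values plausible alone, not together
```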

Rather, those making use of unvalidated data, including processed data where additional assumptions are introduced, ought to have familiarised themselves with the data and investigated extreme values.

Even moderately extreme values, those still within the realm of medical possibility, may have implied that it was pointless to invite an immobile person to get a jab, or that a conversation should have been started first with the GP to exercise judgment on the medical sensibility of giving a jab. This is really a story about those analyses and steps being omitted from the process.

And if those additional steps are seen as too much hard work and cost to implement, relative to what is at stake by sending an erroneous invite, then why will data entry validation involve any less hard work and cost overall relative to the stakes involved?

Steve
  • 6,998
  • 1
  • 14
  • 24
  • It's true that an erroneous invite isn't a particularly bad outcome, but I wonder if there aren't other, worse situations with a similar origin. But yes, I suppose it comes down to the design of the analysis, with the onus being on the developers to implement checks relative to the potential risk! It's presumably known in the medical software industry that patient data is a huge mess of un-validated information, although there have been a few stories of not-precisely-experienced developers contributing to new software in the COVID-era. – Sphaerica Pullus Feb 17 '21 at 14:37
  • 1
    @SphaericaPullus, yes there's been an extreme loss of common sense in IT about the reliability of data, and people with far too little work experience, and too little access to institutional memory are being left to interpret and summarise data. It's not helped by a belligerent attitude, amongst some involved in IT, that such errors are somebody else's problem (whether the user, the original system designer, etc.), and not their problem to identify in theory or to solve in practice, when in fact people need to be eagle-eyed at every step and treat data with appropriate scepticism by default. – Steve Feb 17 '21 at 17:38
  • 2
    And by the time you get to calculating the BMI, it may be too late in the system to do anything about it. You have 2 inputs about the patient, at least one of which is probably wrong, but if this is running in a batch job in the background there may not be anywhere sensible to send an error message. – user1937198 Feb 17 '21 at 21:21
3

The GP and the software engineer are both subject to the same ethical principle: first, do no harm.

So, the medical practitioners and software engineers who worked together on this medical system should have talked about the expected validity ranges of inputs and vital parameters, brainstormed the risks (medical risks for the GP, technical risks for the engineer, user-confusion risks for the UX designer), and agreed how best to mitigate them.

I understand the argument that a patient record cannot be rejected because of wrong or missing input, because lives are at stake. But some warnings could easily be issued:

  • at data entry, to avoid typos risk ("are you sure that ...."),
  • at end of the day, to avoid bias under time-pressure ("today you had a patient with..."),
  • before using the data on a connected system for public health ("x people in risk group A present anomalies, please check").
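
The last of these checks could be as simple as this rough sketch (the field names and plausibility bounds are invented for illustration):

```python
# Rough sketch of the third kind of warning: before a connected public-health
# system acts on the data, anomalies in a risk group are summarised for review.
# Field names and the plausibility bounds are invented.

def anomaly_report(risk_group, plausible_bmi=(10, 100)):
    low, high = plausible_bmi
    anomalies = [r for r in risk_group if not (low <= r["bmi"] <= high)]
    if anomalies:
        print(f"{len(anomalies)} people in this risk group present anomalies, please check")
    return anomalies

anomaly_report([{"patient_id": 1, "bmi": 28000}, {"patient_id": 2, "bmi": 27}])
```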

This case shows that the system, as it is, can cause harm (i.e. someone is vaccinated early without need, whereas a weaker person who desperately needs the vaccination doesn't get it in time).

P.S: Sorry to add yet another answer, but when talking about a life critical processing, I personally cannot agree with commercial arguments such as: "It was not part of the requirements".

Christophe
  • 74,672
  • 10
  • 115
  • 187
  • The point about professional ethics is really interesting, although I think it must be a lot less clear cut than that in practice. I imagine the programmers of a piece of medical equipment must feel that responsibility heavily (after stories like the Therac-25, for instance), but I suspect it's less obvious in the case of patient management systems? And I agree that all possible 'soft validation' techniques should be undertaken, although it seems as though as an aggregated data set, patient data *still* has to be assumed to be unreliable and un-validated regardless. – Sphaerica Pullus Feb 18 '21 at 12:41
  • Maybe the best ethical position for a programmer of this type of software is to just make it abundantly clear to end-users of the data, before they are granted access, what validation if any is already undertaken, and that they have a responsibility to filter/validate the data according to their own legal/ethical/safe perspective. – Sphaerica Pullus Feb 18 '21 at 12:45
  • 1
    @SphaericaPullus Thank you for this interesting question and your feedback. I’m shocked to read in accepted answer how common it is, considering IEC 62304 & other medical sw. standards. Indeed, we’ll never achieve 0% errors. But we have the means to reduce them. Ethics is often seen as optional until a major incident happens and the story ends in court.The diesel-gate sw-engineer felt safe, since they produced according to requirements without questioning them, yet [he was jailed](https://www.nytimes.com/2017/08/25/business/volkswagen-engineer-prison-diesel-cheating.html) because of the harm. – Christophe Feb 18 '21 at 13:39
  • Right - I guess it's a fairly classic situation of regulation occurring only after public incidents! I'm sure that the vast majority of medical software development firms are doing this correctly, but I can imagine that if the organisation purchasing the software doesn't explicitly state these issues as a concern, the lowest bidder might not be doing it. This seems like a bigger problem for primary care facilities, which at least in the UK, are often small and privately run. – Sphaerica Pullus Feb 18 '21 at 13:56
  • 1
    As I mentioned in my answer, we don't actually know that warnings _weren't_ issued in this case, but not acted on. They might also have been proposed, but rejected as counter-productive: as you say, immediate warnings might be ignored due to time pressure; but delayed warnings might also be ignored, if they impose yet another responsibility on an overworked member of staff. Imagine every morning getting an e-mail saying "here are 272 mistakes you might have made yesterday". I don't think it's reasonable to accuse someone who may have consciously made that decision of failing to uphold an oath. – IMSoP Feb 18 '21 at 14:13
  • @IMSoP I upvoted your answer :-) I did not accuse anybody of breaking an oath, but the oath **should** have produced some effects, which we can't see. We don't know if there was a warning, and we both agree on the limits of an immediate warning. My PS was more for some other answers which suggest that one can implement requirements without questioning them even when common sense says that something must be wrong or forgotten. About your last remark: the point is that these records are no longer the GP's personal records once shared with the health system. There is a duty to handle them. Moreover, it's personal data. – Christophe Feb 18 '21 at 16:20
  • 1
    Note: If these warnings happen anything but infrequently, they will easily trigger warning fatigue. – user253751 Jul 21 '21 at 10:00