14

Update/Clarification: My client understands the need for in-house testing, and they always swear they will "do better" (i.e. do something), but it just doesn't happen. They don't have the budget for outside testing. I guess I am asking (vaguely, I acknowledge) what could instill a "test early, test often, test on target machines" ethos.

Question: how do I encourage users to take the time to explicitly test and report issues with new releases, rather than "test as they go" on production projects?

Background: I have a small client for whom I have written a suite of multimedia presentation tools. They are a nice client and we have a good relationship. The project is ongoing, with features added as we go along.

There are two issues I have:

  1. Feature definition is done on the fly, often over the phone, and is subject to change, revision, and reversal. (A bit like Kennedy's "We will go to the moon and do the other things" – I've always been amused by the "other things" part of that.)

  2. Virtually no QA testing is done on their end.

I can deal with #1, more or less. This is not a client who would even read a spec before a meeting, let alone write one up. I'm used to it. It is item #2 that I have an issue with: they don't, or won't, test new releases. What they do is use them in production, so when bugs come up they either find a workaround and don't report them, or they are in such a hurry to get on with the project that the bug reports are vague.

We have had many discussions about all this, but I have only been able to nudge them a bit (e.g. we use GitHub for issue tracking – though mostly I use it). The root reasons are twofold. First, resources: they are a small consulting company and don't have (or don't think they have) the resources for testing, nor the budget to outsource it. Second, culture: although they think of themselves as "developers", they are really just users of a multimedia software package (e.g. they have none of the obsessive, neurotic attention to detail of "real" developers).

This affects me as you would expect: without feedback I can't tell whether a feature is complete (see #1), or whether there are other consequences. It is also making me a bit lazy.

spring
  • 255
  • 2
  • 6
  • related: [Getting users to write decent and useful bug reports](http://programmers.stackexchange.com/q/132248/31260) – gnat Dec 10 '15 at 19:44
  • 3
    If a bug is so small that the users themselves don't seem to care if it's fixed or not, why do you insist? – kamilk Dec 10 '15 at 19:54
  • 2
    @kamilk - the short answer is that I am invested in my client doing well, being productive, etc. The long answer is that it is often not a matter of just a "small" bug – it may also be a usability issue, a missing feature implementation, etc. If I don't know about it, I can't fix it. Also, the "workarounds" they come up with are often huge time wasters for them, or even mean staying with earlier versions of the software. – spring Dec 10 '15 at 20:55
  • 18
    If you're invested in your client doing well, you should see to very solid testing *before* releasing to them. Clients are not testers. Hire a tester, or do your own testing, or write coded tests, but if you want to feel absolutely certain your stuff works for your clients, test before you give it to them. – Jimmy Hoffa Dec 10 '15 at 21:03
  • Who controls these users' budgets? – corsiKa Dec 10 '15 at 23:16
  • Am I missing something obvious or are you asking how to get them to go from testing your code too late to testing your code earlier? Shouldn't you test your own code? Is anyone instilling a culture of QA in your company? – djechlin Dec 11 '15 at 03:32
  • 4
    @djechlin - it is about testing (and reporting) at all. And a developer can only test so much: I don't use it like the users do. – spring Dec 11 '15 at 08:45

4 Answers

19

they have none of the obsessive, neurotic attention to detail of "real" developers

Preface: The kind of language you used here is typically a red flag for me. When I hear people talk about "real" developers or the (one and only) "right" way of doing things, I start thinking of tunnel-visioned cargo-cult developers.

Now, I'm not saying that you're definitely one of these developers (I don't have enough evidence to assert that), but if you are, then you might benefit from this answer.

Answer

It sounds like you and your client are optimizing for different things. It's an unfortunate fact of software engineering that the needs of the business and the desires of the developers often don't line up.

Software developers are often passionate people with a focus on improvement. They like to improve software performance, the development process, business processes, communication methods, etc. And that's great. Focusing on these things is what separates the craftsmen and craftswomen from the mindless keypushers.

But your client isn't a software craftsperson. Your client is a business with a completely different set of priorities. And, sometimes, those priorities look ridiculous to us software craftspeople... but that's only because we're optimizing for different things.

Businesses frequently want to optimize for things like early release to market and short-term cost savings. In doing so, they may need to sacrifice things like QA, user experience, long-term cost savings, and other things that make developers tick.

Is that a bad thing? Well, not necessarily. I can't speak for all businesses, but in my experience my clients do these things in order to increase their own ROI (return on investment). Things like QA, UX refinement, and long-term planning offer them a lower ROI. Even worse, many businesses have investment structures that only reward short-term wins, as opposed to sustainable approaches and long-term wins.

So, while you could try to sell the idea of QA to your client, you may be wasting your time and straining your relationship with them. In the best case you'll get someone eager to try out your ideas (unlikely). In the worst case you'll have to convince the entire company to rework its incentive structures so that long-term investments such as QA are rewarded. In either case, your odds of success are low.

MetaFight
  • 11,549
  • 3
  • 44
  • 75
  • 4
    +1, trying to change the internal workings of an entire *different* business because it doesn't seem right to *you* usually cuts a relationship short. A professional should advise if he can foresee serious problems, especially if he can also advise on how to mitigate them. However, if the problems are so small the company doesn't even bother to report them, the best thing you can do is send a friendly reminder once in a while that time might have been saved with X or Y, without insisting. – Ordous Dec 10 '15 at 21:22
  • -1 as, while this is a well written post, this doesn't really address the question of _how_ you would go about doing it. The answer is that you do it in much the same way you convince regular developers to test: show that testing helps reduce risk. Less risk == fewer production issues in the middle of a client demo. – David says Reinstate Monica Dec 10 '15 at 23:09
  • Nope – way off base but thanks for replying. – spring Dec 11 '15 at 02:41
  • @DavidGrinberg that's all well and good unless reducing the number of production issues isn't worth the effort/cost/time for the client. If that's the case, no amount of developer logic will convince them to sacrifice their ROI just to satisfy you. And that's why I didn't answer the *how* of the question and instead focused on a potential flaw in its premise. – MetaFight Dec 11 '15 at 06:03
  • craftspeople :-) – Toni Leigh Dec 11 '15 at 11:21
  • @ToniLeigh noted and corrected. Cheers. – MetaFight Dec 11 '15 at 14:28
  • @MetaFight The issue is whether or not the client's calculation of ROI is accurate. The "workarounds without reporting" bit sticks out to me: unless the developer has shown a propensity to delay so long that submitting a proper bug report is pointless because it will never get resolved, there's no reason they can't just put together a list of things they want fixed the next time they call TOMATO. If the hierarchy is decentralized, maybe have a shared document for all their grievances to make it easy to edit/add to, which could even help with the feature definition bit (or make it worse). – JAB Dec 11 '15 at 15:27
  • @JAB interesting aside: "Shared document" is another one of my red flag phrases :) main point: I agree that it's possible that the client is miscalculating their ROI. I'm just trying to remind the OP that it's also (very) possible that we as developers don't have enough perspective in the client's domain to correctly assess *their* ROI. – MetaFight Dec 11 '15 at 15:49
  • @MetaFight given the description of the client as a small consulting company, I assumed they would be small enough that such a shared document (in the Google Docs sense, not just a sheet of paper or word document that gets passed around/emailed back and forth) would not be unmanageable. Obviously that system would fail completely if the client has a significant number of people with access to it. – JAB Dec 11 '15 at 17:23
10

The interesting question is when you get paid, not whether your client does any testing of their own.

  • if you get paid based on your time, no problem.
  • if you get paid in advance, no problem.
  • if you get paid when the client declares the project “done”, big problem.

The problem is how you can know when the client accepts the software and will pay you. This clearly doesn't work when the client continuously amends the project with vaguely defined new requests. If this means that payday is always deferred – and becomes more unlikely with each request – the arrangement becomes untenable for you.

A fixed contract that carefully specifies all features and defines under which conditions the client will accept them is clearly uncomfortably strict, but it allows you to plan the project in advance (and the next project as well). It also guarantees you'll get your money for the software you delivered, provided it lives up to the spec. In such a scenario, the client's only responsibilities are during the contract definition phase and, at the end, acceptance testing.

Such acceptance testing done by a client is separate from other forms of testing:

  • unit tests
  • system integration tests
  • usability tests
  • load tests
  • pre-release tests

As far as possible, you would anticipate the acceptance tests and perform them yourself before delivering the functionality, so as to avoid any embarrassment. Aside from acceptance tests (which only measure contract fulfilment, not software quality), all quality assurance is your responsibility. In particular, your client does not necessarily have a QA mindset, the necessary technical background, or the contractual obligation to do QA. Also, I find outsourcing bug hunting to the client quite unprofessional.

That is not to say that bugs won't happen. Assuming you have a project-based relationship with your client, you'll want to walk a line between being courteous and rapidly providing fixes, and explaining that they have accepted the current release as sufficient for their needs – large changes require a new contract. If you have an ongoing support contract, you'll of course have to stick to your agreed service level.

In an agile setting, responding to client needs is valued more than sticking to the letter of the contract, but you'll still want to get paid. Therefore, many agile-oriented project methodologies value close client interaction, to the point that the client might become part of the team. You could then always talk to this “product owner” to clarify any necessary points. Since the PO has the authority to grant you the time to work on any feature they find valuable, this can work even when starting with vague client needs. If you do not have such a close communication, you will need to follow a more formal approach.

  • When you learn of new customer needs, work with the customer to translate them into requirements. This helps the client get what they actually want.
  • Requirements are objectively measurable – they are either fulfilled or not. This saves the client from half-solutions that only kind of work.
  • All client requests must be provided in written form so that you can bill them. This protects the client from being billed for stuff you just felt like working on – such as rewriting the entire interface when they only asked for a button to be aligned differently.

    A lot of communication can be done in person or over the phone, but in the end you'll want a piece of paper to document that the client wanted you to work on these requirements. In simple cases, it might be sufficient to recap a phone call and send them an email to verify what they asked you to do.

Bug reports are always difficult. If your clients are devs themselves, that should help, since they can understand your needs – in particular, clear steps to reproduce. A simple way to gain powerful insight is to enable logging in the deployed software. Provided that any data privacy issues can be worked out, requiring every bug report to have the current log attached not only guarantees some written communication, but also tells you what the user actually did (in contrast to what they thought they were trying to do).
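For illustration, here is a minimal sketch of what such logging could look like, assuming the tools are written in Python; the logger name, file name, and rotation settings are my own placeholders, not from the question:

```python
import logging
from logging.handlers import RotatingFileHandler

# "presenter" and "presenter.log" are hypothetical names for the
# deployed multimedia tools.
logger = logging.getLogger("presenter")
logger.setLevel(logging.INFO)

# Rotate the log so the file a user attaches to a bug report stays
# small but still covers the most recent sessions.
handler = RotatingFileHandler("presenter.log", maxBytes=512_000, backupCount=3)
handler.setFormatter(
    logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s"))
logger.addHandler(handler)

# Log user-visible actions, not just errors, so the log shows what the
# user actually did leading up to a problem.
logger.info("Opened presentation: %s", "quarterly_review.mmp")
logger.info("Applied transition: %s", "crossfade")
```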

amon
  • 132,749
  • 27
  • 279
  • 375
  • 1
    Money isn't the issue (I'm on a monthly retainer – I get paid whether I code or not). It's how to nudge their office culture ...or something I don't get. – spring Dec 11 '15 at 02:42
2

The way to encourage communication of bugs is to encourage frequent, granular communication of features. If you train a company that they can ask for anything with zero ceremony, they'll use that channel for minor bugs, too. Give up on changing your client's workflow unless the changes make their life easier.

Getting your client to do in-house testing is tough, but getting them to actually report bugs is not as difficult as it sounds. The way to get more feedback is to reduce user friction...even if that means transferring some of that friction to yourself.

  1. Use simpler tools, even if those tools are inadequate and inappropriate. E.g., BaseCamp is a pretty awful bug tracker (because it's intended for project management), but people are actually willing to use it.

  2. Since the bug trackers we were using did not support image copy-paste, I actually wrote a trivial program which copies the current clipboard image to disk (saving it under a GUID file name), then copies that GUID to the clipboard (see the sketch below). After minimal training, a user could attach clipboard images to issues by just hitting Print Screen, clicking a button, and then pasting into the file chooser dialog of the bug submission tool.

A screenshot (possibly annotated in MS Paint) and one or two sentences are enough to pinpoint most features/bugs.
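Here is a minimal sketch of what such a clipboard helper could look like in Python; the original tool's implementation isn't shown, and Pillow and pyperclip are my assumed dependencies (they cover the Windows and macOS clipboards):

```python
import uuid
from pathlib import Path

import pyperclip           # pip install pyperclip
from PIL import ImageGrab  # pip install pillow

def save_clipboard_screenshot(folder: str = "bug-screenshots") -> Path:
    """Save the clipboard image under a GUID file name and put that
    path on the clipboard, ready to paste into the bug tracker's
    file chooser dialog."""
    image = ImageGrab.grabclipboard()
    # grabclipboard() returns None (or, on Windows, a list of file
    # names) when no image is held.
    if image is None or not hasattr(image, "save"):
        raise RuntimeError("No image on the clipboard; press Print Screen first.")
    target = Path(folder)
    target.mkdir(exist_ok=True)
    path = target / f"{uuid.uuid4()}.png"
    image.save(path)
    pyperclip.copy(str(path))
    return path

if __name__ == "__main__":
    print("Saved screenshot to:", save_clipboard_screenshot())
```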

Both of these suggestions target friction points that I experienced, and both increased reporting by a factor of more than 10. However, you will need to target your own friction points.

Brian
  • 4,480
  • 1
  • 22
  • 37
  • This answer gets to the point. You want them to implement rigorous testing protocols: that's very unlikely to happen, especially if it's coming from outside the organization (e.g. you). The best thing to do in this case, since you get paid anyhow, is to make it as painless as possible to report bugs back to you. If you're *really* dead set on thorough testing, do it yourself, and learn more about the business processes if you need to... It's an unfortunate reality that many companies will *never* prioritize testing. – DrewJordan Dec 11 '15 at 16:19
1

Make testing easy for your client, but make it really hard for them to use any new features of an untested version in production. This can be accomplished as follows:

Whenever you deliver a new feature, implement it first in a "beta version", clearly marked "not for production". Make this beta version available to the client for testing. Also provide the latest "production version" that they should use for real production (without the new features, but with the latest bug fixes), and refuse to transfer the new beta features into the production version until you get feedback that someone on the client's side has at least tried them out.

If the client still uses the beta version on real production data, even though it always shows a big "Not for production use" message when the program starts, then you cannot help them, but at least you have made it clear that whenever they lose production work because they used the beta for the wrong purpose, it is clearly their fault. If the client does not learn from that, you may consider disabling their ability to use the beta in production by deactivating some crucial function in the beta builds, such as saving the results to disk, if that's necessary.
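As a minimal sketch, assuming the tools are written in Python, such gating could be driven by a build flag (the flag and function names here are hypothetical):

```python
# Hypothetical build flag: True on the beta branch, False on the
# production branch.
IS_BETA = True

def show_startup_banner() -> None:
    """Make the beta unmistakable every time the program starts."""
    if IS_BETA:
        print("=" * 60)
        print("  BETA VERSION -- NOT FOR PRODUCTION USE")
        print("=" * 60)

def save_results(data: str, path: str) -> None:
    """Deliberately crippled in the beta, as suggested above, so that
    real production work cannot be done in it."""
    if IS_BETA:
        raise RuntimeError(
            "Saving is disabled in this beta build; "
            "use the production version for real work.")
    with open(path, "w", encoding="utf-8") as f:
        f.write(data)
```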

Providing a separate "beta" will force you to establish proper version control / configuration management, so you can manage a production branch and a beta branch side by side without hassle. But since you are working with GitHub, I guess you already use Git, which makes this kind of management very simple.

Doc Brown
  • 199,015
  • 33
  • 367
  • 565
  • I don't really agree with the first paragraph. Often people genuinely realize that something is important yet fail to do it (quitting smoking for example). Testing is a classic example of something like this: even if you realize it is really important, it requires a lot of discipline not to take shortcuts on it when faced with deadlines, etc. However, the idea of the beta is good, given the customer's stated desire to get better at testing. –  Dec 11 '15 at 05:48
  • I would also use this as an opportunity to address point #1. Propose a whole process to the customer where new requirements are written down, agreed, tested in a non-production environment, then released. –  Dec 11 '15 at 05:54
  • I do tag new releases as "alpha" or "pre-release - not for production", plus do the whole GitHub "milestone" thing with issues (bugs, new features to be tested, etc.), but it hasn't made a difference. The whole situation sort of confounds me. I've proposed stuff like a monthly bug-testing "pizza day" to focus their team (2-3 people) on testing, and a "vote for your favorite/most annoying issue" thing. It's kind of weird – yet they use my software for major presentations all the time, so I don't understand why there isn't more pushback. I suppose it falls into "another thing to do/not my job". – spring Dec 11 '15 at 08:42
  • @TOMATO: do you **strictly** refuse to transfer features from the pre-release version into the production version until the customer tells you they have tested the feature? Does your customer try to circumvent that refusal? – Doc Brown Dec 11 '15 at 08:54
  • 2
    +1 for the clearly marked beta version: if you hand out the test version in garish purple, with a huge green blinking banner at the top of the main screen screaming "TEST VERSION -- NOT FOR PRODUCTION USE -- UNSAFE -- AAARGH!", they won't use it for presentations or even anywhere where a client might see it. You can hold back the clean production version (take it as a hostage, if you will) until they give some sort of useful feedback. – Christian Severin Dec 11 '15 at 11:17