
At the office we have just come out of a long period during which we released patches far too frequently. Near the end of that period we were averaging almost three patches per week.

Besides being very demotivating for the developers, this made me wonder what the customer would think about it. When I asked myself that question, I concluded that I have never used software that was updated that frequently. For the case that comes closest, I do not really mind, since the patches are applied very quickly.

The customers who received these patches differ a lot from one another. Some were really waiting for the patch while others did not really care, yet they all got the same patches. Updating a customer's software takes less than 30 seconds, so I do not expect any problems concerning time; they do need to be logged out for it, though.

So my question in more detail: does receiving updates frequently send a 'negative' message to the customer?

Of course I could ask the customers, but I am not in a position to do so, nor do I want to 'wake the sleeping dogs'.

PS: If there is anything I could do to improve my question, please leave a comment.

Mixxiphoid
  • @downvoter, care to explain? – Mixxiphoid Mar 13 '14 at 06:40
  • If you're worried about customer perception, maybe describe them as "updates" rather than "patches"? :) – Chris Taylor Mar 13 '14 at 07:22
  • While this is not a direct answer, if you can keep patch deployment as non-intrusive and automatic as possible (e.g. download updates in background, have an update service running to apply while idle), then you can mitigate end-user anxiety by not making the updates obvious. For example: How many Google Chrome updates have you received in the past month or so? (Answer: [lots](http://googlechromereleases.blogspot.com/search/label/Stable%20updates)) They could release 5 patches for Chrome per day, and nobody would raise an eyebrow. Automatic Windows updates (when enabled) are another example. – Jason C Mar 13 '14 at 07:27
  • *"The time to update the customers software is less than 30 seconds, so I do not expect any problems concerning time. They do need to be logged out though."* What about the customer testing the patch themselves? I don't know what sort of software you're working with, but if it's anywhere near mission-critical for anyone, they'll want to test an update before going live with it in a production environment. While the installation of the patch might be quick and easy, that testing is going to take a lot of time and effort on the part of the customer. – user Mar 13 '14 at 10:38
  • @MichaelKjörling The problem is, in that period mission critical features failed, so it didn't really matter whether the production environment or the test environment was updated first. It just needed to work ASAP. – Mixxiphoid Mar 13 '14 at 10:41
  • @Mixxiphoid If there were upwards of three mission-critical failures per week for "a long period", I'd definitely say that yes, that would have a serious negative impact on my impression on the software's reliability. But that's not strictly about the patches/updates/fixes/callthemwhatyouwill. – user Mar 13 '14 at 11:51
  • @JasonC: "you can mitigate end-user anxiety by not making the updates obvious". Isn't that a bit like saying that restaurants can mitigate customer anxiety by not displaying a hygiene inspection certificate? ;-) If the customers are anxious about updates, then some (probably the majority) will be less anxious about hidden updates. Others will be more anxious that you've suddenly started hiding something. So there's a trade-off to consider. – Steve Jessop Mar 13 '14 at 13:05
  • @SteveJessop It's not really like that because hygiene inspection certificates (a result of a periodic test, required by law, and unintrusively displayed in one location) are a poor analogy to software updates. – Jason C Mar 13 '14 at 14:43
  • @JasonC: it's not exactly like it. It's a bit like it. Some people will notice that you're taking steps to prevent them being anxious, without addressing their cause for being anxious. Those people will feel that you're pulling a fast one even if you aren't, because you are concealing the symptoms of the problem (high rate of critical flaws) instead of solving the problem. For Chrome or Windows you could argue that one would be mistaken to see frequent updates as a cause for concern and so ignorance is bliss. The comments made by the questioner suggest that in this case it's fair concern :-) – Steve Jessop Mar 13 '14 at 15:00
  • @ChrisTaylor that may help for the customers who aren't expecting a patch, but for the other customers I think it would hurt even more to persistently call it an 'update' instead of a patch. – Mixxiphoid Mar 13 '14 at 16:45

4 Answers


As with many things in computing, it depends. If the patches are a response to customer requests for new features or improvements, then your company will be viewed as responsive. If, on the other hand, your patches are a response to bug reports, then your company will be viewed as incompetent.

[Image: chart of the relative cost of fixing a defect, rising the later in the development life-cycle it is found]

Testing software on your customers is by far the most expensive possible way to detect bugs, no matter what anyone says. It's a false economy; the free labor that you think you're getting is more than offset by customer service effort, breaking the software development life-cycle, and losing customer confidence.

Robert Harvey
  • Maybe this should be a different question, but anyway: we knew we were testing via our customers but could not stop that at the time; we were trapped in a cycle. What could we do to step out? – Mixxiphoid Mar 12 '14 at 19:41
  • Robert, I have seen this diagram lots of times before. It was probably correct when software development followed a pure waterfall model, but since software can be developed *and* deployed in small cycles, it has become more and more wrong. To be precise: for a small amount of bugs and software the tendency is still true, but for lots of bugs it's definitely wrong. – Doc Brown Mar 12 '14 at 19:58
  • @DocBrown: The graph is still correct. Shorter development cycles mean less cost per cycle, which is consistent with the graph. But that still doesn't mean you should alpha test your software on your customers, unless there is a clear understanding and agreement that this is part of the process. – Robert Harvey Mar 12 '14 at 20:00
  • Well, the cost of defects decreases the earlier the bug is found **and** fixed. And the chance a bug is found increases dramatically the earlier you get your software out of the door. That does not mean one should push untested software into production, of course. I recommend, for example, this article http://agileelements.wordpress.com/2008/04/22/cost-of-software-defects/ – Doc Brown Mar 12 '14 at 20:06
  • @RobertHarvey Do you conclude that we alpha test on our customers based on the info in my question? If so, that would mean we have something to ponder about. – Mixxiphoid Mar 12 '14 at 20:06
  • @Mixxiphoid: No, but you [essentially admitted that you do](http://programmers.stackexchange.com/questions/232179/are-patches-a-bad-sign-for-the-customer/232181?noredirect=1#comment463011_232181). – Robert Harvey Mar 12 '14 at 20:08
  • @RobertHarvey Ok, I was just curious. It could be that so many patches was in itself a sign that we test via our customers. But I guess that is not the case per definition. – Mixxiphoid Mar 12 '14 at 20:10
  • @DocBrown: I'm not sure if that article is describing an epiphany the author had, or merely stating the obvious. The X axis *is* time; the three points illustrated on that graph are just that: points. – Robert Harvey Mar 12 '14 at 20:12
  • Found also this one: https://www.techwell.com/2013/10/what-does-it-really-cost-fix-software-defect , might interest you. – Doc Brown Mar 12 '14 at 20:12
  • @DocBrown: *That* article is merely saying that the numbers might only be illustrative, may vary from project to project, and not be accurate to the dollar. That's also self-evident. – Robert Harvey Mar 12 '14 at 20:13
  • I was more referring to "So when we hear that defects are more expensive to fix the later they’re found, we don’t question whether that is actually true because we already believe it." – Doc Brown Mar 12 '14 at 20:14
  • All it takes is a little self-reflection to see that the principle itself is sound, even if the numbers or the shape of the curve are just wild-ass guesses. We even have a name for it in programming parlance: *Fail Fast.* – Robert Harvey Mar 12 '14 at 20:15
  • @Mixxiphoid creating a testing environment is the easiest way to step out of the cycle: deploy code to a testing environment first, run extensive tests on the solution, and really try to break the code. If it stands up, then release it out to customers. For more critical issues, make the fix, give it some basic tests, and get it out to users ASAP (potentially even skipping the test environment deployment). – Matthew Pigram Mar 12 '14 at 23:37
  • +1 for Responsiveness vs Debugginess. Everything depends on context. – WernerCD Mar 13 '14 at 00:36

I feel that releasing several patches in close proximity reflects poorly on the company. It always makes me feel like they didn't test thoroughly enough up front, that the developers are incompetent, or that the management has no idea what they are doing.

That being said, the other side of the coin is that releasing several patches close to one another shows that the company is taking a proactive approach to its product and is continuing to improve it for the consumer.

I'm more inclined to think that the former is how most consumers will look at it. Speaking directly with them would (obviously) be the best choice, but it would also raise an issue within the customer base that they may not have been aware of initially.

Jack
  • So, bottom line: should we patch only those customers who really need it right away, and the others later in 'bulk', to improve our image? – Mixxiphoid Mar 12 '14 at 19:44
  • Patching for individual customers sounds like it could be a headache, especially if there is a large customer base. Rolling out patches on a regular schedule (monthly, bi-monthly, etc.) and promoting them to the customers could be a good way to drum up interest in "what's coming next" from your product, as well as address the issues that are being ironed out. Of course, proper documentation and notification are crucial for communicating with the end user through patch notes. – Jack Mar 12 '14 at 19:48
  • That really clarifies a lot for me. Seems like we should put some effort in using our patches to improve our image. Until now I was convinced that was not possible. Always saw patches as a necessary evil. – Mixxiphoid Mar 12 '14 at 19:52
  • depends on when in the release cycle too. If patches are close together in the first days after release, that gives a different impression from when they're (still) close together several months later. – jwenting Mar 13 '14 at 11:39

More and more companies are following in Chrome's footsteps and shipping customer releases more frequently.

The prerequisite for implementing short release cycles is a painless upgrade. In Chrome, for example, the upgrade is applied without any user intervention at application start-up; if the user keeps the application open all the time, they receive a small flag advising them to restart at a convenient moment, and the application then makes the effort to return them to their previous state after the restart.
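
Purely as an illustration of that flow (this is not Chrome's actual mechanism; every name in it, such as `check_for_update` and `SessionStore`, is made up for the sketch), a background updater in this style boils down to three pieces: poll and stage quietly, only raise a flag, and save and restore session state around the restart:

```python
import json
import os
import threading
import time
from typing import Optional

STATE_FILE = "session_state.json"   # where the previous session is persisted
CHECK_INTERVAL_SECONDS = 3600       # how often to poll for a new version


class SessionStore:
    """Persists just enough state to put the user back where they were."""

    def save(self, state: dict) -> None:
        with open(STATE_FILE, "w") as fh:
            json.dump(state, fh)

    def restore(self) -> dict:
        if not os.path.exists(STATE_FILE):
            return {}
        with open(STATE_FILE) as fh:
            return json.load(fh)


def check_for_update(current_version: str) -> Optional[str]:
    """Stub: ask an update server whether a newer version exists.

    A real implementation would call the vendor's update endpoint and
    download/stage the new build; the stub returns None so the sketch
    runs standalone."""
    return None


def background_update_loop(app: "App") -> None:
    """Runs in a daemon thread and never interrupts the user directly."""
    while True:
        if check_for_update(app.version) is not None:
            app.restart_pending = True   # drives the small "please restart" badge
        time.sleep(CHECK_INTERVAL_SECONDS)


class App:
    def __init__(self, version: str) -> None:
        self.version = version
        self.restart_pending = False
        self.session = SessionStore()

    def start(self) -> None:
        # On start-up the staged update (if any) is already in place;
        # restore the previous session so the restart feels seamless.
        print("restored state:", self.session.restore())
        threading.Thread(target=background_update_loop, args=(self,), daemon=True).start()

    def shutdown(self, current_state: dict) -> None:
        # Save state so the post-restart session looks identical to the user.
        self.session.save(current_state)


if __name__ == "__main__":
    app = App(version="1.0.0")
    app.start()
    app.shutdown({"open_documents": ["report.txt"], "window_size": [1280, 800]})
```

The important design choice is that the update loop only ever sets a flag; the user decides when the restart actually happens.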

This method leaves the customer happy, as he does not need to be aware of every update, and since feature releases come frequently, bug fix releases will be just as welcome.

If, on the other hand, patches follow glaring show-stopper bugs, and they come in clusters because earlier ones failed to fix the bug or introduced a worse one, rest assured your customers will smell it. That will definitely reflect poorly on the vendor's professional reputation and lower the vendor's perceived software standards. Continuous delivery relies heavily on effective unit testing and integration testing to guarantee its success.

On the matter of not talking to your customers to 'let sleeping dogs lie', I believe this is the wrong strategy: customers are not blind, and they are no fools. Good communication with your customers can only reassure them that they are your priority and that you are receptive to their criticism. If you have delivered bad releases a couple of times and you do not hear them complain, you should definitely be worried. It is not that they did not notice; more likely they are just too busy finding a replacement for you...

Uri Agassi
  • +1, as a frequent customer of software I want the guys with frequent updates and good ways to deploy them. Products that stagnate are the real red flag here -- at the very least it means the vendor isn't investing in the product. Or investing in vNEXT that they want you to pay for all over again. – Wyatt Barnett Mar 12 '14 at 21:02
  • What I understand from your last paragraph is that we should always be honest and transparent in our communication toward the customer. Are there situations we should not (yet) inform the customer about certain things? – Mixxiphoid Mar 13 '14 at 06:53
  • Of course, being honest with the customer does not mean leaving the line open as you convene a panic-meeting to mitigate a just-found disaster. You should communicate the information _after_ you have assessed the situation, have a strategy to fix it, and can honestly say that everything is under control. You may find yourself _embellishing_ the truth, but downright lies have a way of haunting you later on... – Uri Agassi Mar 13 '14 at 07:33

Patches specifically for customers that have detected a problem are obviously going to need to go out as soon as possible.

I have seen large companies then take the approach that other customers receive those patches as a service pack at regular, scheduled intervals. That is normally done because the patches take some effort to install and test in the customer's environment, but in your case it could simply be used to lessen the possible impact of the effect you are concerned about.

I have also seen people advocate putting patches up in repositories or on websites where customers can download and install the ones they want. This can create problems with knowing which patches each customer has, so when they call in with a problem you have to determine exactly what code they are running, but with care that can be tracked. You can then force people to upgrade to one of the larger 'packs' when one is released that bundles up lots of patches.
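
To make that tracking concrete, here is a rough sketch of the bookkeeping (the names `Patch`, `CustomerRecord` and `build_service_pack` are invented for the example, not any particular vendor's system): record which patch ids each customer has applied, and roll everything a customer is still missing into the next scheduled 'pack'.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Set


@dataclass(frozen=True)
class Patch:
    patch_id: str           # e.g. "P-2014-031"
    description: str
    critical: bool = False  # critical/security fixes go out immediately


@dataclass
class CustomerRecord:
    name: str
    installed: Set[str] = field(default_factory=set)  # patch ids already applied


class PatchRegistry:
    def __init__(self) -> None:
        self.patches: Dict[str, Patch] = {}
        self.customers: Dict[str, CustomerRecord] = {}

    def release_patch(self, patch: Patch, customers_needing_it: List[str]) -> None:
        """Record a patch; push it right away only to the customers who need it."""
        self.patches[patch.patch_id] = patch
        for name in customers_needing_it:
            record = self.customers.setdefault(name, CustomerRecord(name))
            record.installed.add(patch.patch_id)

    def missing_for(self, customer: str) -> List[Patch]:
        """Which released patches is this customer still running without?"""
        record = self.customers.setdefault(customer, CustomerRecord(customer))
        return [p for pid, p in self.patches.items() if pid not in record.installed]

    def build_service_pack(self, customer: str) -> List[Patch]:
        """Bundle everything the customer is missing into one scheduled rollout."""
        pack = self.missing_for(customer)
        self.customers[customer].installed.update(p.patch_id for p in pack)
        return pack


if __name__ == "__main__":
    registry = PatchRegistry()
    registry.release_patch(Patch("P-001", "fix invoice rounding", critical=True),
                           customers_needing_it=["Acme"])
    registry.release_patch(Patch("P-002", "report layout tweak"),
                           customers_needing_it=[])
    # Acme already has P-001; the scheduled service pack gives them P-002.
    print([p.patch_id for p in registry.build_service_pack("Acme")])
```

The same registry answers the support question ("exactly what code is this customer running?") and produces the bundle for the scheduled rollout.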

The exception to that kind of scheduled bundling is security patches. A large Washington-based software company has been known to get into hot water by waiting for its monthly patch day before releasing critical security fixes, only for information about the vulnerability to leak out and force its hand early, to even greater embarrassment.

Google Chrome gets around the issue by auto-updating very frequently; it too requires you to cycle the program (restart Chrome, or in your case log out). Google has made that normal practice for browsers, and people don't even think about it any more. But not everyone can be Google.

SaaS applications get around the whole issue by doing the updates in the background.

However, above all, unless you are doing continuous integration or shipping new user-requested features very frequently, I think we are still in a time when people expect you to have done a decent amount of testing before release. If you would be embarrassed to meet your customers and talk about the frequency of your bug fixes, you are probably not doing enough testing. Did you realise how much of a risk you were taking before you released the code? There is an argument for releasing very early, buggy code as long as you know that is what it is, but you need a good understanding of your known quality, which means understanding, and keeping under control, the time it takes you to know your quality.

Encaitar
  • +1, that's the key point - the quicker you can fix a bug (and deploy), the better - as long as the user/customer does not have any additional effort with the deployment. When the customer has to deploy manually, or updates will just interrupt his workflow, it is important to find the right frequency for deployments. Sites like Facebook will deploy several patches a day and most people won't even notice. – Doc Brown Mar 12 '14 at 20:21
  • so I guess we're lucky on that part. Our updates cost us (beside the stress and coding and all) only 1 or 2 hours. It costs the customer 1 minute to get back to work. I will look into the 'service pack' approach, this may indeed come in handy for those who do not need the patch directly. – Mixxiphoid Mar 12 '14 at 20:26
  • Found this reference for Facebook: http://blogs.wsj.com/cio/2013/04/17/how-facebook-pushes-two-releases-per-day/, so there seem to be two releases, not several, per day. Still impressive, I guess. – Doc Brown Mar 12 '14 at 20:31
  • I 'heard' that amazon push code every 17 seconds. But I'm putting it in a comment because I can't remember who told me and a google doesn't show it. :-) – Encaitar Mar 12 '14 at 23:42
  • @Encaitar: Right, Amazon's architecture has hundreds of interacting services. So I'm not surprised if they push something extremely frequently, but I very much doubt that each push *directly* affects more than one component. What you see as a single website doesn't necessarily have an overall version. It's more like saying the city road network is updated every 17 seconds because your crews paint in total 5 thousand fresh lines a day :-) – Steve Jessop Mar 13 '14 at 13:15