3

There comes a point where you make design choices and have to debate them with management. In my case I have to defend my positions and design choices to senior management, but it is frustrating that management strives only for performance, while I think stability is a must and performance can be achieved later.

E.g., we are facing a design choice about building a recovery mechanism due to a lack of transactionality in certain processes, i.e., we need to guarantee that those processes either complete fully or roll back the changes they made to the database. The current code makes this difficult because we are using stored procedures that manage their own transactions. This means that if the process calls three or four stored procedures, there are three or four transactions, and if we want a recovery process we need to roll back those changes (yes, they are already committed at that point, so we would have to run additional transactions against the database to leave it in a consistent state, or at least somehow "ignore" those records).

Of course, I wanted to remove the transactions from the stored procedures and commit the transaction in the code after the process ends, or roll back there if the process throws exceptions.
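A minimal sketch of that idea, using Python's sqlite3 as a stand-in for the real database (the table and step names are invented for illustration; in the real system the two statements would be the stored-procedure calls, stripped of their internal commits):

```python
import sqlite3

def run_process(conn: sqlite3.Connection) -> None:
    """Run every step inside ONE transaction owned by the caller.

    The two INSERTs stand in for the stored procedures, which no
    longer commit on their own."""
    try:
        conn.execute("INSERT INTO orders (item) VALUES ('widget')")     # step 1
        conn.execute("INSERT INTO audit (note) VALUES ('order made')")  # step 2
        conn.commit()    # single commit once the whole process succeeded
    except Exception:
        conn.rollback()  # all steps undone together; no recovery process needed
        raise

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (item TEXT)")
conn.execute("CREATE TABLE audit (note TEXT)")
run_process(conn)
```

Either every step is committed or none is, so the compensating "recovery process" becomes unnecessary.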

The thing is, management thinks this approach will make the process slow and will also have a big impact on our code. I think that is correct, but I also think that building the rollback process ourselves is plainly reinventing the wheel, error prone, and IMHO will take too long to stabilize.

So, given the previous example, what would be the most beneficial approach in such cases? I mean, I want a win-win situation, but I think it is plainly impossible to agree on this, because every time I try to talk about it I get responses like "there should be another way", "you should not tell me there is no way around this", "this is not feasible", "the performance will degrade", etc., and I think I will end up building this faux recovery process just to comply with management.

OTOH, I could be wrong, and perhaps I should just do what I am told without complaining.

ElderMael
"management thinks that this approach will make the process slow" Based on what? Have they implemented it and measured? If not, then they have no argument. – rossipedia Sep 14 '13 at 06:31
  • @rossipedia I think the consensus there is that "too many records will be locked", because the process has many factors that could make it last anywhere from 2 to 30 minutes. We even have alerts for the latter case. – ElderMael Sep 14 '13 at 06:54
  • This would indicate that there is a performance issue. You will have to come up with some approach that results in maintainable code AND does not degrade performance. – Shamit Verma Sep 17 '13 at 09:21
  • Well, in my opinion, one should write stable code first, then slowly build up to efficiency. I can't really give much detail, though; I always get straight to the point. – Myrl Sep 14 '13 at 06:50

4 Answers

6

Code should be maintainable, valid, and efficient, in that order of priority.

There is no point in having high-performance code that doesn't do what it is supposed to do (there goes robustness), and whenever you hit a bug or a performance issue, it is maintainability that is going to make the difference. I have no experience with databases, but I would be surprised if this principle didn't apply there too.

Now the difficult part is getting management to accept that. And by the way, is performance even a problem yet?

Julien Guertault
  • I am aware of performance metrics in the web layer, but I do not think they apply to the backend process I am talking about. I truly do not know whether performance there is lacking, but if I had to guess I would say no, performance is not a problem right now. – ElderMael Sep 14 '13 at 06:55
  • Then profile it. You can't work on performance without data, and if data shows performance is not an issue, then you win on management too. – Julien Guertault Sep 14 '13 at 07:23
  • 1
    Why would maintainable code that *doesn't work* be preferred over unmaintainable code that works? Fitness trumps all other concerns imo. – Rein Henrichs Sep 14 '13 at 08:50
  • 1
    @ReinHenrichs: because if it's maintainable, it can be fixed, which is much better than having code that works^W is believed to work but nobody can deal with. – Julien Guertault Sep 14 '13 at 09:20
  • @ReinHenrichs: that said, I agree if we had to stick with only one, it would be that code should work, period. – Julien Guertault Sep 14 '13 at 09:25
  • Without knowing the context of the problem and the environment you can never know what is most important. For example for a one off etl program I would care more about it being correct and efficient rather than maintainable. – dietbuddha Sep 14 '13 at 20:51
  • Nope, working code that cannot be maintained is preferable - hence all those ancient VB6 or Java programs that enterprises still use. Note that you said "if it's maintainable then it can be fixed", which is another way of saying "it still doesn't work yet". – gbjbaanb Sep 14 '13 at 22:41
  • @gbjbaanb: the fact unmaintainable code exists doesn't mean it is preferable. As you say, those companies still use that code. So what if it has to be changed? What if they discover a case where it doesn't work? Code is not carved in stone, and this is why "if its maintainable then it can be fixed" is important. – Julien Guertault Sep 15 '13 at 07:38
  • 2
    In the end it comes down to two things: shipping software that end users can get value from; and changing that software over time in response to feedback so that it continues to provide value. We should be thinking about fitness, maintainability and performance in those terms. – Rein Henrichs Sep 16 '13 at 00:25
2

I guess all the good advice here on PSE to prefer maintainable code over fast code won't convince your management as long as both of you have only opinions, but no facts. So here is my advice on how you might act in your specific situation.

Having stored procedures make their own commits, without the ability to control the transactions from outside, makes it really hard to reuse those procedures in a combined process (that is true for any kind of code updating a database, stored procedure or not). I see this as a serious design error, typically made by beginners (though I have seen this kind of thing far too often from more or less "experienced" devs).

The case is that management thinks that this approach [...] will impact greatly in our code.

Of course it will impact your code (it will improve its design). But IMHO, in most cases you can refactor the existing code in a way where the risk of breaking things is not too big. For each of those stored procedures, add a parameter (for example, bool autoCommit) which lets you optionally switch the commit off. Leave the commit enabled by default (which typically has only a small impact on the existing code), and make sure you did not break anything so far.
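The optional-commit parameter described above can be sketched like this, again using Python functions as stand-ins for the stored procedures (the names insert_order and auto_commit are invented; in the real system the flag would be a parameter of the actual stored procedure):

```python
import sqlite3

def insert_order(conn: sqlite3.Connection, item: str,
                 auto_commit: bool = True) -> None:
    """Stand-in for a stored procedure: commits by default (the old
    behavior), but can leave transaction control to the caller."""
    conn.execute("INSERT INTO orders (item) VALUES (?)", (item,))
    if auto_commit:
        conn.commit()  # legacy behavior, unchanged for existing callers

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (item TEXT)")
conn.commit()

# Old call sites keep working unchanged:
insert_order(conn, "widget")

# The new combined process controls the transaction from outside:
try:
    insert_order(conn, "gadget", auto_commit=False)
    insert_order(conn, "gizmo", auto_commit=False)
    conn.commit()    # both inserts become visible together
except Exception:
    conn.rollback()  # or neither does
    raise
```

Because the default preserves the old behavior, the refactoring can be rolled out one call site at a time.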

Then, make a test version of your code (or at least of a representative example) which uses the new feature to control the transaction from outside. This gives you an opportunity to measure the performance impact, and to prove or disprove management's objections. It will also allow a direct comparison of the old and the new code, showing how much better the new solution is.

Later, you may think about refactoring the changed procedures to a state where the autoCommit parameter is no longer needed.

Doc Brown
1

What you look for are two things: correctness and efficiency. It just so happens that sufficient simplicity will give you evident correctness and make efficiency easy to implement.

However, your system is long past that point. Unless there is some drastically different approach to the problem that will greatly simplify things, achieving both correctness and efficiency will be hard. If you want both to emerge from the same code, you will probably fail.

The only approach here is to use tests. You will have a separate body of simple code that is concerned not with efficiency at all, but with correctness. And once your tests are in place, you can implement the actual solution and wrestle with it until it's fast enough while all tests pass.
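As an illustration of that split (the deduplication problem here is invented, not the poster's; it only shows the pattern), the simple body of code pins down correctness and the test suite holds the fast version to it:

```python
def dedupe_reference(items):
    """Obviously-correct but O(n^2) reference: keep the first
    occurrence of each item, preserving order."""
    out = []
    for x in items:
        if x not in out:
            out.append(x)
    return out

def dedupe_fast(items):
    """Optimized O(n) version; the tests check it against the reference."""
    seen = set()
    return [x for x in items if not (x in seen or seen.add(x))]

# The test suite ties the fast version to the simple one:
for sample in ([], [1, 1, 2], [3, 2, 3, 1, 2], list("abracadabra")):
    assert dedupe_fast(sample) == dedupe_reference(sample)
```

The reference implementation is never shipped; its only job is to make the optimized one safe to wrestle with.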

back2dos
0

Well-performing code that is unstable is no good.

So the first and foremost concern should be achieving stability in the code, and the way to do that is TESTS: build yourself a great test suite.

When you have a great test suite backing you, you should then do load/stress testing, and if you find the performance lacking, then and only then should you try to improve it.

And if you do something wrong while optimizing for performance, your tests will warn you well ahead of production.

Steps:

  • Tests, tests and tests
  • Flexible, Readable, Testable code
  • Load testing
  • Performance optimization if required and proved lacking from load/stress testing
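The "load testing" step above can start as something very simple: time the process under a representative workload and compare it against an agreed budget before deciding any optimization is needed. A minimal sketch (process and the workload are invented placeholders):

```python
import time

def process(records):
    """Stand-in for the backend process under test."""
    return sorted(records)

# A representative workload, large enough to be measurable:
records = list(range(100_000, 0, -1))

start = time.perf_counter()
process(records)
elapsed = time.perf_counter() - start
print(f"processed {len(records)} records in {elapsed:.3f}s")
# Only if `elapsed` exceeds the agreed budget is optimization justified.
```

Numbers like these turn the performance debate with management into a discussion about facts rather than opinions.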
Narendra Pathai