The person updating the test had better have a firm understanding of the reasoning behind the code change and decide whether the test or the new code needs to be fixed. This much inconsistency in a team when it comes to testing is probably an indication of other problems as well.
I think it's the other way around. When a test is good, its failure should guide you in deciding whether (and why, and, ideally, how) to modify the code or the test.
In that sense, responsibility starts not with the person updating the test but with the person who created it.
Whoever creates a test had better have a firm understanding of the reasoning behind the code. If that's not the case, the test is of pretty little value, and one had better stop worrying about changing it, or even remove it if it really stands in the way of development.
If a test produces nothing but incomprehensible cries of wolf at reasonable code changes, the issue is in the test: that is what needs to be fixed.
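To make the "crying wolf" point concrete, here is a minimal Python sketch (the `format_price` function and both test classes are hypothetical). The brittle test pins an implementation detail and will fail on any harmless refactor; the robust test checks only observable behavior, so it fails exactly when the behavior actually regresses.

```python
import inspect
import unittest

# Hypothetical function under test: formats a price given in cents.
def format_price(cents):
    return "$" + format(cents / 100, ".2f")

class BrittleTest(unittest.TestCase):
    # Cries wolf: asserts on the exact source text, so a harmless
    # refactor (say, switching to an f-string) that leaves the
    # behavior identical still breaks this test.
    def test_exact_source(self):
        self.assertIn('format(cents / 100, ".2f")',
                      inspect.getsource(format_price))

class RobustTest(unittest.TestCase):
    # Tests observable behavior only, so it survives any refactor
    # that keeps the code's actual promise intact.
    def test_behavior(self):
        self.assertEqual(format_price(1999), "$19.99")
        self.assertEqual(format_price(5), "$0.05")
```

Run with `python -m unittest` in the module's directory; the brittle test is the one a reasonable refactor would turn into a false alarm.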
There is a certain amount of risk involved that someone else on the team will alter code that is covered by testing. Is this the point where testing becomes counter-productive?
Less productive, maybe. Counterproductive, hardly.
The time lag between code changes and test execution may feel unnatural to someone used to mature forms of unit testing, and it may indeed incur some loss of effectiveness, but in other forms of testing (functional, integration, QA) such a lag is quite common and is not considered a reason to drop testing.
To deal with test execution that lags behind code changes, one probably has to step outside the box of "classical" unit testing approaches and look at how our colleagues the testers do their job.
Now, imagine that someone changed the code some time ago, and some time later you run a test and it fails. Think of how a tester would handle that...
- Study the test failure.
- If the test helps you find a bug in the code, great: open a ticket in the issue tracker to get it fixed.
Note that you can later use these tickets to justify the benefits of running tests at build time (that is, if the tests are indeed that helpful).
- If the test does not help, modify it to match the code changes (are you afraid of that? if so, that can indeed become counterproductive).
- If you feel there is an issue in the test... guess what? Open a ticket in the issue tracker to get it fixed.
Note that you can later use this data to explain why regression bugs slip through and to justify the need to invest effort in testing.
Here is where you may experience some loss of productivity, because you are acting as a tester with a tool intended for developers. That is not good, but it's hardly the end of the world, don't you think?