The main purpose of standards and patterns is to improve the efficiency with which developers create and interact with code.
They write code in a particular way, and expect to read code expressed in a particular way; by doing this they reduce the time spent evaluating how best to write code, and the time spent interpreting and understanding the code they read.
This is crucial for developers for two reasons. Firstly, code can have all sorts of behaviours (including unintended or unexpected ones) - getting to grips with what exactly any piece of code does (and should do) can take a large amount of effort and experience. It is not just a question of what a single operation does, but of how an ensemble of operations or a whole system integrates, and whether it meets the goals for which it was written.
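To make that concrete, here is a small sketch (in Python, chosen only for brevity; the function and its names are hypothetical) of how even a trivial piece of code can hide behaviour that takes real effort to uncover:

```python
def append_tag(tag, tags=[]):
    """Append a tag to a list of tags, defaulting to an empty list."""
    tags.append(tag)
    return tags

# The surprise: the default list is created once, at definition time,
# and is shared between every call that relies on the default.
first = append_tag("a")   # ["a"]
second = append_tag("b")  # ["a", "b"] - not the ["b"] a reader might expect
```

Nothing about the function's signature advertises this; a reader only discovers it by knowing the language's evaluation rules or by being bitten.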
Secondly, two sets of code that do exactly the same things, or at least have the same external effects and meet the same goals, can be expressed in an almost infinite number of ways. It can take more effort and experience to determine that two pieces of code do the same thing than it does to write a single new piece of code consistent with an understanding you already have.
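For example (again a sketch in Python), each of the following functions has exactly the same external behaviour, yet a reader must expend real effort to convince themselves of that:

```python
from functools import reduce

def total_loop(values):
    """Sum a sequence using an explicit loop."""
    result = 0
    for v in values:
        result += v
    return result

def total_reduce(values):
    """Sum a sequence by folding with an accumulator."""
    return reduce(lambda acc, v: acc + v, values, 0)

def total_recursive(values):
    """Sum a sequence recursively (head plus sum of the tail)."""
    values = list(values)
    return 0 if not values else values[0] + total_recursive(values[1:])

# All three return 6 for [1, 2, 3], but verifying their equivalence
# takes more work than writing any one of them from scratch.
```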
It is apparent, then, that not only does code require a large investment to understand its full behaviour, but that differences in form (even with equivalent behaviour) can require large investments to ascertain that the difference is indeed only in form.
These problems present themselves equally whether you are reading or writing the code, and whether it is your own code from six months ago or another developer's code you are reading - except that, if you are reading your own code, you might still have a partial recollection of all that went into it.
This is why all developers attempt to leverage patterns and standards in their work (including self-imposed ones): making costly and exhausting investments in understanding, embodying the results in memorised patterns and standards, and then following those rather slavishly (without a full re-analysis every time) is the only way they can achieve a reasonable pace in writing and reading code - at least, if the code is to be useful and correct.
When do developers talk of changing standards for an existing product?
Usually it is when they were not involved from the start, and have no prior investment in the product or in an understanding of its patterns that would allow them to work quickly with it. Perhaps they have no understanding of any relevant pattern, and want to be sure they are investing wisely (especially in patterns that will be useful in future). Or perhaps they already have a significant investment in an alternative pattern (perhaps enabled by newer technology or tooling, used elsewhere), and want to get on with writing new code quickly according to that pattern, rather than spending time investing in an apparently obsolete one. Or perhaps they want to invest in a new pattern with a view to creating a new product to a better standard in future - perhaps for a different employer.
Should a standard on an existing product be changed, or mixed-and-matched with the old one? That really depends on an overall impression of why the original standard was adopted, the relative merits of each standard (weighed against the additional complexity and possible integration challenges introduced), the extent to which the new work is cleanly separated from the old, and indeed perhaps personal career goals which an investment in a new standard may suit.
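As a hypothetical illustration of the mixed case (all names here are invented, and the database is assumed to be a simple dict-like store), consider a codebase where older code signals failure with a sentinel value while newer code raises an exception; every boundary between the two styles needs an explicit translation layer:

```python
# Older code, written to a "return None on failure" standard.
def find_user_legacy(user_id, db):
    """Return the user record, or None if it does not exist."""
    return db.get(user_id)

# Newer code, written to a "raise on failure" standard.
class UserNotFound(Exception):
    pass

def find_user(user_id, db):
    """Return the user record, raising UserNotFound if absent."""
    record = db.get(user_id)
    if record is None:
        raise UserNotFound(user_id)
    return record

# Wherever the two standards meet, an adapter is needed - this is
# the integration cost that mixing standards introduces.
def find_user_bridged(user_id, db):
    try:
        return find_user(user_id, db)
    except UserNotFound:
        return None  # translate back to the legacy convention
```

The adapters are not hard to write, but they are pure overhead: they exist only because two conventions coexist, and every reader must now hold both in mind.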
It's a question that is almost impossible to answer in the abstract, except by talking at length about the nature of the problem and things you might want to take into consideration in making a judgment.