Is scientific code a different enough realm to ignore common coding standards?
No, it's not.
Research code is often "throw-away" and written by people who are not developers by background, however strong their academic credentials. Some of the research code I wrote would make current me cry. But it worked!
One thing to consider is that a project's gatekeepers drive what gets included. If a large project started as an academic/research effort, ends up working, and is now a mess, someone has to take the initiative to refactor it.
It takes a lot of work to refactor existing code that is not causing problems, especially if it is at all domain-specific or lacks tests. You will see that OpenCV has a comprehensive, if imperfect, style guide. Applying it retroactively to all existing code? That is... not for the faint of heart.
This is even more difficult if all that code works. Because it's not broken. Why fix it?
Yet these projects prosper, are maintained and widely used!
This is the answer, in a sense. Working code is still useful and so it is more likely to be maintained.
It might be a mess, especially initially. Some of these projects probably started as a one-off that "would never need to be reused and could be thrown away."
Also consider that if you are implementing a complex algorithm, it may make more sense to have larger methods, because you (and others familiar with the scientific side) can understand the algorithm better conceptually. My thesis work was related to optimization. Having the main algorithm logic as one method was considerably easier to understand than it would have been had I tried to break it apart. It certainly violated the "7 lines per method" rule, but it also meant that another researcher could look at my code and more quickly understand my modifications to the algorithm.
Had this implementation been abstracted away and "designed well," that transparency would have been lost to non-programmers.
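To make the point concrete, here is a toy sketch (my own hypothetical example, not the thesis code) of a one-dimensional gradient descent kept as a single function, so the loop reads line-for-line like the pseudocode in an optimization paper:

```python
def gradient_descent(grad, x0, step=0.1, tol=1e-8, max_iter=1000):
    """Minimize a 1-D function given its gradient `grad`, starting from `x0`.

    Deliberately one method: a researcher comparing this against the
    paper's pseudocode sees the whole algorithm in one place.
    """
    x = x0
    for _ in range(max_iter):
        g = grad(x)
        if abs(g) < tol:       # convergence check, as in the pseudocode
            break
        x = x - step * g       # the update rule, recognizable at a glance
    return x

# Example: minimize f(x) = (x - 3)^2, whose gradient is 2*(x - 3).
minimum = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
```

Splitting the convergence check and update rule into separately "well-designed" helper objects would satisfy a style guide, but a domain scientist could no longer verify the algorithm against the paper at a glance.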
To fellow answerers: this question refers to the code base of open-source libraries for computationally intensive tasks in one or more scientific domains. It is not about throwaway code. Please pause for a moment to make sure you have grasped every highlighted aspect before writing an answer.
I think people often have this idea that all open source projects start as, "hey, I have a great idea for a library that will be wildly popular and used by thousands or millions of others," and that every project unfolds that way.
The reality is that many projects are started and die. A vanishingly small percentage of projects "make it" to the level of OpenCV or VTK.
OpenCV started as a research project at Intel. Wikipedia describes it as being part of a "series of projects." Its first non-beta release was in 2006, seven years after the project began. I suspect the initial goal was meaningful beta releases, not perfect code.
Additionally, the "ownership" of OpenCV has changed significantly over time. That tends to change standards, unless every responsible party adopts the exact same standards and keeps them for the duration of the project.
I should also point out that OpenCV existed for several years before the Agile Manifesto, from which Clean Code draws inspiration, was published (VTK for almost ten). VTK was started 17 years before Clean Code was published; OpenCV "only" 9 years before.