In modeling, data processing, and optimization, the computationally intensive portions of the code often reduce to solving many systems of linear equations.
e.g.
solve for x1 in A x1 = b1
solve for x2 in A x2 = b2
... and so on
Sometimes the matrix 'A' doesn't change between computations, and my question is how (and when) I can take advantage of this with some pre-computation. Assume that you're already using linear algebra libraries (e.g. BLAS, LAPACK) optimized for your system, and that you can't reformulate the problem as a single batch operation for which LAPACK, MATLAB, etc. already have specialized functions.
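For concreteness, here is a minimal sketch (in Python/NumPy; the sizes and names are just illustrative) of the pattern I mean. Each call to solve() redoes the full factorization of A from scratch:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
A = rng.standard_normal((n, n))                     # fixed coefficient matrix
bs = [rng.standard_normal(n) for _ in range(100)]   # right-hand sides arriving one at a time

# Naive pattern: every call internally refactorizes A, an O(n^3) cost per solve.
xs = [np.linalg.solve(A, b) for b in bs]
```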
For example, one strategy is to compute and store an appropriate matrix decomposition (e.g. LUP or QR) once, rather than letting the linear algebra library recompute it internally on every call, as in the sketch below. However, I can find little guidance on which decompositions pair well with the corresponding back-substitution/solver routines (in particular when working with LAPACK), or on their relative merits in terms of speed and susceptibility to bad edge cases.
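Here is a minimal sketch of that strategy, assuming SciPy is available: lu_factor wraps LAPACK's getrf (the one-time LUP factorization), and lu_solve wraps getrs (the cheap per-right-hand-side triangular solves). The array names and sizes are placeholders.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
n = 500
A = rng.standard_normal((n, n))
bs = [rng.standard_normal(n) for _ in range(100)]

# Factor once: PA = LU via LAPACK's getrf, O(n^3).
lu, piv = lu_factor(A)

# Reuse the factors: each solve is just forward/back substitution
# via LAPACK's getrs, O(n^2) per right-hand side.
xs = [lu_solve((lu, piv), b) for b in bs]
```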
Note: one strategy known to be "bad" is computing the inverse matrix inv(A), due to both speed and accuracy issues. (MATLAB's documentation for inv() discusses this in greater detail.)