My problem is to classify an input time-series signal (128 points, equidistant in time, values in the range 0.0..1.0) into a limited number of classes (say 16). We have hand-classified 20000 samples; we use 2000 as the training set and the other 18000 to evaluate our classification algorithm (the test set).
In my analysis I found that classification using the Pearson correlation coefficient works quite well (https://en.wikipedia.org/wiki/Pearson_correlation_coefficient).
My "standard" class templates are calculated as per-point mean values over the training set. I correlate each test-set sample with every class template, and the winning class is the one with the highest coefficient.
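To make the setup concrete, here is a minimal sketch of that template-matching classifier, assuming the data live in NumPy arrays; the names `train_X`, `train_y`, etc. are hypothetical:

```python
import numpy as np

def build_templates(train_X, train_y, n_classes=16):
    """Per-class "standard" template: the per-point mean signal,
    so templates has shape (n_classes, 128)."""
    return np.stack([train_X[train_y == c].mean(axis=0)
                     for c in range(n_classes)])

def classify(sample, templates):
    """Return the class whose template has the highest
    Pearson correlation with the sample."""
    r = np.array([np.corrcoef(sample, t)[0, 1] for t in templates])
    return int(np.argmax(r))
```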
The problem is that I found that different parts of the signal are not equally important. Some parts of the signal fluctuate widely between samples of the same class, while others are very similar.
So I came upon the idea of including some sort of per-point weight in the correlation coefficient. Each point's weight would be inversely proportional to the variance of that point across the class's training samples.
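What I have in mind is the standard weighted form of the Pearson coefficient, with weighted means $\bar{x}_w = \sum_i w_i x_i / \sum_i w_i$ and

$$r_w = \frac{\sum_i w_i (x_i - \bar{x}_w)(y_i - \bar{y}_w)}{\sqrt{\sum_i w_i (x_i - \bar{x}_w)^2 \; \sum_i w_i (y_i - \bar{y}_w)^2}},$$

with $w_i$ set to the inverse per-point training variance of the candidate class. A sketch of how I would plug this into the classifier above (the `eps` guard against zero-variance points is my own addition, not part of any established method):

```python
import numpy as np

def weighted_pearson(x, y, w):
    """Pearson correlation of x and y under per-point weights w."""
    w = w / w.sum()                      # normalize so weights sum to 1
    mx, my = np.sum(w * x), np.sum(w * y)
    cov_xy = np.sum(w * (x - mx) * (y - my))
    var_x = np.sum(w * (x - mx) ** 2)
    var_y = np.sum(w * (y - my) ** 2)
    return cov_xy / np.sqrt(var_x * var_y)

def classify_weighted(sample, templates, class_var, eps=1e-8):
    """Pick the class maximizing the variance-weighted correlation.
    class_var[c] is the per-point variance of class c's training
    samples; the weight is its inverse, as proposed above."""
    scores = [weighted_pearson(sample, templates[c],
                               1.0 / (class_var[c] + eps))
              for c in range(len(templates))]
    return int(np.argmax(scores))
```

Here `class_var` would be computed analogously to the templates, e.g. `np.stack([train_X[train_y == c].var(axis=0) for c in range(16)])`.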
I can't believe this is a novel idea, but in my search so far I have not found anything like it. If this is a known method, please let me know. If somebody has had a similar problem and used a similar method, please report your results. If no one has heard of this approach, please tell me what you think of it, statistically speaking. Thank you!