I'm reading a guide on back-propagation for neural nets, and it gives the error function as:
Next work out the error for neuron B. The error is "what you want minus what you actually get", in other words:

ErrorB = OutputB (1 - OutputB) (TargetB - OutputB)
But then, underneath that it says:
The “Output(1-Output)” term is necessary in the equation because of the Sigmoid Function – if we were only using a threshold neuron it would just be (Target – Output).
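From what I can tell, that Output(1-Output) term is apparently the sigmoid's derivative, written in terms of the neuron's own output: for o = sigmoid(x), the derivative works out to o(1 - o). As a Java lambda that would look something like this (my own guess at what the guide means, not something it spells out):

import java.util.function.UnaryOperator;

// Derivative of the sigmoid, expressed in terms of the output o = sigmoid(x):
// d(sigmoid)/dx = sigmoid(x) * (1 - sigmoid(x)) = o * (1 - o)
UnaryOperator<Double> sigmoidDerivative = o -> o * (1 - o);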
I am in fact using the sigmoid function for my net, but I'd like to write things to be as general as possible. I thought of trying something like this:
import java.util.ArrayList;
import java.util.List;
import java.util.function.UnaryOperator;

public List<Double> calcError(List<Double> expectedOutput, UnaryOperator<Double> derivative) {
    List<Double> actualOutput = outputLayer.getActivations();
    List<Double> errors = new ArrayList<>();
    for (int i = 0; i < actualOutput.size(); i++) {
        // Apply the derivative to the neuron's output (its activation),
        // then scale by the difference between target and actual output.
        errors.add(derivative.apply(actualOutput.get(i))
                * (expectedOutput.get(i) - actualOutput.get(i)));
    }
    return errors;
}
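For example, with the sigmoid I'd call it like this ("network" and "expected" here are just placeholders standing in for my own code):

// Hypothetical usage: "network" is whatever object holds outputLayer
List<Double> expected = List.of(0.0, 1.0);
List<Double> sigmoidErrors = network.calcError(expected, o -> o * (1 - o));

And for the threshold neuron the guide mentions, I'm guessing the derivative argument would just be o -> 1.0, which collapses the error to (Target - Output)?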
Here I let the caller pass in the derivative (I think that's what it's called), so the error can be calculated for any activation function.
The problem is, I haven't learned calculus yet, so I don't even know if this makes sense. Can I have the derivative passed in like this and still get accurate results? Or will the error calculation itself change depending on the activation function/derivative used?
I wasn't sure whether this should go on Math SE, since it contains code.