I have a couple of questions regarding PID controllers; maybe I have not understood the concept very well.
I will illustrate my question with a concrete case:
I have a system consisting of a warp beam on which a thread is wound and a motor that rotates the warp beam (“Process” according to the picture below). This thread should be fed at a constant tension to another machine. The motor is connected to a motion controller to which we only have to specify the speed at which the motor should rotate, so we have a PID controller to calculate the speed at which the motor should run. To measure the tension of the thread we have a tension sensor.
Now the input of the PID controller is the error, which has units of tension (Newtons converted to mA), but we are feeding the output of the PID controller to the process as the rpm for the motor. How is that possible? I mean, at the input of the PID I have units of Newtons, while the value going to the process should be in rpm. Is the conversion between these units done “through” the units of the Kp, Ki and Kd parameters?
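To make this concrete, here is how I picture the controller in code (a minimal sketch; the function, the gain values and their units are my own assumptions, not the actual controller):

```python
# Minimal discrete PID step. The error comes in as newtons; my
# assumption is that the gains carry the unit conversion, e.g.
# kp in rpm/N, ki in rpm/(N*s), kd in rpm*s/N, so the output is rpm.

def pid_step(error_n, state, kp, ki, kd, dt):
    """One PID update; returns (output_rpm, new_state)."""
    integral, prev_error = state
    integral += error_n * dt                      # accumulate error over time (N*s)
    derivative = (error_n - prev_error) / dt      # rate of change of error (N/s)
    output_rpm = kp * error_n + ki * integral + kd * derivative
    return output_rpm, (integral, error_n)

# Illustrative call: 0.5 N of tension error with kp = 100 rpm/N
state = (0.0, 0.0)
rpm, state = pid_step(0.5, state, kp=100.0, ki=0.0, kd=0.0, dt=0.01)
print(rpm)  # 50.0 rpm for a 0.5 N error when kp = 100 rpm/N
```

Is this the right way to think about it, i.e. that the gains are not dimensionless?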
And my second question: Let's say for a moment that the tension in the thread matches the set-point value, which means the error is zero (or at least within an acceptable range, so we can consider it zero), and thus the controller output will be zero. But if the PID output is zero, the input to the process is zero, which I do not understand, because the motor should keep running at least at the same speed. So how is that possible? Or what is wrong with my analysis?
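My reasoning, written out as a toy calculation (all numbers are invented, just to show the case that confuses me; the only part I suspect I am missing is the integral term's memory):

```python
# Toy PID output at zero error. If the integral term has accumulated
# past error, the output need not be zero even when the current error is.

def pid_output(error, integral, kp, ki):
    # error in N, integral in N*s, kp in rpm/N, ki in rpm/(N*s)
    return kp * error + ki * integral

# Suppose that after some running time the accumulated integral is 3.0 N*s.
out = pid_output(error=0.0, integral=3.0, kp=100.0, ki=50.0)
print(out)  # 150.0 rpm even though the current error is 0
```

Is it correct that it is the integral part that keeps the motor turning once the tension has settled at the set point?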