Almost every piece of software has errors, and those errors need to be given severity levels. Grave errors may simply stop your program, while simple notices can be resolved with a click. I've always handled this by assigning each error a numeric degree of importance (roughly as in the sketch below the questions). But is there a "general rule" among programmers on how to choose these degrees?
- Should a higher degree of importance be represented by a larger number (e.g. 500) or a smaller one (e.g. 5)? Is there a reason to prefer one over the other?
- Should error levels be widely spaced (100, 200, 300, ...) or close together (100, 101, 102)? And again, does either choice have practical advantages?
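To make the question concrete, here is a rough sketch of the kind of scheme I mean (the level names, values, and handling are just illustrative, not a scheme I'm claiming is standard):

```python
# Illustrative severity constants, widely spaced so new levels
# could be slotted in between later.
NOTICE   = 100   # informational, the user can dismiss it with a click
WARNING  = 200   # something looks off, but the program keeps running
ERROR    = 300   # an operation failed, but the program can recover
CRITICAL = 400   # grave error: the program has to stop

def handle(level, message):
    """Decide what to do with a problem based on its numeric level."""
    if level >= CRITICAL:
        raise SystemExit(message)          # stop the program
    elif level >= ERROR:
        print(f"ERROR: {message}")         # report it and let the caller recover
    else:
        print(f"notice/warning: {message}")  # low importance, just show it

handle(NOTICE, "cache is cold, first request may be slow")
handle(ERROR, "could not save the file")
```

My questions above are about the conventions behind such a scheme: whether bigger numbers should mean more severe, and whether spacing the values out like this actually buys anything.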