Statically-typed languages such as Java afford the benefit of compile-time checking of types - you are guaranteed that an object is of a given type, so:
- there is no need to spend time and resources investigating the TYPE of a variable or parameter, wondering whether it is a `None`, a `List[string]`, or just a single `string` (a small sketch follows this list)
- there are no runtime type errors of this form
- there is no worry about mistakes in the use of naming conventions (e.g. `intPhoneNumber` changed to a string, but the name was never updated to `strPhoneNumber`)
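To make the first point concrete, here is a minimal Python sketch (the function and argument names are invented for illustration) of the ambiguity being described: nothing in the signature says what `phone` is, so both the function and every caller must re-discover and defend against the possibilities.

```python
def format_phone(phone):
    # Nothing here states what 'phone' is; callers may pass None,
    # a single string, or a list of strings, and the function must
    # defensively handle every possibility.
    if phone is None:
        return "unknown"
    if isinstance(phone, list):
        return ", ".join(phone)
    return phone

# All of these "work", which is exactly the cognitive load described above:
print(format_phone(None))                      # unknown
print(format_phone("555-0100"))                # 555-0100
print(format_phone(["555-0100", "555-0199"]))  # 555-0100, 555-0199
```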
There are probably other points that I have not mentioned, but the general idea is that there is less cognitive load on the programmer, and if an assumption turns out to be wrong, feedback arrives immediately in the form of a failed compilation - and immediate feedback is exactly one of the winning factors of unit testing.
Duck typing, on the other hand, does not care about the explicit type of an object - as long as the object has the relevant attributes (data members and functions), the program can go ahead. If the object has a function with the same name as what some piece of code requires, the code will attempt to use that function, even if the implementation is unexpectedly different. If the object does not have the required attributes, the program crashes at that point. The advantage is a reduction in code - no interface definition is necessary in a duck-typed language.
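As a minimal sketch of this (the class and method names are invented for illustration), here is what duck typing looks like in Python: any object carrying the expected method is accepted, and an object without it only fails when the call is actually made at runtime.

```python
class Duck:
    def quack(self):
        return "Quack!"

class Person:
    # Unrelated to Duck, but it has a method with the same name,
    # so duck-typed code will happily accept it.
    def quack(self):
        return "I'm imitating a duck."

class Rock:
    pass  # no quack() at all

def make_it_quack(thing):
    # No interface or type declaration required: whatever arrives,
    # we just call quack() and hope it exists and behaves as expected.
    return thing.quack()

print(make_it_quack(Duck()))    # Quack!
print(make_it_quack(Person()))  # I'm imitating a duck.
print(make_it_quack(Rock()))    # AttributeError - only discovered at runtime
```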
As described above, such an unexpected scenario can only be caught at runtime, whereas a statically typed language would catch the mismatch at compile time. Unless a robust unit testing suite with 100% path coverage is in place, such a basic - and preventable - error (a type mismatch) is liable to be missed. And we know that it is very difficult, for a variety of reasons, to achieve such a high percentage of code coverage, let alone path coverage.
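A small sketch of why coverage matters here (the classes, attributes, and branch are hypothetical): the type error hides inside a rarely taken branch, so a test suite that never drives execution down that path never sees it, whereas a compiler checks every path regardless of whether it is exercised.

```python
class Customer:
    def __init__(self, email):
        self.email = email  # note: no 'sms_number' attribute

def send_email(address, message):
    return f"EMAIL to {address}: {message}"

def notify_customer(customer, urgent=False):
    if urgent:
        # Bug hiding in a rarely exercised branch: Customer has no
        # 'sms_number' attribute, but nothing complains until this
        # path is actually executed.
        return f"SMS to {customer.sms_number}: please call us back"
    return send_email(customer.email, "please call us back")

# A test suite that only covers the common path passes happily:
print(notify_customer(Customer("a@example.com")))        # works
# Only a test that drives the urgent path exposes the error:
print(notify_customer(Customer("a@example.com"), True))  # AttributeError
```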
Therefore:
- Wouldn't the 'disadvantages' of static typing - namely more verbose code - be far outweighed by the advantages of its intrinsic boost to software reliability?
- From a production software reliability perspective, is there any winning advantage to using languages with such features?
- Or is it simply that duck-typed languages shouldn't be used for production (even though they are, in many, many instances)?