32

First I want to say that Java is the only language I've ever used, so please excuse my ignorance on this subject.

Dynamically typed languages allow you to put any value in any variable. So, for example, you could write the following function (pseudocode):

void makeItBark(dog){
    dog.bark();
}

And you can pass any value into it. As long as the value has a bark() method, the code will run. Otherwise, a runtime exception or something similar is thrown. (Please correct me if I'm wrong about this.)
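To make that concrete, here is a minimal Python sketch of the same idea; the `Dog` and `Cat` classes are invented purely for illustration:

```python
class Dog:
    def bark(self):
        return "Woof!"

class Cat:
    def meow(self):
        return "Meow!"

def make_it_bark(animal):
    # No declared parameter type: any object with a bark() method works.
    return animal.bark()

print(make_it_bark(Dog()))   # prints: Woof!

try:
    make_it_bark(Cat())      # only fails when this line actually runs
except AttributeError as err:
    print("Runtime error:", err)
```

Note that the failure with `Cat` happens only when the bad call executes, which is exactly the "runtime exception" behavior described above.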

Seemingly, this gives you flexibility.

However, I did some reading on dynamic languages, and what people say is that when designing or writing code in a dynamic language, you think about types and take them into account, just as much as you would in a statically typed language.

So, for example, when writing the makeItBark() function, you intend for it to accept only 'things that can bark', and you still need to make sure you pass only those kinds of things into it. The only difference is that now the compiler won't tell you when you've made a mistake.

Sure, there is one advantage to this approach which is that in static languages, to achieve the 'this function accepts anything that can bark', you'd need to implement an explicit Barker interface. Still, this seems like a minor advantage.
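In practice, that "make sure you only pass these kinds of things" step often ends up as a hand-written guard at the top of the function. A small illustrative Python sketch (the guard and names are invented, standing in for what a static compiler would check for you):

```python
def make_it_bark(animal):
    # Hand-rolled runtime check, standing in for what a compiler
    # in a statically typed language would verify before the program runs.
    if not callable(getattr(animal, "bark", None)):
        raise TypeError(f"{type(animal).__name__} cannot bark")
    return animal.bark()

class Dog:
    def bark(self):
        return "Woof!"

print(make_it_bark(Dog()))   # prints: Woof!
# make_it_bark("a string") would raise a clear TypeError here,
# instead of an AttributeError somewhere deeper in the call.
```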

Am I missing something? What am I actually gaining by using a dynamically typed language?

Aviv Cohn
  • 21,190
  • 31
  • 118
  • 178
  • 6
    `makeItBark(collections.namedtuple("Dog", "bark")(lambda x: "woof woof"))`. That argument isn't even a *class*, it's an anonymous named tuple. Duck typing ("if it quacks like a...") lets you do ad hoc interfaces with essentially zero restrictions and no syntactic overhead. You can do this in a language like Java, but you end up with a lot of messy reflection. If a function in Java requires an ArrayList and you want to give it another collection type, you're SOL. In python that can't even come up. – Phoshi Jul 03 '14 at 09:54
  • 2
    This kind of question has been asked before: [here](http://programmers.stackexchange.com/questions/10032/dynamically-vs-statically-typed-languages-studies), [here](http://programmers.stackexchange.com/questions/68417/need-an-example-where-dynamic-languages-are-better-than-static-languages), and [here](http://programmers.stackexchange.com/questions/100457/can-static-and-dynamically-typed-languages-be-seen-as-different-tools-for-differ). Specifically the first example seems to answer your question. Maybe you can rephrase yours to make it distinct? – logc Jul 03 '14 at 09:55
  • 3
    Note that for example in C++, you can have a template function that works with any type T that has a ```bark()``` method, with the compiler complaining when you pass in something wrong but without having to actually declare an interface that contains bark(). – Wilbert Jul 03 '14 at 09:56
  • @Phoshi - does "SOL" mean "not allowed to violate a contract"? – Den Jul 03 '14 at 12:58
  • @Den: Essentially. There's very little good reason why many functions should only accept a specific type rather than, as you say, anything that sticks to some contract. – Phoshi Jul 03 '14 at 13:28
  • 2
    @Phoshi The argument in Python still has to be of a particular type - for example, it can't be a number. If you have your own ad-hoc implementation of objects, which retrieves its members through some custom `getMember` function, `makeItBark` blows up because you called `dog.bark` instead of `dog.getMember("bark")`. What makes the code work is that everyone implicitly agrees to use Python's native object type. – Doval Jul 03 '14 at 13:30
  • @Doval: So? You can't define anything /but/ objects, and you have a contract to stick to just as much as if you were in a strictly typed language. Just because I wrote makeItBark with my own types in mind doesn't mean you can't use yours, whereas in a static language it probably /does/ mean that. Besides, you don't need to make a class, as shown in my original snippet. Anything that supports the right accessors will work, whether you're patching them into an existing class, making your own, using a namedtuple, a dict with a wrapper to direct d.k to d[k], /whatever/. – Phoshi Jul 03 '14 at 13:43
  • 3
    @Phoshi `Just because I wrote makeItBark with my own types in mind doesn't mean you can't use yours, wheras in a static language it probably /does/ mean that.` As pointed out in my answer, this is not the case *in general*. That's the case for Java and C#, but those languages have crippled type and module systems so they're not representative of what static typing can do. I can write a perfectly generic `makeItBark` in several statically-typed languages, even non-functional ones like C++ or D. – Doval Jul 03 '14 at 13:50
  • 1
    @Doval: Because you're using *their* systems for polymorphism. You don't get that problem in C#/Java either if they're defined on interface types. Typeclasses are not helpful if you have a function from concrete type T to concrete type K, and indeed you may be *worse* off in those languages due to not having the same kind of subtyping relationships an OO language might have if your lazy library developer has left everything open for inheritance. A well written program is flexible in any language, but dynamic languages can make even poorly written programs flexible. – Phoshi Jul 03 '14 at 13:54
  • One thing that I enjoy about dynamic languages is that I can most often swap out the code at runtime in a server environment without having to reboot my application. – Seiyria Jul 03 '14 at 13:57
  • 2
    @Phoshi In C# and Java the writer of the `Duck` class needs to specify that it implements `IBarkable` up front; this is not the case in the Haskell/SML/OCaml/D/C++/Go. And if you extend some arbitrary class that some lazy programmer left open without actually designing it for inheritance, your code will break when he makes changes to the class. Just like if you monkey patch things in a dynamic language, your code will eventually break when someone else tries to monkey patch the same thing. If some piece of code isn't written for extension, there's no *safe* way of extending it. – Doval Jul 03 '14 at 13:59
  • 1
    @Doval: Yes, I understand the advantage of typeclasses, but they don't help you if your function is not of type `(Barkable b)=>b->Whatever`. If your function is of type `Dog->IO()` and you want to pass in a logging auto-proxy you're just as stuck as a method in C# or Java of type `void(Dog)`. – Phoshi Jul 03 '14 at 14:01
  • 1
    @Phoshi That example is a bit of a red herring; you're piggybacking on the fact that Haskell is pure and won't let you hide side effects inside your functions. In other languages it's not an issue; if you want to substitute a function of type `Dog -> Whatever` for another one of the same type that does logging, you can. – Doval Jul 03 '14 at 14:05
  • 1
    @Doval: Oh, sorry, I wasn't even thinking about that. Replace logging with memoisation or something, then. The important part was if your function only takes a concrete type in C# or Haskell, your flexibility is limited. In either language, if you use the tools they give you for polymorphic behaviour, you get a lot of flexibility. I don't think this is the problem typeclasses were intended to solve, because they don't solve that problem. They're better, in many ways, than subtyping polymorphism, but they don't fix this particular issue. – Phoshi Jul 03 '14 at 14:09
  • 2
    @Phoshi I understand what you're getting at now. You're right that if I don't add `(Barkable a) => ` to the code it won't be generic. However if I didn't write the code with genericity in mind you don't know the minimal set of constraints I had in mind when I wrote `makeItBark`. While a dynamic language will give you the ability to pass in some other type to `makeItBark`, you have no guarantee the code is correct under all circumstances, or that the code won't break when I make a change to `makeItBark`. So, yes, you get to have your way, but understand that it can and will bite you. – Doval Jul 03 '14 at 14:19

3 Answers

37

Dynamically-typed languages are uni-typed

Comparing type systems, there's no advantage in dynamic typing. Dynamic typing is a special case of static typing - it's a statically-typed language where every variable has the same type. You could achieve the same thing in Java (minus conciseness) by making every variable be of type Object, and having "object" values be of type Map<String, Object>:

void makeItBark(Object dog) {
    // An "object" is just a map from member names to values.
    Map<String, Object> dogMap = (Map<String, Object>) dog;
    // Look up the "bark" member and invoke it. A missing or
    // mistyped entry fails only here, at runtime.
    Runnable bark = (Runnable) dogMap.get("bark");
    bark.run();
}

So, even without reflection, you can achieve the same effect in just about any statically-typed language, syntactic convenience aside. You're not getting any additional expressive power; on the contrary, you have less expressive power because in a dynamically typed language, you're denied the ability to restrict variables to certain types.

Making a duck bark in a statically-typed language

Moreover, a good statically-typed language will allow you to write code that works with any type that has a bark operation. In Haskell, this is a type class:

class Barkable a where
    bark :: a -> IO ()

This expresses the constraint that for some type a to be considered Barkable, there must exist a bark function that takes a value of that type and returns nothing.

You can then write generic functions in terms of the Barkable constraint:

makeItBark :: Barkable a => a -> IO ()
makeItBark barker = bark barker

This says that makeItBark will work for any type satisfying Barkable's requirements. This might seem similar to an interface in Java or C# but it has one big advantage - types don't have to specify up front which type classes they satisfy. I can say that type Duck is Barkable at any time, even if Duck is a third party type I didn't write. In fact, it doesn't matter that the writer of Duck didn't write a bark function - I can provide it after-the-fact when I tell the language that Duck satisfies Barkable:

instance Barkable Duck where
    bark d = quack (punch d)

makeItBark aDuck

This says that Ducks can bark, and their bark function is implemented by punching the duck before making it quack. With that out of the way, we can call makeItBark on ducks.
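For what it's worth, even Python has since grown a statically checkable version of this after-the-fact idea: `typing.Protocol` describes an interface structurally, so a tool like mypy accepts any class with a matching `bark` method, with no up-front declaration. A rough sketch (the `Duck` class is invented):

```python
from typing import Protocol

class Barkable(Protocol):
    def bark(self) -> str: ...

class Duck:
    # Duck never mentions Barkable, yet a structural checker
    # accepts it anywhere a Barkable is expected.
    def bark(self) -> str:
        return "Quack! (ouch)"

def make_it_bark(barker: Barkable) -> str:
    return barker.bark()

print(make_it_bark(Duck()))   # prints: Quack! (ouch)
```

Unlike the Haskell type class, the `Protocol` is enforced by an external checker rather than at runtime, but the "types don't have to opt in up front" property is the same.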

Standard ML and OCaml are even more flexible here: their module systems let you satisfy the same interface in more than one way. In these languages I can say that integers can be ordered using the conventional ordering and then turn around and say they're also orderable by divisibility (e.g. 10 > 5 because 10 is divisible by 5). In Haskell, a type can have only one instance of a given type class. (This lets Haskell automatically know that it's OK to call bark on a duck; in SML or OCaml you have to be explicit about which bark function you want, because there might be more than one.)

Conciseness

Of course, there are syntactic differences. The pseudocode you presented is far more concise than the Java equivalent I wrote. In practice, that conciseness is a big part of the allure of dynamically-typed languages. But type inference allows you to write code that's just as concise in statically-typed languages, by relieving you of having to explicitly write the type of every variable. A statically-typed language can also provide native support for dynamic typing, removing the verbosity of all the casting and map manipulations (e.g. C#'s dynamic).

Correct but ill-typed programs

To be fair, static typing necessarily rules out some programs that are technically correct even though the type checker can't verify it. For example:

def example():
    if this_variable_is_always_true:
        return "some string"
    else:
        return 6

Most statically-typed languages would reject this code, even though the else branch will never execute. In practice it seems no one makes use of this type of code - anything too clever for the type checker will probably make future maintainers of your code curse you and your next of kin. Case in point: someone successfully translated 4 open-source Python projects into Haskell, which means they weren't doing anything that a good statically-typed language couldn't compile. What's more, the compiler found a couple of type-related bugs that the unit tests weren't catching.

The strongest argument I've seen for dynamic typing is Lisp's macros, since they allow you to arbitrarily extend the language's syntax. However, Typed Racket is a statically-typed dialect of Lisp that has macros, so it seems static typing and macros are not mutually exclusive, though perhaps harder to implement simultaneously.

Apples and Oranges

Finally, don't forget that there are bigger differences between languages than just their type systems. Prior to Java 8, doing any kind of functional programming in Java was practically impossible; a simple lambda required 4 lines of boilerplate anonymous-class code. Java also has no support for collection literals (e.g. [1, 2, 3]). There can also be differences in the quality and availability of tooling (IDEs, debuggers), libraries, and community support. When someone claims to be more productive in Python or Ruby than in Java, that feature disparity needs to be taken into account. There's a difference between comparing whole languages with all batteries included, comparing language cores, and comparing type systems.

Doval
  • 15,347
  • 3
  • 43
  • 58
  • Very interesting and insightful answer on type systems. It is a pity you do not expand more on the Lisp macros; I think the really interesting comparison would be Lisp vs Haskell (or OCaml), instead of Python vs Java. But I also find you have arrived to a bit of a contradictory conclusion: expanding the language syntax on the fly is a big advantage that is favored by dynamic types; I understand Typed Racket proves both are not mutually exclusive, but just simplifying the coexistence is a plus for me. – logc Jul 03 '14 at 13:04
  • @logc Unfortunately I only have passing familiarity with Lisp. I've never written a Lisp macro in my life, I just know what they do and why they're useful. But it really depends on your definition of "simplifying the coexistence" - if I write a macro I'd want to know that it type-checks. Having the language silently accept something nonsensical may be simpler, but it's not what I want! For whatever it's worth I've heard [Haskell can have some form of macros](http://en.wikipedia.org/wiki/Template_Haskell) as well, and I know [D](http://dlang.org/) also has some metaprogramming support. – Doval Jul 03 '14 at 13:09
  • Type checking is often thought of as applying to data (variables), but it also applies to code (functions, but also any expression like loops, conditionals ...). If you type-check variables then you have to type-check expressions. Taken to extremes, you will end up with a dedicated `for` for integer arrays, another one for double arrays, etc ... Which is BTW what happens in OCaml with `+` for ints and `+.` for floats. A language with a lot of type checking is also a language with a lot of syntactic rules, and those rules are more difficult to expand from within the language. – logc Jul 03 '14 at 13:28
  • 1
    You don't necessarily need a dedicated `for` for each type of array; just like `makeItBark` works for any type with a `bark` function, `for` can be made to work with any type that has an indexing function or a function for retrieving an iterator. Yes, each type needs to implement its own indexing function, but this is no different in a dynamically-typed language. OCaml's use of `+` vs `+.` is related to resolving ambiguity with type inference. Consider `fun (a, b) => a + b`. If they had overloaded `+`, the language wouldn't know whether the arguments are `ints` or floating point... – Doval Jul 03 '14 at 13:41
  • ...it'd have to assume one or the other or fail to compile. This is the case in Standard ML, where most compilers will assume `int`. – Doval Jul 03 '14 at 13:43
  • 2
    You forgot to attribute your source for the first paragraph -- http://existentialtype.wordpress.com/2011/03/19/dynamic-languages-are-static-languages/ –  Jul 03 '14 at 13:51
  • Unjustified assumptions: 1) "[...] you can achieve the same effect in just about any statically-typed language, **syntactic convenience aside** [...]" -- assuming that 'syntactic convenience' is not a concern of language designers and users; 2) "[...] a good statically-typed language [...]" -- assuming that a good statically-typed language exists, and that everybody is perfectly free to use it; 3) "[...] type inference allows you to write code that's just as concise [...]". –  Jul 03 '14 at 14:04
  • 3
    @Matt Re: 1, I haven't assumed it's not important; I addressed it under Conciseness. Re: 2, although I never explicitly said it, by "good" I mean "has thorough type inference" and "has a module system that allows you to match code to type signatures *after the fact*", not up-front like Java/C#'s interfaces. Re 3, the burden of proof is on you to explain to me how given two languages with equivalent syntax and features, one dynamically-typed and the other with full type inference, you wouldn't be able to write code of equal length in both. – Doval Jul 03 '14 at 14:11
  • @Doval nice try, but I haven't made a claim. You, however, have -- without justification. –  Jul 03 '14 at 14:15
  • 4
    @MattFenwick I've already justified it - given two languages with the same features, one dynamically-typed and the other statically-typed, the main difference between them will be the presence of type annotations, and type inference takes that away. Any other differences in syntax are superficial, and any differences in features turns the comparison into apples vs oranges. It's on you to show how this logic is wrong. – Doval Jul 03 '14 at 14:29
  • 1
    You should have a look at Boo. It's statically typed with type inference, and has macros that allow for the language's syntax to be extended. – Mason Wheeler Jul 03 '14 at 14:36
  • @Doval please stop moving the goalposts. I am questioning the statement in your original post "But type inference allows you to write code that's just as concise in statically-typed languages, by relieving you of having to explicitly write the types of every variable.", in which you did not state "given two languages with equivalent syntax and features, one dynamically-typed and the other with full type inference". If that's part of the argument, then it belongs in the post. –  Jul 03 '14 at 14:53
  • And of course, that's leaving aside the issue of whether there even *are* two such languages. –  Jul 03 '14 at 14:53
  • To revisit 1): you stated "even without reflection, you can achieve the same effect in just about any statically-typed language, syntactic convenience aside." So this argument relies on 'syntactic convenience' not being important, no? But then you claim that syntactic convenience *is* important? So doesn't that nullify your statement? –  Jul 03 '14 at 15:00
  • @MattFenwick The entire "Apples and Oranges" section is dedicated to pointing out that language features and the presence/absence of libraries affects productivity. If Java had list comprehensions and Python didn't, it'd be easy to show an example of Java code that's more concise than Python even with type annotations. Re: 1, my goal was to show that dynamic typing is a special case of static typing; the differences are syntactical. Yes, the syntactic convenience is important, but that's why I bring up that static typing doesn't rule out syntactic convenience (type inference and `dynamic`). – Doval Jul 03 '14 at 15:05
  • @Doval are you saying that 'type annotations' are the sole cause of 'syntactic inconvenience'? –  Jul 03 '14 at 15:09
  • @MattFenwick It can be - especially when the type annotations aren't part of a public API. It can be a hassle to annotate every local variable or private function, especially when dealing with higher-order functions. I believe the types should be explicitly written in public APIs, but dynamic languages don't have an edge there because the API user still needs to know the expected types and you'll have to write it in the documentation. – Doval Jul 03 '14 at 15:15
  • @Doval I see -- thanks for clarifying. This may be a major point of disagreement between us: I do not accept the premise that type annotations are the only (or even most important) contributor to syntactic inconvenience (without justification, of course :) ). Therefore, from my POV, type inference by itself does not necessarily solve the syntactic convenience issue. (Of course, we may simply have different definitions of 'syntactic convenience'.) It should also be noted that type systems have to be carefully designed to enable type inference; for instance, subtyping does not play nicely. –  Jul 03 '14 at 15:30
  • @MattFenwick Ah, I answered before I saw your edit. I don't think it's the *only* form of syntactic inconvenience but it's the main one that comes to mind that's directly related to static typing vs dynamic typing. The only other one I can think of is not having syntax sugar for a dynamic type in your statically-typed language of choice. – Doval Jul 03 '14 at 16:30
  • 1
    "a simple lambda would require 4 lines of boilerplate anonymous class code": Even with anonymous closures, I would not use Java for functional programming. I would rather use Scala or, even better, Haskell, Ocaml, SML, Common Lisp, Scheme, Clojure, etc. Lambdas have been added to Java to recover some popularity, but Java was not designed for functional programming and I would definitely look for better options. – Giorgio Jul 03 '14 at 18:20
  • @Giorgio Agreed, but for some problems the solution really is "pass some functions around", or the OO work-around of "pass an object whose sole purpose is to hold a function". – Doval Jul 03 '14 at 18:39
  • 1
    @Doval: True. BTW, lambda notation is not used exclusively in functional programming: as far as I know, Smalltalk has anonymous blocks, and Smalltalk is as object-oriented as it can get. So, often the solution is to pass around an anonymous block of code with some parameters, no matter if this is an anonymous function or an anonymous object with exactly one anonymous method. I think these two constructs express essentially the same idea from two different perspectives (the functional and the object-oriented one). – Giorgio Jul 03 '14 at 18:48
  • +1 There are some situations where the necessary semantics aren't really a good fit for the type relationships available in statically-typed languages, but the proper remedy shouldn't be to throw all the useful aspects of static typing out the window. Rather, what's needed IMHO is a means by which an interface could deem itself to be implemented by any class meeting certain criteria [the system should auto-generate method implementations which chain to static methods designated by the interface]. Consumers of the interface would then not have to worry about whether they are receiving... – supercat May 13 '15 at 02:27
  • ...references to types that implement the interface directly, or references to types with auto-generated implementations. This would achieve many of the advantages of dynamic typing, but without the semantic free-for-all of making every object behave as a `Map`. – supercat May 13 '15 at 02:28
  • When I look at how a method handles variable numbers of arguments or different typed arguments, I don't want to write or look at a dozen overload methods or interface implementations doing the same or very similar things. That offends my DRY sensibilities and it's tedious to both read and write. I've also come to believe that having a gatekeeper in your arguments in higher-level code mostly protects you from bad architecture or poorly trained teammates. Do something non-trivial in a dynamically typed language. You might be surprised at the new stuff you bring to your next static-typed project. – Erik Reppen Dec 09 '15 at 01:23
13

This is a difficult and quite subjective issue. (And your question may get closed as opinion-based, but that doesn't mean it's a bad question - on the contrary, even thinking about such meta-language questions is a good sign - it's just not well-suited to the Q&A format of this forum.)

Here's my view of it: the point of high-level languages is to restrict what a programmer can do with the computer. This is surprising to many people, since they believe the purpose is to give users more power and achieve more. But since everything you write in Prolog, C++ or Lisp is eventually executed as machine code, it is actually impossible to give the programmer more power than assembly language already provides.

The point of a high-level language is to help the programmer better understand the code they themselves have created, and to make them more efficient at doing the same thing. A subroutine name is easier to remember than a hexadecimal address. An automatic argument counter is easier to use than a call sequence where you have to get the number of arguments exactly right on your own, with no help. A type system goes further and restricts the kinds of arguments you can provide in a given place.

Here is where people's perception differs. Some people (I'm among them) think that as long as your password checking routine is going to expect exactly two arguments anyway, and always a string followed by a numeric id, it's useful to declare this in the code and be automatically reminded if you later forget to follow that rule. Outsourcing such small-scale book-keeping to the compiler helps free your mind for higher-level concerns and makes you better at designing and architecting your system. Therefore, type systems are a net win: they let the computer do what it's good at, and humans do what they're good at.
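That password-checking example can be sketched in Python's optional annotation syntax (the function body and rule are invented; the point is only that the signature is declared, so a checker such as mypy can remind you of it):

```python
def check_password(password: str, user_id: int) -> bool:
    # A static checker flags a call like check_password(42, "alice")
    # before the program ever runs; the runtime body is incidental.
    return len(password) >= 8 and user_id > 0
```

The declaration costs a few characters now, in exchange for an automatic reminder every time a call site later forgets the "string, then numeric id" rule.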

Others see it quite differently. They dislike being told by a compiler what to do. They dislike the extra up-front effort of deciding on the type declaration and typing it out. They prefer an exploratory programming style where you write actual business code without having a plan that tells you exactly which types and arguments to use where. And for the style of programming they use, that may be quite true.

I'm oversimplifying dreadfully here, of course. Type checking is not strictly tied to explicit type declarations; there is also type inference. Programming with routines that actually do take arguments of varying types does allow quite different and very powerful things that would otherwise be impossible, it's just that a lot of people aren't attentive and consistent enough to use such leeway successfully.

In the end, the fact that such different languages are both very popular and show no signs of dying off shows you that people go about programming very differently. I think that programming language features are largely about human factors - what supports the human decision-making process better - and as long as people work very differently, the market will provide very different solutions simultaneously.

Kilian Foth
  • 107,706
  • 45
  • 295
  • 310
  • 3
    Thanks for the answer. You said that some people *' dislike being told by a compiler what to do. [..] They prefer an exploratory programming style where you write actual business code without having a plan that would tell you exactly which types and arguments to use where.'* This is the thing that I don't understand: programming isn't like musical improvisation. In music if you hit a wrong note, it may sound cool. In programming, if you pass something into a function that isn't supposed to be there, you'll most likely get nasty bugs. (continuing in next comment). – Aviv Cohn Jul 03 '14 at 10:06
  • 4
    I agree, but many people *don't* agree. And people are quite possessive about their mental preconceptions, particularly since they're often unaware of them. That's why debates about programming style usually degenerate into arguments or fights, and it's rarely useful to start them with random strangers on the internet. – Kilian Foth Jul 03 '14 at 10:09
  • 1
    This is why - judging by what I read - people using dynamic languages take types into account just as much as people using static languages. Because when you write a function, it's supposed to take arguments of a specific kind. It doesn't matter if the compiler enforces this or not. So it comes down to static typing helping you with this, while dynamic typing doesn't. In both cases, a function has to take a specific kind of input. So I don't see what the advantage of dynamic typing is. Even if you prefer an 'exploratory programming style', you still can't pass whatever you want into a function. – Aviv Cohn Jul 03 '14 at 10:09
  • 1
    People often talk about very different types of projects (especially regarding size). The business logic for a web site will be very simple compared to say a full ERP system. There is less risk that you get things wrong and the advantage of being able to very simply reuse some code is more relevant. Say I have some code that generates a Pdf (or some HTML) from a data structure. Now I have a different data source (first was JSON from some REST API, now it's Excel importer). In a language like Ruby it can be super easy to 'simulate' the first structure, 'make it bark' and reuse the Pdf code. – thorsten müller Jul 03 '14 at 10:40
  • @Prog: The real advantage of dynamic languages is when it comes to describing things which is really hard with a static type system. A function in python, for example, could be a function reference, a lambda, a function object, or god knows what and it'll all work the same. You can build an object that wraps another object and automatically dispatches methods with zero syntactic overhead, and every function essentially magically has parametrized types. Dynamic languages are amazing for quickly getting stuff done. – Phoshi Jul 03 '14 at 12:06
  • "Therefore, type systems are a net win" -- this was not shown in the post. "Others see [to] quite differently. They dislike being told by a compiler what to do. They dislike the extra up-front effort to decide on the type declaration and to type it" -- horribly biased mischaracterization. The structure of the post also seems to imply that these "others" do not see the value of type systems -- which, again, was not shown in the post. –  Jul 03 '14 at 15:56
  • @KilianFoth: I strongly agree with your views: Even though I have some positive experience with dynamic languages and I find them good for prototyping, I think that the extra help you get from a static type system is extremely valuable. Dynamic languages fans normally say that "you have to unit-test your code anyway, so the help provided by a static type system is negligible". On the other hand, I find myself writing much more tests when using dynamic languages, just to defend myself against type errors that the compiler of a statically typed language would have detected for me. – Giorgio Jul 03 '14 at 18:41
  • "But since everything you write in Prolog, C++ or List is eventually executed...": Did you mean **Lisp**? – Giorgio Jul 07 '14 at 17:40
5

Code written using dynamic languages is not coupled to a static type system. Therefore, this lack of coupling is an advantage compared to poor/inadequate static type systems (although it may be a wash or a disadvantage compared to a great static type system).

Furthermore, for a dynamic language, a static type system doesn't have to be designed, implemented, tested, and maintained. This could make the implementation simpler compared to a language with a static type system.

  • 2
    Don't people tend to eventually re-implement a basic static type system with their unit tests (when targeting a good test coverage)? – Den Jul 07 '14 at 15:28
  • Also what do you mean by "coupling" here? How would it manifest in an e.g. micro-services architecture? – Den Jul 07 '14 at 15:28
  • @Den 1) good question, however, I feel that it's outside the scope of the OP and of my answer. 2) I mean coupling [in this sense](http://en.wikipedia.org/wiki/Coupling_(computer_programming)); briefly, different type systems impose different (incompatible) constraints on code written in that language. Sorry, I can't answer the last question -- I don't understand what's special about micro-services in this regard. –  Jul 07 '14 at 16:57
  • 2
    @Den: Very good point: I often observe that unit tests I write in Python cover errors that would be caught by a compiler in a statically typed language. – Giorgio Jul 07 '14 at 17:09
  • @MattFenwick: You wrote that it is an advantage that "... for a dynamic language, a static type system doesn't have to be designed, implemented, tested, and maintained." and Den observed that you often do have to design and test your types directly in your code. So the effort is not removed but moved from language design to the application code. – Giorgio Jul 07 '14 at 17:30
  • @MattFenwick: On the other hand, how much of the unit tests deal with type-checking is debatable: some claim that by unit-testing the application logic you also cover most type errors and therefore there is very little overhead due to type checking. In other words, some maintain that you do not have to write unit tests that specifically cover type errors. – Giorgio Jul 07 '14 at 17:37
  • @Giorgio 1) no, I did not write that *"it is an advantage that ..."*. Furthermore, in the second paragraph I was referring to the implementation of the language, not to code written using the language. 1a) that's not what Den wrote, 1b) Den did not present any evidence to back up the claim, if claim it was. 2) Not relevant to the OP or to my answer. –  Jul 07 '14 at 18:05
  • @MattFenwick I don't understand what "coupling to a static type system" means and can only guess (no intent to be picky here, some elaboration will help understanding your answer). – Den Jul 07 '14 at 21:24
  • @Den no problem. What I mean is that if you write the same (meaning same behavior, semantics) program in different languages, you'll have to write it differently because of characteristics of the language. The same thing goes if you write the same program in different static type systems. So, coarsely, coupling to the static type system here would be the degree to which your code reflects characteristics of the type system (as opposed to characteristics of the problem and its solution). –  Jul 08 '14 at 20:26
  • @MattFenwick I guess it's a valid point but not one relevant in practice. Otherwise we can start discussing coupling to syntax, coupling to programming paradigms and coupling to patterns as well. – Den Jul 09 '14 at 08:15
  • @Den that's pretty flippantly dismissive. Of course, you are welcome to your own opinion, but can you support your claim that it's not relevant in practice? (And let's set aside the issues of syntax, paradigms, and patterns; those are red herrings and/or slippery slope) –  Jul 09 '14 at 14:41
  • @MattFenwick Maybe I just don't see it. Can you provide a small example? The closest I could think of is the problematic refactoring I faced when converting Java code to C# code that would use value type semantics instead of reference type one. However I believe that would still be a problem with e.g. Python. – Den Jul 09 '14 at 15:04
  • @MattFenwick On the other hand I often wish I could simply use an identical type from different libraries without manual runtime conversion/casting. Is this closer? Dynamic typing would kill performance same way as using something like Automapper however, so only an advantage in some scenarios. – Den Jul 09 '14 at 15:08
  • @Den tiny example: a set of `Integer`s. Implement in both Java from 1999, and Java 1.7. –  Jul 14 '14 at 15:17