
I've read a related question, "Are there any design patterns that are unnecessary in dynamic languages like Python?", and remembered this quote from Wikiquote.org:

The wonderful thing about dynamic typing is it lets you express anything that is computable. And type systems don’t — type systems are typically decidable, and they restrict you to a subset. People who favor static type systems say “it’s fine, it’s good enough; all the interesting programs you want to write will work as types”. But that’s ridiculous — once you have a type system, you don’t even know what interesting programs are there.

--- Software Engineering Radio Episode 140: Newspeak and Pluggable Types with Gilad Bracha

I wonder, are there useful design patterns or strategies that, using the formulation of the quote, "don't work as types"?

user7610
    I've found double dispatch and the Visitor pattern to be very difficult to accomplish in statically typed languages, but easily accomplishable in dynamic languages. See this answer (and the question) for example: http://programmers.stackexchange.com/a/288153/122079 – user3002473 Aug 13 '16 at 00:04
  • 7
    Of course. Any pattern that involves creating new classes at runtime, for example. (that's also possible in Java, but not in C++; there's a sliding scale of dynamism). – user253751 Aug 13 '16 at 00:16
  • 1
    It would depend a lot on how sophisticated your type system is :-) Functional languages usually do quite good at this. – Bergi Aug 13 '16 at 20:58
  • 1
    Everyone seems to be talking type systems like Java and C# instead of Haskell or OCaml. A language with a powerful type system can be as concise as a dynamic language but keep type safety. – Andrew Sep 02 '17 at 00:54
  • @immibis That's incorrect. Static type systems can absolutely create new, "dynamic" classes at run-time. See Chapter 33 of Practical Foundations for Programming Languages. – gardenhead Sep 03 '17 at 05:34

9 Answers

41

Short answer: no, because Turing equivalence.

Long answer: This guy's being a troll. While it's true that type systems "restrict you to a subset," the stuff outside that subset is, by definition, stuff that does not work.

Anything you can do in one Turing-complete programming language (a group that includes every language designed for general-purpose programming, plus plenty that weren't; it's a low bar to clear, and there are several examples of systems that became Turing-complete unintentionally), you can do in any other Turing-complete language. This is called "Turing equivalence," and it means exactly what it says, no more. Importantly, it does not mean you can do the other thing just as easily in the other language. Some would argue that this is the entire point of creating a new programming language in the first place: to give you a better way of doing certain things that existing languages are bad at.

A dynamic type system, for example, can be emulated on top of a static OO type system by declaring all variables, parameters, and return values as the base Object type and then using reflection to access the specific data within. Once you realize this, you see that there is literally nothing you can do in a dynamic language that you can't do in a static one. Doing it that way, of course, would be a huge mess.
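The same trick can be sketched in Python's own optional type system (the function name here is invented for illustration): annotate everything as the base `object` type, so a static checker such as mypy accepts any argument, and recover the concrete type at runtime with introspection, Python's equivalent of reflection.

```python
# Emulating "dynamic on top of static": every parameter is declared as the
# base `object` type, so a static checker accepts any argument, and runtime
# introspection is used to get at the specific data within.
def describe(value: object) -> str:
    type_name = type(value).__name__                        # runtime type lookup
    public = [a for a in dir(value) if not a.startswith("_")]
    return f"{type_name} with {len(public)} public attributes"

print(describe(42))
print(describe("hello"))
```

Working this way throws away everything the checker could have told you, which is exactly the "huge mess" described above, made explicit.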

The guy from the quote is correct that static types restrict what you can do, but that's an important feature, not a problem. The lines on the road restrict what you can do in your car, but do you find them restrictive, or helpful? (I know I wouldn't want to drive on a busy, complex road where there's nothing telling the cars going the opposite direction to keep to their side and not come over where I'm driving!) By setting up rules that clearly delineate what's considered invalid behavior and ensuring that it won't happen, you greatly decrease the chances of a nasty crash occurring.

Also, he's mischaracterizing the other side. It's not that "all the interesting programs you want to write will work as types", but rather "all the interesting programs you want to write will require types." Once you get beyond a certain level of complexity, it becomes very difficult to maintain the codebase without a type system to keep you in line, for two reasons.

First, because code with no type annotations is hard to read. Consider the following Python:

def sendData(self, value):
    self.connection.send(serialize(value.someProperty))

What do you expect the data to look like that the system at the other end of the connection receives? And if it's receiving something that looks completely wrong, how do you figure out what's going on?

It all depends on the structure of value.someProperty. But what does it look like? Good question! What's calling sendData()? What is it passing? What does that variable look like? Where did it come from? If it's not local, you have to trace through the entire history of value to track down what's going on. Maybe you're passing something else entirely that happens to have a someProperty attribute of its own, and it doesn't do what you think it does.
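To make that failure mode concrete, here is a hedged sketch (all class names are invented for illustration): two unrelated classes both happen to have a `someProperty` attribute, so either one slips silently through `sendData()`.

```python
import json

def serialize(data):
    return json.dumps(data)

class Telemetry:
    def __init__(self):
        self.someProperty = {"temperature": 21.5}   # what the receiver expects

class UserRecord:
    def __init__(self):
        self.someProperty = "alice"                 # same name, wrong shape

class FakeConnection:
    """Stand-in for a real connection; records what would be sent."""
    def __init__(self):
        self.sent = []
    def send(self, payload):
        self.sent.append(payload)

class Sender:
    def __init__(self, connection):
        self.connection = connection
    def sendData(self, value):
        self.connection.send(serialize(value.someProperty))

conn = FakeConnection()
sender = Sender(conn)
sender.sendData(Telemetry())    # receiver gets '{"temperature": 21.5}'
sender.sendData(UserRecord())   # receiver gets '"alice"' -- no error anywhere
```

Nothing in the language objects to the second call; the bad data is only discovered at the other end of the connection, far from the mistake.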

Now let's look at it with type annotations, as you might see in the Boo language, which uses very similar syntax but is statically typed:

def SendData(value as MyDataType):
    self.Connection.Send(Serialize(value.SomeProperty))

If there's something going wrong, suddenly your job of debugging just got an order of magnitude easier: look up the definition of MyDataType! Plus, the chance of getting bad behavior because you passed some incompatible type that also has a property with the same name suddenly goes to zero, because the type system won't let you make that mistake.
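The same protection is available without leaving Python, via its optional type annotations (PEP 484). A minimal sketch, with invented class names; note that Python itself ignores the annotations at runtime, so the guarantee comes from running a checker such as mypy:

```python
import json
from dataclasses import dataclass

@dataclass
class MyDataType:
    someProperty: dict          # the shape the receiver expects

@dataclass
class Impostor:
    someProperty: str           # same attribute name, different meaning

def sendData(value: MyDataType) -> str:
    return json.dumps(value.someProperty)

print(sendData(MyDataType({"temperature": 21.5})))  # fine
# sendData(Impostor("alice"))  # a checker such as mypy rejects this call:
#                              # incompatible type "Impostor", expected "MyDataType"
```

With the annotation in place, "what does value look like?" has a one-hop answer: look up the definition of MyDataType.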

The second reason builds on the first: in a large and complex project, you've most likely got multiple contributors. (And if not, you're building it yourself over a long time, which is essentially the same thing. Try reading code you wrote 3 years ago if you don't believe me!) This means that you don't know what was going through the head of the person who wrote almost any given part of the code at the time they wrote it, because you weren't there, or don't remember if it was your own code a long time ago. Having type declarations really helps you understand what the intention of the code was!

People like the guy in the quote frequently mischaracterize the benefits of static typing as being about "helping the compiler" or "all about efficiency" in a world where nearly unlimited hardware resources make that less and less relevant with each passing year. But as I've shown, while those benefits certainly do exist, the primary benefit is in the human factors, particularly code readability and maintainability. (The added efficiency is certainly a nice bonus, though!)

Mason Wheeler
  • 2
    ad Short answer: irrelevant, because of [Turing tarpit](https://en.wikipedia.org/wiki/Turing_tarpit). I am now going to read the long answer ;) – user7610 Aug 12 '16 at 21:17
  • ad Long answer: My takeaway: The undecidability thing means that given the rules of the language and the structure of the program, the type of some variable/expression can only be determined by running the program and seeing what happens. Such programs are harder to reason about than programs that can be typed and therefore less desirable. If something is deemed useful but cannot be typed, we can always get better type system, for example, one with union types, to deal with the `5`, `'five'`, `Cat` example from kamilk. Or just cast to Object/void and then use reflection to get the value back. – user7610 Aug 12 '16 at 22:19
  • I think he'd actually agree with you that a) types provide great documentation and b) typing should not be about "helping the compiler". – user7610 Aug 12 '16 at 22:19
  • 5
    "[It is the] stuff that does not work". No necessarily, it can be the stuff for which the compiler cannot decide whether there will be a problem or not at runtime. – coredump Aug 12 '16 at 22:52
  • @JiriDanek It's not just that types are "documentation". The problem with documentation is that it has a tendency to drift. (I bet you've seen, more than once, comments that describe what an earlier version of the code did, but that are badly inaccurate now!) Types are *canonical, enforceable documentation.* – Mason Wheeler Aug 12 '16 at 22:55
  • 6
    The road analogy is terrible. For static typing I prefer to think about hydraulic/sewage system where you use pipes to guarantee in advance that you don't mix clean water with used water, for example. For dynamic typing, see postal system (each package has a tag) or, simply, the IP protocol. – coredump Aug 12 '16 at 22:57
  • @coredump The road analogy is to show that restricting what you can do helps prevent crashes. – Mason Wheeler Aug 12 '16 at 23:02
  • @MasonWheeler Pipes analogy reminds me of http://www.yesodweb.com/page/about and their use of typesystem to enforce that unsanitized string cannot be rendered on page. – user7610 Aug 12 '16 at 23:05
  • @MasonWheeler The "static" aspect of roads is quite limited (paint!). Most of the restrictions are enforced by protocols and rules that have to be evaluated in context. And this done by humans, not blind computers, so its is hard to make a fair comparison. – coredump Aug 12 '16 at 23:13
  • 24
    "This guy's being a troll." – I'm not sure that an ad hominem attack is going to help your otherwise well-presented case. And while I am well aware that argument from authority is an equally bad fallacy as ad hominem, I would still like to point out that Gilad Bracha has probably designed more languages and (most relevant for this discussion) more static type systems than most. Just a small excerpt: he is the sole designer of Newspeak, co-designer of Dart, co-author of the Java Language Specification and Java Virtual Machine Specification, worked on the design of Java and the JVM, designed … – Jörg W Mittag Aug 12 '16 at 23:32
  • 11
    Strongtalk (a static type system for Smalltalk), the Dart type system, the Newspeak type system, his PhD thesis on modularity is the basis of pretty much every modern module system (e.g. Java 9, ECMAScript 2015, Scala, Dart, Newspeak, Ioke, Seph), his paper(s) on mixins revolutionized the way we think about them. Now, that does *not* mean that he is right, but I *do* think that having designed multiple static type systems makes him a bit more than a "troll". – Jörg W Mittag Aug 12 '16 at 23:35
  • 17
    "While it's true that type systems "restrict you to a subset," the stuff outside that subset is, by definition, stuff that does not work." – This is wrong. We know from the Undecidability of the Halting Problem, Rice's Theorem, and the myriad of other Undecidability and Uncomputability results that a static type checker can not decide for all programs whether they are type-safe or type-unsafe. It can't accept those programs (some of which are type-unsafe), so the only sane choice is to *reject* them (however, some of those are type-safe). Alternatively, the language has to be designed in … – Jörg W Mittag Aug 12 '16 at 23:38
  • 9
    … such a way as to make it impossible for the programmer to write those undecidable programs, but again, some of those are actually type-safe. So, no matter how you slice it: the programmer is prevented from writing type-safe programs. And since there are actually infinitely many of them (usually), we can be almost certain that at least some of them are not only stuff that *does* work, but also useful. – Jörg W Mittag Aug 12 '16 at 23:40
  • 5
    "People like the guy in the quote frequently mischaracterize the benefits of static typing as being about "helping the compiler" or "all about efficiency"" – I have no idea where you are getting that. That is pretty much exactly the *opposite* of what Gilad Bracha is saying. He has an *extremely* strong opinion about the fact that type systems **MUST NOT** have any influence whatsoever on runtime semantics, *especially* on runtime performance. The *one thing* Gilad Bracha does *not* want type systems to do is "help the compiler". For him, the most important feature of type systems is … – Jörg W Mittag Aug 12 '16 at 23:45
  • 1
    @JörgWMittag As you say, appeal to authority, and not a particularly relevant one at that. (For example, Dijkstra, one of the most accomplished computer scientists of all time, could troll with the best of them.) And the Halting Problem is an academic curiosity that basically says that you can't produce a perfect `Halts()` function because some joker could theoretically come along and troll it by asking it to perform a task analogous to evaluating the truth value of `this sentence is false`. Have you ever seen it come up in real-world code? Because I haven't. – Mason Wheeler Aug 12 '16 at 23:47
  • 3
    … *documentation*. In other words, it's all about code readability and maintainability. Heck, he didn't even bother to implement the type checker for Newspeak yet, because that's the least important part. For Strongtalk, they even experimented with using the types for performance optimizations, but they quickly found that any optimizations they could implement by using types, they could just as well implement using *dynamic* type inference in the JITter. – Jörg W Mittag Aug 12 '16 at 23:48
  • @JörgWMittag Yeah. If this guy thinks that the type checker is the least important part, that just goes to show he doesn't know what he's talking about. (Not surprising if he comes from a Smalltalk background; the *entire language* is built on a massive foundation of cluelessness.) And it's not surprising that a fundamentally dynamic language is difficult to optimize well with static techniques; Facebook found out exactly the same thing with their HipHop project. – Mason Wheeler Aug 12 '16 at 23:54
  • 1
    @Mason *Have you ever seen it come up in real-world code? Because I haven't.* Exactly! CS teacher was saying that knowing Gödel's incompleteness theorem is very important for a graduate. I asked whether there is some useful theory that cannot be proved because of that. Long pause... "Well, there are all these paradoxes, ..." – user7610 Aug 12 '16 at 23:54
  • 8
    @MasonWheeler: the Halting Problem comes up all the time, precisely in the context of static type checking. Unless languages are carefully designed to prevent the programmer from writing certain kinds of programs, static type checking quickly becomes equivalent to solving the Halting Problem. Either you end up with programs you aren't allowed to write because they might confuse the type checker, or you end up with programs you *are* allowed to write but they take an infinite amount of time to type check. – Jörg W Mittag Aug 12 '16 at 23:57
  • 1
    @JörgWMittag ...such as? Real example please? – Mason Wheeler Aug 12 '16 at 23:58
  • 2
    @Mason *fundamentally dynamic language is difficult to optimize well with static techniques*. The point is that such dynamic language is actually surprisingly easy to optimize with _dynamic_ techniques. For example method caching and inlining. Even though programs can be very dynamic, they usually aren't, and the optimizer exploits that. See e.g. http://neverworkintheory.org/2016/06/13/polymorphism-in-python.html. – user7610 Aug 12 '16 at 23:58
  • 6
    @MasonWheeler *Real example please?* `int f() {if (false) return "hello" else return 42}`. To typecheck that (and realize that it is typed correctly), I have to evaluate the if condition. That can be arbitrary complex expression. So I either say I won't bother (Java) or... well, does any language tackles it? If you try to evaluate the condition while type checking, you may end up running an infinite loop or something. So you'd have to have a timeout there. And the result would be either "it typechecks" or "I don't know, buy faster CPU". – user7610 Aug 13 '16 at 00:04
  • @JiriDanek This example is trivial to reject as improperly typed, as "hello" is not a valid int. You appear to be conflating constant-folding and dead code pruning with typechecking. – Mason Wheeler Aug 13 '16 at 00:09
  • 3
    @Mason If I type a program, I am proposing a theorem about my code. "This function will always return an int". Typechecker's task is to prove the theorem. If it cannot prove it, it is a type error. Even a child can see that the function is typed correctly, because it always returns an int constant, 42. Java has a stupid type checker, gets confused by the "hello" I put there and rejects the program. http://c2.com/cgi/wiki?TypeChecking – user7610 Aug 13 '16 at 00:13
  • 1
    @JiriDanek The "math problem" model of programming is generally one favored by functional programmers and rejected by object oriented programmers, who see code more in terms of a baking recipe. Is there anything poisonous in this recipe? Why yes, there is. Therefore, it's a bad recipe; throw it out. – Mason Wheeler Aug 13 '16 at 00:17
  • 1
    @JiriDanek Note that in practice, that's what "should never happen" exceptions are for. Any statement that doesn't typecheck, that you believe will never be executed anyway, can be replaced with `throw new AssertionError("shouldn't get here");` which will typecheck. – user253751 Aug 13 '16 at 00:26
  • I may have misunderstood your answer, because it reads to me as an argument against _polymorphic types_ rather than an argument against _dynamic types_. Perhaps you could expand your answer to clarify my confusion. – dcorking Aug 13 '16 at 11:39
  • 6
    @MasonWheeler: For an example that comes up more often, consider creating a cache of class instances by writing `Map<Class<?>, ?>`. The compiler cannot verify that the values are instances of the corresponding keys. (But honestly, the reason these things don't come up for you in your "realistic" code is that your concept of "realistic" has, unbeknownst to you, been shaped by the strictures that static type systems enforce in order to be decidable.) – ruakh Aug 13 '16 at 16:42
  • 1
    @ruakh maybe explain `Map<Class<?>, ?>` better... because I'm not sure what your problem is. – NPSF3000 Aug 14 '16 at 06:49
  • @NPSF3000: I don't have a "problem", because Java uses both static and dynamic types, and lets you circumvent the former when needed (using explicit casts and so on). But in a pure static type system, where the compiler insisted on its decidability, there'd be no way to implement a method like `<T> T getCachedInstance(Class<T> clazz)`. – ruakh Aug 14 '16 at 07:18
  • 2
    @ruakh so you're saying type systems are limiting because some imaginary type system that's not actually being used has issues? I'ma C# dev who has written an awful lot of generic code like that example you've indicated, using types to improve my code, without being limited by this imaginary limitation. Given the claim in OP, this is not a good example. – NPSF3000 Aug 14 '16 at 12:32
  • @ruakh I've mentioned the Boo language already. What you're describing sounds a whole lot like [`My`](https://github.com/boo-lang/boo/blob/master/src/Boo.Lang/Environments/My.cs), which is included in the runtime, complete with configurable rules regarding caching, asking for objects that have not yet been created, etc... – Mason Wheeler Aug 14 '16 at 13:23
  • 2
    @NPSF3000: What? No. I'm saying that common type systems (such as Java's and C#'s) are limiting, and that we can see this from the combination of two facts: (1) Java and C# provide ways to circumvent parts of their type systems; (2) these circumvention mechanisms enable useful patterns, like a cache of class instances, that would otherwise be difficult or impossible to write and use. That's not an "imaginary limitation"; it's a real limitation of the type system, that the language compensates for by letting you work around the type system. – ruakh Aug 14 '16 at 16:01
  • 1
    @Ruakh so what your saying is C# has a strong and practical type system that doesn't get in your way? Or in other terms, the quote in OP is wrong and that type systems don't necessarily limit programmers? – NPSF3000 Aug 14 '16 at 17:38
  • 2
    @NPSF3000: I think you might be misunderstanding the term "type system", which refers to *static* types. Also, please calm down; when I say that C#'s type system necessarily has some limitations, and that C# therefore provides workarounds that bypass its type system, I do not intend this as a criticism of C# (nor of its type system). So you don't need to get defensive. (It's like how I might mention immutability as a limitation of .Net's `String` class. The limitation is fine, because if you need mutability, you use `StringBuilder`. So it's not a criticism of .Net, nor of `String`.) – ruakh Aug 14 '16 at 18:07
  • 2
    @NPSF3000: (That said, I think there are also some limitations that C# does *not* fully address. But that's still not a criticism of C#. Type systems have obvious advantages; we can calmly recognize that they must always have unnecessary strictures -- and that code written in languages *without* a given stricture will almost inevitably find a good use for that non-stricture -- without therefore concluding that this outweighs those advantages.) – ruakh Aug 14 '16 at 18:07
  • @ruakh "we can calmly recognize that they must always have unnecessary strictures" True, but we can also recognize that most of the examples given to support that case are poor. \ – NPSF3000 Aug 14 '16 at 18:23
  • @NPSF3000: Well, sure. No one is saying that C# is unusable. C# would never have been created, and would not be in wide use, unless the overall effect of its strictures was to facilitate good code. But for there to be a downside, there only has to be *one* *good* example of something that is impossible or unnecessarily difficult. And when we see that downside, we just take a deep breath and keep going. Because *no* language is perfect. – ruakh Aug 14 '16 at 18:38
  • 2
    @Ruakh "But for there to be a downside, there only has to be one good example of something that is impossible or unnecessarily difficult." Then why do we have so many examples that are so poor? I'm not attacking dynamic languages, I'm questioning the examples given. – NPSF3000 Aug 14 '16 at 18:43
  • @NPSF3000: Who is "we"? Are you talking about the examples on this page? Are you saying that you see examples on this page where you agree that something is impossible or unnecessarily difficult in C#, but where you think that this impossibility or unnecessary difficulty is, in itself, a good thing? (And not just a good thing in that it's an essential part of a type system that's a good thing overall?) If so -- which? If not -- please clarify. – ruakh Aug 14 '16 at 19:01
  • 2
    @ruakh "If so -- which? If not -- please clarify. " I'm talking about the numerous examples of things given, that are apparently 'hard' to do in typed languages, they look almost exactly like standard C# code. I'm not sure I've seen a single example on this page where I think 'aha, that definitely is difficult to do in a typed language'. The accepted answer looks pretty much like routine json parsing in C# with only the slightest syntactic differences. Note the question quotes: "once you have a type system, you don’t even know what interesting programs are there". – NPSF3000 Aug 14 '16 at 19:14
  • 3
    I think, you answer completely misses the point of the question: That you can solve any solveable problem in any turing-complete language is one thing. That you can emulate programming constructs in any turing-complete language is also true. Heck, I can do fully fledged object oriented programming, including virtual dispatch and multiple inheritance in pure, raw assembler. But I would never say that assembler is an object oriented programming language. This question is about the ways that a language can be used **sensibly**, not about what it can be **abused** for. – cmaster - reinstate monica Aug 15 '16 at 11:54
  • @MasonWheeler: Turing completeness is irrelevant. There are some things you can only do in a statically language if you basically implement a dynamic language inside the static language (as in your example using reflection). I will argue you are then programming in a dynamic language of your own creation, not a static language. We dont call Python a static language even though it is implemented in C. – JacquesB Aug 27 '16 at 12:33
  • 1
    I'll add to this discussion that while Gilad Bracha is indeed a well-known name and experienced in types, he can also be trollish and dismissive of things he does not understand; and I've watched talks by him to know there's plenty of stuff he doesn't understand (no shame in this). I've seen embarrassing talks by him dismissing statically typed FP with arguments that amount to "Monads are stupid and the name is confusing, let's laugh about this and dismiss them". He definitely *can* be a troll, and his quote from the question is definitely flamebait. Shame on Bracha, he should know better. – Andres F. Sep 01 '17 at 19:57
  • The halting problem thing is true about any arbitrary program so static analysis is never 100%, but our computers have a limited amount of memory so aren't really turing complete, if you could compress your program or make your type checker give up after some recursion debt, I think you could do type inference in like 95%+ of cases, and maybe if it gave up you could give insights to the type checker. The thing about not knowing what send(arg.myattribute) does is the worst case, in other cases .myattribute is unique and send constrains type of myattribute although its true its not 100%. – aoeu256 Jun 05 '23 at 12:52
28

I'm going to side-step the 'pattern' part because I think it devolves into the definition of what is or isn't a pattern and I've long lost interest in that debate. What I will say is that there are things you can do in some languages that you can't do in others. Let me be clear, I'm not saying there are problems you can solve in one language that you can't solve in another. Mason has already pointed out Turing completeness.

For example, I've written a class in Python that wraps an XML DOM element and makes it into a first-class object. That is, you can write the code:

doc.header.status.text()

and you have the contents of that path from a parsed XML object. Kind of neat and tidy, IMO. And if there isn't a header node, it just returns a dummy object that contains nothing but more dummy objects (turtles all the way down). There's no real way to do that in, say, Java: you'd have to have compiled a class ahead of time, based on some knowledge of the structure of the XML. Putting aside whether this is a good idea, this kind of thing really does change the way you solve problems in a dynamic language. I'm not saying it changes in a way that is necessarily always better, however. There are some definite costs to dynamic approaches, and Mason's answer gives a decent overview of them. Whether they are a good choice depends on many factors.
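A minimal sketch of the idea (not my actual class; the names and details are invented), using `__getattr__` and the standard-library `xml.etree`:

```python
import xml.etree.ElementTree as ET

class NullNode:
    """Dummy object returned for missing elements: turtles all the way down."""
    def __getattr__(self, name):
        return NullNode()
    def text(self):
        return ""

class XmlObject:
    """Wraps an XML element so child elements read like attributes."""
    def __init__(self, element):
        self._element = element
    def __getattr__(self, name):
        # Only called when normal attribute lookup fails, so child-element
        # names become attribute accesses.
        child = self._element.find(name)
        return XmlObject(child) if child is not None else NullNode()
    def text(self):
        return self._element.text or ""

doc = XmlObject(ET.fromstring("<doc><header><status>OK</status></header></doc>"))
print(doc.header.status.text())   # OK
print(doc.header.missing.text())  # empty string -- dummy object, no exception
```

No class had to be compiled ahead of time; any element name in any document works, because the lookup happens at runtime.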

On a side note, you can do this in Java, because you can build a Python interpreter in Java. The fact that solving a specific problem in a given language may mean building an interpreter, or something akin to one, is often overlooked when people talk about Turing completeness.

JimmyJames
  • You can do the XML DOM thing in F# with Type Providers. It works essentially along the lines you've sketched. – user7610 Aug 12 '16 at 21:56
  • 4
    You can't do this in Java because Java's poorly designed. It wouldn't be that hard in C# using `IDynamicMetaObjectProvider`, and it's dead simple in Boo. ([Here's an implementation in less than 100 lines,](https://github.com/boo-lang/boo/blob/master/examples/duck-typing/XmlObject.boo) included as part of the standard source tree on GitHub, because it's that easy!) – Mason Wheeler Aug 12 '16 at 22:48
  • 6
    @MasonWheeler `"IDynamicMetaObjectProvider"`? Is that related to C#'s `dynamic` keyword? ...which effectively just tacks on dynamic typing to C#? Not sure your argument is valid if I'm right. – jpmc26 Aug 13 '16 at 00:01
  • @jpmc26 It's a part of the mechanism for building dynamic types for certain purposes on top of the existing static type system. As Anders Hejlsberg put it, "static where possible, dynamic when necessary." – Mason Wheeler Aug 13 '16 at 00:06
  • @MasonWheeler I'm not completely sure whether you just said yes or no. Sounds more like a yes to me. =) – jpmc26 Aug 13 '16 at 00:07
  • 1
    @jpmc26 It's a "it's not that simple". `dynamic` is a bunch of compiler magic that transforms what looks like duck-typed source code into method calls that use mechanisms such as reflection and dictionary lookups to implement the semantics described in a typesafe way. – Mason Wheeler Aug 13 '16 at 00:12
  • 9
    @MasonWheeler You're getting into semantics. Without getting into a debate about minutiae (We're not developing a mathematical formalism on SE here.), dynamic typing is the practice of foregoing compile time decisions around types, especially the verification that each type has the particular members the program accesses. That is the goal that `dynamic` accomplishes in C#. "Reflection and dictionary lookups" happen at runtime, not compile time. I'm really not sure how you can make a case that it *doesn't* add dynamic typing to the language. My point is that Jimmy's last paragraph covers that. – jpmc26 Aug 13 '16 at 01:12
  • 45
    Despite not being a huge fan of Java, I also dare say that calling Java "poorly designed" *specifically* because it didn't add dynamic typing is... overzealous. – jpmc26 Aug 13 '16 at 01:15
  • @jpmc26 It's not that *specifically*; it's just... a lot of little things that all add up. And this is one of them. – Mason Wheeler Aug 13 '16 at 02:15
  • If you want dynamic metaobjects, you could just use Groovy. – chrylis -cautiouslyoptimistic- Aug 13 '16 at 08:26
  • 5
    Apart from the slightly more convenient syntax, how is this different from a dictionary? – Theodoros Chatzigiannakis Aug 13 '16 at 08:29
  • Just to add to the JVM world and elaborate on what @chrylis has said. It's quite easy to do that in Groovy (http://groovy-lang.org/processing-xml.html) and you can use Groovy in Java as well. Still, Java itself doesn't support (IMHO, pretty neat) such a feature. – Pateman Aug 13 '16 at 10:21
  • This doesn't need dynamic typing. All it needs is a compile-time source rewriter that translates your member accesses into element lookups using constant strings. I actually implemented a C++ extension that would allow it. – Sebastian Redl Aug 13 '16 at 16:07
  • @chrylis Why use Groovy when I can use Python? – JimmyJames Aug 13 '16 at 19:46
  • @JimmyJames If you're dealing with the JVM anyway. The Java ecosystem has some very nice systems (Spring, JPA) that you might want to use for some particular application. – chrylis -cautiouslyoptimistic- Aug 13 '16 at 19:48
  • @TheodorosChatzigiannakis It isn't really fundamentally different other than that I can take this and pass around like any other object. – JimmyJames Aug 13 '16 at 19:55
  • @chrylis You can use any (in my experience) Java library in Jython. – JimmyJames Aug 13 '16 at 19:57
  • @JimmyJames You *can*, but it's a lot higher impedance to mix. If you want Python, great, but if you want Java with dynamic invocation (as complained about), Groovy gives you that. – chrylis -cautiouslyoptimistic- Aug 13 '16 at 20:00
  • @SebastianRedl Interesting but not completely equivalent. You can use this kind of approach with any code in python. There's no need for the code referencing these objects to know they are dynamic or run through a special pre-compiler step. – JimmyJames Aug 13 '16 at 20:03
  • @chrylis Can you give me some examples of the impedance you are referring to? – JimmyJames Aug 13 '16 at 20:05
  • 1
    "There's no real way to do that in, say, Java" Really? As a C# user one could very easily implement the exact same pattern by passing in a string, e.g. new Doc(XML)["header.status.text"]. Similar patterns are actually used: http://www.newtonsoft.com/json/help/html/LINQtoJSON.htm. So I do not think this is a good example of code that cannot be done in typed languages. – NPSF3000 Aug 14 '16 at 06:16
  • @JimmyJames The way I implemented it, the code using the objects didn't need to know the objects were special either, nor was there any preprocessor step. I made it part of the language, and the class representing the XML node just had to overload a special operator. Perfectly self-contained. – Sebastian Redl Aug 14 '16 at 11:20
  • Your final paragraph should probably have a reference to the idea of a [Turing tarpit](https://en.wikipedia.org/wiki/Turing_tarpit). – Jules Aug 15 '16 at 07:13
  • @Jules I see the parallel but I wouldn't classify a statically typed language as a 'Turing Tarpit'. There are costs and benefits to dynamic typing. I'm not convinced it's always preferable. – JimmyJames Aug 15 '16 at 13:30
  • This is already available with VB using its `!` syntax for `XmlDocument` e.g. `doc!header!status.InnerText`. (Of course, with the new `XDocument` you have the VB XML Axis Properties: `doc.<header>.<status>.Value`.) – Mark Hurd Aug 17 '16 at 05:38
10

The quote is correct, but also really disingenuous. Let's break it down to see why:

The wonderful thing about dynamic typing is it lets you express anything that is computable.

Well, not quite. A language with dynamic typing lets you express anything as long as it's Turing complete, which most are; that expressiveness is a property of the language, not of dynamic typing itself. Let's give him the benefit of the doubt here, though.

And type systems don’t — type systems are typically decidable, and they restrict you to a subset.

This is true, but notice that we are now firmly talking about what the type system allows, not what the language that uses the type system allows. While it is possible to use a type system to compute things at compile time, that computation is generally not Turing complete (as the type system is generally decidable), but almost any statically typed language is also Turing complete at runtime (dependently typed languages are not, but I don't believe we are talking about them here).

People who favor static type systems say “it’s fine, it’s good enough; all the interesting programs you want to write will work as types”. But that’s ridiculous — once you have a type system, you don’t even know what interesting programs are there.

The trouble is that dynamically typed languages do have a static type. Sometimes everything is a string; more commonly there is some tagged union where everything is either a bag of properties or a value like an int or a double. The thing is, static languages can model this as well. Historically it was a bit clunkier, but modern statically typed languages make it pretty much as easy as using a dynamically typed language, so how can there be a difference in what the programmer can see as an interesting program? Static languages have exactly the same tagged unions, as well as other types.

To answer the question in the title: no, there are no design patterns that can't be implemented in a statically typed language, because you can always implement enough of a dynamic system to get them. There may be patterns that you get for 'free' in a dynamic language; whether that is worth putting up with the downsides of those languages is a judgement call. YMMV.

Peter Mortensen
  • 1,050
  • 2
  • 12
  • 14
jk.
  • 10,216
  • 1
  • 33
  • 43
  • 2
    I'm not completely sure whether you just answered yes or no. Sounds more like a no to me. – user7610 Aug 13 '16 at 08:08
  • There is a very simple and correct answer in this post (although it could have been condensed to a single sentence): statically typed languages can easily express the structures that dynamically typed languages use to store their objects. – Theodoros Chatzigiannakis Aug 13 '16 at 09:01
  • 1
    @TheodorosChatzigiannakis Yes, how else would dynamic languages be implemented? First, you'll pass for an astronaut architect if you ever want to implement a dynamic class system or anything else a little bit involved. Second, you probably don't have the resource to make it debuggable, fully introspectable, performant ("just use a dictionary" is how slow languages are implemented). Third, some dynamic features are best used when being integrated in the whole language, not just as a library: think garbage collection for example (there *are* GCs as libraries, but they are not commonly used). – coredump Aug 13 '16 at 09:55
  • 1
    @Theodoros According to the paper I already linked here once, all but 2.5% of structures (in Python modules the researches looked at) can be easily expressed in a typed language. Maybe the 2.5% makes paying the costs of dynamic typing worth it. That's essentially what my question was about. http://neverworkintheory.org/2016/06/13/polymorphism-in-python.html – user7610 Aug 13 '16 at 10:17
  • 3
    @JiriDanek As far as I can tell, there is nothing that prevents a statically typed language from having polymorphic call spots and maintaining static typing in the process. See [Static Type Checking of Multi-Methods](http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=EDEA41541148C589271502FE70DB1E2F?doi=10.1.1.41.8799&rep=rep1&type=pdf). Maybe I'm misunderstanding your link. – Theodoros Chatzigiannakis Aug 13 '16 at 10:22
  • 1
    " A language with dynamic typing lets you express anything as long as it's Turing complete, which most are. " While this is of course a true statement, it doesn't *really* hold in "the real world" because the *amount* of text one has to write could be extremely large. – Daniel Jour Aug 13 '16 at 17:02
  • The thing is that static programs have to be rewritten when you later need to add dynamicity (although this can sometimes be done by code refactoring tools), while dynamic programs can be retrofitted with features not forecast at first. At least that's what I got from Rich Hickey's "Maybe Not" talk and my brief usage of Haskell. If you suddenly want to add I/O or Maybe deep in the call stack in Haskell, huge parts of your program have to know about it, and refactoring tools aren't often talked about in Haskell. Also, handling edge cases wrapped in multiple layers of App, Monad, and Maybe can be a pain. – aoeu256 Jun 05 '23 at 13:23
5

The Dynamic Proxy pattern is a shortcut for implementing proxy objects without needing one class per type you need to proxy.

class Proxy(object):
    def __init__(self, obj):
        self.__target = obj

    def __getattr__(self, attr):
        return getattr(self.__target, attr)

Using this, Proxy(someObject) creates a new object that behaves the same as someObject. Obviously you'll also want to add additional functionality somehow, but this is a useful base to begin from. In a purely static language, you'd need to either write one Proxy class per type you want to proxy or use dynamic code generation (which, admittedly, is included in the standard library of many static languages, largely because their designers are aware of the problems caused by not being able to do this).
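As a minimal self-contained sketch (the class and example names here are invented), extra behaviour can be layered on such a proxy by intercepting attribute access before forwarding it:

```python
class LoggingProxy(object):
    """Forwards every attribute access to the wrapped object, logging it first."""

    def __init__(self, obj):
        self.__target = obj

    def __getattr__(self, attr):
        # Only called when normal lookup fails, i.e. for forwarded attributes.
        print("proxying %s" % attr)
        return getattr(self.__target, attr)

words = LoggingProxy("hello world")
print(words.upper())  # logs "proxying upper", then prints HELLO WORLD
```

The same few lines proxy strings, lists, sockets, or anything else; a static language would need either one wrapper per interface or run-time code generation.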

Another use case of dynamic languages is so-called "monkey patching". In many ways, this is an anti-pattern rather than a pattern, but it can be used in useful ways if done carefully. And while there's no theoretical reason monkey patching couldn't be implemented in a static language, I've never seen one that actually has it.
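As a concrete illustration (class and method names invented) of what monkey patching looks like in Python, where classes remain open at run time:

```python
class Greeter:
    def greet(self):
        return "hello"

def shout(self):
    # Uses the existing interface of the class it gets patched onto.
    return self.greet().upper() + "!"

g = Greeter()          # instance created *before* the patch
Greeter.shout = shout  # monkey patch: attach a new method at run time
print(g.shout())       # prints HELLO!
```

Note that even pre-existing instances pick up the patched method, because lookup goes through the class at call time.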

Jules
  • 17,614
  • 2
  • 33
  • 63
  • I think I might be able to emulate this in Go. There is a set of methods that all proxied objects must have (otherwise the duck might not quack and all falls apart). I can create a Go interface with these methods. I'll have to think on it more, but I think what I have in mind will work. – user7610 Aug 15 '16 at 08:26
  • You can something similar in any .NET language with RealProxy and generics. – LittleEwok Aug 21 '16 at 20:59
  • @LittleEwok - RealProxy uses runtime code generation -- like I say, many modern static languages have a workaround like this, but it's still easier in a dynamic language. – Jules Aug 22 '16 at 11:42
  • C# extension methods are kinda like monkey patching made safe. You can't change existing methods, but you can add new ones. – Andrew Sep 02 '17 at 00:45
5

First-class types

Dynamic typing means that you have first-class types: you can inspect, create and store types at runtime, including the language's own types. It also means that values are typed, not variables.
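In Python, for instance, a class can be built at run time with the three-argument form of type; the names below are purely illustrative:

```python
# A class is just another run-time value, produced by calling type().
Point = type("Point", (object,), {
    "__init__": lambda self, x, y: self.__dict__.update(x=x, y=y),
    "norm2":    lambda self: self.x ** 2 + self.y ** 2,
})

p = Point(3, 4)
print(type(p).__name__)  # Point
print(p.norm2())         # 25
```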

Statically typed languages may produce code that relies on dynamic types too (method dispatch, type classes, etc.), but in a way that is generally invisible to the runtime. At best, they give you some way to perform introspection. Alternatively, you can simulate types as values, but then you have an ad hoc dynamic type system.

However, dynamic type systems rarely have only first-class types. You can have first-class symbols, first-class packages, first-class everything. This is in contrast to the strict separation between the compiler's language and the runtime language in statically typed languages. What the compiler or interpreter can do, the runtime can do too.

Now, let's agree that type inference is a good thing and that I like to have my code checked before running it. However, I also like being able to produce and compile code at runtime. And I love to precompute things at compile time too. In a dynamically typed language, this is done with the same language. In OCaml, you have the module/functor type system, which is different from the main type system, which is different from the preprocessor language. In C++, you have the template language, which has nothing to do with the main language, which is generally ignorant of types during execution. And that's fine in those languages, because they don't want to provide more.

Ultimately, that does not really change what kind of software you can develop, but the expressivity changes how you develop it and how hard it is.

Patterns

Patterns that rely on dynamic types are patterns which involve dynamic environments: open classes, dispatching, in-memory databases of objects, serialization, etc. Simple things like generic containers work because a vector does not forget at runtime about the type of objects it holds (no need for parametric types).

I tried to introduce the many ways code is evaluated in Common Lisp as well as examples of possible static analyses (this is SBCL). The sandbox example compiles a tiny subset of Lisp code fetched from a separate file. In order to be reasonably safe, I change the readtable, allow only a subset of standard symbols and wrap things with a timeout.

;;
;; Fetching systems, installing them, etc. 
;; ASDF and QL provide, respectively, a Make-like facility 
;; and system management inside the runtime: those are
;; not distinct programs.
;; Reflexivity allows developing dedicated tools: for example,
;; being able to find the transitive reduction of dependencies
;; to parallelize builds. 
;; https://gitlab.common-lisp.net/xcvb/asdf-dependency-grovel
;;
(ql:quickload 'trivial-timeout)

;;
;; Readtables are part of the runtime.
;; See also NAMED-READTABLES.
;;
(defparameter *safe-readtable* (copy-readtable *readtable*))
(set-macro-character #\# nil t *safe-readtable*)
(set-macro-character #\: (lambda (&rest args)
                           (declare (ignore args))
                           (error "Colon character disabled."))
                     nil
                     *safe-readtable*)

;; eval-when is necessary when compiling the whole file.
;; This makes the result of the form available in the compile-time
;; environment. 
(eval-when (:compile-toplevel :load-toplevel :execute)
  (defvar +WHITELISTED-LISP-SYMBOLS+ 
    '(+ - * / lambda labels mod rem expt round 
      truncate floor ceiling values multiple-value-bind)))

;;
;; Read-time evaluation #.+WHITELISTED-LISP-SYMBOLS+
;; The same language is used to control the reader.
;;
(defpackage :sandbox
  (:import-from
   :common-lisp . #.+WHITELISTED-LISP-SYMBOLS+)
  (:export . #.+WHITELISTED-LISP-SYMBOLS+))

(declaim (inline read-sandbox))

(defun read-sandbox (stream &key (timeout 3))
  (declare (type (integer 0 10) timeout))
  (trivial-timeout:with-timeout (timeout)
    (let ((*read-eval* nil)
          (*readtable* *safe-readtable*)
          ;;
          ;; Packages are first-class: no possible name collision.
          ;;
          (package (make-package (gensym "SANDBOX") :use '(:sandbox))))
      (unwind-protect
           (let ((*package* package))
             (loop
                with stop = (gensym)
                for read = (read stream nil stop)
                until (eq read stop)
                ;;
                ;; Eval at runtime
                ;;
                for value = (eval read)
                ;;
                ;; Type checking
                ;;
                unless (functionp value)
                do (error "Not a function")
                ;; 
                ;; Compile at run-time
                ;;
                collect (compile nil value)))
        (delete-package package)))))

;;
;; Static type checking.
;; warning: Constant 50 conflicts with its asserted type (MOD 11)
;;
(defun read-sandbox-file (file)
  (with-open-file (in file)
    (read-sandbox in :timeout 50)))

;; get it right, this time
(defun read-sandbox-file (file)
  (with-open-file (in file)
    (read-sandbox in)))

#| /tmp/plugin.lisp
(lambda (x) (+ (* 3 x) 100))
(lambda (a b c) (* a b))
|#

(read-sandbox-file #P"/tmp/plugin.lisp")

;; 
;; caught COMMON-LISP:STYLE-WARNING:
;;   The variable C is defined but never used.
;;

(#<FUNCTION (LAMBDA (#:X)) {10068B008B}>
 #<FUNCTION (LAMBDA (#:A #:B #:C)) {10068D484B}>)

Nothing above is "impossible" to do in other languages: the plug-in approach exists in Blender, in music software, and in IDEs for statically compiled languages that do on-the-fly recompilation, etc. But instead of external tools, dynamic languages favor tools which make use of information that is already there. All the known callers of FOO? All the subclasses of BAR? All methods specialized by class ZOT? This is internalized data. Types are just one more aspect of this.


(see also: CFFI)

coredump
  • 5,895
  • 1
  • 21
  • 28
4

There are surely things you can only do in dynamically typed languages. But they wouldn't necessarily be good design.

You might first assign the integer 5, then the string 'five', then a Cat object to the same variable. But you're only making it harder for a reader of your code to figure out what's going on and what each variable is for.

You might add a new method to a library Ruby class and access its private fields. There might be cases where such a hack can be useful but this would be a violation of encapsulation. (I don't mind adding methods only relying on the public interface, but that's nothing that statically typed C# extension methods can't do.)

You might add a new field to an object of somebody else's class to pass some extra data around with it. But it's better design to just create a new structure, or extend the original type.

Generally, the more organized you want your code to stay, the less advantage you should take from being able to dynamically change type definitions or assign values of different types to the same variable. But then your code is no different from what you could achieve in a statically typed language.

What dynamic languages are good at is syntactic sugar. For example, when reading a deserialized JSON object you may refer to a nested value simply as obj.data.article[0].content - much neater than say obj.getJSONObject("data").getJSONArray("article").getJSONObject(0).getString("content").
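In Python, for example, a couple of standard-library lines buy exactly that notation (the sample document here is invented):

```python
import json
from types import SimpleNamespace

raw = '{"data": {"article": [{"content": "hi"}]}}'

# object_hook converts every JSON object into a SimpleNamespace,
# so nested values support plain attribute access.
obj = json.loads(raw, object_hook=lambda d: SimpleNamespace(**d))
print(obj.data.article[0].content)  # hi
```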

Ruby developers especially could talk at length about the magic that can be achieved by implementing method_missing, a method that lets you handle attempted calls to undeclared methods. For example, the ActiveRecord ORM uses it so that you can call User.find_by_email('joe@example.com') without ever declaring a find_by_email method. Of course, it's nothing that couldn't be achieved as UserRepository.FindBy("email", "joe@example.com") in a statically typed language, but you can't deny it its neatness.
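Python's rough equivalent is __getattr__, which is invoked when ordinary attribute lookup fails; a toy finder in the spirit of ActiveRecord (the repository shape here is invented) might look like:

```python
class UserRepository:
    def __init__(self, users):
        self._users = users  # a list of dicts standing in for table rows

    def __getattr__(self, name):
        # Called only for undeclared attributes: synthesize find_by_<field>.
        if name.startswith("find_by_"):
            field = name[len("find_by_"):]
            return lambda value: next(
                (u for u in self._users if u.get(field) == value), None)
        raise AttributeError(name)

repo = UserRepository([{"email": "joe@example.com", "name": "Joe"}])
print(repo.find_by_email("joe@example.com"))  # the matching user dict
print(repo.find_by_name("Nobody"))            # None
```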

kamilk
  • 428
  • 2
  • 9
  • 4
    There are surely things you can only do in statically typed languages. But they wouldn't necessarily be good design. – coredump Aug 13 '16 at 09:01
  • 2
    The point about syntactic sugar has very little to do with dynamic typing and everything with, well, syntax. – leftaroundabout Aug 13 '16 at 23:04
  • @leftaroundabout Patterns have everything to do with syntax. Type systems also have a lot to do with it. – user253751 Aug 14 '16 at 01:26
  • The methodmissing thingy is a way of building findAll and find and other types of methods for all fields, so its more like User('find email', arguments). One way you could do it with compiling like in LISP is code generating all method combinations of verb and methods with macros. Method missing and monkey patching is also used for proxies and mocking which is good for caching, counting method calls, auto-logging, converting sync to async code. – aoeu256 May 22 '23 at 18:33
4

Yes, there are many patterns and techniques which are only possible in a dynamically typed language.

Monkey patching is a technique where properties or methods are added to objects or classes at runtime. This technique is not possible in a statically typed language, since it means types and operations cannot be verified at compile time. Or to put it another way, if a language supports monkey patching, it is by definition a dynamic language.

It can be proven that if a language supports monkey patching (or similar techniques to modify types at runtime), it cannot be statically type checked. So it is not just a limitation in currently existing languages, it is a fundamental limitation of static typing.

So the quote is definitely correct: more things are possible in a dynamic language than in a statically typed language. On the other hand, certain kinds of analysis are only possible in a statically typed language. For example, you always know which operations are allowed on a given type, which lets you detect illegal operations at compile time. No such verification is possible in a dynamic language, where operations can be added or removed at runtime.

This is why there is no obvious "best" in the conflict between static and dynamic languages. Static languages give up certain power at runtime in exchange for a different kind of power at compile time, which proponents believe reduces the number of bugs and makes development easier. Some believe the trade-off is worth it; others don't.

Other answers have argued that Turing equivalence means anything possible in one language is possible in all languages. But this does not follow. To support something like monkey patching in a static language, you basically have to implement a dynamic sub-language inside the static language. This is of course possible, but I would argue you are then programming in an embedded dynamic language, since you also lose the static type checking that exists in the host language.

C# has supported dynamically typed objects since version 4. Clearly the language designers see a benefit in having both kinds of typing available. But it also shows you can't have your cake and eat it too: when you use dynamic objects in C#, you gain the ability to do something like monkey patching, but you also lose static type verification for interaction with those objects.

JacquesB
  • 57,310
  • 21
  • 127
  • 176
  • +1 your second to last paragraph I think is the crucial argument. I'd still argue that there is a difference though as with static types you have full control of where and what you can monkey patch – jk. Oct 14 '16 at 11:53
  • In the same way you can implement dynamic features within static, you can implement static languages within dynamic by pre-running macro programs at "compile time" to check if certain conditions hold. You can also use type inference and JIT compiler function run logs to recreate types. You could use something like JQuery(I believe for s-expressions paredit is similar) on the AST to give static types to dynamic code. – aoeu256 May 22 '23 at 18:40
2

I wonder, are there useful design patterns or strategies that, using the formulation of the quote, "don't work as types"?

Yes and no.

There are situations in which the programmer knows the type of a variable with more precision than the compiler. The compiler may know that something is an Object, but the programmer will know (due to the invariants of the program) that it is actually a String.

Let me show some examples of this:

Map<Class<?>, Function<?, String>> someMap;
someMap.get(object.getClass()).apply(object);

I know that someMap.get(T.class) will return a Function<T, String>, because of how I constructed someMap. But Java is only sure that I've got a Function.

Another example:

data = parseJSON(someJson)
validate(data, someJsonSchema);
print(data.properties.rowCount);

I know that data.properties.rowCount will be a valid reference and an integer, because I've validated the data against a schema. If that field were missing, an exception would have been thrown. But a compiler would only know that it either throws an exception or returns some sort of generic JSONValue.

Another example:

x, y, z = struct.unpack("II6s", data)

The "II6s" defines the way that data encode three variables. Since I've specified the format, I know which types will be returned. A compiler would only know that it returns a tuple.

The unifying theme of all these examples is that the programmer knows the type, but a Java-level type system won't be able to reflect that. The compiler won't know the types, and thus a statically typed language won't let me express them, whereas a dynamically typed language will.

That's what the original quote is getting at:

The wonderful thing about dynamic typing is it lets you express anything that is computable. And type systems don’t — type systems are typically decidable, and they restrict you to a subset.

When using dynamic typing I can use the most derived type I know about, not simply the most derived type my language's type system knows. In all the cases above, I have code which is semantically correct, but will be rejected by a static typing system.

However, to return to your question:

I wonder, are there useful design patterns or strategies that, using the formulation of the quote, "don't work as types"?

Any of the above examples, and indeed any example of dynamic typing, can be made valid in static typing by adding appropriate casts. If you know a type your compiler doesn't, simply tell the compiler by casting the value. So, at some level, you aren't going to get any additional patterns by using dynamic typing. You just might need to cast more to get working statically typed code.

The advantage of dynamic typing is that you can simply use these patterns without fretting about how tricky it is to convince your type system of their validity. It doesn't change the patterns available; it just possibly makes them easier to implement, because you don't have to figure out how to make your type system recognize the pattern, or add casts to subvert the type system.

Winston Ewert
  • 24,732
  • 12
  • 72
  • 103
  • 1
    why is java the cut off point at which you shouldn't go to a 'more advanced/complicated type system'? – jk. Aug 13 '16 at 21:32
  • 2
    @jk, what leads you to think that's what I'm saying? I explicitly avoided taking sides on whether or not a more advanced/complicated type system was worthwhile. – Winston Ewert Aug 13 '16 at 21:50
  • 2
    Some of these are terrible examples, and the others seem to be more language decisions rather than typed vs non-typed. I'm particularly confused at why people think deserialization is so complex in typed languages. The typed result would be `data = parseJSON(someJson); print(data.properties.rowCount);` and if one hasn't got a class to deserialize to we can fall back to `data = parseJSON(someJson); print(data["properties.rowCount"]);` - which is still typed and expresses the same intent. – NPSF3000 Aug 14 '16 at 06:44
  • @NPSF3000 My thought exactly. Static typing does not preclude syntactic shorthands to exist in the language. It's just Java that's verbose, that's all that's to it. Kotlin has similarly succinct data classes, Haskell has https://hackage.haskell.org/package/binary-0.8.4.1/docs/Data-Binary-Get.html – user7610 Aug 14 '16 at 08:18
  • 2
    @NPSF3000, how does the parseJSON function work? It would seem to either use reflection or macros. How could data["properties.rowCount"] be typed in a static language? How could it know that the resulting value is an integer? – Winston Ewert Aug 14 '16 at 14:34
  • @JiriDanek, Yes, many language are better about data classes then Java. But that's not the my point. My point was that you with dynamic typing you don't need to build such a shortcut into the language, you can implement the shortcut or a million like it, in a library. – Winston Ewert Aug 14 '16 at 14:37
  • @WinstonEwert why does it need to know it is a integer? If you read the intent of my code, there is no integer, and the type system lets me express that quite succinctly while still providing sanity checks. – NPSF3000 Aug 14 '16 at 14:37
  • 2
    @NPSF3000, how do you plan on using it if you don't know its an integer? How do you plan on looping over elements in a list in the JSON without knowing that it was an array? The point of my example was that I knew that `data.properties` was an object and I knew that `data.properties.rowCount` was an integer and I could simply write code that used them. Your proposed `data["properties.rowCount"]` doesn't provide the same thing. – Winston Ewert Aug 14 '16 at 14:46
  • 1
    "how do you plan on using it if you don't know its an integer?" See the code example that you constructed, and I ported. "Your proposed data["properties.rowCount"] doesn't provide the same thing" Why? because you don't like it? How do they differ in any meaningful way? Whether JS or C# neither will guarantee that rowCount is an int, but both can be smart enough to know that *it might be* an integer, or even better, is actually a JToken or JObject which is type that better represents it's nature. The pattern is the same, and works for both. – NPSF3000 Aug 14 '16 at 14:54
  • For reference the quote we are discussion claims "But that’s ridiculous — once you have a type system, you don’t even know what interesting programs are there." Yet the programs being demonstrated here are well known and used patterns in statically typed languages... not a very good argument. – NPSF3000 Aug 14 '16 at 14:55
  • Let us [continue this discussion in chat](http://chat.stackexchange.com/rooms/43954/discussion-between-npsf3000-and-winston-ewert). – NPSF3000 Aug 14 '16 at 15:10
1

Here are a few examples from Objective-C (dynamically typed) which are not possible in C++ (statically typed):

  • Putting objects of several distinct classes into the same container.
    Of course, this requires runtime type inspection to subsequently interpret the contents of the container, and most friends of static typing will object that you shouldn't be doing this in the first place. But I have found that, beyond the religious debates, this can come in handy.

  • Expanding a class without subclassing.
    In Objective-C, you can define new member functions for existing classes, including language-defined ones like NSString. For example, you can add a method stripPrefixIfPresent:, so that you can say [@"foo/bar/baz" stripPrefixIfPresent:@"foo/"] (note the use of the NSString literals @"").

  • Use of object-oriented callbacks.
    In statically typed languages like Java and C++, you have to go to considerable lengths to allow a library to call an arbitrary member of a user-supplied object. In Java, the workaround is the interface/adaptor pair plus an anonymous class, in C++ the workaround is usually template based, which implies that library code has to be exposed to user code. In Objective-C, you just pass the object reference plus the selector for the method to the library, and the library can simply and directly invoke the callback.

  • I can do the first in C++ by casting to void*, but that is circumventing the type system, so it does not count. I can do the second in C# with extension methods, perfectly within a type system. For the third, I think the "selector for the method" can be a lambda, so any statically typed language with lambdas can do the same, if I understand correctly. I am not familiar with ObjC. – user7610 Aug 16 '16 at 07:11
  • 1
    @JiriDanek "I can do the first in C++ by casting to void*", not exactly, the code that reads elements has no way to retrieve the actual type on its own. You need type tags. Besides, I don't think that saying "I can do this in " is the appropriate/productive way to look at this, because you can always emulate them. What matters is the gain in expressivity vs. complexity of implementation. Also, you seem to think that if a language has both static and dynamic capabilities (Java, C#), it belongs exclusively to the "static" family of languages. – coredump Aug 16 '16 at 08:34
  • @coredump You'd say that because C++ has void* and RTTI, it is not a "purely statically typed language"? But viewed this way, then no widely popular language today is purely statically typed! – user7610 Aug 16 '16 at 09:03
  • 1
    @JiriDanek `void*` alone is not dynamic typing, it is lack of typing. But yes, dynamic_cast, virtual tables etc. make C++ not purely statically typed. Is that bad? – coredump Aug 16 '16 at 09:26
  • 1
    It suggests that having the option of subverting the type system when needed is useful. Having an escape hatch when you need it. Or somebody considered it useful. Otherwise they would not put it into the language. – user7610 Aug 16 '16 at 11:04
  • 2
    @JiriDanek I think, you pretty much nailed it with your last comment. Those escape hatches can be extremely useful if used with care. Nevertheless, with great power comes great responsibility, and plenty are the people who abuse it... Thus, it feels a lot better to use a pointer to a generic base class that all other classes are derived from by definition (as is the case both in Objective-C and Java), and to rely on RTTI to tell the cases apart, than to cast a `void*` to some specific object type. The former produces a runtime error if you messed up, the later results in undefined behavior. – cmaster - reinstate monica Aug 16 '16 at 18:42