
Why hasn't the dream of declarative programming been realized? What are some concrete obstacles getting in the way? For a simple example, why can't I say

sort(A) is defined by sort(A) in perm(A) && asc(sort(A))

and automatically get a sorting algorithm out of it? Here perm means permutations and asc means ascending.
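For concreteness, here is a near-literal transcription of that specification into Python (an illustrative sketch; `perm` becomes `itertools.permutations` and `asc` an is-ascending check):

```python
from itertools import permutations

def asc(xs):
    # asc(A): every element is <= its successor
    return all(a <= b for a, b in zip(xs, xs[1:]))

def sort(a):
    # sort(A) in perm(A) && asc(sort(A)):
    # pick a permutation of A that is ascending
    return next(p for p in permutations(a) if asc(p))

print(sort([3, 1, 2]))  # (1, 2, 3)
```

It works, but nothing in the specification tells the machine how to avoid examining up to n! permutations - which is exactly the obstacle several answers discuss.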

  • Is it "asc(sort(A))" or "asc(A)" in the last declaration? – Den Mar 09 '15 at 09:30
  • By the way, your specific example is kind of already available: http://gkoberger.github.io/stacksort – Den Mar 09 '15 at 09:32
  • Have you heard of Prolog? Just look for "Answer Set Programming". There are a lot of systems built upon default logic. – schlingel Mar 09 '15 at 10:57
  • Scala can kind of achieve this using implicits. I have a few bits of code that read like ESL English. – Carcigenicate Mar 09 '15 at 13:33
  • Somewhat relevant: https://programmers.stackexchange.com/a/268076/96101 – yurez Mar 09 '15 at 13:34
  • Read http://bootstrappingartificialintelligence.fr/WordPress3/2014/03/know-thyself/ – Basile Starynkevitch Mar 09 '15 at 19:41
  • Well, this question is easily answered. **Attempt to implement such a system**. What stopped you from doing it successfully? Odds are good that whatever stopped you has stopped everyone else. – Eric Lippert Mar 09 '15 at 23:16
  • I'm tempted to believe this question deserves more credit than it's getting. When you look at it at first glance, you might think, *Well, that's simple! You have to program all that logic behind it, and computers are just not that smart.* ...But then you come back and take a second glance at this question, and you think once more, *Well, yes, that is simple, and you do have to program all that logic - and computers aren't necessarily the sharpest tools in the shed, true - but there is a lot more depth to that explanation than what simply lies at the surface.* – Panzercrisis Mar 10 '15 at 02:27
  • Your description of a sorting algorithm is declarative, yes, but it sure as hell isn't efficient. There are `n!` permutations of a sequence, and in the worst case your algorithm will have to try them all to find a sorted one. Factorial time is about as bad as an algorithm that processes a sequence can do. – Benjamin Hodgson Mar 10 '15 at 08:30
  • @Den No. The declaration states that sort(A) fulfills the following two criteria: it's an element of the set of permutations of A, and it's ascending. – Taemyr Mar 10 '15 at 09:39
  • Sounds like you're asking the system to implement a sorting algorithm. Declarative programming is already possible (especially in functional languages), but it's not like this. Examples of real-life declarative programming: list comprehensions. Also look at things like https://lodash.com/docs – hasen Mar 10 '15 at 10:52
  • Just define the language itself declaratively: `Language(L) is defined by L in programming languages && declarative(L)`. – MSalters Mar 10 '15 at 14:09
  • @davidk01 Your question would probably benefit from a little more description of what you perceive to be the dream. – itsbruce Mar 10 '15 at 14:13
  • I agree with @EricLippert ... build a declarative language yourself. Who knows... you might catch something that's avoided everyone else and become the [next David Huffman](http://en.wikipedia.org/wiki/Huffman_coding#History). – JDB Mar 10 '15 at 19:46
  • A modern take on this is [Search-Based Software Engineering](https://en.wikipedia.org/wiki/Search-based_software_engineering), where programs are tested, optimized (for energy use, speed, memory size, trading off against "# test cases passed"), composited (from existing parts) and extended (given high-level requirements and a library of usable functions) based on search in the space of programs. Check out: [Vision Paper](http://www0.cs.ucl.ac.uk/staff/m.harman/icst15.pdf) and: [Tutorial](http://www.cs.ucl.ac.uk/staff/mharman/laser.pdf) by Mark Harman, Phil McMinn, Jereson Teixeira de Souza, and Shin Yoo. – David Tonhofer Oct 17 '15 at 17:11

9 Answers


Logic languages already do this. You can define sort much as you have done above.

The main problem is performance. Computers might be great at computing lots of stuff, but they are inherently dumb. Every "clever" decision a computer might make was programmed into it by a programmer. And this decision is usually described not in terms of what the end result looks like, but in terms of how to achieve that end result, step by step.

Imagine the story of the Golem. If you try to give him an abstract command, then at best he will do it inefficiently, and at worst he will hurt himself, you or someone else. But if you describe what you want in the greatest detail possible, you are guaranteed that the task will be completed effectively and efficiently.

It is the programmer's job to decide what level of abstraction to use. For the application you are making, are you going to go high-level, describe it in an abstract way and take the performance hit, or go low and dirty, spend 10x more time on it, but get an algorithm that is 1000x faster?
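A small Python illustration of that choice (a sketch with made-up function names): the high-level version declares intent and accepts the library's general-purpose algorithm, while the low-level version spends more code to exploit extra knowledge about the input.

```python
# High level: declare the intent, take whatever the library does.
def sort_high(xs):
    return sorted(xs)

# Low and dirty: exploit problem knowledge (values are small non-negative
# integers) to count occurrences and sort in O(n + k) instead of O(n log n).
def sort_low(xs, max_value=255):
    counts = [0] * (max_value + 1)
    for x in xs:
        counts[x] += 1
    out = []
    for value, count in enumerate(counts):
        out.extend([value] * count)
    return out

print(sort_low([3, 1, 2, 1]))  # [1, 1, 2, 3]
```

The low-level version is faster only because the programmer, not the machine, noticed a property of the data it could exploit.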

Euphoric
  • It might help to know that the word Golem גולם actually means "raw material", i.e. the most basic state the machine / entity can be in. – dotancohen Mar 09 '15 at 11:57
  • Declarative languages are not inherently an obstacle to lower levels of abstraction. Haskell and Standard ML both, in their different ways, allow you to make simple declarative statements about types/functions in one place, provide a range of concrete and specific function implementations in a separate place and ways to match types to implementations in yet another. Meanwhile, best practice in OO/imperative languages is now very much about starting high/simple and then adding implementation detail. The difference is that high abstraction is easier in FP, low level is easier imperatively. – itsbruce Mar 09 '15 at 14:38
  • Should say that it's also possible, in either language mentioned, to resolve the choice of specific implementation automatically based on properties of the type rather than hard-coding specific matches, which pretty much delivers what the OP wants. In Haskell, type classes would be a key tool for this. In Standard ML, functors. – itsbruce Mar 09 '15 at 14:52
  • Replacing Golem with genie, like a lamp genie granting wishes, might make your example make more sense to non-LotR ppl, like me – BAR Mar 09 '15 at 15:22
  • @BAR Golem != Gollum. The Golem is from Jewish folklore. – Euphoric Mar 09 '15 at 15:47
  • @Euphoric Oh I see, will have to read that story. I thought it was a LotR ref from what one of the commenters said. – BAR Mar 09 '15 at 15:50
  • My takeaway from this answer is to write אמת on my laptop. – Dan J Mar 09 '15 at 16:20
  • @BAR: Or Terry Pratchett's "Feet of Clay". – jamesqf Mar 09 '15 at 17:38
  • If I'm reading this correctly: in short, humans are still better at building efficient algorithms because we are better at understanding the problem. – jpmc26 Mar 10 '15 at 19:06

In addition to Euphoric's excellent point, I'd like to add that we are already using declarative languages in many places where they work well, i.e. for describing state that isn't likely to change, or for requesting something for which the computer actually can generate efficient code on its own:

  • HTML declares what the content of a web page is.

  • CSS declares what various types of elements in a web page should look like.

  • Every relational database has a data definition language that declares what the structure of the database is.

  • SQL is much closer to declarative than imperative, since you tell it what you want to see and the database's query planner figures out how to actually make it happen.

  • One could argue that most configuration files (.vimrc, .profile, .bashrc, .gitconfig) are using a domain-specific language that's largely declarative.
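The SQL point can be seen in miniature with Python's built-in sqlite3 module (an illustrative sketch; the table and data are invented): the query describes which rows are wanted and in what order, and the engine's planner decides how to produce them.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, age INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("ann", 34), ("bob", 25), ("cat", 41)])

# Declarative: state the desired result set, not the scan/sort procedure.
rows = conn.execute(
    "SELECT name FROM users WHERE age > 30 ORDER BY age").fetchall()
print(rows)  # [('ann',), ('cat',)]
```

Whether the engine answers this with a full scan or an index lookup is its decision, not the query author's.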

Ixrec
  • I'll mention GNU Make, XSLT and Angular.js as widely used things which are also declarative (although Angular perhaps pushes the definition a bit). – Mark K Cowan Mar 09 '15 at 15:30
  • Let me add regular expressions to that list. – Schwern Mar 09 '15 at 22:15
  • People tend to forget that declarative languages are **common**. They're just not typically Turing-complete languages. Add regex to that list. – slebetman Mar 09 '15 at 23:55
  • A bit pedantic, but still: not every database has a DDL, just think of the huge number of schemaless NoSQL databases. Every _relational_ database might have one, but not every database. – dirkk Mar 10 '15 at 11:10
  • @dirkk Hadn't thought of that. Corrected my answer. – Ixrec Mar 10 '15 at 11:49

Abstractions are leaky

You can implement a declarative system where you declare what you want, and the compiler or interpreter figures out an order of execution. The theoretical benefit is that it frees you from having to think about the 'how'; you don't have to detail the implementation. In practice, though, for general-purpose computing you still have to think about the 'how' and write all kinds of tricks while keeping in mind how this will be implemented, since otherwise the compiler can (and often will) choose an implementation that is very, very, very slow (e.g. n! operations where n log n would suffice).

In your particular example, you will get *a* sorting algorithm - that doesn't mean you will get a good, or even a somewhat usable, one. Your definition, if implemented literally (as a compiler likely would), results in http://en.wikipedia.org/wiki/Bogosort , which is unusable for larger datasets - it is technically correct, but needs an eternity to sort a thousand numbers.
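To get a feeling for that "eternity", a quick Python check of the numbers involved (illustrative only):

```python
import math

n = 1000
# A permutation-searching "sort" may examine up to n! orderings.
print(len(str(math.factorial(n))))  # 2568 -- 1000! has over 2500 digits
# A deliberately chosen O(n log n) sort needs roughly this many comparisons:
print(round(n * math.log2(n)))      # 9966
```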

For some limited domains you can write systems that almost always do well in figuring out a good implementation - for example, SQL. For general-purpose computing that doesn't work particularly well - you can write systems in, say, Prolog, but you have to visualize how exactly your declarations will be converted into an imperative execution order in the end, and that loses much of the expected benefit of declarative programming.

Peteris
  • While what you say is essentially true, bad performance is not a sign of a leaky abstraction unless the interface/contract provides you guarantees, e.g. about execution time. – valenterry Mar 09 '15 at 12:35
  • Peteris is not saying that bad performance is a sign of a leaky abstraction, @valenterry. If anything, he is saying the opposite: that to achieve good performance, implementation details are *forced* to leak. – itsbruce Mar 09 '15 at 13:44
  • The headline should be renamed, then. Something like *Most abstractions are forced to be leaky to be performant*. Then I completely agree. – valenterry Mar 09 '15 at 14:55
  • Re "The theoretical benefit is that it frees you from having to think about the 'how'...": So how is this different from e.g. calling the qsort standard library function from C code? You don't have to think about how qsort is implemented (other than providing a comparison function, which you'd have to do somehow in the OP's example), and you're fairly well guaranteed to get an efficient routine. – jamesqf Mar 09 '15 at 18:41
  • I think it's misleading to say that abstractions are leaky just because you need to understand the implementation to understand how it affects performance. The purpose of an abstraction isn't to shield you from having to think about performance. – Doval Mar 09 '15 at 19:31
  • @jamesqf In declarative programming, you would just state that something is sorted. You could declare the sort order to be bound to some variable/property. And then it would be so. No need to explicitly call sort every time new data is added or the sort order changes. – hyde Mar 10 '15 at 05:23
  • @hyde: So I still fail to see how this changes/improves anything. You need to explicitly do something to specify the correct sort order/key (since those are arbitrary in principle), and the sort must be called regardless of whether it's done explicitly or automatically behind the scenes. It seems as though this "dream" is taking simple things and making them vastly complicated, to no net benefit - other than perhaps job security :-) – jamesqf Mar 10 '15 at 19:37
  • @jamesqf You can't really see the point without actually trying it yourself (I'd recommend QML of Qt for playing with declarative ideas). Imagine someone who only knows imperative programming trying to understand the point of OOP or functional programming without actually trying it out for real. – hyde Mar 10 '15 at 20:10
  • @hyde Job security is why people insist that the huge amount of boilerplate, defensive coding and copy-paste code forced by most imperative languages is necessary, rather than admit it to be a set of horrible warts. http://programmers.stackexchange.com/a/263295/67057 – itsbruce Mar 11 '15 at 11:07
  • But to answer your main point, declarative languages can allow a very clean separation between the definition of a type (data or a function) and the implementation. In a declarative *functional* language, you would still be defining various efficient sort algorithms, but those would be somewhere else (and entirely ignorant of any types you define). In yet another place you would map sorting algorithms to types for those contexts where sorting is required. In a logic language, on the other hand, you would iteratively refine your definitions to approach better performance. – itsbruce Mar 11 '15 at 11:22
  • The important point is that in declarative languages the high-level abstractions are *meaningful* and apply constraints which force the implementation to honour the intent. Imperative OO languages can only simulate that kind of abstraction, with tests being the only (very inadequate) proof of correctness. – itsbruce Mar 11 '15 at 11:29
  • Oops. Sorry Hyde, those comments were for @jamesqf – itsbruce Mar 11 '15 at 18:50

Computational decidability is the most important reason declarative programming has not proven to be as easy as it seems.

Many problems that are relatively easy to state have proven to be undecidable, or NP-complete to solve. This often occurs when we take negative classes and classification, countability and recursion into account.

I'd like to exemplify this with some domains that are well known.

Deciding which CSS class to use requires knowledge and consideration of all CSS rules. Adding a new rule might invalidate all other decisions. Negative CSS classes were intentionally not added to the language because of NP-complete problems, but their absence complicates CSS design decisions.

Within an (SQL) query optimizer, there is the nettlesome problem of deciding in which order to join, which indices to use and which memory to allocate to which temporary results. This is a known NP-complete problem and complicates database design and query formulation. To put it differently: when designing a database or a query, the designer needs to know the actions, and the order of actions, the query optimizer is likely to take. An experienced engineer needs knowledge of the heuristics used by the major database vendors.

Configuration files are declarative, but certain configurations are hard to declare. For example, to properly configure features one needs to take into account versioning, deployment (and deployment history), possible manual overrides and possible conflicts with other settings. Properly validating a configuration might itself become an NP-complete problem.

The result is that these complications take beginners by surprise, they break the 'beauty' of declarative programming and they cause some engineers to search for other solutions. The migration of inexperienced engineers from SQL to NoSQL might have been triggered by the underlying complexities of relational databases.

Dibbeke
  • "Negative CSS classes are intentionally not added to the language, because of NP-complete problems" -- can you elaborate? – John Dvorak Mar 09 '15 at 11:12
  • It's a bit of an exercise, but with negative CSS selectors it is possible to rewrite these as a 3SAT problem (with the last clause being the DOM), which would require trying all possible combinations to see if it matches. – Dibbeke Mar 09 '15 at 14:33
  • A small addition: in CSS 3 and 4, negative selectors are allowed, but :not pseudo-classes may not be nested. – Dibbeke Mar 09 '15 at 14:51

There are some very good answers. I will try to contribute to the discussion.

On the topic of declarative, logic programming in Prolog, there is the great book "The Craft of Prolog" by Richard O'Keefe. It is about writing efficient programs using a programming language that lets you write very inefficient programs. In this book, while discussing the efficient implementations of several algorithms (in the chapter "Methods of Programming"), the author takes the following approach:

  • define the problem in English
  • write a working solution that is as declarative as possible; usually, that means pretty much exactly what you have in your question, just correct Prolog
  • from there, take steps to refine the implementation to make it faster

The most enlightening (for me) observation I was able to make while working my way through these:

Yes, the final version of the implementation is much more efficient than the "declarative" specification the author started with. It is still very declarative, succinct, and easy to understand. What has happened in between is that the final solution captures properties of the problem to which the initial solution was oblivious.

In other words, while implementing a solution, we have used as much of our knowledge about the problem as we can. Compare:

Find a permutation of a list such that all elements are in ascending order

to:

Merging two sorted lists will result in a sorted list. Since there might be sublists that are already sorted, use these as a starting point, instead of sublists of length 1.
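That second description translates almost directly into code. Here is a hedged Python sketch of such a "natural" merge sort (the book works in Prolog; the names here are illustrative):

```python
def runs(xs):
    # Split xs into maximal ascending runs: [3, 4, 1, 2] -> [[3, 4], [1, 2]]
    out = []
    for x in xs:
        if out and out[-1][-1] <= x:
            out[-1].append(x)
        else:
            out.append([x])
    return out

def merge(a, b):
    # Merge two sorted lists into one sorted list.
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]

def natural_merge_sort(xs):
    # Start from the runs that are already sorted, instead of
    # sublists of length 1, and merge neighbours until one run remains.
    parts = runs(xs) or [[]]
    while len(parts) > 1:
        parts = [merge(parts[k], parts[k + 1]) if k + 1 < len(parts)
                 else parts[k]
                 for k in range(0, len(parts), 2)]
    return parts[0]

print(natural_merge_sort([3, 4, 1, 2, 5]))  # [1, 2, 3, 4, 5]
```

An already-sorted input is recognized as a single run and needs no merging at all - knowledge the permutation-based specification cannot exploit.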

A small aside: a definition like the one you have given is attractive because it is very general. However, I cannot escape the feeling that it purposefully ignores the fact that permutations are, well, a combinatorial problem. This is something we already know! This is not a criticism, just an observation.

As to the real question: how do we move forward? Well, one way is to provide the computer with as much knowledge about the problem we are declaring as we can.

The best attempt I know of to really solve the problem is presented in the books co-authored by Alexander Stepanov, "Elements of Programming" and "From Mathematics to Generic Programming". I am sadly not up to the task of summarizing (or even fully understanding) everything in these books. However, the approach there is to define efficient (or even optimal) library algorithms and data structures, under the provision that all relevant properties of the input are known in advance. The final result is:

  • Each well-defined transformation is a refinement of the constraints that are already in place (the properties that are known);
  • We let the computer decide which transformation is optimal based on the existing constraints.

As to why it hasn't quite happened yet, well, computer science is a really young field, and we are still coping with truly appreciating the novelty of most of it.

PS

To give you a taste of what I mean by "refining the implementation": take for example the easy problem of getting the last element of a list, in Prolog. The canonical declarative solution is to say:

last(List, Last) :-
    append(_, [Last], List).

Here, the declarative meaning of append/3 is:

List1AndList2 is the concatenation of List1 and List2

Since in the second argument to append/3 we have a list with only one element, and the first argument is ignored (the underscore), we get a split of the original list which discards the front of the list (List1 in the context of append/3) and demands that the back (List2 in the context of append/3) is indeed a list with only one element: so, it is the last element.

The actual implementation provided by SWI-Prolog, however, says:

last([X|Xs], Last) :-
    last_(Xs, X, Last).

last_([], Last, Last).
last_([X|Xs], _, Last) :-
    last_(Xs, X, Last).

This is still nicely declarative. Read top to bottom, it says:

The last element of a list only makes sense for a list of at least one element. The last element for a pair of the tail and the head of a list, then, is: the head, when the tail is empty, or the last of the non-empty tail.

The reason why this implementation is provided is to work around the practical issues surrounding the execution model of Prolog. Ideally, it shouldn't make a difference which implementation is used. Similarly, we could have said:

last(List, Last) :-
    reverse(List, [Last|_]).

The last element of a list is the first element of the reversed list.

If you want to get your fill of inconclusive discussions about what is good, declarative Prolog, just go through some of the questions and answers in the Prolog tag on Stack Overflow.

XXX
  • +1 for showing how a declarative design can progress from a simple abstraction to a more concrete implementation through an iterative design process. – itsbruce Mar 10 '15 at 14:35
  • @itsbruce Thank you, although the example I gave is trivial almost to the point of being useless. The examples in "The Craft of Prolog" are excellent, but of course it would take too much space and the point might get buried in the detail. – XXX Mar 10 '15 at 14:44
  • @Boris This is a good answer. That book is sitting on my bookshelf. It's probably time I opened it up. –  Mar 10 '15 at 17:09
  • @davidk01 One of the better books out there. It assumes that you are quite comfortable with Prolog and programming in general, but the approach it takes to programming is both pragmatic and very thorough. – XXX Mar 10 '15 at 18:54
  • @Boris I know the example is not complex, but the productivity of the iterative design process - a real strength of declarative languages - and its very practical value are crucial. Declarative languages offer a clear, consistent, *recursive* approach to iterative improvement here. Imperative languages do not. – itsbruce Mar 10 '15 at 23:49
  • +1 for "get your fill of inconclusive discussions about what is good, declarative Prolog"… very true that we tend to disagree! – Daniel Lyons Dec 09 '15 at 07:07

We already put differences in the declarativeness of programming languages to good use in the verification of digital logic.

Normally, digital logic is described at the register transfer level (RTL), where the logic level of the signals between registers is defined. To check it, we increasingly add properties defined in a more abstract, declarative manner.

One of the more declarative languages/language subsets is called PSL, for Property Specification Language. When testing an RTL model of a multiplier, in which, for example, all the shift and add operations over multiple clock cycles must be specified, you can write a property along the lines of: assert that when enable is high, this output will equal the product of these two inputs after no more than 8 clock cycles. The PSL description can then be checked together with the RTL in a simulation, or the PSL may be formally proved to hold for the RTL description.

The more declarative PSL model forces one to describe the same behaviour as the RTL description, but in a sufficiently different way that the two can be automatically checked against each other to see if they agree.

Paddy3118

Mostly the problem is how you model the data, and declarative programming isn't helping here. In imperative languages you already have tons of libraries that do lots of stuff for you, so you only need to know what to call. In a way, one might consider this declarative programming (probably the best example of this is the Stream API in Java 8). With this, the abstraction is already resolved and declarative programming isn't necessary.

Also, as has been said, logic programming languages already do exactly what you want. One might say the problem is performance, but with today's hardware and the research in this area, things can be improved to be ready for production use; Prolog is actually used for AI work, though I believe only in academia.

It should be noted that this applies to general-purpose programming languages. For domain-specific languages, declarative languages are way better; SQL is probably the best example.

Random42
  • Data modelling? You picked the thing imperatives are worst at. Declarative functional languages like Haskell and ML excel at data modelling. Algebraic data types and recursive data types, for example, can usually be defined comprehensively in one or two lines. Sure, you still have the functions to write but their code follows inexorably from the type definition and is constrained by it. Bizarre comparison to make. – itsbruce Mar 09 '15 at 12:17
  • Erlang is not similar to Prolog. Some syntax was borrowed but the semantics are very different. Erlang is also one of the least declarative of modern functional languages, making it not very relevant here. It is terrible for data modelling, though, by the standards of functional languages. Closer to imperative languages on that front. – itsbruce Mar 09 '15 at 12:30
  • @itsbruce The thing is most real data is not easily mapped to ADTs; think of how most databases work. As for Prolog - Erlang, you are right, they are different languages. I mentioned that one is functional while the other one is logical, but it's best if I delete the whole comparison. – Random42 Mar 09 '15 at 17:00
  • @m3th0dman A database is just a ton of tuples/records. Haskell's a bit crippled there, because it lacks records, but it does have tuples, and ML has both. And in Haskell's case the amount of boilerplate needed to declare a new pseudo-record datatype is still much smaller than what it takes to create a faux-struct in the average statically-typed OOP language. Can you elaborate on how most data is not easily mapped to ADTs? – Doval Mar 09 '15 at 17:13
  • @m3th0dman Ah, that's why database schemas are defined in an imperative language well suited to the task. Oh, no, that'd be declarative DDLs. In fact, the overall process of data modelling is relevant to the *application* that will work with it, the data flows and structures, not the language implementing the application. Sometimes those are distorted to match the OO features of a language and what its ORM supports but that's usually a bad thing, not a feature. Declarative languages are better suited to expressing the conceptual/logical data model. – itsbruce Mar 09 '15 at 21:13
  • Historically, declarative languages have been used primarily on the planning/design side of the data model, with imperative languages used on the application implementation side. But that's because imperative languages have traditionally been used for almost all practical implementations of everything. Historically, declarative languages were not helpful in many practical implementations, something which is changing. Imperative languages *still* offer only fragile guarantees of their data model implementation, almost entirely in the form of their ORM libraries. – itsbruce Mar 09 '15 at 21:26
  • @Doval As you said, mapping to algebraic data types is not easier than mapping to a typical struct or class, and I don't think this is a matter of verbosity. That's why declarative programming isn't helping here. Also, most database implementations (at least the most used ones) don't support algebraic data types. – Random42 Mar 10 '15 at 07:37
  • @itsbruce I already mentioned in the post that declarative languages are very good as DSLs and gave SQL as probably the best example. I don't understand what you're trying to suggest here. As for the problematic imperative model when handling data: keep in mind that the extensions to SQL that manipulate the data are procedural (TSQL, PLSQL etc.) and not declarative. As for ORMs, the problems appear from mapping the relational model to an abstract data type model; again, I don't see how declarative programming is helping here. – Random42 Mar 10 '15 at 07:41
  • I was questioning the assertion that data modelling is a strength of imperative languages. Data modelling is almost entirely orthogonal to the imperative language, which is used to implement part of the interface with the finished model, not create it. As for your new points - stored procedures are *not* the only way to update data in a database. There is *nothing* imperative about **delete * from users where name = ""**. The mapping issue *is* relevant but rather than stretch this too far, I'd just recommend "Out of the Tar Pit" as reading. http://shaffner.us/cs/papers/tarpit.pdf – itsbruce Mar 10 '15 at 14:11
  • @itsbruce: I looked at that paper, got about 1/3 of the way in, and couldn't keep from rolling my eyes when it started talking about creating "executable specifications," which, as Joel pointed out, is [the software equivalent of a perpetual motion machine... one of those things that crackpots keep trying to do, no matter how much you tell them it could never work.](http://www.joelonsoftware.com/items/2007/12/03.html) You got anything non-crazy to recommend on the subject? – Mason Wheeler Mar 10 '15 at 17:41
  • @itsbruce I wasn't saying that the procedural paradigm is better than the declarative one at defining data; I was saying that the declarative paradigm isn't better (nor worse) than the procedural one (for general-purpose languages). As for manipulating data, the declarative part of SQL is not enough for real-life applications; otherwise no one would have invented and used the procedural extensions. As for the article, I disagree with it from the abstract, where it contradicts Brooks; he built his ideas from real projects while those guys didn't build anything outstanding to prove their theory. – Random42 Mar 10 '15 at 21:08
  • Oh hai @MasonWheeler :) I already bowed out of these comments because they've gone on too long. Not even tempted by the fact you found a reason not to read far enough. Please do start another question (rather better defined than this one, pls) and invite us all in. – itsbruce Mar 10 '15 at 23:35
  • @m3th0dman I am not here to champion declarative languages, just to raise the well-known problem of the impedance mismatch: https://en.wikipedia.org/wiki/Object-relational_impedance_mismatch . It tickled me that you cited as a strength a notorious Achilles heel. – itsbruce Mar 10 '15 at 23:47

It would look something like this: `{(whatever => read a file & call a url) | call a url & read a file}`. However, these are actions to execute, and the state of the system changes as a result, but that is not obvious from the source.

Declaratives can describe a finite state machine and its transitions. The FSM is like the opposite of declaratives without actions, even if the only action is to change state to the next state.

The advantage of using this method is that transitions and actions can be specified by predicates that apply to multiple transitions, rather than just one.

I know this sounds a little strange, but in 2008 I wrote a program generator that uses this method; the generated C++ is 2 to 15 times the size of the source. I now have over 75,000 lines of C++ from 20,000 lines of input. Two things go with this: consistency and completeness.

Consistency: No two predicates that can be true at the same time may imply inconsistent actions, such as both x = 8 and x = 9, nor imply different next states.

Completeness: The logic for every state transition is specified. This may be difficult to check for systems with N substates, and thus more than 2**N states, but there are interesting combinatorial methods that can verify everything. In 1962 I wrote phase 1 of a system sort for the 7070 machines using this kind of conditional code generation and combinatorial debugging. Of the 8,000 lines in the sort, the number of bugs found from the day of the first release onward was zero!

Phase two of the sort, 12,000 lines, had over 60 errors in its first two months. There is much more to say about this, but it works. If car manufacturers used this method to check their firmware, we would not see the failures we see now.

  • 1
    This really doesn't answer the original question. How do consistency and completeness factor into the fact that most programming is still procedural, not declarative? – Jay Elston Mar 11 '15 at 00:46
  • Your opening paragraph seems to be an answer to a point in arnaud's answer http://programmers.stackexchange.com/a/275839/67057, not to the question itself. It should be a comment there (on my screen, your answer is no longer below his, for one thing). I *think* the rest of your answer is an illustration of how a small amount of declarative code could generate a large amount of equivalent imperative code but it isn't clear. Your answer needs some tidying, particularly with regard to the salient points. – itsbruce Mar 11 '15 at 12:54
-2

Not everything can be represented in a declarative way.

Often, you explicitly want to control the flow of execution.

For example, take the following pseudo-code:

    if whatever
        read a file
        call an URL
    else
        call an URL
        write a file

How would you represent it declaratively?

Sure, there are probably some exotic ways to do it, like monads. But these are usually more cumbersome, complicated and far less intuitive than their procedural counterparts.

It all boils down to the fact that "interacting" with your environment/system is not declarative. Everything related to I/O is procedural in essence: you have to say exactly what should happen, when, and in what order.

Declarative is great for everything that is purely computational: like a giant function, you put X in and you get Y. An example of this is HTML: the input is text, the output is what you see in your browser.
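A small Python sketch of this "X in, Y out" view (an illustration, with a made-up task, not part of the original answer): the first version describes *what* the result is, the second spells out *how* to build it step by step.

```python
# Declarative flavour: state what the result is.
def evens_squared(xs):
    return [x * x for x in xs if x % 2 == 0]

# Procedural flavour: state how to construct it, step by step.
def evens_squared_proc(xs):
    result = []
    for x in xs:
        if x % 2 == 0:
            result.append(x * x)
    return result
```

Both produce the same output (for example, `[4, 16]` from `[1, 2, 3, 4]`); the difference is purely in whether the order of operations is spelled out.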

dagnelies
  • 5,415
  • 3
  • 20
  • 26
  • 2
    I don't buy this. Why is your example *not* declarative? Is it the `if`/`else`, in which case what would a declarative alternative look like? It's certainly not the `read`/`write`/`call` parts, since they're nice declarative lists of values (if you're implying they're wrapped in `{...; ...}`, why not imply they're wrapped in `[..., ...]`?) Of course, list are just free monoids; many others would do too. I don't see why monads are relevant here; they're just an API. Haskell went from streams -> monads to aid manual composition, but logic languages can compose lists in order automatically. – Warbo Mar 10 '15 at 13:39
  • 2
    -1 for Monads alone. 1. They're not really exotic (lists and sets are Monads and everybody uses them). 2. They have nothing to do with forcing things to be done in a specific sequence (Haskell **do** notation *looks* imperative but isn't). Declarative/functional languages specify relationships and dependencies. If function X needs input Y, Y will be generated before X. Get the dependencies right and the proper sequence of events will define itself. Much interaction is *event* driven, not in one set sequence. Declarative languages do not make reacting to events harder. – itsbruce Mar 10 '15 at 13:49
  • Laziness complicates some of that but laziness is not part of the definition of Declarative languages, most of which do not use it. And in those that do, the way to guarantee evaluation has nothing to do with monads. For an example of where a declarative language is used exclusively for interaction rather than abstract computation, doesn't specify order but makes damn sure the right things happen in sequence, look no further than the Puppet DSL. Which has the bonus that only the necessary things happen - not something imperative languages make easy. – itsbruce Mar 10 '15 at 14:14
  • 1
    In addition to @itsbruce example reactive programming is consider declarative and is about interaction with environment. – Maciej Piechotka Mar 10 '15 at 15:35