
From my understanding, in declarative programming the programmer only needs to define the end result, not how to compute it. But for that result to actually be computed, the language must provide pre-defined functions that instruct the machine, step by step, how to compute it - so that the user has something to use in the first place. Hence the implementation is imperative.

So is declarative programming just imperative programming 'under the hood'? If so, how do we differentiate the two?

And if I happen to write a very complex program (or any program at all), at some point I will have to give step-by-step instructions. Is there any way I can get around this and use the declarative approach instead?

Loc
  • **Everything** is just processors executing opcodes "under the hood". The entire point of software engineering is to design better hoods, so that humans can deal with code in a format better suited to our perceptual system than opcode streams. – Kilian Foth Aug 21 '23 at 10:38
  • The question "how to distinguish them?" is a strange question since you already know the answer. A declarative language is differentiated from an imperative language by whether it declares an intention or specifies a command. That's how you distinguish them. Languages are distinguished by their characteristics, not by their implementation details. – Eric Lippert Aug 21 '23 at 20:14
  • I mean, we could turn your question around just as easily. Every imperative language is just a description of the desired evolution of an abstract machine; we "declare" what changes we want that machine to undergo, and the compiler spits out a program that simulates that abstract machine. So, isn't every imperative language really declarative then? This is just playing with semantics. – Eric Lippert Aug 21 '23 at 20:17
  • This question needs a lot more clarity before it can have a concise answer. What are you really asking here? – Eric Lippert Aug 21 '23 at 20:18
  • Thanks @EricLippert, I do understand it more now. Going by the definitions, I thought declarative and imperative differed in the level of abstraction of the language, and that it was ambiguous whether a language is one or the other. But that is wrong: as the other answers point out, there is a difference in the features the language provides, and we can easily tell them apart. – Loc Aug 22 '23 at 03:51
  • For an interesting view from the opposite side, [HDL](https://en.wikipedia.org/wiki/Hardware_description_language)s often have imperative features that the compiler converts into a declarative hardware structure. – jpa Aug 22 '23 at 05:42
  • You can always convert code to data, and back the other direction. Remember that a program (machine code) is just data interpreted by a CPU. So all code is effectively data. It also works on higher-level languages; maybe look for `eval`. – U. Windl Aug 22 '23 at 06:57
  • One of the most important concepts to learn in any engineering practice is "layers of abstraction". – keshlam Aug 24 '23 at 05:11

7 Answers


At a lower level, every(?) CPU in current use is essentially imperative, so yes, everything has to be imperative at some level.

However, to some extent that's not really important: everything is an abstraction, and the real question is "is this abstraction useful?" We certainly find that declarative programming is a useful abstraction in many circumstances. Sure, if you're actually implementing a declarative programming language you may have to write some imperative code, but that's the same as how, if you're implementing an imperative language, at some point you have to write some assembly. And if you're implementing a CPU, at some point you have to deal with transistors etc.

Philip Kendall
  • LawnMowerMan's answer asks "Can I enumerate exactly when and where this expression will be evaluated, even with respect to other expressions"? For every major CPU, the answer is no. Out-of-Order Execution is pretty much a given at Gigahertz speeds. But OOE is hidden at a *higher* layer. And the immutable results? See register renaming, another optimization. Even if you write twice to `EAX`, the results can still end up in different physical registers. – MSalters Aug 22 '23 at 12:19
  • @MSalters And indeed from the perspective of a CPU the instruction set is just a declarative language for describing a data flow graph even though it resembles an imperative language (and once was evaluated in exactly that manner). A *truly* imperative language precludes the possibility of optimization, hence they tend to evolve into declarative languages over time. So you could even say "imperative programming is declarative under the hood"! – Mario Carneiro Aug 22 '23 at 21:47
  • Who said you have to evaluate declarative programs on CPUs? – einpoklum Aug 23 '23 at 08:57
  • @einpoklum _Practically_ that is what people do >99.9% of the time (acknowledging things like VHDL as mentioned in Jiří's answer) – Philip Kendall Aug 23 '23 at 10:54

Differences

So is declarative programming just imperative programming 'under the hood'? If so, how do we differentiate the two?

Two of the major differences between declarative and imperative programming are mutation and eager evaluation.

Mutation

If you are writing some code, just ask yourself: "After I define variable x, am I able to reassign it to something else?" If the answer is: "No, not at all", then you are probably working in a declarative language.
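
A rough sketch of this test in SQL (which this answer comes back to below; the tables and names here are hypothetical): a common table expression is the closest thing to a "variable", and once defined it cannot be reassigned.

```sql
-- Hypothetical tables and names, for illustration only.
WITH high_value_orders AS (
    SELECT order_id, customer_id, total
    FROM orders
    WHERE total > 1000
)
-- high_value_orders is now a fixed, named result set. Nothing in
-- the language lets you "reassign" it; writing a second
-- "WITH high_value_orders AS ..." in the same query is an error,
-- not a mutation.
SELECT customer_id, COUNT(*) AS big_orders
FROM high_value_orders
GROUP BY customer_id;
```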

Eager Evaluation

If you write an expression, ask yourself: "Can I enumerate exactly when and where this expression will be evaluated, even with respect to other expressions?" If the answer is "No, the language decides that because the order does not change the output", then you are probably using a declarative language.
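
Again a sketch in SQL (hypothetical schema): the language gives you no way to pin down when, or even whether, each predicate is evaluated.

```sql
-- Hypothetical schema, for illustration only.
SELECT name
FROM accounts
WHERE status = 'active'
  AND balance / NULLIF(transaction_count, 0) > 100;
-- Nothing promises that status = 'active' is checked "first" and
-- the division "second": the planner may reorder the predicates,
-- push one into an index scan, or never evaluate them for rows it
-- can rule out, as long as the result set is the same.
```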

Limits

And if I happen to write a very complex program (or any program at all), at some point I will have to give step-by-step instructions. Is there any way I can get around this and use the declarative approach instead?

Well, a more precise way to say this is that every computation produces an output that depends upon its inputs (though a write-only program may have the empty set as its input). And at the minimum, the programmer must specify what parts of the output depend on which parts of the input. You can't properly specify a computation without providing this information. What you may or may not need to do, however, is specify the order in which those evaluations take place.

Let's consider a mostly declarative language that hopefully a lot of people are familiar with: SQL. Now, UPDATE and DELETE queries make it imperative, so let's just consider the subset using SELECT and INSERT. You can perform some fairly complex calculations with just these primitives. And you can perform them on very large tables. If they are indexed properly, something somewhat magical happens: when you run the query, you get the result in a reasonable amount of time, even though you didn't specify the manner in which the tables should be joined.
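
As a sketch of what such a query can look like (hypothetical tables, assumed to be indexed on the join columns):

```sql
-- Hypothetical tables; assume indexes on the join columns.
SELECT c.region, SUM(i.quantity * i.unit_price) AS revenue
FROM customers c
JOIN orders o ON o.customer_id = c.customer_id
JOIN order_items i ON i.order_id = o.order_id
WHERE o.placed_on >= DATE '2023-01-01'
GROUP BY c.region;
-- Nothing here says which table to scan first, which join
-- algorithm (hash, merge, nested loop) to use, or which index to
-- read; the query planner decides all of that.
```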

You can somewhat force a join order by using sub-select statements, but this would almost never be considered best practice. In general, it is enough to tell the query engine which tables contain the rows of interest, and let it decide how to join those tables to produce the desired result. At no point are you ever required to direct the query engine to join tables in a particular order, or to use one index or another.
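
For concreteness, a sketch of that sub-select trick (same hypothetical tables as above); note that many planners are still free to flatten it:

```sql
-- Aggregating in a derived table first *suggests* an order:
SELECT c.region, recent.total
FROM (
    SELECT customer_id, SUM(total) AS total
    FROM orders
    WHERE placed_on >= DATE '2023-01-01'
    GROUP BY customer_id
) AS recent
JOIN customers c ON c.customer_id = recent.customer_id;
```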

At the same time, you can investigate whether the query engine is doing a good job by running EXPLAIN PLAN. This lifts the hood so you can see the "imperative engine running inside". But if you follow best practices for creating indices and regularly updating statistics, then this is not necessary. It doesn't matter whether you are querying 10 rows from a table with 1000 or querying tens of thousands of rows from a join of dozens of tables. The entire goal of SQL is to remove the need for the programmer to tell the query engine how to do its job.
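
The exact spelling varies by engine (EXPLAIN PLAN FOR in Oracle, plain EXPLAIN in PostgreSQL and MySQL), but the idea is a one-keyword prefix, sketched here for the hypothetical query above:

```sql
-- PostgreSQL/MySQL style; Oracle uses EXPLAIN PLAN FOR ... .
EXPLAIN
SELECT c.region, SUM(i.quantity * i.unit_price) AS revenue
FROM customers c
JOIN orders o ON o.customer_id = c.customer_id
JOIN order_items i ON i.order_id = o.order_id
WHERE o.placed_on >= DATE '2023-01-01'
GROUP BY c.region;
-- The output is the imperative part made visible: scan this index,
-- hash-join these inputs, then aggregate, in that order.
```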

A similar process is at work in other declarative languages.

Lawnmower Man
  • SQL is a great example because compilation is not just machine/platform dependent, it is also data dependent and the same query running on the same database can become different imperative instructions depending on parameters or how much data there is at runtime. – Cong Chen Aug 21 '23 at 20:20
  • If you're allowed to interleave `SELECT` and `INSERT` queries, you can still do "imperative" programming in SQL. (`INSERT` mutates global state, after all.) But I assume you mean something like "using only `SELECT` (and `INSERT` to initially populate the database)". – Ilmari Karonen Aug 22 '23 at 11:00
  • *"The entire goal of SQL is to remove the need for the programmer to tell the query engine how to do its job."* - isn't that really true for every language? You use C to avoid telling the combined compiler/machine instruction engine how to do its job, which registers to use, etc. – Steve Aug 22 '23 at 12:49
  • @IlmariKaronen Maybe restrict it to `SELECT`, `VALUES`, and `CREATE TABLE AS …` – Bergi Aug 22 '23 at 14:32
  • @Steve That's the "semantics" that many mentioned in comments above. But we generally view imperative languages like C as specifying a sequence of actions to be performed in order. "Add this, then subtract that, then print the output". While it's true that we don't specify "how" to add/subtract/print, those are viewed as simple primitives. There's an enormous difference in abstraction level when you get to a SQL join. – Barmar Aug 22 '23 at 14:48
  • @Barmar, SQL specifies a sequence of actions to be performed in order. The optimisations then occur within those constraints, like any programming language. The evaluation order is just somewhat non-obvious in SQL. Clearly SQL traces its roots to a wrongheaded notion that evaluation order didn't matter - and originally when it was only inner joins, which are associative and commutative, and when there were no left joins or nested queries or anything else, it truly didn't matter - but that kind of thinking is all superseded now. – Steve Aug 22 '23 at 15:39
  • @Steve If you say "SELECT * FROM t1 JOIN t2 on t1.col = t2.col" it doesn't say whether you should loop over t1 finding the matching rows from t2, or vice versa. The SQL query planner will decide the order based on the indexes available and the current contents of the tables. – Barmar Aug 22 '23 at 15:43
  • Of course, sometimes it doesn't optimize queries well, and we do have to break up into separate queries (e.g. to create temporary tables), which are executed sequentially. But it's declarative within a specific query. – Barmar Aug 22 '23 at 15:44
  • @Barmar, yes but that's *exactly* how optimising compilers work in ordinary languages. You say `a = b + c; d = e + f;` in C, and you've no idea what registers are used, whether the statements execute in reverse order, whether a function call is inlined, whatever. And that's before you even talk about pre-emptive multi-tasking at the OS level, branch prediction at the CPU hardware level, the list is endless of things that are fiddled with in the name of optimisation and systemic balancing. – Steve Aug 22 '23 at 15:48
  • @Steve What it compiles to is irrelevant. We're talking about the abstraction level of the source code. When you write sequential statements in C, you think of them as happening in order. But when you write SQL, there's no order (no one thinks `t1 JOIN t2` is different from `t2 JOIN t1`). – Barmar Aug 22 '23 at 15:51
  • @Barmar, I'm an expert user of SQL, and whilst I don't deny the technology is the most fancy of its kind, there's nothing about it which is alien to other languages. When I write SQL, I think of it as happening in order. As I say, the idea that ordering doesn't matter dates from when the only item in the armoury was inner join - which, because it is associative and commutative, can be specified in any order with no consequence for the meaning of the query. That is not the case nowadays. With left joins, for example, you have to keep track of the ordering. – Steve Aug 22 '23 at 16:00
  • @Steve Perhaps a better example would be Prolog. – Barmar Aug 22 '23 at 16:01
  • @Steve The order in which you *write* left joins matters not because it determines the chronological sequence of their execution, but because it determines the *logical* relationship between them. The query planner might well choose an execution order where later tables are fetched first, then matched and filtered to give the appropriate result. If every access to a row in your database echoed a message, you would *not* see the order of messages consistently matching the order of your query. – IMSoP Aug 22 '23 at 18:34
  • @IMSoP, I agree, but *that's the same for any other language nowadays*. *All* languages and systems deal in merely "logical" order. – Steve Aug 22 '23 at 18:43
  • Eager evaluation can also happen in imperative languages, even at the hardware level, but it has a different name: https://en.wikipedia.org/wiki/Out-of-order_execution. – Razvan Socol Aug 22 '23 at 19:16
  • @Steve I didn't say logical *order*, I said logical *relationship* - the relationship expressed by LEFT JOIN (and, in a different order, by RIGHT JOIN) is a *dependency* between two or more tables or sub-queries; the full set of joins in a query doesn't form a sequence, it forms a partially-directed graph (where inner joins are bidirectional relationships). That isn't the same as saying that two calls to printf() will produce output in a specific order. – IMSoP Aug 22 '23 at 19:25
  • @IMSoP, I think these words are just synonyms. When I write SQL joins, I'm simply specifying a series of array operators. The joins *do* form a sequence - if I wrote them in a different order, the results would often be different. I'm aware of certain algebraic properties but that doesn't make inner joins "unordered", it makes them reorderable. In the same way that `2 + 3` in basic arithmetic is not an "unordered" expression, it's an algebraically reorderable expression because of the properties of the operator involved (as `3 + 2`). (1/2) – Steve Aug 23 '23 at 07:24
  • As a programmer, the ordering often carries more information than just for the use of the engine, but reflects crucial information which the engine cannot analyse but which I as the programmer can and do analyse as part of designing the query correctly. This imposes a latent meaning on the written order which cannot be algebraically rearranged without impairing the code as an artefact that speaks to the programmer (not the machine). (2/2) – Steve Aug 23 '23 at 07:25
  • @Steve OK, let's go with "reorderable" then. We could say that a *declarative* language is one which is designed to maximise the number of reorderable operations, by encouraging the programmer to think in an abstraction where the exact order of operations is not important. As you say, you are free to arrange the query in a way that helps you understand it, because (within certain bounds) the order in the source code does not represent a *chronological* order. In contrast, an imperative language is one that encourages the programmer to think of operations as happening in a fixed order. – IMSoP Aug 23 '23 at 08:45
  • @IMSoP it is not black and white. Some languages (or frameworks, or packages) are more declarative than others. There are probably packages in C which have a very declarative interface. I think it would be correct to say SQL is designed to be more declarative than C, because most SQL code is obviously reorderable (and many think about it in set operations) while most C code looks very sequential (and many think about it as ordered execution) – Falco Aug 23 '23 at 10:10
  • @Steve while you are right that the compiler will reorder and optimize statements, the programmer usually thinks about the average C code as a sequential ordered execution of statements - and the compiler tries to optimize without breaking this expectation. While the average SQL programmer will not think about the order in which the referenced tables are accessed at all. I think this expectation stems from the language syntax and makes SQL more declarative than C – Falco Aug 23 '23 at 10:13
  • @IMSoP, I think we're getting somewhere here. What you mean by "declarative" is a language designed to maximise the desirable algebraic properties of its operators, and therefore enable a larger scope for rearrangement and optimisation. SQL is the pre-eminent example of this. But it's not something absent from other languages - block structured programming, for example, was driven partly to enable better compiler optimisation (although definitely second priority to the human factors arguments). – Steve Aug 23 '23 at 12:33
  • @Falco, personally I do approach SQL in an ordered way. I think of a query as describing an algorithm. It's true I don't usually concern myself with the actual order of access (unless there are problems), but I do concern myself with the logical order of access. You can't analyse the effect of each join on cardinality otherwise - in fact, losing control of this analysis and allowing joins to either multiply or eliminate rows unintentionally, is one of the more common errors I've seen. I don't think practitioners are well-advised at all to think of ordering as an unimportant aspect of design. – Steve Aug 23 '23 at 12:45
  • @Steve Firstly, you're still mixing two types of "order": using the order of source code to express a dependency graph is *not the same* as using the order of source code to express execution order. Secondly, I think your focus on optimisation is misplaced. The primary motivation for language designs is *allowing the programmer to express themselves to the computer*. A declarative language is one where *the programmer* can specify their desired result in terms of constraints and relationships, rather than in terms of sequential operations. – IMSoP Aug 23 '23 at 13:45
  • @IMSoP, I don't think I'm confusing between the "execution order" and the "logical order" (or whatever term is preferred for that thing which corresponds to the order in source code, but isn't the actual execution order). The crucial thing about the "logical order" is that, for me, it describes a canonical execution order - any other execution order which the engine actually chooses, must produce the same results as if the canonical execution order had been followed. I can't emphasise enough that when I write SQL, I see myself as specifying sequential instructions, in the ordinary way. – Steve Aug 23 '23 at 14:52
  • @Steve I agree that the line is not 100% black and white, and "mostly declarative" languages can have elements and uses that "feel imperative", and vice versa. However, SQL is optimised for expressing things like "these 5 sets of data are related by these relationships, calculate this subset using the most efficient means possible"; trying to express that in plain C would be incredibly difficult. I would argue that *the reason* for that difficulty is that C is designed for writing imperative programs, and you would first need to build a declarative language, and an executor for that language. – IMSoP Aug 23 '23 at 15:12
  • @Steve: Consider `SELECT SUM(maxima.field) FROM (SELECT MAX(t.field) field FROM t GROUP BY t.key) maxima`. Here it's clear that the order in which you write the expressions can't possibly be the order in which the SQL engine actually carries them out; even if it explicitly runs the subquery as a separate/distinct step, it has to run that step *before* computing the sum! – ruakh Aug 23 '23 at 17:33
  • @Steve: Also, a better example with `LEFT OUTER JOIN` is `SELECT ... FROM (SELECT ...) LEFT OUTER JOIN (SELECT ...) ON ...`: note that you don't care which subquery is executed first, because it can't affect the result. (Queries can't have side-effects.) That's different from C, where statements are only executed for their side-effects, so you very often care about ordering. – ruakh Aug 23 '23 at 18:23
  • @ruakh, I'm not seeing the point there. Everyone knows the evaluation order in SQL is not top-to-bottom. The inner query evaluates as a one-column table `maxima(field)`, and this then forms the sole input to the result, which summarises into an unnamed scalar. – Steve Aug 23 '23 at 18:36

To expand on already existing answers: there are languages like VHDL which are mainly for writing declarative code, and they can produce truly declarative output in the form of hardware, which executes "everything at once", so the order of execution is irrelevant.

On the other hand, there exists high-level synthesis, which takes an imperative language like C and produces hardware as the final output, so imperative programming languages can be "declarative under the hood".

So the answer is that imperative/declarative code can be imperative/declarative under the hood. You have to differentiate the languages by their features. You can mostly tell a language is declarative by the fact that you can't reassign variables (or that doing so doesn't make sense) and that it doesn't have loops in the normal sense.

Jiří

All programming languages are essentially abstractions for humans to express problems, and automate the solution to those problems. The difference between declarative and imperative is in how they allow and encourage programmers to express their problems, and as a consequence how the solutions to those problems are automated.

The existence of a lower-level implementation written in a different abstraction is irrelevant, except to the extent that practical implementations do not form perfect abstractions, so "leak" details all the way down to electrical interactions.

In an imperative language, the abstraction provided by the language is that of a sequence of actions, specified in chronological order. The language definition will generally guarantee that the observed behaviour of the program is consistent with that order of operations. At a lower level, an optimising compiler might rearrange some of those operations where it can prove it will not change the observed result; and a processor may evaluate branches in advance and discard the result if that path is not taken. Those abstractions may leak, leading to bugs and vulnerabilities, but the primary abstraction of the language is that things happen in order.

In a declarative language, the abstraction provided is that of a set of declarations, which may use source code order to define dependencies and other relationships, but are not laid out in a strict chronological order. The compiler or execution engine has much more freedom to rearrange the order of operations, or choose between multiple implementations of a particular operation, to achieve the requested result. Generally, such languages are modelled as having immutable data structures and side-effect free operations - the program is more closely related to a mathematical proof than a recipe. Again, the abstraction might leak, and the programmer may need to force a particular order of operations, but the primary abstraction of the language is that statements are re-orderable.
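
SQL (discussed at length in the comments above) makes this concrete. In this sketch (hypothetical tables `a` and `b`), the two queries are the same declaration written in different orders, and the engine may execute either of them however it likes:

```sql
-- Both queries declare the same result; neither dictates which
-- table is physically read first.
SELECT a.id, b.note FROM a JOIN b ON b.a_id = a.id WHERE b.flag = 1;
SELECT a.id, b.note FROM b JOIN a ON b.a_id = a.id WHERE b.flag = 1;
```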

As with most categorisations, these are not black and white: a language can incorporate features of both styles, and be "more declarative" or "more imperative". A C program written in a Functional Programming style might allow a compiler to make more assumptions than one written in a traditional Procedural style, but most C programs contain a large amount of code that is imperative - i.e. reasoned about as an ordered sequence of operations.

IMSoP

A programming language is only a means to express some abstract computational solution, independently of how the language's abstractions will be implemented.

So no, it's not just imperative under the hood. The language implementation may transform the higher-level language into imperative code. Or it may apply imperative code to deal with the non-imperative stuff (e.g. term rewriting, unification, etc.). But you don't have to care; you just have to select the most suitable language for the kind of problems you're trying to solve.

The fact that every language ends up being executed as machine code does not mean that every programming language is just machine code under the hood.

Christophe

Is declarative programming just imperative programming 'under the hood'? If so, how do we differentiate the two?

Yes, declarative programming is imperative programming 'under the hood'. In fact, all declarative programming languages are implemented in an imperative programming language (directly or indirectly).

But to differentiate between them you should not look 'under the hood'. You should look at the controls, from the point of view of the driver. Does it have a steering wheel? If yes, then it's a normal car. Do you enter the destination on a map and then just wait to get there? Then it's a fully autonomous car.

It's the same way with programming languages: do you specify each step that needs to be done? Then it's an imperative language. Do you just specify the characteristics of the output, without specifying how to compute the results? Then it's a declarative language.
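
As a sketch (hypothetical `employees` table), here is the same task in both styles, with the step-by-step version left as comments since SQL itself only expresses the declarative one:

```sql
-- Declarative: state the characteristics of the output.
SELECT department, MAX(salary) AS top_salary
FROM employees
GROUP BY department;

-- Imperative (in pseudo-steps): create an empty map; for each row
-- in employees, look up its department; if absent, insert
-- (department, salary); else if salary is larger, overwrite;
-- finally loop over the map and emit each pair.
```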

Razvan Socol

I'd argue that the difference between imperative and declarative is simply the presence of an optimising compiler.

I'd also argue that nowadays the distinction isn't a useful one.

The classic example given of a declarative language is SQL, and that's because once upon a time its optimising capabilities and execution engine had no recognisable analogy in languages deemed imperative.

But that kind of talk is old hat. Nowadays, powerful optimising compilers are completely ordinary amongst basically all languages (certainly commercial compilers for mainstream languages), as are various kinds of execution engines which supervise and alter their own workings dynamically to improve efficiency.

Steve
  • Optimising compilers can only rearrange those parts of a language which can be proven to be free of side-effects - that is, the parts of the language which are in fact declarative. What we generally mean by an "imperative language" is one where large amounts of code will not be in that category. Even a mostly-declarative language like an SQL dialect may have parts with side effects, which are therefore imperative; this then limits the optimisation capabilities of the compiler, see e.g. [PostgreSQL's function volatility categories](https://www.postgresql.org/docs/current/xfunc-volatility.html). – IMSoP Aug 22 '23 at 11:20
  • @IMSoP, few things in the database world are side-effect free. Even a plain query causes writes, and is capable of deadlocking or degrading performance in a way that has non-local effects. A fully-declarative language, if "declarative" means an absence of "side effects", is synonymous with static data. Most programming languages are concerned with specifying computation/algorithms, not describing static data. – Steve Aug 22 '23 at 12:40
  • Of course, in reality everything can be considered to have side effects, just as [every change can be considered a breaking change](https://xkcd.com/1172/), but what I'm talking about is *the abstractions and guarantees that the language provides*. In many languages, the expression `( doSomething() && doSomethingElse() )` is guaranteed not only to execute the two functions in the specified order, but to only execute the second if the first is true; that is not something that can be changed by a better optimising compiler for the same language, it is part of the language design. – IMSoP Aug 22 '23 at 16:07
  • In contrast, a declarative language would not make that guarantee, because *the abstraction it is based on* is that functions will be "pure", and the only thing that matters is the output for a given input. So a compiler for *that* language *could* optimise the expression `( doSomething() && doSomethingElse() )` by re-ordering the function calls, running them in parallel, or caching their results. In practice, it might not produce any better performance, but that doesn't mean there is not a useful distinction between the way the two languages are designed. – IMSoP Aug 22 '23 at 16:13
  • @IMSoP, there's no compiler that guarantees not to optimise. What you call "guarantees" are often just de-facto behaviour, which will be overhauled tomorrow if some part of the system becomes sophisticated enough to analyse whether an optimisation is possible. For example, most branches are guaranteed not to execute unless the test passes - but then CPUs execute them in advance anyway, and discard the results if the test fails. There's simply nothing sacred anymore. – Steve Aug 22 '23 at 18:26
  • While it is true that an optimizing compiler can detect portions of imperative code which can be handled as-if it were declarative, there are strong limits to the extent this can go. In the other direction, immutability in a pure (declarative) language can make some optimizations like in-place sorting complicated to impossible. A naive programmer would not even know it's an issue. So I think that saying the distinction isn't useful is overstating the case. ChatGPT is definitely that naive programmer that is happy to give elegant and poorly-performing code in many languages. – Lawnmower Man Aug 22 '23 at 19:43
  • Again, I think it's about levels of abstraction: language standards absolutely do make guarantees about execution order, at a level that can be observed by the author of the program. A C program compiled to a quantum computer would still have to produce output consistent with that order, even if it also computed all possible incorrect outputs at the same time. The question is whether *the way the programmer interacts with the language* uses the concept of "this, then that, then the other" (imperative), or "this, also that, relatedly the other" (declarative). – IMSoP Aug 22 '23 at 19:46
  • @IMSoP no, the language only promises that the state of the abstract machine is *as-if* `( doSomething() && doSomethingElse() )` happened in order. That might involve speculative execution of (parts of) `doSomethingElse` before `doSomething` is done. – Caleth Aug 23 '23 at 09:42
  • @Caleth That's exactly the point of my quantum computer example - that the *observed behaviour* is guaranteed to have a particular order, even if *at a lower level* there is other stuff going on. Programming languages are abstractions for humans to express problems, and automate the solution to those problems; the difference between declarative and imperative is not in how they are automated, it's in how the problems are expressed. The fact that those different ways of expressing problems might or might not lead to different ways of automating the processing is, in a sense, coincidental. – IMSoP Aug 23 '23 at 10:04
  • @IMSoP, I think the problem is that I find SQL can be fully reduced to an imperative interpretation. I view a query as essentially specifying a series of imperative operations. The engine can reorder, but only if the results would be "the same" as what I have already specified imperatively. Another aspect is that I treat source code not just as machine instructions, but as something that often encodes things that aren't important for the machine, but does say something to the reader or corresponds to my thinking. Code usually contains only solutions, not all of "the problem" as I know it. – Steve Aug 23 '23 at 13:05
  • "source code ... encodes things that aren't important for the machine" - precisely! The fact that "under the hood", the order of operations is more or less fixed isn't the relevant distinction, the distinction is *how you express yourself* in the language. As an extreme example, take a mathematical theorem solver: the input is a set of axioms and constraints; the order of operations that leads to the proof is the desired *output*. You *could* write an ad hoc imperative program to produce each proof, but the declarative syntax means you can concentrate on expressing the parts that matter. – IMSoP Aug 23 '23 at 13:50
  • Similarly, you *could* write an SQL statement as a series of operations: fetch, filter, map, reduce, project, etc. However, the language doesn't *require* you to do so, it allows you to use `WHERE` and `ON` to specify conditions in any order; to rearrange inner joins to match your mental model of the data; to mix left and right outer joins; and so on. The freedom for the *programmer* to rearrange the query so that it "corresponds to their thinking" is the original aim; the freedom for the *compiler* to rearrange it a different way to optimise performance is a *consequence* of that aim. – IMSoP Aug 23 '23 at 13:57
  • @IMSoP, I do write it as a series of operations - a series that suits my intention. I'm unable to understand what it means not to specify things as a series of operations. A particular order of operations is not always crucial or carefully chosen (just as in any language, not every line is always in a strictly necessary order), but sometimes it is. – Steve Aug 23 '23 at 15:13
  • Then maybe SQL isn't a useful example for you, because you don't make full use of its declarative abilities. A clearer example might be Prolog - you write multiple independent statements, and the job of the compiler / executor is to use any combination of those statements to check some predicate. The order you write the statements has no bearing at all on the result - it is more like entering data than specifying operations. Similarly, "infrastructure as code" languages like Terraform allow you to specify the desired characteristics of a system, which will be compared to its current state. – IMSoP Aug 23 '23 at 15:24
  • @IMSoP, the question is how *would* I make use of the additional abilities you say exist in SQL? It's obviously possible to write some queries in SQL without regard to logical evaluation order - and in that sense SQL is "able" to be used that way - but I'm not convinced that the mental model that ignores logical evaluation order is the more general understanding of what SQL is. By diverting to Prolog, we are simply diverting from a mainstream industrial language that I use regularly and understand deeply, to one whose name I've heard of but never used (and sees no mainstream use). – Steve Aug 23 '23 at 17:33
  • I moved on from SQL because I got the sense we weren't getting anywhere, and wanted to bring the discussion back to declarative programming _per se_. Prolog happens to be a very pure example, so I thought it might demonstrate the principle that _the way you write the program_ is distinct from _whether the compiler can optimise that program or not_. I also mentioned Terraform, which very definitely sees mainstream use; not only do you not specify actions in order, you don't specify most actions *at all*: changes are planned automatically by comparing the current state against the desired state. – IMSoP Aug 23 '23 at 19:17
  • @IMSoP, this may seem a bit weasely, but to my eye these purely declarative languages cease to be programming languages. They are more like configuration files, and the engines become equation solvers, where the constraints are specified so strictly that all the operations of the relevant program can be inferred mechanically. I suspect the reason they aren't used more widely is that it's much harder than it seems to specify the system of constraints necessary for inference of a method, than for the programmer to perform the inference and specify the method for the machinery to follow. – Steve Aug 23 '23 at 20:04
  • Frankly, yes, that comes across as a No True Scotsman argument, and makes the discussion not worth continuing. If SQL is "not declarative enough" and Terraform is "too declarative", then you've neatly defined declarative languages out of existence, without actually engaging with what anyone else means by the term. And just to repeat: Terraform absolutely is used, extremely widely, within its specific role. It's probably true that declarative languages tend to be more *specialist* than imperative ones, because they need richer built-in functionality. – IMSoP Aug 23 '23 at 21:07
  • @IMSoP, I make the point in earnest, and I'm genuinely trying to reconcile what you say. I strongly suspect what is at stake here are the usual philosophical differences between mathematicians and programmers. The kind of difference where `a = b` is a static constraint to a mathematician, whereas to a programmer it is an assignment. There's too little space to discuss such a complicated subject in general, but I think this is the issue at root. My argument was not that "SQL is not declarative enough". My argument was that, whatever "declarative" means, that SQL is no longer distinct... (1/4) – Steve Aug 24 '23 at 08:49
  • from many other "imperative" programming languages. Now, I accept that SQL was originally designed somewhat by people who share your philosophy. It is manifest in the basic design of the syntax, where the evaluation order doesn't match the written order of clauses. No sensible person writing an imperative language would have done that. Indeed, in similar but more modern facilities like LINQ (in DotNet), the From clause now comes first. (2/4) – Steve Aug 24 '23 at 08:49
  • And in ANSI-86 syntax, where the tables are simply listed in the From clause, and then the join conditions are specified in the Where clause (and when there was no such thing as outer joins, except vendor-specific hacks to the syntax), again it is obvious that this design is not intended to be parsed like an imperative series of operations. But many of these features have been found to be wrongheaded, and over time the syntax has converged more closely with specifying operators. (3/4) – Steve Aug 24 '23 at 08:49
  • That is broadly where I am - I have a mental model that overcomes the deficiencies of the syntax, and allows me to specify the operators imperatively and understand the joins and their control expressions not as "relationships" but as algorithms. I understand of course that this specification is "logical" (not "actual"). But it is an imperative design as it leaves my fingers - that the engine may rearrange it is as irrelevant as that a C# compiler may rearrange. I'm unable to understand what the "declarative" view would give me that the "imperative" view doesn't cope with. (4/4) – Steve Aug 24 '23 at 08:50
  • Fundamentally, I just don't agree that there is such a convergence. Plain SQL has a completely different feel - there are no intermediate variables, no loops, no conditional operations. Maybe you think that "declarative" is the wrong label for that difference, but the difference is there, and it hasn't gone away just because we now write `a JOIN b ON` rather than `a,b WHERE` – IMSoP Aug 24 '23 at 09:41
  • One thing that really drives home the difference is places where you actually have the choice. Take using C# iterators with loops vs using Linq, for instance - they are compiled to the same thing under the hood, but Linq exists to _express the problem in a different form_. Similarly, one of the rivals to Terraform is AWS CDK, where you write code in TypeScript or other "imperative" languages, and it's compiled _back_ to a declarative description of the desired resources. As you say, Terraform *feels* like a config file - so what should we call it, if not "declarative"? – IMSoP Aug 24 '23 at 10:28
  • @IMSoP, I regularly use intermediate variables (either via 'cross apply' or 'with'), and join operators are the looping operators (and conditions can be specified in the control expression for a join). I'm not disputing over the label "declarative" or otherwise. The only thing I'm contending for, is that SQL doesn't differ (in relevant respects) from other languages with optimising compilers. The difference you're now alluding to is the presence of array operators - where the order in which each row or field is processed, cannot be specified. (1/2) – Steve Aug 24 '23 at 10:32
  • In my view, there is nothing inherently declarative about array operators - I don't see why operating on tables is inconsistent with imperative code. Moreover, SQL has recursive queries, for when you do need to control the order of processing rows. We know looping is equivalent to recursion - often plain loops, but certainly loops and stack storage. I return to my point that perhaps SQL as originally conceived does not reflect its modern nature. (2/2) – Steve Aug 24 '23 at 10:36
  • There are no mainstream CPUs designed for the evaluation of declarative languages. There have been in the past, such as the LISP machine, the Connection Machine and Graph Reduction Machines at research institutions. All of these focused on processing functional/AI languages such as Lisp, Prolog, Standard ML and Haskell. – stevel Aug 24 '23 at 10:40
  • @IMSoP, on Terraform I'm just not familiar with the technology - but as a "cloud infrastructure configurator" (per some quick research) it's obviously not a "programming language" as we know it, or at least not suited for expressing general computation. We were talking about SQL. I'm not saying Terraform is or isn't "declarative". And don't mistake me - I do acknowledge the existence of the "declarativism" that mathematicians use. It's hard to fully characterise the philosophical features, but I know the difference is there. – Steve Aug 24 '23 at 10:47
  • @stevel, even if there was, I'm convinced it would just move the issue around deeper into the hardware, so that there was no superficial evidence of instructions at a software level, and then we'd all have to be hardware experts to discern where the imperative layer makes its return. The problem is that mathematicians always want to think in static terms - in dead terms. Computers are fundamentally machines with moving parts, and often they have to move in response to future circumstances that haven't happened yet, precluding any total dead analysis of the situation. (1/2) – Steve Aug 24 '23 at 11:08
  • Often, too, the real constraints on machinery are not encodable in a single machine-interpretable language. Often, I know there is something the machine must do specifically, for reasons that relate either to other parts of the mechanical system that aren't under the purview of the relevant engine, or because it relates to human affairs which requires natural language to express (and brains to interpret it). Sometimes as well, I must know exactly what the machine will do, in order to interdict it for the purpose of control. This all seems to get lost on the mathematicians. (2/2) – Steve Aug 24 '23 at 11:10
  • SQL isn't "suited for expressing general computation" either; as I said earlier, I suspect declarative languages tend more to specialisation. That doesn't mean they don't exist. And as I wrote in my answer, all abstractions "leak" - as you say, sometimes you _do_ care how the implementation works. But the purpose of **all** programming languages is to allow you *as much as possible* to think in more abstract terms: there would be no point in using an OO language if you were constantly aware of how the CPU was going to execute every line of code. – IMSoP Aug 24 '23 at 11:38
  • I just saw this: "we know looping is equivalent to recursion", and again would like to stress **that's the wrong level of abstraction**. Looping is equivalent to recursion *in a mathematical sense*, and may or may not be equivalent in an implementation. But **to the programmer**, a piece of code that uses recursion and a piece of code that uses a loop *do not look the same*. The point I'm arguing for is *not* that there are languages meeting some mathematical definition of "declarative", it's that there are languages which *present a different type of abstraction to the programmer*. – IMSoP Aug 24 '23 at 13:10
  • @IMSoP, I think the problem with certain "abstractions" is whether they are mere simplifications of the reality, or whether they are actually differing (and less capable) philosophies which seem simpler and work in relation to a certain number of tasks or in a narrow scope of application, but clash with the general reality and have to be jettisoned (and the correct philosophy learned) whenever you want to dive into the detail (which if the abstraction is "leaky", you may often have to). (1/2) – Steve Aug 25 '23 at 07:37
  • The sense one gets is that "imperative" languages are somehow deemed inferior or less powerful, whereas "declarative" languages are somehow simple and yet also powerful - at the very least, the argument seems to be that more declarativism is better and more modern. The reality seems to be that the more declarative and less imperative a language is, the smaller the scope it applies to, the less use it sees in industry, and the less practitioners seem even to be able to understand it. As I've said, I interpret SQL imperatively - that it can be, is probably why it is in widespread use. (2/2) – Steve Aug 25 '23 at 07:39
  • None of this page is about whether declarative languages are better or worse, popular or unpopular, niche or general-purpose. It's about whether the distinction is meaningful and useful, and how to tell the difference. Your answer says there is no difference; I maintain that that is a misunderstanding of the terminology. – IMSoP Aug 25 '23 at 09:50