11

Stack-oriented programming is not a widely used paradigm (well, PostScript gets used under the hood a bit here and there). With this in mind, what problems are stack-oriented languages good at? What would they do better than [paradigm] does? Also, for what problems are they a bad choice?

Is stack-oriented programming just a niche paradigm?

Peter Mortensen
  • 1,050
  • 2
  • 12
  • 14
Anto
  • 11,157
  • 13
  • 67
  • 103
  • It's funny, but I do stack oriented programming every day, in every language I use (currently Java and Haskell). Well, not exactly "oriented", but .... – Ingo Apr 10 '11 at 20:05
  • 3
    @Ingo: You mean, you make use of a stack? Yeah, that isn't stack *oriented* ;) – Anto Apr 10 '11 at 20:08
  • Right, it occurred to me while writing it; therefore I struggled to come up with a serious answer. – Ingo Apr 10 '11 at 20:13
  • 1
    Most traditional languages compile down to a stack oriented approach. The simplest expression evaluator uses a stack. –  Apr 10 '11 at 20:19

4 Answers

17

The main problem with stack-oriented programming languages is that they are conceptually quite difficult for humans to understand. The advantage is that they are very easy for computers to evaluate and generate.

Which is why it was chosen for PostScript. It wasn't that long ago that printers had only a very small amount of memory and very under-powered CPUs. By sending them programs in a language that was easy to interpret and easy to generate, you didn't need a super-computer inside your printer to evaluate them.

Also keep in mind that many languages today, while not stack-oriented themselves, actually get compiled down to stack-oriented bytecode (think Java and .NET). Again, the reason being that stack-oriented processing is very easy for a computer to reason about.
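
To make that concrete, here is a minimal sketch (in Python, purely illustrative - the opcode names are made up and are not real JVM or .NET bytecode) of how a stack machine evaluates the postfix form of `a * b + c * d`: no precedence rules, no recursion, just a loop over instructions and a stack.

def run(program, env):
    stack = []
    for op, *args in program:
        if op == "push":                  # push a variable's value
            stack.append(env[args[0]])
        elif op == "mul":                 # pop two operands, push the product
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "add":                 # pop two operands, push the sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack.pop()

# a * b + c * d  in postfix order: a b * c d * +
program = [("push", "a"), ("push", "b"), ("mul",),
           ("push", "c"), ("push", "d"), ("mul",),
           ("add",)]
print(run(program, {"a": 2, "b": 3, "c": 4, "d": 5}))   # 26

The generator side is just as simple: a compiler only has to emit instructions in post-order, which is what makes stack code so cheap to produce and to interpret.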

Dean Harding
  • 19,871
  • 3
  • 51
  • 70
  • Who needs complex operator precedence rules if you do all of your expressions in *RPN*. *8') http://en.wikipedia.org/wiki/Reverse_Polish_notation – Mark Booth May 01 '11 at 12:12
  • Have you ever tried using an RPN calculator? *shudder* Fortunately the [Shunting-yard algorithm](http://en.wikipedia.org/wiki/Shunting-yard_algorithm) can help take usual infix to RPN style output easily for stacky goodness - and I wouldn't feel too bad about using it even on embedded platforms. – J Trana Jan 27 '14 at 05:14
7

Thorbjørn already made the point about compilation - if I remember correctly, Java bytecode is an example of a stack-oriented language, though this isn't universal. LLVM uses a "static single assignment" form, which certainly has a stack, but in which local variable accesses and expression evaluation use a kind of write-once-then-immutable virtual register model. For mutable data, those registers basically hold an immutable pointer.

Anyway, a stack-oriented language can do anything that a Lisp-like or ALGOL-like language can do, computationally speaking. The stack may even have implicit metadata, doing things roughly equivalent to the type inference in ML, for instance - evaluated at compile time, without doing a single run-time push or pop.

The real issue is readability, so the question is which of these do you find more readable...

a * b + c * d
(+ (* a b) (* c d))
a b * c d * +

In principle, the Lisp-like prefix thing could be done without parentheses - but would have to use a fixed number of arguments per operator (just like postfix and infix) to do it. I think there's some truth in the old "Lots of Infuriatingly Stupid Parentheses" joke, but that could be avoided while still keeping the operator-comes-first ordering. In other words, you could write a compiler that would accept...

+ * a b * c d

On what is most readable, that is clearly debatable. Infix is more familiar for newbies, but it doesn't take much to get comfortable with any of those forms.

Personally, I'm not even convinced this counts as a paradigm. It's just notation, with little or no semantic impact. For example, all three forms could be handled using a Yacc grammar - all three producing precisely the same AST form, so that the semantic analysis and code generation doesn't even need to know whether the syntax appears to be "stack oriented".
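
As a rough sketch of that point (Python, with hypothetical helper names - this isn't taken from any real compiler), the same AST can be pretty-printed in any of the three notations, which is all the difference amounts to here:

def show(node, style):
    if isinstance(node, str):            # leaf: a variable name
        return node
    op, left, right = node
    l, r = show(left, style), show(right, style)
    if style == "infix":
        return f"({l} {op} {r})"
    if style == "prefix":
        return f"({op} {l} {r})"
    return f"{l} {r} {op}"               # postfix

ast = ("+", ("*", "a", "b"), ("*", "c", "d"))
print(show(ast, "infix"))     # ((a * b) + (c * d))
print(show(ast, "prefix"))    # (+ (* a b) (* c d))
print(show(ast, "postfix"))   # a b * c d * +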

EDIT

Oops - I think the basic point is above, but I guess I should explicitly state my answer, which is basically that, IMO, there isn't any particular problem that favours any of those notations. People may favour one or another, but that's a different thing.

What Lisp programmers will point out, though, is that metaprogramming works quite well with a Lisp-like notation. That's more about the parentheses than where the operator appears, though - a variant of Forth that allowed...

(1 2 3 4 +)

... would be just as good. The benefit either way is the simple notation to manipulate using code - the same reason that stack-oriented intermediate languages (bytecodes, etc.) are popular. Though arguably, manipulating the ordering-agnostic AST form can be done fairly easily too - that's kind of what pattern matching does in Haskell and ML.

EDIT - A clarification...

Using a postfix/stack-based form means just that. For example, it doesn't imply any change of argument order. The following examples can all be equivalent, other than the syntax used to express them...

(< 1 2)
(1 < 2)
(1 2 <)

Note - there isn't any need to swap the order of the 1 and 2 just because the operator is postfix.

(if (< a b) (handle a_smaller) (handle b_smaller_or_equal))

((a b <) (a_smaller handle) (b_smaller_or_equal handle) if)
((a_smaller handle) (b_smaller_or_equal handle) (a b <) if)

Maybe you'd want to keep the condition near the if, maybe you wouldn't. Either is fine.

Forth doesn't express an if in either of these ways, but there isn't any reason why a postfix/stack-oriented language can't do so. Forth only allowed integers on its stack (at least originally), but Lisp allowing side-effecting expressions to be passed as parameters has nothing to do with being prefix rather than postfix. A postfix language could allow code to be passed as arguments (that is, pushed on the stack) too.
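
For instance, here is a toy sketch (Python, not real Forth - the word set is invented for illustration) of a postfix if that takes quoted code on the stack, in the spirit of the (a b <) (... handle) (... handle) if example above:

def run(code, stack, env):
    for word in code:
        if isinstance(word, list):                   # quotation: push code unevaluated
            stack.append(word)
        elif word == "<":                            # ( a b -- a<b )
            b, a = stack.pop(), stack.pop()
            stack.append(a < b)
        elif word == "print":                        # ( x -- )
            print(stack.pop())
        elif word == "if":                           # ( cond-q then-q else-q -- )
            else_q, then_q, cond_q = stack.pop(), stack.pop(), stack.pop()
            run(cond_q, stack, env)                  # evaluate the condition quotation
            run(then_q if stack.pop() else else_q, stack, env)
        elif word in env:                            # variable reference
            stack.append(env[word])
        else:                                        # literal
            stack.append(word)

# ((a b <) ("a smaller" print) ("b smaller or equal" print) if)
prog = [["a", "b", "<"],
        ["a smaller", "print"],
        ["b smaller or equal", "print"],
        "if"]
run(prog, [], {"a": 1, "b": 2})                      # prints: a smaller

The quotations are just data until if decides which one to run - exactly the "code passed as arguments" idea, with postfix syntax.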

Peter Mortensen
  • 1,050
  • 2
  • 12
  • 14
  • The problem with all stack-oriented languages, to me, is the syntax. Versus Lisp, which is more readable: `((0 3 *) (7 6 +) (3 5 <) if)` or `(if (< 5 3) (* 3 0) (+ 6 7))`? – mhr Oct 01 '12 at 00:36
  • It's backwards, I think. It's difficult to read because it's backwards. – mhr Oct 01 '12 at 00:37
  • 1
    @mhr - really? Are you sure you're not just a LISP fan, and therefore unsurprisingly more familiar with that form? I can't help noticing that one of your two questions states [I'm making a programming language, and, having spent some time in Lisp/Scheme...](http://programmers.stackexchange.com/questions/166640/besides-macros-are-there-any-other-metaprogramming-techniques). Have you spent equal time getting used to Forth? –  Oct 01 '12 at 01:03
  • 1
    Readability is learned. I had a HP-48 calculator, so `a b * c d * +` was exactly what I would type for years. – Edwin Buck Oct 01 '12 at 03:30
  • @Edwin - yes, but mhr is trying to suggest you'd type `1 2 <` and expect the result to be `false` (see the swapping of the `3` and `5` in his examples). If that happened, it *would* affect readability, but obviously it doesn't happen. You *could* write backward-logic relative operators in a postfix language, but you could equally do that in a prefix or infix language. It's not necessary and it's not done. –  Oct 01 '12 at 03:44
  • I'm sorry, I misunderstood: so it should have been (5 3 <) which is like saying "5 is less than 3". It does seem more readable to me, though, that lisp is prefix instead of postfix. If you did (((5 5 +) +) +), you have to know how many plus functions you will have in order to balance the parens initially. Whereas in prefix, you can add as many plus functions as you want and balance at the end. To balance at the beginning is to limit how many plus functions you can make, don't you agree? (+ (+ (+ (+ (+ ... – mhr Oct 01 '12 at 04:31
  • Not only that, but saying if by stating the "then" and "else" clauses first is unintuitive to me. In English, you say "if this, then that, else that", not "then, else, if this". – mhr Oct 01 '12 at 04:32
  • *"then, else, this if", I meant – mhr Oct 01 '12 at 04:39
  • @mhr - IOW in a postfix language you'd get the problem at the beginning which, in LISP, happens at the end - instead of heaps of `)))))` which your IDE needs to count for you, you'd have heaps of `(((((`. Sure, it's nasty, but it's just as nasty in LISP. And who says you know how many additions you need at the start (before typing the arguments)? Anyway, since you have parens, why not at least allow the partial fix that LISP has - write `(1 2 3 4 5 +)`, keeping track of how many additions are needed at compile-time and overloading the `+` operator based on the argument count. –  Oct 01 '12 at 04:39
  • Yes, but what if you say (* (+ (/ (+ (* (- 7 6) 6) 7) 8) 9)? Yeah, overloading an operator works, but what if you have more than one operation to be done? – mhr Oct 01 '12 at 04:41
  • Hmm, I realized that I'm wrong. You'd know how many parens because you know the numbers...wow. Okay, nvm, then. – mhr Oct 01 '12 at 04:43
  • But still, the unnaturalness of saying "then, else, this if" is still applicable, I think. – mhr Oct 01 '12 at 04:44
  • @mhr - the standard Forth syntax for `if` isn't either of my suggestions, but then I think that's awkward too. Just as I think the LISP `cond` is awkward. What works for one spelling of one construct doesn't work for another. The nice thing about infix languages - they don't force everything to be infix. But then, getting back to my original answer, that makes no difference to what paradigms can be supported. The only difference is readability, and while I explicitly stated that readability can differ, it's something you can get used to either way. I never said readability is always the same. –  Oct 01 '12 at 04:46
  • 1
    @mhr - check the history - although it has been edited, my answer said "On what is most readable, that is clearly debatable. Infix is more familiar for newbies, but it doesn't take much to get comfortable with any of those forms." from the beginning. And I still think this is right - debatable partly because it's personal preference and familiarity, and partly because it depends which case you're looking at. "Debatable" doesn't mean "all are equally readable in all cases". –  Oct 01 '12 at 04:51
2

I've used a few stack-oriented languages over the years.

At university, I had some fun confusing system administrators with tiny PostScript programs which seemed to take an inordinate amount of time to render just one page of output. Yes, the laser printer we had access to had more floating-point power than the minicomputers we were given accounts on, so it was great for rendering Mandelbrot sets.

I also worked with transputers. These were processors which were highly optimised for performance per transistor. Every bit of silicon needed to pull its weight, so it had a Huffman-encoded instruction set (the most common instructions were coded in a single byte) and, rather than a traditional register set, a pair of stacks. One stack was low priority, the other high, which meant that its context switch time (low-priority process to high) was measured in microseconds rather than milliseconds - great for fast interrupt response in the robot controllers we made.

I also worked on some robotic systems which used embedded Forth processors that ran Forth pretty much as machine code. I remember that we would buy twice as many CPUs as we needed and bin them according to performance, selecting the best half to put into systems; it was cheaper to buy several un-rated chips than a single chip rated at the speed we needed them to run at. *8')

Peter Mortensen
  • 1,050
  • 2
  • 12
  • 14
Mark Booth
  • 14,214
  • 3
  • 40
  • 79
0

Forth is almost dead, I think. There is J; however, I don't know whether it has had a notable influence. PostScript is not really used for programming by humans, is it? So I'd say stack-oriented languages virtually do not exist anymore.

As I see it, they are good as a replacement for assembler on stack-oriented hardware architectures - firmware and the like. But I am not sure whether such hardware still exists in the post-16-bit-processor era.

Ingo
  • 3,903
  • 18
  • 23